| repo | path | func_name | original_string | language | code | code_tokens | docstring | docstring_tokens | sha | url | partition |
|---|---|---|---|---|---|---|---|---|---|---|---|
apple/turicreate | src/unity/python/turicreate/config/__init__.py | set_runtime_config | def set_runtime_config(name, value):
"""
Configures system behavior at runtime. These configuration values are also
read from environment variables at program startup if available. See
:py:func:`turicreate.config.get_runtime_config()` to get the current values for
each variable.
Note that defaults may change across versions and the names
of performance tuning constants may also change as improved algorithms
are developed and implemented.
Parameters
----------
name : string
A string referring to a runtime configuration variable.
value
The value to set the variable to.
Raises
------
RuntimeError
If the key does not exist, or if the value cannot be changed to the
requested value.
Notes
-----
The following section documents all the Turi Create environment variables
that can be configured.
**Basic Configuration Variables**
- *TURI_NUM_GPUS*: Number of GPUs to use when applicable. Set to 0 to force
CPU use in all situations.
- *TURI_CACHE_FILE_LOCATIONS*: The directory in which intermediate
SFrames/SArrays are stored. For instance "/var/tmp". Multiple
directories can be specified separated by a colon (ex: "/var/tmp:/tmp")
in which case intermediate SFrames will be striped across both
directories (useful for specifying multiple disks). Defaults to /var/tmp
if the directory exists, /tmp otherwise.
- *TURI_FILEIO_MAXIMUM_CACHE_CAPACITY*: The maximum amount of memory which
will be occupied by *all* intermediate SFrames/SArrays. Once this limit
is exceeded, SFrames/SArrays will be flushed out to temporary storage (as
specified by `TURI_CACHE_FILE_LOCATIONS`). On large systems increasing
this as well as `TURI_FILEIO_MAXIMUM_CACHE_CAPACITY_PER_FILE` can improve
performance significantly. Defaults to 2147483648 bytes (2GB).
- *TURI_FILEIO_MAXIMUM_CACHE_CAPACITY_PER_FILE*: The maximum amount of
memory which will be occupied by any individual intermediate
SFrame/SArray. Once this limit is exceeded, the SFrame/SArray will be
flushed out to temporary storage (as specified by
`TURI_CACHE_FILE_LOCATIONS`). On large systems, increasing this as well
as `TURI_FILEIO_MAXIMUM_CACHE_CAPACITY` can improve performance
significantly for large SFrames. Defaults to 134217728 bytes (128MB).
**S3 Configuration**
- *TURI_S3_ENDPOINT*: The S3 endpoint to connect to. If not specified, AWS
S3 is assumed.
**SSL Configuration**
- *TURI_FILEIO_ALTERNATIVE_SSL_CERT_FILE*: The location of an SSL
certificate file used to validate HTTPS / S3 connections. Defaults to
the Python certifi package certificates.
- *TURI_FILEIO_ALTERNATIVE_SSL_CERT_DIR*: The location of an SSL
certificate directory used to validate HTTPS / S3 connections. Defaults
to the operating system certificates.
- *TURI_FILEIO_INSECURE_SSL_CERTIFICATE_CHECKS*: If set to a non-zero
value, disables all SSL certificate validation. Defaults to False.
**Sort Performance Configuration**
- *TURI_SFRAME_SORT_PIVOT_ESTIMATION_SAMPLE_SIZE*: The number of random
rows to sample from the SFrame to estimate the sort pivots used to
partition the sort. Defaults to 2000000.
- *TURI_SFRAME_SORT_BUFFER_SIZE*: The maximum estimated memory consumption
sort is allowed to use. Increasing this will increase the size of each
sort partition, and will increase performance with increased memory
consumption. Defaults to 2GB.
**Join Performance Configuration**
- *TURI_SFRAME_JOIN_BUFFER_NUM_CELLS*: The maximum number of cells to
buffer in memory. Increasing this will increase the size of each join
partition and will increase performance with increased memory
consumption. If you have very large cells (very long strings for
instance), decreasing this value will help decrease memory consumption.
Defaults to 52428800.
**Groupby Aggregate Performance Configuration**
- *TURI_SFRAME_GROUPBY_BUFFER_NUM_ROWS*: The number of groupby keys cached
in memory. Increasing this will increase performance with increased
memory consumption. Defaults to 1048576.
**Advanced Configuration Variables**
- *TURI_SFRAME_FILE_HANDLE_POOL_SIZE*: The maximum number of file handles
to use when reading SFrames/SArrays. Once this limit is exceeded, file
handles will be recycled, reducing performance. This limit should be
rarely approached by most SFrame/SArray operations. Large SGraphs, however,
may create a large number of SFrames, in which case increasing this
limit may improve performance (you may also need to increase the system
file handle limit with "ulimit -n"). Defaults to 128.
"""
    from .._connect import main as _glconnect
    unity = _glconnect.get_unity()
    ret = unity.set_global(name, value)
    if ret != "":
        raise RuntimeError(ret)
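The backend signals failure by returning a non-empty error string rather than raising, and `set_runtime_config` converts that into a `RuntimeError`. The pattern can be sketched with a hypothetical stand-in for the unity backend (the `FakeUnity` class and `set_runtime_config_sketch` names below are illustrative, not part of Turi Create):

```python
class FakeUnity:
    """Illustrative stand-in for the unity backend: set_global
    returns "" on success and an error message on failure."""
    def __init__(self):
        self._globals = {"TURI_NUM_GPUS": 1}

    def set_global(self, name, value):
        if name not in self._globals:
            return "Unable to change value of %s" % name
        self._globals[name] = value
        return ""

def set_runtime_config_sketch(unity, name, value):
    # Mirror the real function: translate a non-empty return
    # string into a RuntimeError.
    ret = unity.set_global(name, value)
    if ret != "":
        raise RuntimeError(ret)

unity = FakeUnity()
set_runtime_config_sketch(unity, "TURI_NUM_GPUS", 0)  # succeeds silently
```

Returning an error string instead of raising keeps the C++/Python boundary simple; the wrapper re-raises on the Python side so callers get ordinary exceptions.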
"""
Configures system behavior at runtime. These configuration values are also
read from environment variables at program startup if available. See
:py:func:`turicreate.config.get_runtime_config()` to get the current values for
each variable.
Note that defaults may change across versions and the names
of performance tuning constants may also change as improved algorithms
are developed and implemented.
Parameters
----------
name : string
A string referring to runtime configuration variable.
value
The value to set the variable to.
Raises
------
RuntimeError
If the key does not exist, or if the value cannot be changed to the
requested value.
Notes
-----
The following section documents all the Turi Create environment variables
that can be configured.
**Basic Configuration Variables**
- *TURI_NUM_GPUS*: Number of GPUs to use when applicable. Set to 0 to force
CPU use in all situations.
- *TURI_CACHE_FILE_LOCATIONS*: The directory in which intermediate
SFrames/SArray are stored. For instance "/var/tmp". Multiple
directories can be specified separated by a colon (ex: "/var/tmp:/tmp")
in which case intermediate SFrames will be striped across both
directories (useful for specifying multiple disks). Defaults to /var/tmp
if the directory exists, /tmp otherwise.
- *TURI_FILEIO_MAXIMUM_CACHE_CAPACITY*: The maximum amount of memory which
will be occupied by *all* intermediate SFrames/SArrays. Once this limit
is exceeded, SFrames/SArrays will be flushed out to temporary storage (as
specified by `TURI_CACHE_FILE_LOCATIONS`). On large systems increasing
this as well as `TURI_FILEIO_MAXIMUM_CACHE_CAPACITY_PER_FILE` can improve
performance significantly. Defaults to 2147483648 bytes (2GB).
- *TURI_FILEIO_MAXIMUM_CACHE_CAPACITY_PER_FILE*: The maximum amount of
memory which will be occupied by any individual intermediate
SFrame/SArray. Once this limit is exceeded, the SFrame/SArray will be
flushed out to temporary storage (as specified by
`TURI_CACHE_FILE_LOCATIONS`). On large systems, increasing this as well
as `TURI_FILEIO_MAXIMUM_CACHE_CAPACITY` can improve performance
significantly for large SFrames. Defaults to 134217728 bytes (128MB).
**S3 Configuration**
- *TURI_S3_ENDPOINT*: The S3 Endpoint to connect to. If not specified AWS
S3 is assumed.
**SSL Configuration**
- *TURI_FILEIO_ALTERNATIVE_SSL_CERT_FILE*: The location of an SSL
certificate file used to validate HTTPS / S3 connections. Defaults to the
the Python certifi package certificates.
- *TURI_FILEIO_ALTERNATIVE_SSL_CERT_DIR*: The location of an SSL
certificate directory used to validate HTTPS / S3 connections. Defaults
to the operating system certificates.
- *TURI_FILEIO_INSECURE_SSL_CERTIFICATE_CHECKS*: If set to a non-zero
value, disables all SSL certificate validation. Defaults to False.
**Sort Performance Configuration**
- *TURI_SFRAME_SORT_PIVOT_ESTIMATION_SAMPLE_SIZE*: The number of random
rows to sample from the SFrame to estimate the sort pivots used to
partition the sort. Defaults to 2000000.
- *TURI_SFRAME_SORT_BUFFER_SIZE*: The maximum estimated memory consumption
sort is allowed to use. Increasing this will increase the size of each
sort partition, and will increase performance with increased memory
consumption. Defaults to 2GB.
**Join Performance Configuration**
- *TURI_SFRAME_JOIN_BUFFER_NUM_CELLS*: The maximum number of cells to
buffer in memory. Increasing this will increase the size of each join
partition and will increase performance with increased memory
consumption. If you have very large cells (very long strings for
instance), decreasing this value will help decrease memory consumption.
Defaults to 52428800.
**Groupby Aggregate Performance Configuration**
- *TURI_SFRAME_GROUPBY_BUFFER_NUM_ROWS*: The number of groupby keys cached
in memory. Increasing this will increase performance with increased
memory consumption. Defaults to 1048576.
**Advanced Configuration Variables**
- *TURI_SFRAME_FILE_HANDLE_POOL_SIZE*: The maximum number of file handles
to use when reading SFrames/SArrays. Once this limit is exceeded, file
handles will be recycled, reducing performance. This limit should be
rarely approached by most SFrame/SArray operations. Large SGraphs however
may create a large a number of SFrames in which case increasing this
limit may improve performance (You may also need to increase the system
file handle limit with "ulimit -n"). Defaults to 128.
"""
from .._connect import main as _glconnect
unity = _glconnect.get_unity()
ret = unity.set_global(name, value)
if ret != "":
raise RuntimeError(ret) | [
"def",
"set_runtime_config",
"(",
"name",
",",
"value",
")",
":",
"from",
".",
".",
"_connect",
"import",
"main",
"as",
"_glconnect",
"unity",
"=",
"_glconnect",
".",
"get_unity",
"(",
")",
"ret",
"=",
"unity",
".",
"set_global",
"(",
"name",
",",
"value",
")",
"if",
"ret",
"!=",
"\"\"",
":",
"raise",
"RuntimeError",
"(",
"ret",
")"
] | Configures system behavior at runtime. These configuration values are also
read from environment variables at program startup if available. See
:py:func:`turicreate.config.get_runtime_config()` to get the current values for
each variable.
Note that defaults may change across versions and the names
of performance tuning constants may also change as improved algorithms
are developed and implemented.
Parameters
----------
name : string
A string referring to runtime configuration variable.
value
The value to set the variable to.
Raises
------
RuntimeError
If the key does not exist, or if the value cannot be changed to the
requested value.
Notes
-----
The following section documents all the Turi Create environment variables
that can be configured.
**Basic Configuration Variables**
- *TURI_NUM_GPUS*: Number of GPUs to use when applicable. Set to 0 to force
CPU use in all situations.
- *TURI_CACHE_FILE_LOCATIONS*: The directory in which intermediate
SFrames/SArray are stored. For instance "/var/tmp". Multiple
directories can be specified separated by a colon (ex: "/var/tmp:/tmp")
in which case intermediate SFrames will be striped across both
directories (useful for specifying multiple disks). Defaults to /var/tmp
if the directory exists, /tmp otherwise.
- *TURI_FILEIO_MAXIMUM_CACHE_CAPACITY*: The maximum amount of memory which
will be occupied by *all* intermediate SFrames/SArrays. Once this limit
is exceeded, SFrames/SArrays will be flushed out to temporary storage (as
specified by `TURI_CACHE_FILE_LOCATIONS`). On large systems increasing
this as well as `TURI_FILEIO_MAXIMUM_CACHE_CAPACITY_PER_FILE` can improve
performance significantly. Defaults to 2147483648 bytes (2GB).
- *TURI_FILEIO_MAXIMUM_CACHE_CAPACITY_PER_FILE*: The maximum amount of
memory which will be occupied by any individual intermediate
SFrame/SArray. Once this limit is exceeded, the SFrame/SArray will be
flushed out to temporary storage (as specified by
`TURI_CACHE_FILE_LOCATIONS`). On large systems, increasing this as well
as `TURI_FILEIO_MAXIMUM_CACHE_CAPACITY` can improve performance
significantly for large SFrames. Defaults to 134217728 bytes (128MB).
**S3 Configuration**
- *TURI_S3_ENDPOINT*: The S3 Endpoint to connect to. If not specified AWS
S3 is assumed.
**SSL Configuration**
- *TURI_FILEIO_ALTERNATIVE_SSL_CERT_FILE*: The location of an SSL
certificate file used to validate HTTPS / S3 connections. Defaults to the
the Python certifi package certificates.
- *TURI_FILEIO_ALTERNATIVE_SSL_CERT_DIR*: The location of an SSL
certificate directory used to validate HTTPS / S3 connections. Defaults
to the operating system certificates.
- *TURI_FILEIO_INSECURE_SSL_CERTIFICATE_CHECKS*: If set to a non-zero
value, disables all SSL certificate validation. Defaults to False.
**Sort Performance Configuration**
- *TURI_SFRAME_SORT_PIVOT_ESTIMATION_SAMPLE_SIZE*: The number of random
rows to sample from the SFrame to estimate the sort pivots used to
partition the sort. Defaults to 2000000.
- *TURI_SFRAME_SORT_BUFFER_SIZE*: The maximum estimated memory consumption
sort is allowed to use. Increasing this will increase the size of each
sort partition, and will increase performance with increased memory
consumption. Defaults to 2GB.
**Join Performance Configuration**
- *TURI_SFRAME_JOIN_BUFFER_NUM_CELLS*: The maximum number of cells to
buffer in memory. Increasing this will increase the size of each join
partition and will increase performance with increased memory
consumption. If you have very large cells (very long strings for
instance), decreasing this value will help decrease memory consumption.
Defaults to 52428800.
**Groupby Aggregate Performance Configuration**
- *TURI_SFRAME_GROUPBY_BUFFER_NUM_ROWS*: The number of groupby keys cached
in memory. Increasing this will increase performance with increased
memory consumption. Defaults to 1048576.
**Advanced Configuration Variables**
- *TURI_SFRAME_FILE_HANDLE_POOL_SIZE*: The maximum number of file handles
to use when reading SFrames/SArrays. Once this limit is exceeded, file
handles will be recycled, reducing performance. This limit should be
rarely approached by most SFrame/SArray operations. Large SGraphs however
may create a large a number of SFrames in which case increasing this
limit may improve performance (You may also need to increase the system
file handle limit with "ulimit -n"). Defaults to 128. | [
"Configures",
"system",
"behavior",
"at",
"runtime",
".",
"These",
"configuration",
"values",
"are",
"also",
"read",
"from",
"environment",
"variables",
"at",
"program",
"startup",
"if",
"available",
".",
"See",
":",
"py",
":",
"func",
":",
"turicreate",
".",
"config",
".",
"get_runtime_config",
"()",
"to",
"get",
"the",
"current",
"values",
"for",
"each",
"variable",
"."
] | 74514c3f99e25b46f22c6e02977fe3da69221c2e | https://github.com/apple/turicreate/blob/74514c3f99e25b46f22c6e02977fe3da69221c2e/src/unity/python/turicreate/config/__init__.py#L191-L306 | train |
apple/turicreate | src/unity/python/turicreate/data_structures/sgraph.py | load_sgraph | def load_sgraph(filename, format='binary', delimiter='auto'):
"""
Load SGraph from text file or previously saved SGraph binary.
Parameters
----------
filename : string
Location of the file. Can be a local path or a remote URL.
format : {'binary', 'snap', 'csv', 'tsv'}, optional
Format of the file to load.
- 'binary': native graph format obtained from `SGraph.save`.
- 'snap': tab or space separated edge list format with comments, used in
the `Stanford Network Analysis Platform <http://snap.stanford.edu/snap/>`_.
- 'csv': comma-separated edge list without header or comments.
- 'tsv': tab-separated edge list without header or comments.
delimiter : str, optional
The delimiter to use for the 'snap', 'csv' or 'tsv' formats. Each of
these formats has a default delimiter, but it is sometimes useful to
override it.
Returns
-------
out : SGraph
Loaded SGraph.
See Also
--------
SGraph, SGraph.save
Examples
--------
>>> g = turicreate.SGraph().add_vertices([turicreate.Vertex(i) for i in range(5)])
Save and load in binary format.
>>> g.save('mygraph')
>>> g2 = turicreate.load_sgraph('mygraph')
"""
    if format not in ['binary', 'snap', 'csv', 'tsv']:
        raise ValueError('Invalid format: %s' % format)
    with cython_context():
        g = None
        if format == 'binary':
            proxy = glconnect.get_unity().load_graph(_make_internal_url(filename))
            g = SGraph(_proxy=proxy)
        elif format == 'snap':
            if delimiter == 'auto':
                delimiter = '\t'
            sf = SFrame.read_csv(filename, comment_char='#', delimiter=delimiter,
                                 header=False, column_type_hints=int)
            g = SGraph().add_edges(sf, 'X1', 'X2')
        elif format == 'csv':
            if delimiter == 'auto':
                delimiter = ','
            sf = SFrame.read_csv(filename, header=False, delimiter=delimiter)
            g = SGraph().add_edges(sf, 'X1', 'X2')
        elif format == 'tsv':
            if delimiter == 'auto':
                delimiter = '\t'
            sf = SFrame.read_csv(filename, header=False, delimiter=delimiter)
            g = SGraph().add_edges(sf, 'X1', 'X2')
        g.summary()  # materialize
        return g
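Each text format carries its own default delimiter, resolved only when the caller leaves `delimiter='auto'`. That resolution step can be sketched on its own; the `resolve_delimiter` helper below is ours for illustration, not part of the turicreate API:

```python
# Default separators for the text-based graph formats.
_DEFAULT_DELIMITERS = {'snap': '\t', 'csv': ',', 'tsv': '\t'}

def resolve_delimiter(fmt, delimiter='auto'):
    """Return the delimiter load_sgraph would effectively use."""
    if fmt not in ('binary', 'snap', 'csv', 'tsv'):
        raise ValueError('Invalid format: %s' % fmt)
    # 'binary' takes no delimiter; text formats fall back to their default.
    if delimiter == 'auto':
        return _DEFAULT_DELIMITERS.get(fmt)
    return delimiter
```

An explicit delimiter always wins, so `resolve_delimiter('tsv', '|')` returns `'|'` even though TSV normally uses a tab.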
"""
Load SGraph from text file or previously saved SGraph binary.
Parameters
----------
filename : string
Location of the file. Can be a local path or a remote URL.
format : {'binary', 'snap', 'csv', 'tsv'}, optional
Format to of the file to load.
- 'binary': native graph format obtained from `SGraph.save`.
- 'snap': tab or space separated edge list format with comments, used in
the `Stanford Network Analysis Platform <http://snap.stanford.edu/snap/>`_.
- 'csv': comma-separated edge list without header or comments.
- 'tsv': tab-separated edge list without header or comments.
delimiter : str, optional
Specifying the Delimiter used in 'snap', 'csv' or 'tsv' format. Those
format has default delimiter, but sometimes it is useful to
overwrite the default delimiter.
Returns
-------
out : SGraph
Loaded SGraph.
See Also
--------
SGraph, SGraph.save
Examples
--------
>>> g = turicreate.SGraph().add_vertices([turicreate.Vertex(i) for i in range(5)])
Save and load in binary format.
>>> g.save('mygraph')
>>> g2 = turicreate.load_sgraph('mygraph')
"""
if not format in ['binary', 'snap', 'csv', 'tsv']:
raise ValueError('Invalid format: %s' % format)
with cython_context():
g = None
if format is 'binary':
proxy = glconnect.get_unity().load_graph(_make_internal_url(filename))
g = SGraph(_proxy=proxy)
elif format is 'snap':
if delimiter == 'auto':
delimiter = '\t'
sf = SFrame.read_csv(filename, comment_char='#', delimiter=delimiter,
header=False, column_type_hints=int)
g = SGraph().add_edges(sf, 'X1', 'X2')
elif format is 'csv':
if delimiter == 'auto':
delimiter = ','
sf = SFrame.read_csv(filename, header=False, delimiter=delimiter)
g = SGraph().add_edges(sf, 'X1', 'X2')
elif format is 'tsv':
if delimiter == 'auto':
delimiter = '\t'
sf = SFrame.read_csv(filename, header=False, delimiter=delimiter)
g = SGraph().add_edges(sf, 'X1', 'X2')
g.summary() # materialize
return g | [
"def",
"load_sgraph",
"(",
"filename",
",",
"format",
"=",
"'binary'",
",",
"delimiter",
"=",
"'auto'",
")",
":",
"if",
"not",
"format",
"in",
"[",
"'binary'",
",",
"'snap'",
",",
"'csv'",
",",
"'tsv'",
"]",
":",
"raise",
"ValueError",
"(",
"'Invalid format: %s'",
"%",
"format",
")",
"with",
"cython_context",
"(",
")",
":",
"g",
"=",
"None",
"if",
"format",
"is",
"'binary'",
":",
"proxy",
"=",
"glconnect",
".",
"get_unity",
"(",
")",
".",
"load_graph",
"(",
"_make_internal_url",
"(",
"filename",
")",
")",
"g",
"=",
"SGraph",
"(",
"_proxy",
"=",
"proxy",
")",
"elif",
"format",
"is",
"'snap'",
":",
"if",
"delimiter",
"==",
"'auto'",
":",
"delimiter",
"=",
"'\\t'",
"sf",
"=",
"SFrame",
".",
"read_csv",
"(",
"filename",
",",
"comment_char",
"=",
"'#'",
",",
"delimiter",
"=",
"delimiter",
",",
"header",
"=",
"False",
",",
"column_type_hints",
"=",
"int",
")",
"g",
"=",
"SGraph",
"(",
")",
".",
"add_edges",
"(",
"sf",
",",
"'X1'",
",",
"'X2'",
")",
"elif",
"format",
"is",
"'csv'",
":",
"if",
"delimiter",
"==",
"'auto'",
":",
"delimiter",
"=",
"','",
"sf",
"=",
"SFrame",
".",
"read_csv",
"(",
"filename",
",",
"header",
"=",
"False",
",",
"delimiter",
"=",
"delimiter",
")",
"g",
"=",
"SGraph",
"(",
")",
".",
"add_edges",
"(",
"sf",
",",
"'X1'",
",",
"'X2'",
")",
"elif",
"format",
"is",
"'tsv'",
":",
"if",
"delimiter",
"==",
"'auto'",
":",
"delimiter",
"=",
"'\\t'",
"sf",
"=",
"SFrame",
".",
"read_csv",
"(",
"filename",
",",
"header",
"=",
"False",
",",
"delimiter",
"=",
"delimiter",
")",
"g",
"=",
"SGraph",
"(",
")",
".",
"add_edges",
"(",
"sf",
",",
"'X1'",
",",
"'X2'",
")",
"g",
".",
"summary",
"(",
")",
"# materialize",
"return",
"g"
] | Load SGraph from text file or previously saved SGraph binary.
Parameters
----------
filename : string
Location of the file. Can be a local path or a remote URL.
format : {'binary', 'snap', 'csv', 'tsv'}, optional
Format to of the file to load.
- 'binary': native graph format obtained from `SGraph.save`.
- 'snap': tab or space separated edge list format with comments, used in
the `Stanford Network Analysis Platform <http://snap.stanford.edu/snap/>`_.
- 'csv': comma-separated edge list without header or comments.
- 'tsv': tab-separated edge list without header or comments.
delimiter : str, optional
Specifying the Delimiter used in 'snap', 'csv' or 'tsv' format. Those
format has default delimiter, but sometimes it is useful to
overwrite the default delimiter.
Returns
-------
out : SGraph
Loaded SGraph.
See Also
--------
SGraph, SGraph.save
Examples
--------
>>> g = turicreate.SGraph().add_vertices([turicreate.Vertex(i) for i in range(5)])
Save and load in binary format.
>>> g.save('mygraph')
>>> g2 = turicreate.load_sgraph('mygraph') | [
"Load",
"SGraph",
"from",
"text",
"file",
"or",
"previously",
"saved",
"SGraph",
"binary",
"."
] | 74514c3f99e25b46f22c6e02977fe3da69221c2e | https://github.com/apple/turicreate/blob/74514c3f99e25b46f22c6e02977fe3da69221c2e/src/unity/python/turicreate/data_structures/sgraph.py#L1153-L1221 | train |
apple/turicreate | src/unity/python/turicreate/data_structures/sgraph.py | _vertex_list_to_dataframe | def _vertex_list_to_dataframe(ls, id_column_name):
"""
Convert a list of vertices into dataframe.
"""
    assert HAS_PANDAS, 'Cannot use dataframe because Pandas is not available or version is too low.'
    cols = reduce(set.union, (set(v.attr.keys()) for v in ls))
    df = pd.DataFrame({id_column_name: [v.vid for v in ls]})
    for c in cols:
        df[c] = [v.attr.get(c) for v in ls]
    return df
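The helper first takes the union of every vertex's attribute keys, so the frame gets a column for each attribute seen on any vertex, with `None` filling the gaps. The same column-building logic can be sketched without pandas, using plain `(vid, attrs)` pairs in place of `Vertex` objects; the names here are illustrative, and we add an empty-set initializer to `reduce` (absent upstream) so an empty input is safe:

```python
from functools import reduce

def vertices_to_columns(vertices, id_column_name='__id'):
    """vertices: list of (vid, attrs) pairs, attrs a plain dict.
    Returns a dict-of-lists, one entry per column."""
    # Union of attribute keys across all vertices.
    cols = reduce(set.union, (set(attrs) for _, attrs in vertices), set())
    table = {id_column_name: [vid for vid, _ in vertices]}
    for c in sorted(cols):
        # .get(c) yields None for vertices missing the attribute
        table[c] = [attrs.get(c) for _, attrs in vertices]
    return table

table = vertices_to_columns([(1, {'color': 'red'}), (2, {'size': 3})])
```

Note that the upstream `reduce(set.union, ...)` with no initializer raises `TypeError` on an empty vertex list; the initializer in the sketch sidesteps that edge case.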
"""
Convert a list of vertices into dataframe.
"""
assert HAS_PANDAS, 'Cannot use dataframe because Pandas is not available or version is too low.'
cols = reduce(set.union, (set(v.attr.keys()) for v in ls))
df = pd.DataFrame({id_column_name: [v.vid for v in ls]})
for c in cols:
df[c] = [v.attr.get(c) for v in ls]
return df | [
"def",
"_vertex_list_to_dataframe",
"(",
"ls",
",",
"id_column_name",
")",
":",
"assert",
"HAS_PANDAS",
",",
"'Cannot use dataframe because Pandas is not available or version is too low.'",
"cols",
"=",
"reduce",
"(",
"set",
".",
"union",
",",
"(",
"set",
"(",
"v",
".",
"attr",
".",
"keys",
"(",
")",
")",
"for",
"v",
"in",
"ls",
")",
")",
"df",
"=",
"pd",
".",
"DataFrame",
"(",
"{",
"id_column_name",
":",
"[",
"v",
".",
"vid",
"for",
"v",
"in",
"ls",
"]",
"}",
")",
"for",
"c",
"in",
"cols",
":",
"df",
"[",
"c",
"]",
"=",
"[",
"v",
".",
"attr",
".",
"get",
"(",
"c",
")",
"for",
"v",
"in",
"ls",
"]",
"return",
"df"
] | Convert a list of vertices into dataframe. | [
"Convert",
"a",
"list",
"of",
"vertices",
"into",
"dataframe",
"."
] | 74514c3f99e25b46f22c6e02977fe3da69221c2e | https://github.com/apple/turicreate/blob/74514c3f99e25b46f22c6e02977fe3da69221c2e/src/unity/python/turicreate/data_structures/sgraph.py#L1229-L1238 | train |
apple/turicreate | src/unity/python/turicreate/data_structures/sgraph.py | _vertex_list_to_sframe | def _vertex_list_to_sframe(ls, id_column_name):
"""
Convert a list of vertices into an SFrame.
"""
    sf = SFrame()
    if type(ls) == list:
        cols = reduce(set.union, (set(v.attr.keys()) for v in ls))
        sf[id_column_name] = [v.vid for v in ls]
        for c in cols:
            sf[c] = [v.attr.get(c) for v in ls]
    elif type(ls) == Vertex:
        sf[id_column_name] = [ls.vid]
        for col, val in ls.attr.items():
            sf[col] = [val]
    else:
        raise TypeError('Vertices type {} is not supported.'.format(type(ls)))
    return sf
"""
Convert a list of vertices into an SFrame.
"""
sf = SFrame()
if type(ls) == list:
cols = reduce(set.union, (set(v.attr.keys()) for v in ls))
sf[id_column_name] = [v.vid for v in ls]
for c in cols:
sf[c] = [v.attr.get(c) for v in ls]
elif type(ls) == Vertex:
sf[id_column_name] = [ls.vid]
for col, val in ls.attr.iteritems():
sf[col] = [val]
else:
raise TypeError('Vertices type {} is Not supported.'.format(type(ls)))
return sf | [
"def",
"_vertex_list_to_sframe",
"(",
"ls",
",",
"id_column_name",
")",
":",
"sf",
"=",
"SFrame",
"(",
")",
"if",
"type",
"(",
"ls",
")",
"==",
"list",
":",
"cols",
"=",
"reduce",
"(",
"set",
".",
"union",
",",
"(",
"set",
"(",
"v",
".",
"attr",
".",
"keys",
"(",
")",
")",
"for",
"v",
"in",
"ls",
")",
")",
"sf",
"[",
"id_column_name",
"]",
"=",
"[",
"v",
".",
"vid",
"for",
"v",
"in",
"ls",
"]",
"for",
"c",
"in",
"cols",
":",
"sf",
"[",
"c",
"]",
"=",
"[",
"v",
".",
"attr",
".",
"get",
"(",
"c",
")",
"for",
"v",
"in",
"ls",
"]",
"elif",
"type",
"(",
"ls",
")",
"==",
"Vertex",
":",
"sf",
"[",
"id_column_name",
"]",
"=",
"[",
"ls",
".",
"vid",
"]",
"for",
"col",
",",
"val",
"in",
"ls",
".",
"attr",
".",
"iteritems",
"(",
")",
":",
"sf",
"[",
"col",
"]",
"=",
"[",
"val",
"]",
"else",
":",
"raise",
"TypeError",
"(",
"'Vertices type {} is Not supported.'",
".",
"format",
"(",
"type",
"(",
"ls",
")",
")",
")",
"return",
"sf"
] | Convert a list of vertices into an SFrame. | [
"Convert",
"a",
"list",
"of",
"vertices",
"into",
"an",
"SFrame",
"."
] | 74514c3f99e25b46f22c6e02977fe3da69221c2e | https://github.com/apple/turicreate/blob/74514c3f99e25b46f22c6e02977fe3da69221c2e/src/unity/python/turicreate/data_structures/sgraph.py#L1240-L1260 | train |
apple/turicreate | src/unity/python/turicreate/data_structures/sgraph.py | _edge_list_to_dataframe | def _edge_list_to_dataframe(ls, src_column_name, dst_column_name):
"""
Convert a list of edges into dataframe.
"""
    assert HAS_PANDAS, 'Cannot use dataframe because Pandas is not available or version is too low.'
    cols = reduce(set.union, (set(e.attr.keys()) for e in ls))
    df = pd.DataFrame({
        src_column_name: [e.src_vid for e in ls],
        dst_column_name: [e.dst_vid for e in ls]})
    for c in cols:
        df[c] = [e.attr.get(c) for e in ls]
    return df
"""
Convert a list of edges into dataframe.
"""
assert HAS_PANDAS, 'Cannot use dataframe because Pandas is not available or version is too low.'
cols = reduce(set.union, (set(e.attr.keys()) for e in ls))
df = pd.DataFrame({
src_column_name: [e.src_vid for e in ls],
dst_column_name: [e.dst_vid for e in ls]})
for c in cols:
df[c] = [e.attr.get(c) for e in ls]
return df | [
"def",
"_edge_list_to_dataframe",
"(",
"ls",
",",
"src_column_name",
",",
"dst_column_name",
")",
":",
"assert",
"HAS_PANDAS",
",",
"'Cannot use dataframe because Pandas is not available or version is too low.'",
"cols",
"=",
"reduce",
"(",
"set",
".",
"union",
",",
"(",
"set",
"(",
"e",
".",
"attr",
".",
"keys",
"(",
")",
")",
"for",
"e",
"in",
"ls",
")",
")",
"df",
"=",
"pd",
".",
"DataFrame",
"(",
"{",
"src_column_name",
":",
"[",
"e",
".",
"src_vid",
"for",
"e",
"in",
"ls",
"]",
",",
"dst_column_name",
":",
"[",
"e",
".",
"dst_vid",
"for",
"e",
"in",
"ls",
"]",
"}",
")",
"for",
"c",
"in",
"cols",
":",
"df",
"[",
"c",
"]",
"=",
"[",
"e",
".",
"attr",
".",
"get",
"(",
"c",
")",
"for",
"e",
"in",
"ls",
"]",
"return",
"df"
] | Convert a list of edges into dataframe. | [
"Convert",
"a",
"list",
"of",
"edges",
"into",
"dataframe",
"."
] | 74514c3f99e25b46f22c6e02977fe3da69221c2e | https://github.com/apple/turicreate/blob/74514c3f99e25b46f22c6e02977fe3da69221c2e/src/unity/python/turicreate/data_structures/sgraph.py#L1262-L1273 | train |
apple/turicreate | src/unity/python/turicreate/data_structures/sgraph.py | _edge_list_to_sframe | def _edge_list_to_sframe(ls, src_column_name, dst_column_name):
"""
Convert a list of edges into an SFrame.
"""
    sf = SFrame()
    if type(ls) == list:
        cols = reduce(set.union, (set(e.attr.keys()) for e in ls))
        sf[src_column_name] = [e.src_vid for e in ls]
        sf[dst_column_name] = [e.dst_vid for e in ls]
        for c in cols:
            sf[c] = [e.attr.get(c) for e in ls]
    elif type(ls) == Edge:
        sf[src_column_name] = [ls.src_vid]
        sf[dst_column_name] = [ls.dst_vid]
    else:
        raise TypeError('Edges type {} is not supported.'.format(type(ls)))
    return sf
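The edge variant follows the same shape as the vertex helper, except that two ID columns (source and destination) are emitted before the attribute columns. A pandas-free sketch of that column construction, using `(src, dst, attrs)` triples in place of `Edge` objects (all names here are illustrative):

```python
from functools import reduce

def edges_to_columns(edges, src_col='__src_id', dst_col='__dst_id'):
    """edges: list of (src, dst, attrs) triples -> dict-of-lists."""
    # Union of attribute keys across all edges; set() initializer
    # makes an empty edge list safe.
    cols = reduce(set.union, (set(attrs) for _, _, attrs in edges), set())
    table = {src_col: [s for s, _, _ in edges],
             dst_col: [d for _, d, _ in edges]}
    for c in sorted(cols):
        # Missing attributes become None, as with dict.get upstream.
        table[c] = [attrs.get(c) for _, _, attrs in edges]
    return table
```

As with the vertex helper, every attribute observed on any edge becomes a full-length column.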
"""
Convert a list of edges into an SFrame.
"""
sf = SFrame()
if type(ls) == list:
cols = reduce(set.union, (set(v.attr.keys()) for v in ls))
sf[src_column_name] = [e.src_vid for e in ls]
sf[dst_column_name] = [e.dst_vid for e in ls]
for c in cols:
sf[c] = [e.attr.get(c) for e in ls]
elif type(ls) == Edge:
sf[src_column_name] = [ls.src_vid]
sf[dst_column_name] = [ls.dst_vid]
else:
raise TypeError('Edges type {} is Not supported.'.format(type(ls)))
return sf | [
"def",
"_edge_list_to_sframe",
"(",
"ls",
",",
"src_column_name",
",",
"dst_column_name",
")",
":",
"sf",
"=",
"SFrame",
"(",
")",
"if",
"type",
"(",
"ls",
")",
"==",
"list",
":",
"cols",
"=",
"reduce",
"(",
"set",
".",
"union",
",",
"(",
"set",
"(",
"v",
".",
"attr",
".",
"keys",
"(",
")",
")",
"for",
"v",
"in",
"ls",
")",
")",
"sf",
"[",
"src_column_name",
"]",
"=",
"[",
"e",
".",
"src_vid",
"for",
"e",
"in",
"ls",
"]",
"sf",
"[",
"dst_column_name",
"]",
"=",
"[",
"e",
".",
"dst_vid",
"for",
"e",
"in",
"ls",
"]",
"for",
"c",
"in",
"cols",
":",
"sf",
"[",
"c",
"]",
"=",
"[",
"e",
".",
"attr",
".",
"get",
"(",
"c",
")",
"for",
"e",
"in",
"ls",
"]",
"elif",
"type",
"(",
"ls",
")",
"==",
"Edge",
":",
"sf",
"[",
"src_column_name",
"]",
"=",
"[",
"ls",
".",
"src_vid",
"]",
"sf",
"[",
"dst_column_name",
"]",
"=",
"[",
"ls",
".",
"dst_vid",
"]",
"else",
":",
"raise",
"TypeError",
"(",
"'Edges type {} is Not supported.'",
".",
"format",
"(",
"type",
"(",
"ls",
")",
")",
")",
"return",
"sf"
] | Convert a list of edges into an SFrame. | [
"Convert",
"a",
"list",
"of",
"edges",
"into",
"an",
"SFrame",
"."
] | 74514c3f99e25b46f22c6e02977fe3da69221c2e | https://github.com/apple/turicreate/blob/74514c3f99e25b46f22c6e02977fe3da69221c2e/src/unity/python/turicreate/data_structures/sgraph.py#L1275-L1295 | train |
apple/turicreate | src/unity/python/turicreate/data_structures/sgraph.py | _dataframe_to_vertex_list | def _dataframe_to_vertex_list(df):
"""
Convert dataframe into list of vertices, assuming that vertex ids are stored in _VID_COLUMN.
"""
    cols = df.columns
    if len(cols):
        assert _VID_COLUMN in cols, "Vertex DataFrame must contain column %s" % _VID_COLUMN
        df = df[cols].T
        ret = [Vertex(None, _series=df[col]) for col in df]
        return ret
    else:
        return []
"""
Convert dataframe into list of vertices, assuming that vertex ids are stored in _VID_COLUMN.
"""
cols = df.columns
if len(cols):
assert _VID_COLUMN in cols, "Vertex DataFrame must contain column %s" % _VID_COLUMN
df = df[cols].T
ret = [Vertex(None, _series=df[col]) for col in df]
return ret
else:
return [] | [
"def",
"_dataframe_to_vertex_list",
"(",
"df",
")",
":",
"cols",
"=",
"df",
".",
"columns",
"if",
"len",
"(",
"cols",
")",
":",
"assert",
"_VID_COLUMN",
"in",
"cols",
",",
"\"Vertex DataFrame must contain column %s\"",
"%",
"_VID_COLUMN",
"df",
"=",
"df",
"[",
"cols",
"]",
".",
"T",
"ret",
"=",
"[",
"Vertex",
"(",
"None",
",",
"_series",
"=",
"df",
"[",
"col",
"]",
")",
"for",
"col",
"in",
"df",
"]",
"return",
"ret",
"else",
":",
"return",
"[",
"]"
] | Convert dataframe into list of vertices, assuming that vertex ids are stored in _VID_COLUMN. | [
"Convert",
"dataframe",
"into",
"list",
"of",
"vertices",
"assuming",
"that",
"vertex",
"ids",
"are",
"stored",
"in",
"_VID_COLUMN",
"."
] | 74514c3f99e25b46f22c6e02977fe3da69221c2e | https://github.com/apple/turicreate/blob/74514c3f99e25b46f22c6e02977fe3da69221c2e/src/unity/python/turicreate/data_structures/sgraph.py#L1297-L1308 | train |
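The function above transposes the DataFrame and then iterates the transposed frame's columns, so each column it visits is one original row as a pandas Series. A minimal sketch of that transpose-and-iterate pattern, with illustrative column names (not the turicreate API):

```python
import pandas as pd

# After df.T, each column of the transposed frame is one original row,
# so iterating columns yields one Series per vertex record.
df = pd.DataFrame({"__id": [0, 1], "gender": ["M", "F"]})
dft = df.T
rows = [dft[col] for col in dft]  # one Series per original row
```

Each Series is then indexed by the original column names, e.g. `rows[0]["__id"]`.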
apple/turicreate | src/unity/python/turicreate/data_structures/sgraph.py | _dataframe_to_edge_list | python

def _dataframe_to_edge_list(df):
    """
    Convert dataframe into list of edges, assuming that source and target ids are stored in _SRC_VID_COLUMN, and _DST_VID_COLUMN respectively.
    """
    cols = df.columns
    if len(cols):
        assert _SRC_VID_COLUMN in cols, "Vertex DataFrame must contain column %s" % _SRC_VID_COLUMN
        assert _DST_VID_COLUMN in cols, "Vertex DataFrame must contain column %s" % _DST_VID_COLUMN
        df = df[cols].T
        ret = [Edge(None, None, _series=df[col]) for col in df]
        return ret
    else:
        return []

74514c3f99e25b46f22c6e02977fe3da69221c2e | https://github.com/apple/turicreate/blob/74514c3f99e25b46f22c6e02977fe3da69221c2e/src/unity/python/turicreate/data_structures/sgraph.py#L1311-L1323 | train
apple/turicreate | src/unity/python/turicreate/data_structures/sgraph.py | _vertex_data_to_sframe | python

def _vertex_data_to_sframe(data, vid_field):
    """
    Convert data into a vertex data sframe. Using vid_field to identify the id
    column. The returned sframe will have id column name '__id'.
    """
    if isinstance(data, SFrame):
        # '__id' already in the sframe, and it is ok to not specify vid_field
        if vid_field is None and _VID_COLUMN in data.column_names():
            return data
        if vid_field is None:
            raise ValueError("vid_field must be specified for SFrame input")
        data_copy = copy.copy(data)
        data_copy.rename({vid_field: _VID_COLUMN}, inplace=True)
        return data_copy

    if type(data) == Vertex or type(data) == list:
        return _vertex_list_to_sframe(data, '__id')

    elif HAS_PANDAS and type(data) == pd.DataFrame:
        if vid_field is None:
            # using the dataframe index as vertex id
            if data.index.is_unique:
                if not ("index" in data.columns):
                    # pandas reset_index() will insert a new column of name "index".
                    sf = SFrame(data.reset_index())  # "index"
                    sf.rename({'index': _VID_COLUMN}, inplace=True)
                    return sf
                else:
                    # pandas reset_index() will insert a new column of name "level_0" if there exists a column named "index".
                    sf = SFrame(data.reset_index())  # "level_0"
                    sf.rename({'level_0': _VID_COLUMN}, inplace=True)
                    return sf
            else:
                raise ValueError("Index of the vertices dataframe is not unique, \
                        try specifying vid_field name to use a column for vertex ids.")
        else:
            sf = SFrame(data)
            if _VID_COLUMN in sf.column_names():
                raise ValueError('%s reserved vid column name already exists in the SFrame' % _VID_COLUMN)
            sf.rename({vid_field: _VID_COLUMN}, inplace=True)
            return sf
    else:
        raise TypeError('Vertices type %s is Not supported.' % str(type(data)))

74514c3f99e25b46f22c6e02977fe3da69221c2e | https://github.com/apple/turicreate/blob/74514c3f99e25b46f22c6e02977fe3da69221c2e/src/unity/python/turicreate/data_structures/sgraph.py#L1326-L1368 | train
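The branch that uses the DataFrame index as the vertex id depends on how pandas names the promoted index column. A hedged sketch of that `reset_index()` behavior, assuming current pandas semantics: the new column is named "index" unless a column with that name already exists, in which case pandas falls back to "level_0".

```python
import pandas as pd

# Promote the index to a column and inspect what pandas names it.
plain = pd.DataFrame({"x": [10, 20]})          # no "index" column
taken = pd.DataFrame({"index": ["a", "b"]})    # "index" name already taken
col_plain = plain.reset_index().columns[0]     # promoted column's name
col_taken = taken.reset_index().columns[0]     # fallback name
```

This is why the function renames either 'index' or 'level_0' to `_VID_COLUMN`, depending on which branch it took.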
apple/turicreate | src/unity/python/turicreate/data_structures/sgraph.py | _edge_data_to_sframe | python

def _edge_data_to_sframe(data, src_field, dst_field):
    """
    Convert data into an edge data sframe. Using src_field and dst_field to
    identify the source and target id column. The returned sframe will have id
    column name '__src_id', '__dst_id'
    """
    if isinstance(data, SFrame):
        # '__src_vid' and '__dst_vid' already in the sframe, and
        # it is ok to not specify src_field and dst_field
        if src_field is None and dst_field is None and \
           _SRC_VID_COLUMN in data.column_names() and \
           _DST_VID_COLUMN in data.column_names():
            return data
        if src_field is None:
            raise ValueError("src_field must be specified for SFrame input")
        if dst_field is None:
            raise ValueError("dst_field must be specified for SFrame input")
        data_copy = copy.copy(data)
        if src_field == _DST_VID_COLUMN and dst_field == _SRC_VID_COLUMN:
            # special case when src_field = "__dst_id" and dst_field = "__src_id"
            # directly renaming will cause name collision
            dst_id_column = data_copy[_DST_VID_COLUMN]
            del data_copy[_DST_VID_COLUMN]
            data_copy.rename({_SRC_VID_COLUMN: _DST_VID_COLUMN}, inplace=True)
            data_copy[_SRC_VID_COLUMN] = dst_id_column
        else:
            data_copy.rename({src_field: _SRC_VID_COLUMN, dst_field: _DST_VID_COLUMN}, inplace=True)
        return data_copy
    elif HAS_PANDAS and type(data) == pd.DataFrame:
        if src_field is None:
            raise ValueError("src_field must be specified for Pandas input")
        if dst_field is None:
            raise ValueError("dst_field must be specified for Pandas input")
        sf = SFrame(data)
        if src_field == _DST_VID_COLUMN and dst_field == _SRC_VID_COLUMN:
            # special case when src_field = "__dst_id" and dst_field = "__src_id"
            # directly renaming will cause name collision
            dst_id_column = sf[_DST_VID_COLUMN]
            del sf[_DST_VID_COLUMN]
            sf.rename({_SRC_VID_COLUMN: _DST_VID_COLUMN}, inplace=True)
            sf[_SRC_VID_COLUMN] = dst_id_column
        else:
            sf.rename({src_field: _SRC_VID_COLUMN, dst_field: _DST_VID_COLUMN}, inplace=True)
        return sf
    elif type(data) == Edge:
        return _edge_list_to_sframe([data], _SRC_VID_COLUMN, _DST_VID_COLUMN)
    elif type(data) == list:
        return _edge_list_to_sframe(data, _SRC_VID_COLUMN, _DST_VID_COLUMN)
    else:
        raise TypeError('Edges type %s is Not supported.' % str(type(data)))

74514c3f99e25b46f22c6e02977fe3da69221c2e | https://github.com/apple/turicreate/blob/74514c3f99e25b46f22c6e02977fe3da69221c2e/src/unity/python/turicreate/data_structures/sgraph.py#L1371-L1424 | train
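The "name collision" special case above stashes one column, deletes it, renames the other, and re-inserts the stashed data under the freed name, because a direct two-way rename of `__src_id` and `__dst_id` into each other would collide. A stdlib-only sketch of that stash/delete/rename sequence, using a plain dict-of-lists as a stand-in table (illustrative names, not the turicreate implementation):

```python
def swap_columns(table, a, b):
    """Swap the data of columns a and b without a two-way rename collision."""
    stashed = table.pop(b)    # stash column b and remove it from the table
    table[b] = table.pop(a)   # rename a -> b, now that b's slot is free
    table[a] = stashed        # re-insert the stashed data under name a
    return table

t = {"__src_id": [1, 2], "__dst_id": [3, 4]}
swap_columns(t, "__src_id", "__dst_id")
```

After the swap, `__src_id` holds the old destination ids and `__dst_id` the old source ids.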
apple/turicreate | src/unity/python/turicreate/data_structures/sgraph.py | SGraph.get_vertices | python

def get_vertices(self, ids=[], fields={}, format='sframe'):
    """
    get_vertices(self, ids=list(), fields={}, format='sframe')

    Return a collection of vertices and their attributes.

    Parameters
    ----------
    ids : list [int | float | str] or SArray
        List of vertex IDs to retrieve. Only vertices in this list will be
        returned. Also accepts a single vertex id.

    fields : dict | pandas.DataFrame
        Dictionary specifying equality constraint on field values. For
        example ``{'gender': 'M'}``, returns only vertices whose 'gender'
        field is 'M'. ``None`` can be used to designate a wild card. For
        example, {'relationship': None} will find all vertices with the
        field 'relationship' regardless of the value.

    format : {'sframe', 'list'}
        Output format. The SFrame output (default) contains a column
        ``__id`` with vertex IDs and a column for each vertex attribute.
        List output returns a list of Vertex objects.

    Returns
    -------
    out : SFrame or list [Vertex]
        An SFrame or list of Vertex objects.

    See Also
    --------
    vertices, get_edges

    Examples
    --------
    Return all vertices in the graph.

    >>> from turicreate import SGraph, Vertex
    >>> g = SGraph().add_vertices([Vertex(0, attr={'gender': 'M'}),
                                   Vertex(1, attr={'gender': 'F'}),
                                   Vertex(2, attr={'gender': 'F'})])
    >>> g.get_vertices()
    +------+--------+
    | __id | gender |
    +------+--------+
    |  0   |   M    |
    |  2   |   F    |
    |  1   |   F    |
    +------+--------+

    Return vertices 0 and 2.

    >>> g.get_vertices(ids=[0, 2])
    +------+--------+
    | __id | gender |
    +------+--------+
    |  0   |   M    |
    |  2   |   F    |
    +------+--------+

    Return vertices with the vertex attribute "gender" equal to "M".

    >>> g.get_vertices(fields={'gender': 'M'})
    +------+--------+
    | __id | gender |
    +------+--------+
    |  0   |   M    |
    +------+--------+
    """
    if not _is_non_string_iterable(ids):
        ids = [ids]

    if type(ids) not in (list, SArray):
        raise TypeError('ids must be list or SArray type')

    with cython_context():
        sf = SFrame(_proxy=self.__proxy__.get_vertices(ids, fields))

    if (format == 'sframe'):
        return sf
    elif (format == 'dataframe'):
        assert HAS_PANDAS, 'Cannot use dataframe because Pandas is not available or version is too low.'
        if sf.num_rows() == 0:
            return pd.DataFrame()
        else:
            df = sf.head(sf.num_rows()).to_dataframe()
            return df.set_index('__id')
    elif (format == 'list'):
        return _dataframe_to_vertex_list(sf.to_dataframe())
    else:
        raise ValueError("Invalid format specifier")

74514c3f99e25b46f22c6e02977fe3da69221c2e | https://github.com/apple/turicreate/blob/74514c3f99e25b46f22c6e02977fe3da69221c2e/src/unity/python/turicreate/data_structures/sgraph.py#L387-L478 | train
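The `fields` constraint described in the docstring requires an exact match on each value, while `None` acts as a wild card that only requires the field to be present. A stdlib-only sketch of those semantics over plain dict records (illustrative names, not the turicreate implementation):

```python
def match_fields(record, fields):
    """True if every constraint matches; None only requires the key to exist."""
    return all(
        key in record and (value is None or record[key] == value)
        for key, value in fields.items()
    )

people = [
    {"__id": 0, "gender": "M"},
    {"__id": 1, "gender": "F"},
    {"__id": 2},  # no 'gender' field at all
]
males = [r for r in people if match_fields(r, {"gender": "M"})]
anyone_with_gender = [r for r in people if match_fields(r, {"gender": None})]
```

Note that the wild card still excludes vertex 2, which lacks the field entirely.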
apple/turicreate | src/unity/python/turicreate/data_structures/sgraph.py | SGraph.get_edges | def get_edges(self, src_ids=[], dst_ids=[], fields={}, format='sframe'):
"""
get_edges(self, src_ids=list(), dst_ids=list(), fields={}, format='sframe')
Return a collection of edges and their attributes. This function is used
to find edges by vertex IDs, filter on edge attributes, or list in-out
neighbors of vertex sets.
Parameters
----------
src_ids, dst_ids : list or SArray, optional
Parallel arrays of vertex IDs, with each pair corresponding to an
edge to fetch. Only edges in this list are returned. ``None`` can be
used to designate a wild card. For instance, ``src_ids=[1, 2,
None]``, ``dst_ids=[3, None, 5]`` will fetch the edge 1->3, all
outgoing edges of 2 and all incoming edges of 5. src_id and dst_id
may be left empty, which implies an array of all wild cards.
fields : dict, optional
Dictionary specifying equality constraints on field values. For
example, ``{'relationship': 'following'}``, returns only edges whose
'relationship' field equals 'following'. ``None`` can be used as a
value to designate a wild card. e.g. ``{'relationship': None}`` will
find all edges with the field 'relationship' regardless of the
value.
format : {'sframe', 'list'}, optional
Output format. The 'sframe' output (default) contains columns
__src_id and __dst_id with edge vertex IDs and a column for each
edge attribute. List output returns a list of Edge objects.
Returns
-------
out : SFrame | list [Edge]
An SFrame or list of edges.
See Also
--------
edges, get_vertices
Examples
--------
Return all edges in the graph.
>>> from turicreate import SGraph, Edge
>>> g = SGraph().add_edges([Edge(0, 1, attr={'rating': 5}),
Edge(0, 2, attr={'rating': 2}),
Edge(1, 2)])
>>> g.get_edges(src_ids=[None], dst_ids=[None])
+----------+----------+--------+
| __src_id | __dst_id | rating |
+----------+----------+--------+
| 0 | 2 | 2 |
| 0 | 1 | 5 |
| 1 | 2 | None |
+----------+----------+--------+
Return edges with the attribute "rating" of 5.
>>> g.get_edges(fields={'rating': 5})
+----------+----------+--------+
| __src_id | __dst_id | rating |
+----------+----------+--------+
| 0 | 1 | 5 |
+----------+----------+--------+
Return edges 0 --> 1 and 1 --> 2 (if present in the graph).
>>> g.get_edges(src_ids=[0, 1], dst_ids=[1, 2])
+----------+----------+--------+
| __src_id | __dst_id | rating |
+----------+----------+--------+
| 0 | 1 | 5 |
| 1 | 2 | None |
+----------+----------+--------+
"""
if not _is_non_string_iterable(src_ids):
src_ids = [src_ids]
if not _is_non_string_iterable(dst_ids):
dst_ids = [dst_ids]
if type(src_ids) not in (list, SArray):
raise TypeError('src_ids must be list or SArray type')
if type(dst_ids) not in (list, SArray):
raise TypeError('dst_ids must be list or SArray type')
# implicit Nones
if len(src_ids) == 0 and len(dst_ids) > 0:
src_ids = [None] * len(dst_ids)
# implicit Nones
if len(dst_ids) == 0 and len(src_ids) > 0:
dst_ids = [None] * len(src_ids)
with cython_context():
sf = SFrame(_proxy=self.__proxy__.get_edges(src_ids, dst_ids, fields))
if (format == 'sframe'):
return sf
if (format == 'dataframe'):
assert HAS_PANDAS, 'Cannot use dataframe because Pandas is not available or version is too low.'
if sf.num_rows() == 0:
return pd.DataFrame()
else:
return sf.head(sf.num_rows()).to_dataframe()
elif (format == 'list'):
return _dataframe_to_edge_list(sf.to_dataframe())
else:
raise ValueError("Invalid format specifier") | python | def get_edges(self, src_ids=[], dst_ids=[], fields={}, format='sframe'):
"""
get_edges(self, src_ids=list(), dst_ids=list(), fields={}, format='sframe')
Return a collection of edges and their attributes. This function is used
to find edges by vertex IDs, filter on edge attributes, or list in-out
neighbors of vertex sets.
Parameters
----------
src_ids, dst_ids : list or SArray, optional
Parallel arrays of vertex IDs, with each pair corresponding to an
edge to fetch. Only edges in this list are returned. ``None`` can be
used to designate a wild card. For instance, ``src_ids=[1, 2,
None]``, ``dst_ids=[3, None, 5]`` will fetch the edge 1->3, all
outgoing edges of 2 and all incoming edges of 5. src_id and dst_id
may be left empty, which implies an array of all wild cards.
fields : dict, optional
Dictionary specifying equality constraints on field values. For
example, ``{'relationship': 'following'}``, returns only edges whose
'relationship' field equals 'following'. ``None`` can be used as a
value to designate a wild card. e.g. ``{'relationship': None}`` will
find all edges with the field 'relationship' regardless of the
value.
format : {'sframe', 'list'}, optional
Output format. The 'sframe' output (default) contains columns
__src_id and __dst_id with edge vertex IDs and a column for each
edge attribute. List output returns a list of Edge objects.
Returns
-------
out : SFrame | list [Edge]
An SFrame or list of edges.
See Also
--------
edges, get_vertices
Examples
--------
Return all edges in the graph.
>>> from turicreate import SGraph, Edge
>>> g = SGraph().add_edges([Edge(0, 1, attr={'rating': 5}),
Edge(0, 2, attr={'rating': 2}),
Edge(1, 2)])
>>> g.get_edges(src_ids=[None], dst_ids=[None])
+----------+----------+--------+
| __src_id | __dst_id | rating |
+----------+----------+--------+
| 0 | 2 | 2 |
| 0 | 1 | 5 |
| 1 | 2 | None |
+----------+----------+--------+
Return edges with the attribute "rating" of 5.
>>> g.get_edges(fields={'rating': 5})
+----------+----------+--------+
| __src_id | __dst_id | rating |
+----------+----------+--------+
| 0 | 1 | 5 |
+----------+----------+--------+
Return edges 0 --> 1 and 1 --> 2 (if present in the graph).
>>> g.get_edges(src_ids=[0, 1], dst_ids=[1, 2])
+----------+----------+--------+
| __src_id | __dst_id | rating |
+----------+----------+--------+
| 0 | 1 | 5 |
| 1 | 2 | None |
+----------+----------+--------+
"""
if not _is_non_string_iterable(src_ids):
src_ids = [src_ids]
if not _is_non_string_iterable(dst_ids):
dst_ids = [dst_ids]
if type(src_ids) not in (list, SArray):
raise TypeError('src_ids must be list or SArray type')
if type(dst_ids) not in (list, SArray):
raise TypeError('dst_ids must be list or SArray type')
# implicit Nones
if len(src_ids) == 0 and len(dst_ids) > 0:
src_ids = [None] * len(dst_ids)
# implicit Nones
if len(dst_ids) == 0 and len(src_ids) > 0:
dst_ids = [None] * len(src_ids)
with cython_context():
sf = SFrame(_proxy=self.__proxy__.get_edges(src_ids, dst_ids, fields))
if (format == 'sframe'):
return sf
if (format == 'dataframe'):
assert HAS_PANDAS, 'Cannot use dataframe because Pandas is not available or version is too low.'
if sf.num_rows() == 0:
return pd.DataFrame()
else:
return sf.head(sf.num_rows()).to_dataframe()
elif (format == 'list'):
return _dataframe_to_edge_list(sf.to_dataframe())
else:
raise ValueError("Invalid format specifier")
74514c3f99e25b46f22c6e02977fe3da69221c2e | https://github.com/apple/turicreate/blob/74514c3f99e25b46f22c6e02977fe3da69221c2e/src/unity/python/turicreate/data_structures/sgraph.py#L480-L587 | train |
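The scalar promotion and "implicit Nones" branches at the top of `get_edges` can be sketched as a standalone pure-Python helper. This is an illustration only: `normalize_edge_queries` is a hypothetical name, not part of turicreate, and plain lists stand in for SArray.

```python
def normalize_edge_queries(src_ids, dst_ids):
    """Mirror get_edges' argument normalization (plain lists only)."""
    # Scalars are promoted to single-element lists, as the
    # _is_non_string_iterable checks do in get_edges.
    if not isinstance(src_ids, (list, tuple)):
        src_ids = [src_ids]
    if not isinstance(dst_ids, (list, tuple)):
        dst_ids = [dst_ids]
    # "Implicit Nones": an empty side is padded with wildcards so the
    # two arrays stay parallel, one (src, dst) pair per edge query.
    if len(src_ids) == 0 and len(dst_ids) > 0:
        src_ids = [None] * len(dst_ids)
    if len(dst_ids) == 0 and len(src_ids) > 0:
        dst_ids = [None] * len(src_ids)
    return list(src_ids), list(dst_ids)
```

Under this reading, `get_edges(src_ids=[0, 1])` behaves like `src_ids=[0, 1], dst_ids=[None, None]`: all outgoing edges of vertices 0 and 1.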
apple/turicreate | src/unity/python/turicreate/data_structures/sgraph.py | SGraph.add_vertices | def add_vertices(self, vertices, vid_field=None):
"""
Add vertices to the SGraph. Vertices should be input as a list of
:class:`~turicreate.Vertex` objects, an :class:`~turicreate.SFrame`, or a
pandas DataFrame. If vertices are specified by SFrame or DataFrame,
``vid_field`` specifies which column contains the vertex ID. Remaining
columns are assumed to hold additional vertex attributes. If these
attributes are not already present in the graph's vertex data, they are
added, with existing vertices acquiring the value ``None``.
Parameters
----------
vertices : Vertex | list [Vertex] | pandas.DataFrame | SFrame
Vertex data. If the vertices are in an SFrame or DataFrame, then
``vid_field`` specifies the column containing the vertex IDs.
Additional columns are treated as vertex attributes.
vid_field : string, optional
Column in the DataFrame or SFrame to use as vertex ID. Required if
vertices is an SFrame. If ``vertices`` is a DataFrame and
``vid_field`` is not specified, the row index is used as vertex ID.
Returns
-------
out : SGraph
A new SGraph with vertices added.
See Also
--------
vertices, SFrame, add_edges
Notes
-----
- If vertices are added with indices that already exist in the graph,
they are overwritten completely. All attributes for these vertices
will conform to the specification in this method.
Examples
--------
>>> from turicreate import SGraph, Vertex, SFrame
>>> g = SGraph()
Add a single vertex.
>>> g = g.add_vertices(Vertex(0, attr={'breed': 'labrador'}))
Add a list of vertices.
>>> verts = [Vertex(0, attr={'breed': 'labrador'}),
Vertex(1, attr={'breed': 'labrador'}),
Vertex(2, attr={'breed': 'vizsla'})]
>>> g = g.add_vertices(verts)
Add vertices from an SFrame.
>>> sf_vert = SFrame({'id': [0, 1, 2], 'breed':['lab', 'lab', 'vizsla']})
>>> g = g.add_vertices(sf_vert, vid_field='id')
"""
sf = _vertex_data_to_sframe(vertices, vid_field)
with cython_context():
proxy = self.__proxy__.add_vertices(sf.__proxy__, _VID_COLUMN)
return SGraph(_proxy=proxy)
74514c3f99e25b46f22c6e02977fe3da69221c2e | https://github.com/apple/turicreate/blob/74514c3f99e25b46f22c6e02977fe3da69221c2e/src/unity/python/turicreate/data_structures/sgraph.py#L589-L652 | train |
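The overwrite semantics described in the Notes of `add_vertices` (re-added vertex IDs are replaced completely; attributes new to the graph are back-filled with `None` on existing vertices) can be sketched with plain dicts. This is an illustrative stand-in, not turicreate code; `merge_vertices` and the `{vid: attrs}` layout are assumptions.

```python
def merge_vertices(vdata, new_verts):
    """Dict-based sketch of SGraph.add_vertices' merge semantics."""
    # Collect every attribute name seen on old or new vertices.
    all_fields = set()
    for attrs in list(vdata.values()) + list(new_verts.values()):
        all_fields.update(attrs)
    # Re-added IDs are overwritten completely, not merged field-by-field.
    merged = dict(vdata)
    merged.update(new_verts)
    # Any vertex missing a field acquires None for it.
    return {vid: {f: attrs.get(f) for f in all_fields}
            for vid, attrs in merged.items()}
```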
apple/turicreate | src/unity/python/turicreate/data_structures/sgraph.py | SGraph.add_edges | def add_edges(self, edges, src_field=None, dst_field=None):
"""
Add edges to the SGraph. Edges should be input as a list of
:class:`~turicreate.Edge` objects, an :class:`~turicreate.SFrame`, or a
Pandas DataFrame. If the new edges are in an SFrame or DataFrame, then
``src_field`` and ``dst_field`` are required to specify the columns that
contain the source and destination vertex IDs; additional columns are
treated as edge attributes. If these attributes are not already present
in the graph's edge data, they are added, with existing edges acquiring
the value ``None``.
Parameters
----------
edges : Edge | list [Edge] | pandas.DataFrame | SFrame
Edge data. If the edges are in an SFrame or DataFrame, then
``src_field`` and ``dst_field`` are required to specify the columns
that contain the source and destination vertex IDs. Additional
columns are treated as edge attributes.
src_field : string, optional
Column in the SFrame or DataFrame to use as source vertex IDs. Not
required if ``edges`` is a list.
dst_field : string, optional
Column in the SFrame or Pandas DataFrame to use as destination
vertex IDs. Not required if ``edges`` is a list.
Returns
-------
out : SGraph
A new SGraph with `edges` added.
See Also
--------
edges, SFrame, add_vertices
Notes
-----
- If an edge is added whose source and destination IDs match edges that
already exist in the graph, a new edge is added to the graph. This
contrasts with :py:func:`add_vertices`, which overwrites existing
vertices.
Examples
--------
>>> from turicreate import SGraph, Vertex, Edge, SFrame
>>> g = SGraph()
>>> verts = [Vertex(0, attr={'breed': 'labrador'}),
Vertex(1, attr={'breed': 'labrador'}),
Vertex(2, attr={'breed': 'vizsla'})]
>>> g = g.add_vertices(verts)
Add a single edge.
>>> g = g.add_edges(Edge(1, 2))
Add a list of edges.
>>> g = g.add_edges([Edge(0, 2), Edge(1, 2)])
Add edges from an SFrame.
>>> sf_edge = SFrame({'source': [0, 1], 'dest': [2, 2]})
>>> g = g.add_edges(sf_edge, src_field='source', dst_field='dest')
"""
sf = _edge_data_to_sframe(edges, src_field, dst_field)
with cython_context():
proxy = self.__proxy__.add_edges(sf.__proxy__, _SRC_VID_COLUMN, _DST_VID_COLUMN)
return SGraph(_proxy=proxy)
74514c3f99e25b46f22c6e02977fe3da69221c2e | https://github.com/apple/turicreate/blob/74514c3f99e25b46f22c6e02977fe3da69221c2e/src/unity/python/turicreate/data_structures/sgraph.py#L654-L724 | train |
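The Notes of `add_edges` draw the key contrast with `add_vertices`: edges whose (src, dst) pair already exists are appended, never overwritten, so an SGraph behaves as a multigraph. A pure-Python sketch of that behavior, with the hypothetical helper `append_edges` and tuples standing in for Edge objects:

```python
def append_edges(edges, new_edges):
    """Sketch of SGraph.add_edges' append + None back-fill semantics."""
    # Duplicated (src, dst) pairs are kept: each call appends.
    combined = list(edges) + list(new_edges)
    # Attributes new to the graph are back-filled with None on all edges.
    all_fields = set()
    for _src, _dst, attrs in combined:
        all_fields.update(attrs)
    return [(src, dst, {f: attrs.get(f) for f in all_fields})
            for src, dst, attrs in combined]
```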
apple/turicreate | src/unity/python/turicreate/data_structures/sgraph.py | SGraph.select_fields | def select_fields(self, fields):
"""
Return a new SGraph with only the selected fields. Other fields are
discarded; requesting a field that does not exist in the SGraph raises a ValueError.
Parameters
----------
fields : string | list [string]
A single field name or a list of field names to select.
Returns
-------
out : SGraph
A new graph whose vertex and edge data are projected to the selected
fields.
See Also
--------
get_fields, get_vertex_fields, get_edge_fields
Examples
--------
>>> from turicreate import SGraph, Vertex
>>> verts = [Vertex(0, attr={'breed': 'labrador', 'age': 5}),
Vertex(1, attr={'breed': 'labrador', 'age': 3}),
Vertex(2, attr={'breed': 'vizsla', 'age': 8})]
>>> g = SGraph()
>>> g = g.add_vertices(verts)
>>> g2 = g.select_fields(fields=['breed'])
"""
if (type(fields) is str):
fields = [fields]
if not isinstance(fields, list) or not all(type(x) is str for x in fields):
raise TypeError('\"fields\" must be a str or list[str]')
vfields = self.__proxy__.get_vertex_fields()
efields = self.__proxy__.get_edge_fields()
selected_vfields = []
selected_efields = []
for f in fields:
found = False
if f in vfields:
selected_vfields.append(f)
found = True
if f in efields:
selected_efields.append(f)
found = True
if not found:
raise ValueError('Field \'%s\' not in graph' % f)
with cython_context():
proxy = self.__proxy__
proxy = proxy.select_vertex_fields(selected_vfields)
proxy = proxy.select_edge_fields(selected_efields)
return SGraph(_proxy=proxy)
74514c3f99e25b46f22c6e02977fe3da69221c2e | https://github.com/apple/turicreate/blob/74514c3f99e25b46f22c6e02977fe3da69221c2e/src/unity/python/turicreate/data_structures/sgraph.py#L811-L866 | train |
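The selection loop in `select_fields` partitions the requested names into vertex fields and edge fields (a field may live on both), raising on unknown names. The core can be isolated as a pure function; `partition_fields` is a hypothetical name, not part of turicreate:

```python
def partition_fields(fields, vertex_fields, edge_fields):
    """Sketch of select_fields' vertex/edge field partitioning."""
    if isinstance(fields, str):
        fields = [fields]
    selected_v, selected_e = [], []
    for f in fields:
        found = False
        if f in vertex_fields:      # a field may exist on vertices...
            selected_v.append(f)
            found = True
        if f in edge_fields:        # ...and on edges at the same time
            selected_e.append(f)
            found = True
        if not found:
            raise ValueError("Field '%s' not in graph" % f)
    return selected_v, selected_e
```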
apple/turicreate | src/unity/python/turicreate/data_structures/sgraph.py | SGraph.triple_apply | def triple_apply(self, triple_apply_fn, mutated_fields, input_fields=None):
'''
Apply a transform function to each edge and its associated source and
target vertices in parallel. Each edge is visited once and in parallel.
Modification to vertex data is protected by lock. The effect on the
returned SGraph is equivalent to the following pseudocode:
>>> PARALLEL FOR (source, edge, target) AS triple in G:
... LOCK (triple.source, triple.target)
... (source, edge, target) = triple_apply_fn(triple)
... UNLOCK (triple.source, triple.target)
... END PARALLEL FOR
Parameters
----------
triple_apply_fn : function : (dict, dict, dict) -> (dict, dict, dict)
The function to apply to each triple of (source_vertex, edge,
target_vertex). This function must take as input a tuple of
(source_data, edge_data, target_data) and return a tuple of
(new_source_data, new_edge_data, new_target_data). All variables in
the both tuples must be of dict type.
This can also be a toolkit extension function which is compiled
as a native shared library using SDK.
mutated_fields : list[str] | str
Fields that ``triple_apply_fn`` will mutate. Note: columns that are
actually mutated by the triple apply function but not specified in
``mutated_fields`` will have undetermined effects.
input_fields : list[str] | str, optional
Fields that ``triple_apply_fn`` will have access to.
The default is ``None``, which grants access to all fields.
``mutated_fields`` will always be included in ``input_fields``.
Returns
-------
out : SGraph
A new SGraph with updated vertex and edge data. Only fields
specified in the ``mutated_fields`` parameter are updated.
Notes
-----
- ``triple_apply`` does not currently support creating new fields in the
lambda function.
Examples
--------
Import turicreate and set up the graph.
>>> edges = turicreate.SFrame({'source': range(9), 'dest': range(1, 10)})
>>> g = turicreate.SGraph()
>>> g = g.add_edges(edges, src_field='source', dst_field='dest')
>>> g.vertices['degree'] = 0
Define the function to apply to each (source_node, edge, target_node)
triple.
>>> def degree_count_fn (src, edge, dst):
src['degree'] += 1
dst['degree'] += 1
return (src, edge, dst)
Apply the function to the SGraph.
>>> g = g.triple_apply(degree_count_fn, mutated_fields=['degree'])
Using native toolkit extension function:
.. code-block:: c++
#include <turicreate/sdk/toolkit_function_macros.hpp>
#include <vector>
using namespace turi;
std::vector<variant_type> connected_components_parameterized(
std::map<std::string, flexible_type>& src,
std::map<std::string, flexible_type>& edge,
std::map<std::string, flexible_type>& dst,
std::string column) {
if (src[column] < dst[column]) dst[column] = src[column];
else src[column] = dst[column];
return {to_variant(src), to_variant(edge), to_variant(dst)};
}
BEGIN_FUNCTION_REGISTRATION
REGISTER_FUNCTION(connected_components_parameterized, "src", "edge", "dst", "column");
END_FUNCTION_REGISTRATION
compiled into example.so
>>> from example import connected_components_parameterized as cc
>>> e = tc.SFrame({'__src_id':[1,2,3,4,5], '__dst_id':[3,1,2,5,4]})
>>> g = tc.SGraph().add_edges(e)
>>> g.vertices['cid'] = g.vertices['__id']
>>> for i in range(2):
... g = g.triple_apply(lambda src, edge, dst: cc(src, edge, dst, 'cid'), ['cid'], ['cid'])
>>> g.vertices['cid']
dtype: int
Rows: 5
[4, 1, 1, 1, 4]
'''
assert inspect.isfunction(triple_apply_fn), "Input must be a function"
if not (type(mutated_fields) is list or type(mutated_fields) is str):
raise TypeError('mutated_fields must be str or list of str')
if not (input_fields is None or type(input_fields) is list or type(input_fields) is str):
raise TypeError('input_fields must be str or list of str')
if type(mutated_fields) == str:
mutated_fields = [mutated_fields]
if len(mutated_fields) == 0:
raise ValueError('mutated_fields cannot be empty')
for f in ['__id', '__src_id', '__dst_id']:
if f in mutated_fields:
raise ValueError('mutated_fields cannot contain %s' % f)
all_fields = self.get_fields()
if not set(mutated_fields).issubset(set(all_fields)):
extra_fields = list(set(mutated_fields).difference(set(all_fields)))
raise ValueError('graph does not contain fields: %s' % str(extra_fields))
# select input fields
if input_fields is None:
input_fields = self.get_fields()
elif type(input_fields) is str:
input_fields = [input_fields]
# make input fields a superset of mutated_fields
input_fields_set = set(input_fields + mutated_fields)
input_fields = [x for x in self.get_fields() if x in input_fields_set]
g = self.select_fields(input_fields)
nativefn = None
try:
from .. import extensions
nativefn = extensions._build_native_function_call(triple_apply_fn)
except Exception:
# failures are fine; we just fall through to the lambda path below
pass
if nativefn is not None:
with cython_context():
return SGraph(_proxy=g.__proxy__.lambda_triple_apply_native(nativefn, mutated_fields))
else:
with cython_context():
return SGraph(_proxy=g.__proxy__.lambda_triple_apply(triple_apply_fn, mutated_fields)) | python | def triple_apply(self, triple_apply_fn, mutated_fields, input_fields=None):
'''
Apply a transform function to each edge and its associated source and
target vertices in parallel. Each edge is visited once and in parallel.
Modification to vertex data is protected by lock. The effect on the
returned SGraph is equivalent to the following pseudocode:
>>> PARALLEL FOR (source, edge, target) AS triple in G:
... LOCK (triple.source, triple.target)
... (source, edge, target) = triple_apply_fn(triple)
... UNLOCK (triple.source, triple.target)
... END PARALLEL FOR
Parameters
----------
triple_apply_fn : function : (dict, dict, dict) -> (dict, dict, dict)
The function to apply to each triple of (source_vertex, edge,
target_vertex). This function must take as input a tuple of
(source_data, edge_data, target_data) and return a tuple of
(new_source_data, new_edge_data, new_target_data). All variables in
the both tuples must be of dict type.
This can also be a toolkit extension function which is compiled
as a native shared library using SDK.
mutated_fields : list[str] | str
Fields that ``triple_apply_fn`` will mutate. Note: columns that are
actually mutated by the triple apply function but not specified in
``mutated_fields`` will have undetermined effects.
input_fields : list[str] | str, optional
Fields that ``triple_apply_fn`` will have access to.
The default is ``None``, which grants access to all fields.
``mutated_fields`` will always be included in ``input_fields``.
Returns
-------
out : SGraph
A new SGraph with updated vertex and edge data. Only fields
specified in the ``mutated_fields`` parameter are updated.
Notes
-----
- ``triple_apply`` does not currently support creating new fields in the
lambda function.
Examples
--------
Import turicreate and set up the graph.
>>> edges = turicreate.SFrame({'source': range(9), 'dest': range(1, 10)})
>>> g = turicreate.SGraph()
>>> g = g.add_edges(edges, src_field='source', dst_field='dest')
>>> g.vertices['degree'] = 0
Define the function to apply to each (source_node, edge, target_node)
triple.
        >>> def degree_count_fn(src, edge, dst):
src['degree'] += 1
dst['degree'] += 1
return (src, edge, dst)
Apply the function to the SGraph.
>>> g = g.triple_apply(degree_count_fn, mutated_fields=['degree'])
Using native toolkit extension function:
.. code-block:: c++
#include <turicreate/sdk/toolkit_function_macros.hpp>
#include <vector>
using namespace turi;
std::vector<variant_type> connected_components_parameterized(
std::map<std::string, flexible_type>& src,
std::map<std::string, flexible_type>& edge,
std::map<std::string, flexible_type>& dst,
std::string column) {
if (src[column] < dst[column]) dst[column] = src[column];
else src[column] = dst[column];
return {to_variant(src), to_variant(edge), to_variant(dst)};
}
BEGIN_FUNCTION_REGISTRATION
REGISTER_FUNCTION(connected_components_parameterized, "src", "edge", "dst", "column");
END_FUNCTION_REGISTRATION
compiled into example.so
>>> from example import connected_components_parameterized as cc
>>> e = tc.SFrame({'__src_id':[1,2,3,4,5], '__dst_id':[3,1,2,5,4]})
>>> g = tc.SGraph().add_edges(e)
>>> g.vertices['cid'] = g.vertices['__id']
>>> for i in range(2):
... g = g.triple_apply(lambda src, edge, dst: cc(src, edge, dst, 'cid'), ['cid'], ['cid'])
>>> g.vertices['cid']
dtype: int
Rows: 5
[4, 1, 1, 1, 4]
'''
assert inspect.isfunction(triple_apply_fn), "Input must be a function"
if not (type(mutated_fields) is list or type(mutated_fields) is str):
raise TypeError('mutated_fields must be str or list of str')
if not (input_fields is None or type(input_fields) is list or type(input_fields) is str):
raise TypeError('input_fields must be str or list of str')
if type(mutated_fields) == str:
mutated_fields = [mutated_fields]
        if len(mutated_fields) == 0:
raise ValueError('mutated_fields cannot be empty')
for f in ['__id', '__src_id', '__dst_id']:
if f in mutated_fields:
raise ValueError('mutated_fields cannot contain %s' % f)
all_fields = self.get_fields()
if not set(mutated_fields).issubset(set(all_fields)):
extra_fields = list(set(mutated_fields).difference(set(all_fields)))
raise ValueError('graph does not contain fields: %s' % str(extra_fields))
# select input fields
if input_fields is None:
input_fields = self.get_fields()
elif type(input_fields) is str:
input_fields = [input_fields]
# make input fields a superset of mutated_fields
input_fields_set = set(input_fields + mutated_fields)
input_fields = [x for x in self.get_fields() if x in input_fields_set]
g = self.select_fields(input_fields)
nativefn = None
try:
from .. import extensions
nativefn = extensions._build_native_function_call(triple_apply_fn)
except:
            # failures are fine; we just fall through to the next phase
pass
if nativefn is not None:
with cython_context():
return SGraph(_proxy=g.__proxy__.lambda_triple_apply_native(nativefn, mutated_fields))
else:
with cython_context():
                return SGraph(_proxy=g.__proxy__.lambda_triple_apply(triple_apply_fn, mutated_fields))
74514c3f99e25b46f22c6e02977fe3da69221c2e | https://github.com/apple/turicreate/blob/74514c3f99e25b46f22c6e02977fe3da69221c2e/src/unity/python/turicreate/data_structures/sgraph.py#L868-L1012 | train |
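The PARALLEL FOR pseudocode in the docstring above can be emulated sequentially in plain Python, which makes the write-back semantics of ``mutated_fields`` concrete. This is an illustrative sketch with no turicreate dependency; `triple_apply_sketch` and its dict-based vertex/edge representation are assumptions for the example, not part of the SGraph API:

```python
# Sequential emulation of triple_apply: visit each edge once, pass
# (source_data, edge_data, target_data) dicts through the function,
# and write back only the fields listed in mutated_fields. No locking
# is needed here because the loop is sequential.
def triple_apply_sketch(vertices, edges, fn, mutated_fields):
    for edge in edges:
        src = vertices[edge['__src_id']]
        dst = vertices[edge['__dst_id']]
        # Pass copies, mirroring that the lambda receives data dicts.
        new_src, new_edge, new_dst = fn(dict(src), dict(edge), dict(dst))
        for f in mutated_fields:
            src[f] = new_src[f]
            dst[f] = new_dst[f]
            if f in edge:
                edge[f] = new_edge[f]

# Degree counting, mirroring the docstring example.
def degree_count_fn(src, edge, dst):
    src['degree'] += 1
    dst['degree'] += 1
    return (src, edge, dst)

vertices = {i: {'degree': 0} for i in range(10)}
edges = [{'__src_id': i, '__dst_id': i + 1} for i in range(9)]
triple_apply_sketch(vertices, edges, degree_count_fn, ['degree'])
# Interior vertices of the path touch two edges; the endpoints touch one.
```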
apple/turicreate | src/unity/python/turicreate/data_structures/sgraph.py | SGraph.save | def save(self, filename, format='auto'):
"""
Save the SGraph to disk. If the graph is saved in binary format, the
graph can be re-loaded using the :py:func:`load_sgraph` method.
Alternatively, the SGraph can be saved in JSON format for a
human-readable and portable representation.
Parameters
----------
filename : string
Filename to use when saving the file. It can be either a local or
remote url.
format : {'auto', 'binary', 'json'}, optional
File format. If not specified, the format is detected automatically
based on the filename. Note that JSON format graphs cannot be
re-loaded with :py:func:`load_sgraph`.
See Also
--------
load_sgraph
Examples
--------
>>> g = turicreate.SGraph()
>>> g = g.add_vertices([turicreate.Vertex(i) for i in range(5)])
Save and load in binary format.
>>> g.save('mygraph')
>>> g2 = turicreate.load_sgraph('mygraph')
Save in JSON format.
>>> g.save('mygraph.json', format='json')
"""
        if format == 'auto':
if filename.endswith(('.json', '.json.gz')):
format = 'json'
else:
format = 'binary'
if format not in ['binary', 'json', 'csv']:
raise ValueError('Invalid format: %s. Supported formats are: %s'
% (format, ['binary', 'json', 'csv']))
with cython_context():
            self.__proxy__.save_graph(_make_internal_url(filename), format)
74514c3f99e25b46f22c6e02977fe3da69221c2e | https://github.com/apple/turicreate/blob/74514c3f99e25b46f22c6e02977fe3da69221c2e/src/unity/python/turicreate/data_structures/sgraph.py#L1014-L1061 | train |
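The ``'auto'`` format detection in `save` reduces to a filename-suffix check. A minimal standalone sketch (the function name is illustrative, not part of the API):

```python
def detect_save_format(filename):
    """Mimic SGraph.save's 'auto' mode: .json / .json.gz -> 'json', else 'binary'."""
    if filename.endswith(('.json', '.json.gz')):
        return 'json'
    return 'binary'
```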
apple/turicreate | src/unity/python/turicreate/data_structures/sgraph.py | SGraph.get_neighborhood | def get_neighborhood(self, ids, radius=1, full_subgraph=True):
"""
Retrieve the graph neighborhood around a set of vertices, ignoring edge
directions. Note that setting radius greater than two often results in a
time-consuming query for a very large subgraph.
Parameters
----------
ids : list [int | float | str]
List of target vertex IDs.
radius : int, optional
Radius of the neighborhood. Every vertex in the returned subgraph is
reachable from at least one of the target vertices on a path of
length no longer than ``radius``. Setting radius larger than 2 may
result in a very large subgraph.
full_subgraph : bool, optional
If True, return all edges between vertices in the returned
neighborhood. The result is also known as the subgraph induced by
the target nodes' neighbors, or the egocentric network for the
target nodes. If False, return only edges on paths of length <=
``radius`` from the target node, also known as the reachability
graph.
Returns
-------
out : Graph
The subgraph with the neighborhoods around the target vertices.
See Also
--------
get_edges, get_vertices
References
----------
- Marsden, P. (2002) `Egocentric and sociocentric measures of network
centrality <http://www.sciencedirect.com/science/article/pii/S03788733
02000163>`_.
- `Wikipedia - Reachability <http://en.wikipedia.org/wiki/Reachability>`_
Examples
--------
>>> sf_edge = turicreate.SFrame({'source': range(9), 'dest': range(1, 10)})
>>> g = turicreate.SGraph()
>>> g = g.add_edges(sf_edge, src_field='source', dst_field='dest')
>>> subgraph = g.get_neighborhood(ids=[1, 7], radius=2,
full_subgraph=True)
"""
verts = ids
## find the vertices within radius (and the path edges)
for i in range(radius):
edges_out = self.get_edges(src_ids=verts)
edges_in = self.get_edges(dst_ids=verts)
verts = list(edges_in['__src_id']) + list(edges_in['__dst_id']) + \
list(edges_out['__src_id']) + list(edges_out['__dst_id'])
verts = list(set(verts))
## make a new graph to return and add the vertices
g = SGraph()
g = g.add_vertices(self.get_vertices(verts), vid_field='__id')
## add the requested edge set
        if full_subgraph:
induced_edge_out = self.get_edges(src_ids=verts)
induced_edge_in = self.get_edges(dst_ids=verts)
df_induced = induced_edge_out.append(induced_edge_in)
df_induced = df_induced.groupby(df_induced.column_names(), {})
verts_sa = SArray(list(verts))
edges = df_induced.filter_by(verts_sa, "__src_id")
edges = edges.filter_by(verts_sa, "__dst_id")
else:
path_edges = edges_out.append(edges_in)
edges = path_edges.groupby(path_edges.column_names(), {})
g = g.add_edges(edges, src_field='__src_id', dst_field='__dst_id')
        return g
74514c3f99e25b46f22c6e02977fe3da69221c2e | https://github.com/apple/turicreate/blob/74514c3f99e25b46f22c6e02977fe3da69221c2e/src/unity/python/turicreate/data_structures/sgraph.py#L1063-L1145 | train |
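The radius expansion in `get_neighborhood` is effectively an undirected frontier grown ``radius`` times, and ``full_subgraph=True`` then keeps every edge with both endpoints in the neighborhood. A plain-Python sketch over a `(src, dst)` edge list (helper names are illustrative assumptions, and seeds are retained even if isolated, which the original loop does not guarantee):

```python
def neighborhood_vertices(edges, ids, radius=1):
    # Each round keeps every endpoint of any edge touching the current
    # frontier, mirroring how get_neighborhood grows `verts` per iteration.
    verts = set(ids)
    for _ in range(radius):
        nxt = set(verts)
        for s, d in edges:
            if s in verts or d in verts:
                nxt.add(s)
                nxt.add(d)
        verts = nxt
    return verts

def induced_edges(edges, verts):
    # full_subgraph=True keeps every edge whose endpoints are both kept.
    return [(s, d) for s, d in edges if s in verts and d in verts]

edges = [(i, i + 1) for i in range(9)]  # path graph 0-1-...-9
verts = neighborhood_vertices(edges, ids=[1, 7], radius=2)
sub = induced_edges(edges, verts)
# Vertex 4 is more than 2 hops from both seeds, so it is excluded.
```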
apple/turicreate | src/unity/python/turicreate/toolkits/classifier/boosted_trees_classifier.py | create | def create(dataset, target,
features=None, max_iterations=10,
validation_set='auto',
class_weights = None,
max_depth=6, step_size=0.3,
min_loss_reduction=0.0, min_child_weight=0.1,
row_subsample=1.0, column_subsample=1.0,
verbose=True,
random_seed = None,
metric='auto',
**kwargs):
"""
Create a (binary or multi-class) classifier model of type
:class:`~turicreate.boosted_trees_classifier.BoostedTreesClassifier` using
gradient boosted trees (sometimes known as GBMs).
Parameters
----------
dataset : SFrame
A training dataset containing feature columns and a target column.
target : str
Name of the column containing the target variable. The values in this
column must be of string or integer type. String target variables are
automatically mapped to integers in alphabetical order of the variable values.
For example, a target variable with 'cat', 'dog', and 'foosa' as possible
        values is mapped to 0, 1, and 2, respectively.
    features : list[str], optional
        A list of column names of features used for training the model.
        Defaults to None, which uses all columns in the SFrame ``dataset``
        except the target column.
max_iterations : int, optional
The maximum number of iterations for boosting. Each iteration results
in the creation of an extra tree.
validation_set : SFrame, optional
A dataset for monitoring the model's generalization performance.
For each row of the progress table, the chosen metrics are computed
for both the provided training dataset and the validation_set. The
format of this SFrame must be the same as the training set.
By default this argument is set to 'auto' and a validation set is
automatically sampled and used for progress printing. If
validation_set is set to None, then no additional metrics
are computed. This is computed once per full iteration. Large
differences in model accuracy between the training data and validation
        data are indicative of overfitting. The default value is 'auto'.
class_weights : {dict, `auto`}, optional
Weights the examples in the training data according to the given class
weights. If provided, the dictionary must contain a key for each class
label. The value can be any positive number greater than 1e-20. Weights
are interpreted as relative to each other. So setting the weights to be
2.0 for the positive class and 1.0 for the negative class has the same
effect as setting them to be 20.0 and 10.0, respectively. If set to
`None`, all classes are taken to have weight 1.0. The `auto` mode sets
the class weight to be inversely proportional to the number of examples
in the training data with the given class.
max_depth : float, optional
Maximum depth of a tree. Must be at least 1.
step_size : float, [0,1], optional
        Step size (shrinkage) used in updates to prevent overfitting. It
        shrinks the prediction of each weak learner to make the boosting
        process more conservative. The smaller the step size, the more conservative
        the algorithm will be. Smaller values of ``step_size`` work well when
`max_iterations` is large.
min_loss_reduction : float, optional (non-negative)
Minimum loss reduction required to make a further partition/split a
node during the tree learning phase. Larger (more positive) values
can help prevent overfitting by avoiding splits that do not
sufficiently reduce the loss function.
min_child_weight : float, optional (non-negative)
Controls the minimum weight of each leaf node. Larger values result in
more conservative tree learning and help prevent overfitting.
Formally, this is minimum sum of instance weights (hessians) in each
node. If the tree learning algorithm results in a leaf node with the
sum of instance weights less than `min_child_weight`, tree building
will terminate.
row_subsample : float, [0,1], optional
Subsample the ratio of the training set in each iteration of tree
construction. This is called the bagging trick and can usually help
prevent overfitting. Setting this to a value of 0.5 results in the
model randomly sampling half of the examples (rows) to grow each tree.
column_subsample : float, [0,1], optional
Subsample ratio of the columns in each iteration of tree
construction. Like row_subsample, this can also help prevent
model overfitting. Setting this to a value of 0.5 results in the
model randomly sampling half of the columns to grow each tree.
verbose : boolean, optional
Print progress information during training (if set to true).
random_seed : int, optional
        Seeds random operations such as column and row subsampling, so that
        results are reproducible.
metric : str or list[str], optional
Performance metric(s) that are tracked during training. When specified,
the progress table will display the tracked metric(s) on training and
validation set.
Supported metrics are: {'accuracy', 'auc', 'log_loss'}
kwargs : dict, optional
Additional arguments for training the model.
- ``early_stopping_rounds`` : int, default None
If the validation metric does not improve after <early_stopping_rounds>,
stop training and return the best model.
If multiple metrics are being tracked, the last one is used.
- ``model_checkpoint_path`` : str, default None
If specified, checkpoint the model training to the given path every n iterations,
where n is specified by ``model_checkpoint_interval``.
For instance, if `model_checkpoint_interval` is 5, and `model_checkpoint_path` is
set to ``/tmp/model_tmp``, the checkpoints will be saved into
``/tmp/model_tmp/model_checkpoint_5``, ``/tmp/model_tmp/model_checkpoint_10``, ... etc.
Training can be resumed by setting ``resume_from_checkpoint`` to one of these checkpoints.
- ``model_checkpoint_interval`` : int, default 5
If model_check_point_path is specified,
save the model to the given path every n iterations.
- ``resume_from_checkpoint`` : str, default None
Continues training from a model checkpoint. The model must take
        exactly the same training data as the checkpointed model.
Returns
-------
out : BoostedTreesClassifier
        A trained gradient boosted trees model for classification tasks.
References
----------
- `Wikipedia - Gradient tree boosting
<http://en.wikipedia.org/wiki/Gradient_boosting#Gradient_tree_boosting>`_
- `Trevor Hastie's slides on Boosted Trees and Random Forest
<http://jessica2.msri.org/attachments/10778/10778-boost.pdf>`_
See Also
--------
BoostedTreesClassifier, turicreate.logistic_classifier.LogisticClassifier, turicreate.svm_classifier.SVMClassifier
Examples
--------
.. sourcecode:: python
>>> url = 'https://static.turi.com/datasets/xgboost/mushroom.csv'
>>> data = turicreate.SFrame.read_csv(url)
>>> train, test = data.random_split(0.8)
>>> model = turicreate.boosted_trees_classifier.create(train, target='label')
>>> predictions = model.classify(test)
>>> results = model.evaluate(test)
"""
if random_seed is not None:
kwargs['random_seed'] = random_seed
if 'model_checkpoint_path' in kwargs:
kwargs['model_checkpoint_path'] = _make_internal_url(kwargs['model_checkpoint_path'])
if 'resume_from_checkpoint' in kwargs:
kwargs['resume_from_checkpoint'] = _make_internal_url(kwargs['resume_from_checkpoint'])
model = _sl.create(dataset = dataset,
target = target,
features = features,
model_name = 'boosted_trees_classifier',
max_iterations = max_iterations,
validation_set = validation_set,
class_weights = class_weights,
max_depth = max_depth,
step_size = step_size,
min_loss_reduction = min_loss_reduction,
min_child_weight = min_child_weight,
row_subsample = row_subsample,
column_subsample = column_subsample,
verbose = verbose,
metric = metric,
**kwargs)
return BoostedTreesClassifier(model.__proxy__) | python | def create(dataset, target,
features=None, max_iterations=10,
validation_set='auto',
class_weights = None,
max_depth=6, step_size=0.3,
min_loss_reduction=0.0, min_child_weight=0.1,
row_subsample=1.0, column_subsample=1.0,
verbose=True,
random_seed = None,
metric='auto',
**kwargs):
"""
Create a (binary or multi-class) classifier model of type
:class:`~turicreate.boosted_trees_classifier.BoostedTreesClassifier` using
gradient boosted trees (sometimes known as GBMs).
Parameters
----------
dataset : SFrame
A training dataset containing feature columns and a target column.
target : str
Name of the column containing the target variable. The values in this
column must be of string or integer type. String target variables are
automatically mapped to integers in alphabetical order of the variable values.
For example, a target variable with 'cat', 'dog', and 'foosa' as possible
values is mapped to 0, 1, and 2, respectively.
features : list[str], optional
A list of column names of features used for training the model.
Defaults to None, which uses all columns in the SFrame ``dataset``
except the target column.
max_iterations : int, optional
The maximum number of iterations for boosting. Each iteration results
in the creation of an extra tree.
validation_set : SFrame, optional
A dataset for monitoring the model's generalization performance.
For each row of the progress table, the chosen metrics are computed
for both the provided training dataset and the validation_set. The
format of this SFrame must be the same as the training set.
By default this argument is set to 'auto' and a validation set is
automatically sampled and used for progress printing. If
validation_set is set to None, then no additional metrics
are computed. This is computed once per full iteration. Large
differences in model accuracy between the training data and validation
data are indicative of overfitting. The default value is 'auto'.
class_weights : {dict, `auto`}, optional
Weights the examples in the training data according to the given class
weights. If provided, the dictionary must contain a key for each class
label. The value can be any positive number greater than 1e-20. Weights
are interpreted as relative to each other. So setting the weights to be
2.0 for the positive class and 1.0 for the negative class has the same
effect as setting them to be 20.0 and 10.0, respectively. If set to
`None`, all classes are taken to have weight 1.0. The `auto` mode sets
the class weight to be inversely proportional to the number of examples
in the training data with the given class.
max_depth : float, optional
Maximum depth of a tree. Must be at least 1.
step_size : float, [0,1], optional
Step size (shrinkage) used in each update to prevent overfitting. It
shrinks the prediction of each weak learner to make the boosting
process more conservative. The smaller the step size, the more conservative
the algorithm will be. Smaller values of step_size work well when
`max_iterations` is large.
min_loss_reduction : float, optional (non-negative)
Minimum loss reduction required to make a further partition/split a
node during the tree learning phase. Larger (more positive) values
can help prevent overfitting by avoiding splits that do not
sufficiently reduce the loss function.
min_child_weight : float, optional (non-negative)
Controls the minimum weight of each leaf node. Larger values result in
more conservative tree learning and help prevent overfitting.
Formally, this is minimum sum of instance weights (hessians) in each
node. If the tree learning algorithm results in a leaf node with the
sum of instance weights less than `min_child_weight`, tree building
will terminate.
row_subsample : float, [0,1], optional
Subsample the ratio of the training set in each iteration of tree
construction. This is called the bagging trick and can usually help
prevent overfitting. Setting this to a value of 0.5 results in the
model randomly sampling half of the examples (rows) to grow each tree.
column_subsample : float, [0,1], optional
Subsample ratio of the columns in each iteration of tree
construction. Like row_subsample, this can also help prevent
model overfitting. Setting this to a value of 0.5 results in the
model randomly sampling half of the columns to grow each tree.
verbose : boolean, optional
Print progress information during training (if set to true).
random_seed : int, optional
Seeds random operations such as column and row subsampling, so that
results are reproducible.
metric : str or list[str], optional
Performance metric(s) that are tracked during training. When specified,
the progress table will display the tracked metric(s) on training and
validation set.
Supported metrics are: {'accuracy', 'auc', 'log_loss'}
kwargs : dict, optional
Additional arguments for training the model.
- ``early_stopping_rounds`` : int, default None
If the validation metric does not improve after <early_stopping_rounds>,
stop training and return the best model.
If multiple metrics are being tracked, the last one is used.
- ``model_checkpoint_path`` : str, default None
If specified, checkpoint the model training to the given path every n iterations,
where n is specified by ``model_checkpoint_interval``.
For instance, if `model_checkpoint_interval` is 5, and `model_checkpoint_path` is
set to ``/tmp/model_tmp``, the checkpoints will be saved into
``/tmp/model_tmp/model_checkpoint_5``, ``/tmp/model_tmp/model_checkpoint_10``, ... etc.
Training can be resumed by setting ``resume_from_checkpoint`` to one of these checkpoints.
- ``model_checkpoint_interval`` : int, default 5
If ``model_checkpoint_path`` is specified,
save the model to the given path every n iterations.
- ``resume_from_checkpoint`` : str, default None
Continues training from a model checkpoint. The model must take
exactly the same training data as the checkpointed model.
Returns
-------
out : BoostedTreesClassifier
A trained gradient boosted trees model for classification tasks.
References
----------
- `Wikipedia - Gradient tree boosting
<http://en.wikipedia.org/wiki/Gradient_boosting#Gradient_tree_boosting>`_
- `Trevor Hastie's slides on Boosted Trees and Random Forest
<http://jessica2.msri.org/attachments/10778/10778-boost.pdf>`_
See Also
--------
BoostedTreesClassifier, turicreate.logistic_classifier.LogisticClassifier, turicreate.svm_classifier.SVMClassifier
Examples
--------
.. sourcecode:: python
>>> url = 'https://static.turi.com/datasets/xgboost/mushroom.csv'
>>> data = turicreate.SFrame.read_csv(url)
>>> train, test = data.random_split(0.8)
>>> model = turicreate.boosted_trees_classifier.create(train, target='label')
>>> predictions = model.classify(test)
>>> results = model.evaluate(test)
"""
if random_seed is not None:
kwargs['random_seed'] = random_seed
if 'model_checkpoint_path' in kwargs:
kwargs['model_checkpoint_path'] = _make_internal_url(kwargs['model_checkpoint_path'])
if 'resume_from_checkpoint' in kwargs:
kwargs['resume_from_checkpoint'] = _make_internal_url(kwargs['resume_from_checkpoint'])
model = _sl.create(dataset = dataset,
target = target,
features = features,
model_name = 'boosted_trees_classifier',
max_iterations = max_iterations,
validation_set = validation_set,
class_weights = class_weights,
max_depth = max_depth,
step_size = step_size,
min_loss_reduction = min_loss_reduction,
min_child_weight = min_child_weight,
row_subsample = row_subsample,
column_subsample = column_subsample,
verbose = verbose,
metric = metric,
**kwargs)
return BoostedTreesClassifier(model.__proxy__) | [
"def",
"create",
"(",
"dataset",
",",
"target",
",",
"features",
"=",
"None",
",",
"max_iterations",
"=",
"10",
",",
"validation_set",
"=",
"'auto'",
",",
"class_weights",
"=",
"None",
",",
"max_depth",
"=",
"6",
",",
"step_size",
"=",
"0.3",
",",
"min_loss_reduction",
"=",
"0.0",
",",
"min_child_weight",
"=",
"0.1",
",",
"row_subsample",
"=",
"1.0",
",",
"column_subsample",
"=",
"1.0",
",",
"verbose",
"=",
"True",
",",
"random_seed",
"=",
"None",
",",
"metric",
"=",
"'auto'",
",",
"*",
"*",
"kwargs",
")",
":",
"if",
"random_seed",
"is",
"not",
"None",
":",
"kwargs",
"[",
"'random_seed'",
"]",
"=",
"random_seed",
"if",
"'model_checkpoint_path'",
"in",
"kwargs",
":",
"kwargs",
"[",
"'model_checkpoint_path'",
"]",
"=",
"_make_internal_url",
"(",
"kwargs",
"[",
"'model_checkpoint_path'",
"]",
")",
"if",
"'resume_from_checkpoint'",
"in",
"kwargs",
":",
"kwargs",
"[",
"'resume_from_checkpoint'",
"]",
"=",
"_make_internal_url",
"(",
"kwargs",
"[",
"'resume_from_checkpoint'",
"]",
")",
"model",
"=",
"_sl",
".",
"create",
"(",
"dataset",
"=",
"dataset",
",",
"target",
"=",
"target",
",",
"features",
"=",
"features",
",",
"model_name",
"=",
"'boosted_trees_classifier'",
",",
"max_iterations",
"=",
"max_iterations",
",",
"validation_set",
"=",
"validation_set",
",",
"class_weights",
"=",
"class_weights",
",",
"max_depth",
"=",
"max_depth",
",",
"step_size",
"=",
"step_size",
",",
"min_loss_reduction",
"=",
"min_loss_reduction",
",",
"min_child_weight",
"=",
"min_child_weight",
",",
"row_subsample",
"=",
"row_subsample",
",",
"column_subsample",
"=",
"column_subsample",
",",
"verbose",
"=",
"verbose",
",",
"metric",
"=",
"metric",
",",
"*",
"*",
"kwargs",
")",
"return",
"BoostedTreesClassifier",
"(",
"model",
".",
"__proxy__",
")"
] | Create a (binary or multi-class) classifier model of type
:class:`~turicreate.boosted_trees_classifier.BoostedTreesClassifier` using
gradient boosted trees (sometimes known as GBMs).
Parameters
----------
dataset : SFrame
A training dataset containing feature columns and a target column.
target : str
Name of the column containing the target variable. The values in this
column must be of string or integer type. String target variables are
automatically mapped to integers in alphabetical order of the variable values.
For example, a target variable with 'cat', 'dog', and 'foosa' as possible
values is mapped to 0, 1, and 2, respectively.
features : list[str], optional
A list of column names of features used for training the model.
Defaults to None, which uses all columns in the SFrame ``dataset``
except the target column.
max_iterations : int, optional
The maximum number of iterations for boosting. Each iteration results
in the creation of an extra tree.
validation_set : SFrame, optional
A dataset for monitoring the model's generalization performance.
For each row of the progress table, the chosen metrics are computed
for both the provided training dataset and the validation_set. The
format of this SFrame must be the same as the training set.
By default this argument is set to 'auto' and a validation set is
automatically sampled and used for progress printing. If
validation_set is set to None, then no additional metrics
are computed. This is computed once per full iteration. Large
differences in model accuracy between the training data and validation
data are indicative of overfitting. The default value is 'auto'.
class_weights : {dict, `auto`}, optional
Weights the examples in the training data according to the given class
weights. If provided, the dictionary must contain a key for each class
label. The value can be any positive number greater than 1e-20. Weights
are interpreted as relative to each other. So setting the weights to be
2.0 for the positive class and 1.0 for the negative class has the same
effect as setting them to be 20.0 and 10.0, respectively. If set to
`None`, all classes are taken to have weight 1.0. The `auto` mode sets
the class weight to be inversely proportional to the number of examples
in the training data with the given class.
max_depth : float, optional
Maximum depth of a tree. Must be at least 1.
step_size : float, [0,1], optional
Step size (shrinkage) used in each update to prevent overfitting. It
shrinks the prediction of each weak learner to make the boosting
process more conservative. The smaller the step size, the more conservative
the algorithm will be. Smaller values of step_size work well when
`max_iterations` is large.
min_loss_reduction : float, optional (non-negative)
Minimum loss reduction required to make a further partition/split a
node during the tree learning phase. Larger (more positive) values
can help prevent overfitting by avoiding splits that do not
sufficiently reduce the loss function.
min_child_weight : float, optional (non-negative)
Controls the minimum weight of each leaf node. Larger values result in
more conservative tree learning and help prevent overfitting.
Formally, this is minimum sum of instance weights (hessians) in each
node. If the tree learning algorithm results in a leaf node with the
sum of instance weights less than `min_child_weight`, tree building
will terminate.
row_subsample : float, [0,1], optional
Subsample the ratio of the training set in each iteration of tree
construction. This is called the bagging trick and can usually help
prevent overfitting. Setting this to a value of 0.5 results in the
model randomly sampling half of the examples (rows) to grow each tree.
column_subsample : float, [0,1], optional
Subsample ratio of the columns in each iteration of tree
construction. Like row_subsample, this can also help prevent
model overfitting. Setting this to a value of 0.5 results in the
model randomly sampling half of the columns to grow each tree.
verbose : boolean, optional
Print progress information during training (if set to true).
random_seed : int, optional
Seeds random operations such as column and row subsampling, so that
results are reproducible.
metric : str or list[str], optional
Performance metric(s) that are tracked during training. When specified,
the progress table will display the tracked metric(s) on training and
validation set.
Supported metrics are: {'accuracy', 'auc', 'log_loss'}
kwargs : dict, optional
Additional arguments for training the model.
- ``early_stopping_rounds`` : int, default None
If the validation metric does not improve after <early_stopping_rounds>,
stop training and return the best model.
If multiple metrics are being tracked, the last one is used.
- ``model_checkpoint_path`` : str, default None
If specified, checkpoint the model training to the given path every n iterations,
where n is specified by ``model_checkpoint_interval``.
For instance, if `model_checkpoint_interval` is 5, and `model_checkpoint_path` is
set to ``/tmp/model_tmp``, the checkpoints will be saved into
``/tmp/model_tmp/model_checkpoint_5``, ``/tmp/model_tmp/model_checkpoint_10``, ... etc.
Training can be resumed by setting ``resume_from_checkpoint`` to one of these checkpoints.
- ``model_checkpoint_interval`` : int, default 5
If ``model_checkpoint_path`` is specified,
save the model to the given path every n iterations.
- ``resume_from_checkpoint`` : str, default None
Continues training from a model checkpoint. The model must take
exactly the same training data as the checkpointed model.
Returns
-------
out : BoostedTreesClassifier
A trained gradient boosted trees model for classification tasks.
References
----------
- `Wikipedia - Gradient tree boosting
<http://en.wikipedia.org/wiki/Gradient_boosting#Gradient_tree_boosting>`_
- `Trevor Hastie's slides on Boosted Trees and Random Forest
<http://jessica2.msri.org/attachments/10778/10778-boost.pdf>`_
See Also
--------
BoostedTreesClassifier, turicreate.logistic_classifier.LogisticClassifier, turicreate.svm_classifier.SVMClassifier
Examples
--------
.. sourcecode:: python
>>> url = 'https://static.turi.com/datasets/xgboost/mushroom.csv'
>>> data = turicreate.SFrame.read_csv(url)
>>> train, test = data.random_split(0.8)
>>> model = turicreate.boosted_trees_classifier.create(train, target='label')
>>> predictions = model.classify(test)
>>> results = model.evaluate(test) | [
"Create",
"a",
"(",
"binary",
"or",
"multi",
"-",
"class",
")",
"classifier",
"model",
"of",
"type",
":",
"class",
":",
"~turicreate",
".",
"boosted_trees_classifier",
".",
"BoostedTreesClassifier",
"using",
"gradient",
"boosted",
"trees",
"(",
"sometimes",
"known",
"as",
"GBMs",
")",
"."
] | 74514c3f99e25b46f22c6e02977fe3da69221c2e | https://github.com/apple/turicreate/blob/74514c3f99e25b46f22c6e02977fe3da69221c2e/src/unity/python/turicreate/toolkits/classifier/boosted_trees_classifier.py#L450-L638 | train |
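The argument preprocessing at the start of the `create` body above — folding `random_seed` into `kwargs` and rewriting checkpoint paths — can be sketched as a standalone helper. This is illustrative only: `make_internal_url` below stands in for turicreate's private `_make_internal_url`, whose real behavior differs.

```python
def make_internal_url(path):
    # Illustrative stand-in for turicreate's private _make_internal_url.
    return "internal://" + path

def normalize_create_kwargs(random_seed=None, **kwargs):
    """Fold the optional seed into kwargs and rewrite path-valued options."""
    if random_seed is not None:
        kwargs['random_seed'] = random_seed
    for key in ('model_checkpoint_path', 'resume_from_checkpoint'):
        if key in kwargs:
            kwargs[key] = make_internal_url(kwargs[key])
    return kwargs

# Seed and checkpoint path are normalized; other kwargs pass through untouched.
normalized = normalize_create_kwargs(random_seed=1,
                                     model_checkpoint_path='/tmp/model_tmp',
                                     early_stopping_rounds=5)
```

Collecting everything into one `kwargs` dict before the `_sl.create` call keeps the forwarding to the native toolkit a single expression.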
apple/turicreate | src/unity/python/turicreate/toolkits/classifier/boosted_trees_classifier.py | BoostedTreesClassifier.classify | def classify(self, dataset, missing_value_action='auto'):
"""
Return a classification, for each example in the ``dataset``, using the
trained boosted trees model. The output SFrame contains predictions
as class labels (0 or 1) and probabilities associated with the example.
Parameters
----------
dataset : SFrame
Dataset of new observations. Must include columns with the same
names as the features used for model training, but does not require
a target column. Additional columns are ignored.
missing_value_action : str, optional
Action to perform when missing values are encountered. Can be
one of:
- 'auto': By default the model will treat missing values as is.
- 'impute': Proceed with evaluation by filling in the missing
values with the mean of the training data. Missing
values are also imputed if an entire column of data is
missing during evaluation.
- 'error': Do not proceed with evaluation and terminate with
an error message.
Returns
-------
out : SFrame
An SFrame with model predictions, i.e., class labels and probabilities
associated with each of the class labels.
See Also
----------
create, evaluate, predict
Examples
----------
>>> data = turicreate.SFrame('https://static.turi.com/datasets/regression/houses.csv')
>>> data['is_expensive'] = data['price'] > 30000
>>> model = turicreate.boosted_trees_classifier.create(data,
>>> target='is_expensive',
>>> features=['bath', 'bedroom', 'size'])
>>> classes = model.classify(data)
"""
return super(BoostedTreesClassifier, self).classify(dataset,
missing_value_action=missing_value_action) | python | def classify(self, dataset, missing_value_action='auto'):
"""
Return a classification, for each example in the ``dataset``, using the
trained boosted trees model. The output SFrame contains predictions
as class labels (0 or 1) and probabilities associated with the example.
Parameters
----------
dataset : SFrame
Dataset of new observations. Must include columns with the same
names as the features used for model training, but does not require
a target column. Additional columns are ignored.
missing_value_action : str, optional
Action to perform when missing values are encountered. Can be
one of:
- 'auto': By default the model will treat missing values as is.
- 'impute': Proceed with evaluation by filling in the missing
values with the mean of the training data. Missing
values are also imputed if an entire column of data is
missing during evaluation.
- 'error': Do not proceed with evaluation and terminate with
an error message.
Returns
-------
out : SFrame
An SFrame with model predictions, i.e., class labels and probabilities
associated with each of the class labels.
See Also
----------
create, evaluate, predict
Examples
----------
>>> data = turicreate.SFrame('https://static.turi.com/datasets/regression/houses.csv')
>>> data['is_expensive'] = data['price'] > 30000
>>> model = turicreate.boosted_trees_classifier.create(data,
>>> target='is_expensive',
>>> features=['bath', 'bedroom', 'size'])
>>> classes = model.classify(data)
"""
return super(BoostedTreesClassifier, self).classify(dataset,
missing_value_action=missing_value_action) | [
"def",
"classify",
"(",
"self",
",",
"dataset",
",",
"missing_value_action",
"=",
"'auto'",
")",
":",
"return",
"super",
"(",
"BoostedTreesClassifier",
",",
"self",
")",
".",
"classify",
"(",
"dataset",
",",
"missing_value_action",
"=",
"missing_value_action",
")"
] | Return a classification, for each example in the ``dataset``, using the
trained boosted trees model. The output SFrame contains predictions
as class labels (0 or 1) and probabilities associated with the example.
Parameters
----------
dataset : SFrame
Dataset of new observations. Must include columns with the same
names as the features used for model training, but does not require
a target column. Additional columns are ignored.
missing_value_action : str, optional
Action to perform when missing values are encountered. Can be
one of:
- 'auto': By default the model will treat missing values as is.
- 'impute': Proceed with evaluation by filling in the missing
values with the mean of the training data. Missing
values are also imputed if an entire column of data is
missing during evaluation.
- 'error': Do not proceed with evaluation and terminate with
an error message.
Returns
-------
out : SFrame
An SFrame with model predictions, i.e., class labels and probabilities
associated with each of the class labels.
See Also
----------
create, evaluate, predict
Examples
----------
>>> data = turicreate.SFrame('https://static.turi.com/datasets/regression/houses.csv')
>>> data['is_expensive'] = data['price'] > 30000
>>> model = turicreate.boosted_trees_classifier.create(data,
>>> target='is_expensive',
>>> features=['bath', 'bedroom', 'size'])
>>> classes = model.classify(data) | [
"Return",
"a",
"classification",
"for",
"each",
"example",
"in",
"the",
"dataset",
"using",
"the",
"trained",
"boosted",
"trees",
"model",
".",
"The",
"output",
"SFrame",
"contains",
"predictions",
"as",
"class",
"labels",
"(",
"0",
"or",
"1",
")",
"and",
"probabilities",
"associated",
"with",
"the",
"the",
"."
] | 74514c3f99e25b46f22c6e02977fe3da69221c2e | https://github.com/apple/turicreate/blob/74514c3f99e25b46f22c6e02977fe3da69221c2e/src/unity/python/turicreate/toolkits/classifier/boosted_trees_classifier.py#L365-L413 | train |
apple/turicreate | src/unity/python/turicreate/toolkits/classifier/boosted_trees_classifier.py | BoostedTreesClassifier.export_coreml | def export_coreml(self, filename):
"""
Export the model in Core ML format.
Parameters
----------
filename: str
A valid filename where the model can be saved.
Examples
--------
>>> model.export_coreml("MyModel.mlmodel")
"""
from turicreate.toolkits import _coreml_utils
display_name = "boosted trees classifier"
short_description = _coreml_utils._mlmodel_short_description(display_name)
context = {"mode" : "classification",
"model_type" : "boosted_trees",
"version": _turicreate.__version__,
"class": self.__class__.__name__,
"short_description": short_description,
'user_defined':{
'turicreate_version': _turicreate.__version__
}
}
self._export_coreml_impl(filename, context) | python | def export_coreml(self, filename):
"""
Export the model in Core ML format.
Parameters
----------
filename: str
A valid filename where the model can be saved.
Examples
--------
>>> model.export_coreml("MyModel.mlmodel")
"""
from turicreate.toolkits import _coreml_utils
display_name = "boosted trees classifier"
short_description = _coreml_utils._mlmodel_short_description(display_name)
context = {"mode" : "classification",
"model_type" : "boosted_trees",
"version": _turicreate.__version__,
"class": self.__class__.__name__,
"short_description": short_description,
'user_defined':{
'turicreate_version': _turicreate.__version__
}
}
self._export_coreml_impl(filename, context) | [
"def",
"export_coreml",
"(",
"self",
",",
"filename",
")",
":",
"from",
"turicreate",
".",
"toolkits",
"import",
"_coreml_utils",
"display_name",
"=",
"\"boosted trees classifier\"",
"short_description",
"=",
"_coreml_utils",
".",
"_mlmodel_short_description",
"(",
"display_name",
")",
"context",
"=",
"{",
"\"mode\"",
":",
"\"classification\"",
",",
"\"model_type\"",
":",
"\"boosted_trees\"",
",",
"\"version\"",
":",
"_turicreate",
".",
"__version__",
",",
"\"class\"",
":",
"self",
".",
"__class__",
".",
"__name__",
",",
"\"short_description\"",
":",
"short_description",
",",
"'user_defined'",
":",
"{",
"'turicreate_version'",
":",
"_turicreate",
".",
"__version__",
"}",
"}",
"self",
".",
"_export_coreml_impl",
"(",
"filename",
",",
"context",
")"
] | Export the model in Core ML format.
Parameters
----------
filename: str
A valid filename where the model can be saved.
Examples
--------
>>> model.export_coreml("MyModel.mlmodel") | [
"Export",
"the",
"model",
"in",
"Core",
"ML",
"format",
"."
] | 74514c3f99e25b46f22c6e02977fe3da69221c2e | https://github.com/apple/turicreate/blob/74514c3f99e25b46f22c6e02977fe3da69221c2e/src/unity/python/turicreate/toolkits/classifier/boosted_trees_classifier.py#L423-L448 | train |
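The metadata dictionary assembled in `export_coreml` above follows a simple pattern; here is a minimal standalone sketch. The version string and descriptions are illustrative placeholders, not real turicreate values:

```python
def build_coreml_context(mode, model_type, version, class_name, short_description):
    """Assemble the Core ML export metadata dict, mirroring export_coreml()."""
    return {
        "mode": mode,
        "model_type": model_type,
        "version": version,
        "class": class_name,
        "short_description": short_description,
        "user_defined": {"turicreate_version": version},
    }

# Hypothetical values for illustration only.
ctx = build_coreml_context("classification", "boosted_trees", "6.0",
                           "BoostedTreesClassifier",
                           "Boosted trees classifier model")
```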
apple/turicreate | src/unity/python/turicreate/toolkits/graph_analytics/_model_base.py | GraphAnalyticsModel._get | def _get(self, field):
"""
Return the value for the queried field.
Get the value of a given field. The list of all queryable fields is
documented in the beginning of the model class.
>>> out = m._get('graph')
Parameters
----------
field : string
Name of the field to be retrieved.
Returns
-------
out : value
The current value of the requested field.
"""
if field in self._list_fields():
return self.__proxy__.get(field)
else:
raise KeyError('Key \"%s\" not in model. Available fields are %s.' % (field, ', '.join(self._list_fields()))) | python | def _get(self, field):
"""
Return the value for the queried field.
Get the value of a given field. The list of all queryable fields is
documented in the beginning of the model class.
>>> out = m._get('graph')
Parameters
----------
field : string
Name of the field to be retrieved.
Returns
-------
out : value
The current value of the requested field.
"""
if field in self._list_fields():
return self.__proxy__.get(field)
else:
raise KeyError('Key \"%s\" not in model. Available fields are %s.' % (field, ', '.join(self._list_fields()))) | [
"def",
"_get",
"(",
"self",
",",
"field",
")",
":",
"if",
"field",
"in",
"self",
".",
"_list_fields",
"(",
")",
":",
"return",
"self",
".",
"__proxy__",
".",
"get",
"(",
"field",
")",
"else",
":",
"raise",
"KeyError",
"(",
"'Key \\\"%s\\\" not in model. Available fields are %s.'",
"%",
"(",
"field",
",",
"', '",
".",
"join",
"(",
"self",
".",
"_list_fields",
"(",
")",
")",
")",
")"
] | Return the value for the queried field.
Get the value of a given field. The list of all queryable fields is
documented in the beginning of the model class.
>>> out = m._get('graph')
Parameters
----------
field : string
Name of the field to be retrieved.
Returns
-------
out : value
The current value of the requested field. | [
"Return",
"the",
"value",
"for",
"the",
"queried",
"field",
"."
] | 74514c3f99e25b46f22c6e02977fe3da69221c2e | https://github.com/apple/turicreate/blob/74514c3f99e25b46f22c6e02977fe3da69221c2e/src/unity/python/turicreate/toolkits/graph_analytics/_model_base.py#L31-L53 | train |
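The lookup-with-informative-KeyError pattern in `_get` above can be sketched independently of the model proxy. The field names used here are illustrative, not a real model's field list:

```python
def get_field(fields, name):
    """Return fields[name], or raise a KeyError that lists the valid keys."""
    if name in fields:
        return fields[name]
    raise KeyError('Key "%s" not in model. Available fields are %s.'
                   % (name, ', '.join(sorted(fields))))

value = get_field({'graph': 'g', 'num_vertices': 4}, 'num_vertices')
```

Echoing the available keys in the error message makes a misspelled field name self-diagnosing.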
apple/turicreate | src/unity/python/turicreate/toolkits/graph_analytics/_model_base.py | GraphAnalyticsModel._describe_fields | def _describe_fields(cls):
"""
Return a dictionary for the class fields description.
Fields should NOT be wrapped by _precomputed_field, if necessary
"""
dispatch_table = {
'ShortestPathModel': 'sssp',
'GraphColoringModel': 'graph_coloring',
'PagerankModel': 'pagerank',
'ConnectedComponentsModel': 'connected_components',
'TriangleCountingModel': 'triangle_counting',
'KcoreModel': 'kcore',
'DegreeCountingModel': 'degree_count',
'LabelPropagationModel': 'label_propagation'
}
try:
toolkit_name = dispatch_table[cls.__name__]
toolkit = _tc.extensions._toolkits.graph.__dict__[toolkit_name]
return toolkit.get_model_fields({})
except:
raise RuntimeError('Model %s does not have fields description' % cls.__name__) | python | def _describe_fields(cls):
"""
Return a dictionary for the class fields description.
Fields should NOT be wrapped by _precomputed_field, if necessary
"""
dispatch_table = {
'ShortestPathModel': 'sssp',
'GraphColoringModel': 'graph_coloring',
'PagerankModel': 'pagerank',
'ConnectedComponentsModel': 'connected_components',
'TriangleCountingModel': 'triangle_counting',
'KcoreModel': 'kcore',
'DegreeCountingModel': 'degree_count',
'LabelPropagationModel': 'label_propagation'
}
try:
toolkit_name = dispatch_table[cls.__name__]
toolkit = _tc.extensions._toolkits.graph.__dict__[toolkit_name]
return toolkit.get_model_fields({})
except:
raise RuntimeError('Model %s does not have fields description' % cls.__name__) | [
"def",
"_describe_fields",
"(",
"cls",
")",
":",
"dispatch_table",
"=",
"{",
"'ShortestPathModel'",
":",
"'sssp'",
",",
"'GraphColoringModel'",
":",
"'graph_coloring'",
",",
"'PagerankModel'",
":",
"'pagerank'",
",",
"'ConnectedComponentsModel'",
":",
"'connected_components'",
",",
"'TriangleCountingModel'",
":",
"'triangle_counting'",
",",
"'KcoreModel'",
":",
"'kcore'",
",",
"'DegreeCountingModel'",
":",
"'degree_count'",
",",
"'LabelPropagationModel'",
":",
"'label_propagation'",
"}",
"try",
":",
"toolkit_name",
"=",
"dispatch_table",
"[",
"cls",
".",
"__name__",
"]",
"toolkit",
"=",
"_tc",
".",
"extensions",
".",
"_toolkits",
".",
"graph",
".",
"__dict__",
"[",
"toolkit_name",
"]",
"return",
"toolkit",
".",
"get_model_fields",
"(",
"{",
"}",
")",
"except",
":",
"raise",
"RuntimeError",
"(",
"'Model %s does not have fields description'",
"%",
"cls",
".",
"__name__",
")"
] | Return a dictionary for the class fields description.
Fields should NOT be wrapped by _precomputed_field, if necessary | [
"Return",
"a",
"dictionary",
"for",
"the",
"class",
"fields",
"description",
".",
"Fields",
"should",
"NOT",
"be",
"wrapped",
"by",
"_precomputed_field",
"if",
"necessary"
] | 74514c3f99e25b46f22c6e02977fe3da69221c2e | https://github.com/apple/turicreate/blob/74514c3f99e25b46f22c6e02977fe3da69221c2e/src/unity/python/turicreate/toolkits/graph_analytics/_model_base.py#L56-L76 | train |
apple/turicreate | src/unity/python/turicreate/toolkits/graph_analytics/_model_base.py | GraphAnalyticsModel._get_summary_struct | def _get_summary_struct(self):
"""
Returns a structured description of the model, including (where relevant)
the schema of the training data, description of the training data,
training statistics, and model hyperparameters.
Returns
-------
sections : list (of list of tuples)
A list of summary sections.
Each section is a list.
Each item in a section list is a tuple of the form:
('<label>','<field>')
section_titles: list
A list of section titles.
The order matches that of the 'sections' object.
"""
g = self.graph
section_titles = ['Graph']
apple/turicreate | src/unity/python/turicreate/toolkits/graph_analytics/_model_base.py | _get_summary_struct | python

def _get_summary_struct(self):
    """
    Returns a structured description of the model, including (where relevant)
    the schema of the training data, description of the training data,
    training statistics, and model hyperparameters.

    Returns
    -------
    sections : list (of list of tuples)
        A list of summary sections.
        Each section is a list.
        Each item in a section list is a tuple of the form:
          ('<label>', '<field>')
    section_titles : list
        A list of section titles.
        The order matches that of the 'sections' object.
    """
    g = self.graph
    section_titles = ['Graph']
    graph_summary = [(k, _precomputed_field(v)) for k, v in six.iteritems(g.summary())]
    sections = [graph_summary]

    # Collect the other, optional sections.
    results = [(k, _precomputed_field(v)) for k, v in six.iteritems(self._result_fields())]
    methods = [(k, _precomputed_field(v)) for k, v in six.iteritems(self._method_fields())]
    settings = [(k, v) for k, v in six.iteritems(self._setting_fields())]
    metrics = [(k, v) for k, v in six.iteritems(self._metric_fields())]

    optional_sections = [('Results', results), ('Settings', settings),
                         ('Metrics', metrics), ('Methods', methods)]

    # If a section is not empty, append it to the summary structure.
    for (title, section) in optional_sections:
        if len(section) > 0:
            section_titles.append(title)
            sections.append(section)

    return (sections, section_titles)

74514c3f99e25b46f22c6e02977fe3da69221c2e | https://github.com/apple/turicreate/blob/74514c3f99e25b46f22c6e02977fe3da69221c2e/src/unity/python/turicreate/toolkits/graph_analytics/_model_base.py#L90-L130 | train
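The assembly pattern above (skip empty optional sections, keep titles and bodies in matching order) can be sketched without the model class; the field values below are illustrative stand-ins for the model's real accessors:

```python
def build_summary_struct(graph_summary, optional_sections):
    """Assemble (sections, section_titles), dropping empty optional sections.

    optional_sections is a list of (title, list_of_key_value_tuples).
    """
    section_titles = ['Graph']
    sections = [graph_summary]
    for title, section in optional_sections:
        if len(section) > 0:          # keep only non-empty sections
            section_titles.append(title)
            sections.append(section)
    return sections, section_titles


sections, titles = build_summary_struct(
    graph_summary=[('num_vertices', 10), ('num_edges', 22)],
    optional_sections=[
        ('Results', [('pagerank', 'column of vertices SFrame')]),
        ('Settings', []),             # empty -> dropped
        ('Metrics', [('training_time', 0.12)]),
    ],
)
# titles == ['Graph', 'Results', 'Metrics']; len(sections) == len(titles)
```

The invariant that matters for rendering is that `sections[i]` always pairs with `titles[i]`.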
apple/turicreate | src/unity/python/turicreate/util/_type_checks.py | _raise_error_if_not_of_type | python

def _raise_error_if_not_of_type(arg, expected_type, arg_name=None):
    """
    Check if the input is of the expected type.

    Parameters
    ----------
    arg : Input argument.
    expected_type : A type OR a list of types that the argument is expected
                    to be.
    arg_name : The name of the variable in the function being used. No
               name is assumed if set to None.

    Examples
    --------
    _raise_error_if_not_of_type(sf, str, 'sf')
    _raise_error_if_not_of_type(sf, [str, int], 'sf')
    """
    display_name = "%s " % arg_name if arg_name is not None else "Argument "
    lst_expected_type = [expected_type] if \
        type(expected_type) == type else expected_type
    err_msg = "%smust be of type %s " % (display_name,
        ' or '.join([x.__name__ for x in lst_expected_type]))
    err_msg += "(not %s)." % type(arg).__name__
    if not any(map(lambda x: isinstance(arg, x), lst_expected_type)):
        raise TypeError(err_msg)

74514c3f99e25b46f22c6e02977fe3da69221c2e | https://github.com/apple/turicreate/blob/74514c3f99e25b46f22c6e02977fe3da69221c2e/src/unity/python/turicreate/util/_type_checks.py#L11-L39 | train
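A minimal standalone variant of the checker shows the error-message format; it deviates slightly from the original by using `isinstance(expected_type, type)` instead of `type(expected_type) == type`, so classes with custom metaclasses also count as a single type:

```python
def raise_error_if_not_of_type(arg, expected_type, arg_name=None):
    """Raise TypeError unless arg matches one of the expected types."""
    display_name = "%s " % arg_name if arg_name is not None else "Argument "
    expected = [expected_type] if isinstance(expected_type, type) else expected_type
    err_msg = "%smust be of type %s " % (
        display_name, ' or '.join(t.__name__ for t in expected))
    err_msg += "(not %s)." % type(arg).__name__
    if not any(isinstance(arg, t) for t in expected):
        raise TypeError(err_msg)


raise_error_if_not_of_type("hello", str, 'name')    # passes silently
raise_error_if_not_of_type(3, [str, int], 'count')  # passes: int is allowed
try:
    raise_error_if_not_of_type(3.5, [str, int], 'count')
except TypeError as e:
    message = str(e)
# message == "count must be of type str or int (not float)."
```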
apple/turicreate | src/unity/python/turicreate/toolkits/sound_classifier/vggish_input.py | waveform_to_examples | python

def waveform_to_examples(data, sample_rate):
    """Converts an audio waveform into an array of examples for VGGish.

    Args:
      data: np.array of either one dimension (mono) or two dimensions
        (multi-channel, with the outer dimension representing channels).
        Each sample is generally expected to lie in the range [-1.0, +1.0],
        although this is not required.
      sample_rate: Sample rate of data.

    Returns:
      3-D np.array of shape [num_examples, num_frames, num_bands] which
      represents a sequence of examples, each of which contains a patch of
      log mel spectrogram, covering num_frames frames of audio and num_bands
      mel frequency bands, where the frame length is
      vggish_params.STFT_HOP_LENGTH_SECONDS.
    """
    import resampy

    # Convert to mono.
    if len(data.shape) > 1:
        data = np.mean(data, axis=1)
    # Resample to the rate assumed by VGGish.
    if sample_rate != vggish_params.SAMPLE_RATE:
        data = resampy.resample(data, sample_rate, vggish_params.SAMPLE_RATE)

    # Compute log mel spectrogram features.
    log_mel = mel_features.log_mel_spectrogram(
        data,
        audio_sample_rate=vggish_params.SAMPLE_RATE,
        log_offset=vggish_params.LOG_OFFSET,
        window_length_secs=vggish_params.STFT_WINDOW_LENGTH_SECONDS,
        hop_length_secs=vggish_params.STFT_HOP_LENGTH_SECONDS,
        num_mel_bins=vggish_params.NUM_MEL_BINS,
        lower_edge_hertz=vggish_params.MEL_MIN_HZ,
        upper_edge_hertz=vggish_params.MEL_MAX_HZ)

    # Frame features into examples.
    features_sample_rate = 1.0 / vggish_params.STFT_HOP_LENGTH_SECONDS
    example_window_length = int(round(
        vggish_params.EXAMPLE_WINDOW_SECONDS * features_sample_rate))
    example_hop_length = int(round(
        vggish_params.EXAMPLE_HOP_SECONDS * features_sample_rate))
    log_mel_examples = mel_features.frame(
        log_mel,
        window_length=example_window_length,
        hop_length=example_hop_length)
    return log_mel_examples

74514c3f99e25b46f22c6e02977fe3da69221c2e | https://github.com/apple/turicreate/blob/74514c3f99e25b46f22c6e02977fe3da69221c2e/src/unity/python/turicreate/toolkits/sound_classifier/vggish_input.py#L24-L71 | train
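The final framing step (`mel_features.frame`) slices the spectrogram into fixed-length, possibly overlapping example windows. A pure-Python sketch of that windowing logic (list-based here; the real helper operates on numpy arrays):

```python
def frame(data, window_length, hop_length):
    """Split data into overlapping windows; trailing samples that do not
    fill a whole window are dropped, the usual framing convention."""
    num_frames = 1 + (len(data) - window_length) // hop_length
    return [data[i * hop_length : i * hop_length + window_length]
            for i in range(num_frames)]


windows = frame(list(range(10)), window_length=4, hop_length=2)
# windows == [[0, 1, 2, 3], [2, 3, 4, 5], [4, 5, 6, 7], [6, 7, 8, 9]]
```

With `EXAMPLE_HOP_SECONDS` equal to `EXAMPLE_WINDOW_SECONDS` the windows would not overlap at all; a smaller hop yields overlapping examples, as above.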
apple/turicreate | src/unity/python/turicreate/toolkits/sound_classifier/vggish_input.py | wavfile_to_examples | python

def wavfile_to_examples(wav_file):
    """Convenience wrapper around waveform_to_examples() for a common WAV format.

    Args:
      wav_file: String path to a file, or a file-like object. The file
        is assumed to contain WAV audio data with signed 16-bit PCM samples.

    Returns:
      See waveform_to_examples.
    """
    from scipy.io import wavfile
    sr, wav_data = wavfile.read(wav_file)
    assert wav_data.dtype == np.int16, 'Bad sample type: %r' % wav_data.dtype
    samples = wav_data / 32768.0  # Convert to [-1.0, +1.0]
    return waveform_to_examples(samples, sr)

74514c3f99e25b46f22c6e02977fe3da69221c2e | https://github.com/apple/turicreate/blob/74514c3f99e25b46f22c6e02977fe3da69221c2e/src/unity/python/turicreate/toolkits/sound_classifier/vggish_input.py#L74-L88 | train
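The int16-to-float conversion divides by 32768 so the most negative sample maps exactly to -1.0, while the positive end tops out just below +1.0:

```python
def pcm16_to_float(samples):
    """Scale signed 16-bit PCM integers into [-1.0, +1.0)."""
    return [s / 32768.0 for s in samples]


floats = pcm16_to_float([0, 16384, -32768, 32767])
# floats == [0.0, 0.5, -1.0, 0.999969482421875]
```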
apple/turicreate | deps/src/boost_1_68_0/tools/build/src/build/build_request.py | expand_no_defaults | python

def expand_no_defaults(property_sets):
    """Expand the given build request by combining all property_sets which
    don't specify conflicting non-free features.
    """
    assert is_iterable_typed(property_sets, property_set.PropertySet)
    # First make all features and subfeatures explicit.
    expanded_property_sets = [ps.expand_subfeatures() for ps in property_sets]

    # Now combine all of the expanded property_sets.
    product = __x_product(expanded_property_sets)

    return [property_set.create(p) for p in product]

74514c3f99e25b46f22c6e02977fe3da69221c2e | https://github.com/apple/turicreate/blob/74514c3f99e25b46f22c6e02977fe3da69221c2e/deps/src/boost_1_68_0/tools/build/src/build/build_request.py#L17-L28 | train
apple/turicreate | deps/src/boost_1_68_0/tools/build/src/build/build_request.py | __x_product | python

def __x_product(property_sets):
    """Return the cross-product of all elements of property_sets, less any
    that would contain conflicting values for single-valued features.
    """
    assert is_iterable_typed(property_sets, property_set.PropertySet)
    x_product_seen = set()
    return __x_product_aux(property_sets, x_product_seen)[0]

74514c3f99e25b46f22c6e02977fe3da69221c2e | https://github.com/apple/turicreate/blob/74514c3f99e25b46f22c6e02977fe3da69221c2e/deps/src/boost_1_68_0/tools/build/src/build/build_request.py#L31-L37 | train
apple/turicreate | deps/src/boost_1_68_0/tools/build/src/build/build_request.py | __x_product_aux | python

def __x_product_aux(property_sets, seen_features):
    """Returns non-conflicting combinations of property sets.

    property_sets is a list of PropertySet instances. seen_features is a set
    of Property instances.

    Returns a tuple of:
    - list of lists of Property instances, such that within each list, no two
      Property instances have the same feature, and no Property is for a
      feature in seen_features.
    - set of features we saw in property_sets
    """
    assert is_iterable_typed(property_sets, property_set.PropertySet)
    assert isinstance(seen_features, set)
    if not property_sets:
        return ([], set())

    properties = property_sets[0].all()

    these_features = set()
    for p in property_sets[0].non_free():
        these_features.add(p.feature)

    # Note: the algorithm as implemented here, as in the original Jam code,
    # appears to detect conflicts based on features, not properties. For
    # example, if the command line build request says:
    #
    #    <a>1/<b>1 c<1>/<b>1
    #
    # it will decide that those two property sets conflict, because they both
    # specify a value for 'b', and will not try building "<a>1 <c1> <b1>", but
    # rather two different property sets. This is a topic for future fixing,
    # maybe.
    if these_features & seen_features:
        (inner_result, inner_seen) = __x_product_aux(property_sets[1:], seen_features)
        return (inner_result, inner_seen | these_features)
    else:
        result = []
        (inner_result, inner_seen) = __x_product_aux(property_sets[1:],
                                                     seen_features | these_features)
        if inner_result:
            for inner in inner_result:
                result.append(properties + inner)
        else:
            result.append(properties)

        if inner_seen & these_features:
            # Some of the elements in property_sets[1:] conflict with elements
            # of property_sets[0]. Try again, this time omitting the elements
            # of property_sets[0].
            (inner_result2, inner_seen2) = __x_product_aux(property_sets[1:], seen_features)
            result.extend(inner_result2)

        return (result, inner_seen | these_features)

74514c3f99e25b46f22c6e02977fe3da69221c2e | https://github.com/apple/turicreate/blob/74514c3f99e25b46f22c6e02977fe3da69221c2e/deps/src/boost_1_68_0/tools/build/src/build/build_request.py#L39-L91 | train
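The conflict-avoiding cross product can be sketched with plain `(feature, value)` tuples instead of PropertySet objects. This simplified version treats every feature as single-valued and, like the original, detects conflicts per feature rather than per property:

```python
def x_product(property_sets):
    """Cross-product of property sets, skipping any set whose features
    conflict with features already chosen earlier in the product."""
    def aux(sets, seen):
        if not sets:
            return [], set()
        head = sets[0]
        these = {feat for feat, _ in head}
        if these & seen:
            # head conflicts with an earlier set: drop it on this branch
            inner, inner_seen = aux(sets[1:], seen)
            return inner, inner_seen | these
        result = []
        inner, inner_seen = aux(sets[1:], seen | these)
        if inner:
            result = [list(head) + combo for combo in inner]
        else:
            result = [list(head)]
        if inner_seen & these:
            # some later set conflicted with head: also emit combos without head
            inner2, _ = aux(sets[1:], seen)
            result.extend(inner2)
        return result, inner_seen | these
    return aux(property_sets, set())[0]


combos = x_product([
    [('variant', 'debug')],
    [('variant', 'release')],   # conflicts with the first set
    [('link', 'static')],
])
# Two combinations: debug+static and release+static.
```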
apple/turicreate | deps/src/boost_1_68_0/tools/build/src/build/build_request.py | looks_like_implicit_value | python

def looks_like_implicit_value(v):
    """Returns true if 'v' is either an implicit value, or the part before
    the first '-' symbol is an implicit value."""
    assert isinstance(v, basestring)
    if feature.is_implicit_value(v):
        return 1
    else:
        split = v.split("-")
        if feature.is_implicit_value(split[0]):
            return 1
    return 0

74514c3f99e25b46f22c6e02977fe3da69221c2e | https://github.com/apple/turicreate/blob/74514c3f99e25b46f22c6e02977fe3da69221c2e/deps/src/boost_1_68_0/tools/build/src/build/build_request.py#L95-L106 | train
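A self-contained sketch of the same test, with a hypothetical set of implicit feature values standing in for `feature.is_implicit_value` (the real lookup consults the registered feature definitions):

```python
IMPLICIT_VALUES = {'gcc', 'msvc', 'debug', 'release'}  # illustrative only

def looks_like_implicit_value(v):
    """True if v is an implicit value, or the part before the first '-'
    is one (e.g. 'gcc-4.8' counts because 'gcc' does)."""
    if v in IMPLICIT_VALUES:
        return True
    return v.split('-')[0] in IMPLICIT_VALUES


checks = [looks_like_implicit_value(v) for v in ('gcc', 'gcc-4.8', 'foo')]
# checks == [True, True, False]
```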
apple/turicreate | deps/src/boost_1_68_0/tools/build/src/build/build_request.py | from_command_line | python

def from_command_line(command_line):
    """Takes the command line tokens (such as taken from the ARGV rule)
    and constructs a build request from them. Returns a list of two lists:
    first is the set of targets specified in the command line, and second
    is the set of requested build properties."""
    assert is_iterable_typed(command_line, basestring)
    targets = []
    properties = []

    for e in command_line:
        if e[:1] != "-":
            # A build request spec either has "=" in it, or completely
            # consists of implicit feature values.
            if e.find("=") != -1 or looks_like_implicit_value(e.split("/")[0]):
                properties.append(e)
            elif e:
                targets.append(e)

    return [targets, properties]

74514c3f99e25b46f22c6e02977fe3da69221c2e | https://github.com/apple/turicreate/blob/74514c3f99e25b46f22c6e02977fe3da69221c2e/deps/src/boost_1_68_0/tools/build/src/build/build_request.py#L108-L126 | train
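The target/property split can be exercised with the same heuristics — a token is a property if it contains '=' or begins with an implicit value. Here `is_implicit` is a toy stand-in for the real feature lookup, and option flags (leading '-') are simply skipped:

```python
def from_command_line(tokens, is_implicit):
    """Split command-line tokens into (targets, properties)."""
    targets, properties = [], []
    for tok in tokens:
        if tok.startswith('-'):
            continue                      # option flags are not build requests
        head = tok.split('/')[0]
        if '=' in tok or is_implicit(head.split('-')[0]):
            properties.append(tok)
        elif tok:
            targets.append(tok)
    return targets, properties


implicit = {'debug', 'release', 'gcc'}.__contains__
targets, props = from_command_line(
    ['my_app', 'release', 'toolset=gcc', '-j4', 'gcc-4.8/debug'],
    implicit)
# targets == ['my_app']
# props == ['release', 'toolset=gcc', 'gcc-4.8/debug']
```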
apple/turicreate | deps/src/boost_1_68_0/libs/metaparse/tools/benchmark/generate.py | regex_to_error_msg | python

def regex_to_error_msg(regex):
    """Format a human-readable error message from a regex"""
    return re.sub('([^\\\\])[()]', '\\1', regex) \
        .replace('[ \t]*$', '') \
        .replace('^', '') \
        .replace('$', '') \
        .replace('[ \t]*', ' ') \
        .replace('[ \t]+', ' ') \
        .replace('[0-9]+', 'X') \
        \
        .replace('\\[', '[') \
        .replace('\\]', ']') \
        .replace('\\(', '(') \
        .replace('\\)', ')') \
        .replace('\\.', '.')

74514c3f99e25b46f22c6e02977fe3da69221c2e | https://github.com/apple/turicreate/blob/74514c3f99e25b46f22c6e02977fe3da69221c2e/deps/src/boost_1_68_0/libs/metaparse/tools/benchmark/generate.py#L20-L34 | train
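The function is self-contained, so its behavior is easy to demonstrate: anchors are stripped, whitespace character classes collapse to single spaces, and digit runs become `X` (note these are plain `str.replace` calls on the regex text, not regex matches — only the first `re.sub`, which drops unescaped parentheses, is a real pattern):

```python
import re

def regex_to_error_msg(regex):
    """Format a human-readable error message from a regex (as above)."""
    return re.sub('([^\\\\])[()]', '\\1', regex) \
        .replace('[ \t]*$', '') \
        .replace('^', '') \
        .replace('$', '') \
        .replace('[ \t]*', ' ') \
        .replace('[ \t]+', ' ') \
        .replace('[0-9]+', 'X') \
        .replace('\\[', '[') \
        .replace('\\]', ']') \
        .replace('\\(', '(') \
        .replace('\\)', ')') \
        .replace('\\.', '.')


msg = regex_to_error_msg('^Option[ \t]+[0-9]+$')
# msg == 'Option X'
```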
apple/turicreate | deps/src/boost_1_68_0/libs/metaparse/tools/benchmark/generate.py | random_chars | python

def random_chars(number):
    """Generate random characters"""
    char_map = {
        k: v for k, v in chars.CHARS.iteritems()
        if not format_character(k).startswith('\\x')
    }
    char_num = sum(char_map.values())
    return (
        format_character(nth_char(char_map, random.randint(0, char_num - 1)))
        for _ in xrange(0, number)
    )

74514c3f99e25b46f22c6e02977fe3da69221c2e | https://github.com/apple/turicreate/blob/74514c3f99e25b46f22c6e02977fe3da69221c2e/deps/src/boost_1_68_0/libs/metaparse/tools/benchmark/generate.py#L50-L61 | train
apple/turicreate | deps/src/boost_1_68_0/libs/metaparse/tools/benchmark/generate.py | templates_in | def templates_in(path):
"""Enumerate the templates found in path"""
ext = '.cpp'
return (
Template(f[0:-len(ext)], load_file(os.path.join(path, f)))
for f in os.listdir(path) if f.endswith(ext)
) | python | def templates_in(path):
"""Enumerate the templates found in path"""
ext = '.cpp'
return (
Template(f[0:-len(ext)], load_file(os.path.join(path, f)))
for f in os.listdir(path) if f.endswith(ext)
) | [
"def",
"templates_in",
"(",
"path",
")",
":",
"ext",
"=",
"'.cpp'",
"return",
"(",
"Template",
"(",
"f",
"[",
"0",
":",
"-",
"len",
"(",
"ext",
")",
"]",
",",
"load_file",
"(",
"os",
".",
"path",
".",
"join",
"(",
"path",
",",
"f",
")",
")",
")",
"for",
"f",
"in",
"os",
".",
"listdir",
"(",
"path",
")",
"if",
"f",
".",
"endswith",
"(",
"ext",
")",
")"
] | Enumerate the templates found in path | [
"Enumerate",
"the",
"templates",
"found",
"in",
"path"
] | 74514c3f99e25b46f22c6e02977fe3da69221c2e | https://github.com/apple/turicreate/blob/74514c3f99e25b46f22c6e02977fe3da69221c2e/deps/src/boost_1_68_0/libs/metaparse/tools/benchmark/generate.py#L186-L192 | train |
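`templates_in` lazily yields one `Template` per `.cpp` file in a directory. A self-contained Python 3 sketch, using a `namedtuple` as a stand-in for the `Template` class (defined elsewhere in the original file and not shown here):

```python
import os
import tempfile
from collections import namedtuple

# Stand-in for the benchmark generator's Template class.
Template = namedtuple('Template', ['name', 'content'])

def load_file(path):
    """Mirror of the original load_file helper."""
    with open(path) as in_f:
        return in_f.read()

def templates_in(path):
    """Enumerate the templates found in path."""
    ext = '.cpp'
    return (
        Template(f[0:-len(ext)], load_file(os.path.join(path, f)))
        for f in os.listdir(path) if f.endswith(ext)
    )

with tempfile.TemporaryDirectory() as src_dir:
    with open(os.path.join(src_dir, 'string.cpp'), 'w') as out_f:
        out_f.write('// benchmark template body')
    with open(os.path.join(src_dir, 'notes.txt'), 'w') as out_f:
        out_f.write('ignored: wrong extension')
    # The generator reads lazily, so consume it before the directory vanishes.
    found = list(templates_in(src_dir))
```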
apple/turicreate | deps/src/boost_1_68_0/libs/metaparse/tools/benchmark/generate.py | nth_char | def nth_char(char_map, index):
"""Returns the nth character of a character->occurrence map"""
for char in char_map:
if index < char_map[char]:
return char
index = index - char_map[char]
return None | python | def nth_char(char_map, index):
"""Returns the nth character of a character->occurrence map"""
for char in char_map:
if index < char_map[char]:
return char
index = index - char_map[char]
return None | [
"def",
"nth_char",
"(",
"char_map",
",",
"index",
")",
":",
"for",
"char",
"in",
"char_map",
":",
"if",
"index",
"<",
"char_map",
"[",
"char",
"]",
":",
"return",
"char",
"index",
"=",
"index",
"-",
"char_map",
"[",
"char",
"]",
"return",
"None"
] | Returns the nth character of a character->occurrence map | [
"Returns",
"the",
"nth",
"character",
"of",
"a",
"character",
"-",
">",
"occurrence",
"map"
] | 74514c3f99e25b46f22c6e02977fe3da69221c2e | https://github.com/apple/turicreate/blob/74514c3f99e25b46f22c6e02977fe3da69221c2e/deps/src/boost_1_68_0/libs/metaparse/tools/benchmark/generate.py#L195-L201 | train |
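`nth_char` treats a character→occurrence map as a flattened multiset and indexes into it; iteration order matters, which is well defined for Python 3.7+ dicts (insertion order). A small runnable sketch with a toy map:

```python
def nth_char(char_map, index):
    """Return the nth character of a character->occurrence map."""
    for char in char_map:
        if index < char_map[char]:
            return char
        index = index - char_map[char]
    return None

# 'a' occupies indices 0-1, 'b' occupies indices 2-4.
char_map = {'a': 2, 'b': 3}
picked = [nth_char(char_map, i) for i in range(5)]
out_of_range = nth_char(char_map, 5)
```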
apple/turicreate | deps/src/boost_1_68_0/libs/metaparse/tools/benchmark/generate.py | format_character | def format_character(char):
"""Returns the C-formatting of the character"""
if \
char in string.ascii_letters \
or char in string.digits \
or char in [
'_', '.', ':', ';', ' ', '!', '?', '+', '-', '/', '=', '<',
'>', '$', '(', ')', '@', '~', '`', '|', '#', '[', ']', '{',
'}', '&', '*', '^', '%']:
return char
elif char in ['"', '\'', '\\']:
return '\\{0}'.format(char)
elif char == '\n':
return '\\n'
elif char == '\r':
return '\\r'
elif char == '\t':
return '\\t'
else:
return '\\x{:02x}'.format(ord(char)) | python | def format_character(char):
"""Returns the C-formatting of the character"""
if \
char in string.ascii_letters \
or char in string.digits \
or char in [
'_', '.', ':', ';', ' ', '!', '?', '+', '-', '/', '=', '<',
'>', '$', '(', ')', '@', '~', '`', '|', '#', '[', ']', '{',
'}', '&', '*', '^', '%']:
return char
elif char in ['"', '\'', '\\']:
return '\\{0}'.format(char)
elif char == '\n':
return '\\n'
elif char == '\r':
return '\\r'
elif char == '\t':
return '\\t'
else:
return '\\x{:02x}'.format(ord(char)) | [
"def",
"format_character",
"(",
"char",
")",
":",
"if",
"char",
"in",
"string",
".",
"ascii_letters",
"or",
"char",
"in",
"string",
".",
"digits",
"or",
"char",
"in",
"[",
"'_'",
",",
"'.'",
",",
"':'",
",",
"';'",
",",
"' '",
",",
"'!'",
",",
"'?'",
",",
"'+'",
",",
"'-'",
",",
"'/'",
",",
"'='",
",",
"'<'",
",",
"'>'",
",",
"'$'",
",",
"'('",
",",
"')'",
",",
"'@'",
",",
"'~'",
",",
"'`'",
",",
"'|'",
",",
"'#'",
",",
"'['",
",",
"']'",
",",
"'{'",
",",
"'}'",
",",
"'&'",
",",
"'*'",
",",
"'^'",
",",
"'%'",
"]",
":",
"return",
"char",
"elif",
"char",
"in",
"[",
"'\"'",
",",
"'\\''",
",",
"'\\\\'",
"]",
":",
"return",
"'\\\\{0}'",
".",
"format",
"(",
"char",
")",
"elif",
"char",
"==",
"'\\n'",
":",
"return",
"'\\\\n'",
"elif",
"char",
"==",
"'\\r'",
":",
"return",
"'\\\\r'",
"elif",
"char",
"==",
"'\\t'",
":",
"return",
"'\\\\t'",
"else",
":",
"return",
"'\\\\x{:02x}'",
".",
"format",
"(",
"ord",
"(",
"char",
")",
")"
] | Returns the C-formatting of the character | [
"Returns",
"the",
"C",
"-",
"formatting",
"of",
"the",
"character"
] | 74514c3f99e25b46f22c6e02977fe3da69221c2e | https://github.com/apple/turicreate/blob/74514c3f99e25b46f22c6e02977fe3da69221c2e/deps/src/boost_1_68_0/libs/metaparse/tools/benchmark/generate.py#L204-L223 | train |
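`format_character` maps a character to its spelling in C source. A direct Python 3 port of the logic in the record above:

```python
import string

def format_character(char):
    """Return the C-formatting of the character."""
    if (char in string.ascii_letters
            or char in string.digits
            or char in [
                '_', '.', ':', ';', ' ', '!', '?', '+', '-', '/', '=', '<',
                '>', '$', '(', ')', '@', '~', '`', '|', '#', '[', ']', '{',
                '}', '&', '*', '^', '%']):
        return char
    elif char in ['"', '\'', '\\']:
        return '\\{0}'.format(char)
    elif char == '\n':
        return '\\n'
    elif char == '\r':
        return '\\r'
    elif char == '\t':
        return '\\t'
    else:
        # Anything else becomes a hex escape.
        return '\\x{:02x}'.format(ord(char))
```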
apple/turicreate | deps/src/boost_1_68_0/libs/metaparse/tools/benchmark/generate.py | write_file | def write_file(filename, content):
"""Create the file with the given content"""
print 'Generating {0}'.format(filename)
with open(filename, 'wb') as out_f:
out_f.write(content) | python | def write_file(filename, content):
"""Create the file with the given content"""
print 'Generating {0}'.format(filename)
with open(filename, 'wb') as out_f:
out_f.write(content) | [
"def",
"write_file",
"(",
"filename",
",",
"content",
")",
":",
"print",
"'Generating {0}'",
".",
"format",
"(",
"filename",
")",
"with",
"open",
"(",
"filename",
",",
"'wb'",
")",
"as",
"out_f",
":",
"out_f",
".",
"write",
"(",
"content",
")"
] | Create the file with the given content | [
"Create",
"the",
"file",
"with",
"the",
"given",
"content"
] | 74514c3f99e25b46f22c6e02977fe3da69221c2e | https://github.com/apple/turicreate/blob/74514c3f99e25b46f22c6e02977fe3da69221c2e/deps/src/boost_1_68_0/libs/metaparse/tools/benchmark/generate.py#L226-L230 | train |
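`write_file` is Python 2 (bare `print` statement; `'wb'` paired with a `str` payload). Under Python 3 the same behavior needs `print()` and text mode, since writing a `str` to a binary-mode file raises `TypeError`:

```python
import os
import tempfile

def write_file(filename, content):
    """Create the file with the given content (Python 3: text mode, print())."""
    print('Generating {0}'.format(filename))
    with open(filename, 'w') as out_f:
        out_f.write(content)

with tempfile.TemporaryDirectory() as out_dir:
    target = os.path.join(out_dir, 'example.cpp')
    write_file(target, 'int main() {}\n')
    with open(target) as in_f:
        round_tripped = in_f.read()
```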
apple/turicreate | deps/src/boost_1_68_0/libs/metaparse/tools/benchmark/generate.py | out_filename | def out_filename(template, n_val, mode):
"""Determine the output filename"""
return '{0}_{1}_{2}.cpp'.format(template.name, n_val, mode.identifier) | python | def out_filename(template, n_val, mode):
"""Determine the output filename"""
return '{0}_{1}_{2}.cpp'.format(template.name, n_val, mode.identifier) | [
"def",
"out_filename",
"(",
"template",
",",
"n_val",
",",
"mode",
")",
":",
"return",
"'{0}_{1}_{2}.cpp'",
".",
"format",
"(",
"template",
".",
"name",
",",
"n_val",
",",
"mode",
".",
"identifier",
")"
] | Determine the output filename | [
"Determine",
"the",
"output",
"filename"
] | 74514c3f99e25b46f22c6e02977fe3da69221c2e | https://github.com/apple/turicreate/blob/74514c3f99e25b46f22c6e02977fe3da69221c2e/deps/src/boost_1_68_0/libs/metaparse/tools/benchmark/generate.py#L233-L235 | train |
apple/turicreate | deps/src/boost_1_68_0/libs/metaparse/tools/benchmark/generate.py | main | def main():
"""The main function of the script"""
desc = 'Generate files to benchmark'
parser = argparse.ArgumentParser(description=desc)
parser.add_argument(
'--src',
dest='src_dir',
default='src',
help='The directory containing the templates'
)
parser.add_argument(
'--out',
dest='out_dir',
default='generated',
help='The output directory'
)
parser.add_argument(
'--seed',
dest='seed',
default='13',
help='The random seed (to ensure consistent regeneration)'
)
args = parser.parse_args()
random.seed(int(args.seed))
mkdir_p(args.out_dir)
for template in templates_in(args.src_dir):
modes = template.modes()
n_range = template.range()
for n_value in n_range:
base = template.instantiate(n_value)
for mode in modes:
write_file(
os.path.join(
args.out_dir,
out_filename(template, n_value, mode)
),
mode.convert_from(base)
)
write_file(
os.path.join(args.out_dir, '{0}.json'.format(template.name)),
json.dumps({
'files': {
n: {
m.identifier: out_filename(template, n, m)
for m in modes
} for n in n_range
},
'name': template.name,
'x_axis_label': template.property('x_axis_label'),
'desc': template.property('desc'),
'modes': {m.identifier: m.description() for m in modes}
})
) | python | def main():
"""The main function of the script"""
desc = 'Generate files to benchmark'
parser = argparse.ArgumentParser(description=desc)
parser.add_argument(
'--src',
dest='src_dir',
default='src',
help='The directory containing the templates'
)
parser.add_argument(
'--out',
dest='out_dir',
default='generated',
help='The output directory'
)
parser.add_argument(
'--seed',
dest='seed',
default='13',
help='The random seed (to ensure consistent regeneration)'
)
args = parser.parse_args()
random.seed(int(args.seed))
mkdir_p(args.out_dir)
for template in templates_in(args.src_dir):
modes = template.modes()
n_range = template.range()
for n_value in n_range:
base = template.instantiate(n_value)
for mode in modes:
write_file(
os.path.join(
args.out_dir,
out_filename(template, n_value, mode)
),
mode.convert_from(base)
)
write_file(
os.path.join(args.out_dir, '{0}.json'.format(template.name)),
json.dumps({
'files': {
n: {
m.identifier: out_filename(template, n, m)
for m in modes
} for n in n_range
},
'name': template.name,
'x_axis_label': template.property('x_axis_label'),
'desc': template.property('desc'),
'modes': {m.identifier: m.description() for m in modes}
})
) | [
"def",
"main",
"(",
")",
":",
"desc",
"=",
"'Generate files to benchmark'",
"parser",
"=",
"argparse",
".",
"ArgumentParser",
"(",
"description",
"=",
"desc",
")",
"parser",
".",
"add_argument",
"(",
"'--src'",
",",
"dest",
"=",
"'src_dir'",
",",
"default",
"=",
"'src'",
",",
"help",
"=",
"'The directory containing the templates'",
")",
"parser",
".",
"add_argument",
"(",
"'--out'",
",",
"dest",
"=",
"'out_dir'",
",",
"default",
"=",
"'generated'",
",",
"help",
"=",
"'The output directory'",
")",
"parser",
".",
"add_argument",
"(",
"'--seed'",
",",
"dest",
"=",
"'seed'",
",",
"default",
"=",
"'13'",
",",
"help",
"=",
"'The random seed (to ensure consistent regeneration)'",
")",
"args",
"=",
"parser",
".",
"parse_args",
"(",
")",
"random",
".",
"seed",
"(",
"int",
"(",
"args",
".",
"seed",
")",
")",
"mkdir_p",
"(",
"args",
".",
"out_dir",
")",
"for",
"template",
"in",
"templates_in",
"(",
"args",
".",
"src_dir",
")",
":",
"modes",
"=",
"template",
".",
"modes",
"(",
")",
"n_range",
"=",
"template",
".",
"range",
"(",
")",
"for",
"n_value",
"in",
"n_range",
":",
"base",
"=",
"template",
".",
"instantiate",
"(",
"n_value",
")",
"for",
"mode",
"in",
"modes",
":",
"write_file",
"(",
"os",
".",
"path",
".",
"join",
"(",
"args",
".",
"out_dir",
",",
"out_filename",
"(",
"template",
",",
"n_value",
",",
"mode",
")",
")",
",",
"mode",
".",
"convert_from",
"(",
"base",
")",
")",
"write_file",
"(",
"os",
".",
"path",
".",
"join",
"(",
"args",
".",
"out_dir",
",",
"'{0}.json'",
".",
"format",
"(",
"template",
".",
"name",
")",
")",
",",
"json",
".",
"dumps",
"(",
"{",
"'files'",
":",
"{",
"n",
":",
"{",
"m",
".",
"identifier",
":",
"out_filename",
"(",
"template",
",",
"n",
",",
"m",
")",
"for",
"m",
"in",
"modes",
"}",
"for",
"n",
"in",
"n_range",
"}",
",",
"'name'",
":",
"template",
".",
"name",
",",
"'x_axis_label'",
":",
"template",
".",
"property",
"(",
"'x_axis_label'",
")",
",",
"'desc'",
":",
"template",
".",
"property",
"(",
"'desc'",
")",
",",
"'modes'",
":",
"{",
"m",
".",
"identifier",
":",
"m",
".",
"description",
"(",
")",
"for",
"m",
"in",
"modes",
"}",
"}",
")",
")"
] | The main function of the script | [
"The",
"main",
"function",
"of",
"the",
"script"
] | 74514c3f99e25b46f22c6e02977fe3da69221c2e | https://github.com/apple/turicreate/blob/74514c3f99e25b46f22c6e02977fe3da69221c2e/deps/src/boost_1_68_0/libs/metaparse/tools/benchmark/generate.py#L238-L295 | train |
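`main` writes one `<template>.json` manifest per template, mapping each N and mode to a generated filename. A minimal sketch of that manifest structure with hypothetical values (`string`, the `bmp`/`man` identifiers, and the N range are placeholders, mirroring the modes defined earlier in this file); note `json.dumps` coerces the integer N keys to strings:

```python
import json

def out_filename(template_name, n_val, mode_id):
    """Mirror of out_filename, taking plain strings instead of objects."""
    return '{0}_{1}_{2}.cpp'.format(template_name, n_val, mode_id)

modes = ['bmp', 'man']
n_range = range(10, 40, 10)

manifest = json.dumps({
    'files': {
        n: {m: out_filename('string', n, m) for m in modes}
        for n in n_range
    },
    'name': 'string',
})
decoded = json.loads(manifest)
```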
apple/turicreate | deps/src/boost_1_68_0/libs/metaparse/tools/benchmark/generate.py | Mode.convert_from | def convert_from(self, base):
"""Convert a BOOST_METAPARSE_STRING mode document into one with
this mode"""
if self.identifier == 'bmp':
return base
elif self.identifier == 'man':
result = []
prefix = 'BOOST_METAPARSE_STRING("'
while True:
bmp_at = base.find(prefix)
if bmp_at == -1:
return ''.join(result) + base
else:
result.append(
base[0:bmp_at] + '::boost::metaparse::string<'
)
new_base = ''
was_backslash = False
comma = ''
for i in xrange(bmp_at + len(prefix), len(base)):
if was_backslash:
result.append(
'{0}\'\\{1}\''.format(comma, base[i])
)
was_backslash = False
comma = ','
elif base[i] == '"':
new_base = base[i+2:]
break
elif base[i] == '\\':
was_backslash = True
else:
result.append('{0}\'{1}\''.format(comma, base[i]))
comma = ','
base = new_base
result.append('>') | python | def convert_from(self, base):
"""Convert a BOOST_METAPARSE_STRING mode document into one with
this mode"""
if self.identifier == 'bmp':
return base
elif self.identifier == 'man':
result = []
prefix = 'BOOST_METAPARSE_STRING("'
while True:
bmp_at = base.find(prefix)
if bmp_at == -1:
return ''.join(result) + base
else:
result.append(
base[0:bmp_at] + '::boost::metaparse::string<'
)
new_base = ''
was_backslash = False
comma = ''
for i in xrange(bmp_at + len(prefix), len(base)):
if was_backslash:
result.append(
'{0}\'\\{1}\''.format(comma, base[i])
)
was_backslash = False
comma = ','
elif base[i] == '"':
new_base = base[i+2:]
break
elif base[i] == '\\':
was_backslash = True
else:
result.append('{0}\'{1}\''.format(comma, base[i]))
comma = ','
base = new_base
result.append('>') | [
"def",
"convert_from",
"(",
"self",
",",
"base",
")",
":",
"if",
"self",
".",
"identifier",
"==",
"'bmp'",
":",
"return",
"base",
"elif",
"self",
".",
"identifier",
"==",
"'man'",
":",
"result",
"=",
"[",
"]",
"prefix",
"=",
"'BOOST_METAPARSE_STRING(\"'",
"while",
"True",
":",
"bmp_at",
"=",
"base",
".",
"find",
"(",
"prefix",
")",
"if",
"bmp_at",
"==",
"-",
"1",
":",
"return",
"''",
".",
"join",
"(",
"result",
")",
"+",
"base",
"else",
":",
"result",
".",
"append",
"(",
"base",
"[",
"0",
":",
"bmp_at",
"]",
"+",
"'::boost::metaparse::string<'",
")",
"new_base",
"=",
"''",
"was_backslash",
"=",
"False",
"comma",
"=",
"''",
"for",
"i",
"in",
"xrange",
"(",
"bmp_at",
"+",
"len",
"(",
"prefix",
")",
",",
"len",
"(",
"base",
")",
")",
":",
"if",
"was_backslash",
":",
"result",
".",
"append",
"(",
"'{0}\\'\\\\{1}\\''",
".",
"format",
"(",
"comma",
",",
"base",
"[",
"i",
"]",
")",
")",
"was_backslash",
"=",
"False",
"comma",
"=",
"','",
"elif",
"base",
"[",
"i",
"]",
"==",
"'\"'",
":",
"new_base",
"=",
"base",
"[",
"i",
"+",
"2",
":",
"]",
"break",
"elif",
"base",
"[",
"i",
"]",
"==",
"'\\\\'",
":",
"was_backslash",
"=",
"True",
"else",
":",
"result",
".",
"append",
"(",
"'{0}\\'{1}\\''",
".",
"format",
"(",
"comma",
",",
"base",
"[",
"i",
"]",
")",
")",
"comma",
"=",
"','",
"base",
"=",
"new_base",
"result",
".",
"append",
"(",
"'>'",
")"
] | Convert a BOOST_METAPARSE_STRING mode document into one with
this mode | [
"Convert",
"a",
"BOOST_METAPARSE_STRING",
"mode",
"document",
"into",
"one",
"with",
"this",
"mode"
] | 74514c3f99e25b46f22c6e02977fe3da69221c2e | https://github.com/apple/turicreate/blob/74514c3f99e25b46f22c6e02977fe3da69221c2e/deps/src/boost_1_68_0/libs/metaparse/tools/benchmark/generate.py#L89-L124 | train |
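The `man` branch of `Mode.convert_from` rewrites each `BOOST_METAPARSE_STRING("...")` occurrence into an explicit `::boost::metaparse::string<'c','h',...>` instantiation. A Python 3 port of just that branch (`xrange` becomes `range`):

```python
def convert_to_manual(base):
    """Rewrite BOOST_METAPARSE_STRING("...") into boost::metaparse::string<...>."""
    result = []
    prefix = 'BOOST_METAPARSE_STRING("'
    while True:
        bmp_at = base.find(prefix)
        if bmp_at == -1:
            return ''.join(result) + base
        result.append(base[0:bmp_at] + '::boost::metaparse::string<')
        new_base = ''
        was_backslash = False
        comma = ''
        for i in range(bmp_at + len(prefix), len(base)):
            if was_backslash:
                result.append('{0}\'\\{1}\''.format(comma, base[i]))
                was_backslash = False
                comma = ','
            elif base[i] == '"':
                new_base = base[i + 2:]  # skip past the closing '")'
                break
            elif base[i] == '\\':
                was_backslash = True
            else:
                result.append('{0}\'{1}\''.format(comma, base[i]))
                comma = ','
        base = new_base
        result.append('>')

converted = convert_to_manual('typedef BOOST_METAPARSE_STRING("ab") s;')
```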
apple/turicreate | deps/src/boost_1_68_0/libs/metaparse/tools/benchmark/generate.py | Template.instantiate | def instantiate(self, value_of_n):
"""Instantiates the template"""
template = Cheetah.Template.Template(
self.content,
searchList={'n': value_of_n}
)
template.random_string = random_string
return str(template) | python | def instantiate(self, value_of_n):
"""Instantiates the template"""
template = Cheetah.Template.Template(
self.content,
searchList={'n': value_of_n}
)
template.random_string = random_string
return str(template) | [
"def",
"instantiate",
"(",
"self",
",",
"value_of_n",
")",
":",
"template",
"=",
"Cheetah",
".",
"Template",
".",
"Template",
"(",
"self",
".",
"content",
",",
"searchList",
"=",
"{",
"'n'",
":",
"value_of_n",
"}",
")",
"template",
".",
"random_string",
"=",
"random_string",
"return",
"str",
"(",
"template",
")"
] | Instantiates the template | [
"Instantiates",
"the",
"template"
] | 74514c3f99e25b46f22c6e02977fe3da69221c2e | https://github.com/apple/turicreate/blob/74514c3f99e25b46f22c6e02977fe3da69221c2e/deps/src/boost_1_68_0/libs/metaparse/tools/benchmark/generate.py#L134-L141 | train |
apple/turicreate | deps/src/boost_1_68_0/libs/metaparse/tools/benchmark/generate.py | Template.range | def range(self):
"""Returns the range for N"""
match = self._match(in_comment(
'n[ \t]+in[ \t]*\\[([0-9]+)\\.\\.([0-9]+)\\),[ \t]+'
'step[ \t]+([0-9]+)'
))
return range(
int(match.group(1)),
int(match.group(2)),
int(match.group(3))
) | python | def range(self):
"""Returns the range for N"""
match = self._match(in_comment(
'n[ \t]+in[ \t]*\\[([0-9]+)\\.\\.([0-9]+)\\),[ \t]+'
'step[ \t]+([0-9]+)'
))
return range(
int(match.group(1)),
int(match.group(2)),
int(match.group(3))
) | [
"def",
"range",
"(",
"self",
")",
":",
"match",
"=",
"self",
".",
"_match",
"(",
"in_comment",
"(",
"'n[ \\t]+in[ \\t]*\\\\[([0-9]+)\\\\.\\\\.([0-9]+)\\\\),[ \\t]+'",
"'step[ \\t]+([0-9]+)'",
")",
")",
"return",
"range",
"(",
"int",
"(",
"match",
".",
"group",
"(",
"1",
")",
")",
",",
"int",
"(",
"match",
".",
"group",
"(",
"2",
")",
")",
",",
"int",
"(",
"match",
".",
"group",
"(",
"3",
")",
")",
")"
] | Returns the range for N | [
"Returns",
"the",
"range",
"for",
"N"
] | 74514c3f99e25b46f22c6e02977fe3da69221c2e | https://github.com/apple/turicreate/blob/74514c3f99e25b46f22c6e02977fe3da69221c2e/deps/src/boost_1_68_0/libs/metaparse/tools/benchmark/generate.py#L143-L153 | train |
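`Template.range` parses an annotation like `n in [10..50), step 10` out of a comment line (the `in_comment` wrapper is defined elsewhere in the file and not shown). The core regex can be exercised directly:

```python
import re

# Same pattern as in Template.range, without the comment wrapper.
pattern = re.compile(
    'n[ \t]+in[ \t]*\\[([0-9]+)\\.\\.([0-9]+)\\),[ \t]+'
    'step[ \t]+([0-9]+)'
)
match = pattern.match('n in [10..50), step 10')
n_values = list(range(int(match.group(1)),
                      int(match.group(2)),
                      int(match.group(3))))
```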
apple/turicreate | deps/src/boost_1_68_0/libs/metaparse/tools/benchmark/generate.py | Template._match | def _match(self, regex):
"""Find the first line matching regex and return the match object"""
cregex = re.compile(regex)
for line in self.content.splitlines():
match = cregex.match(line)
if match:
return match
raise Exception('No "{0}" line in {1}.cpp'.format(
regex_to_error_msg(regex),
self.name
)) | python | def _match(self, regex):
"""Find the first line matching regex and return the match object"""
cregex = re.compile(regex)
for line in self.content.splitlines():
match = cregex.match(line)
if match:
return match
raise Exception('No "{0}" line in {1}.cpp'.format(
regex_to_error_msg(regex),
self.name
)) | [
"def",
"_match",
"(",
"self",
",",
"regex",
")",
":",
"cregex",
"=",
"re",
".",
"compile",
"(",
"regex",
")",
"for",
"line",
"in",
"self",
".",
"content",
".",
"splitlines",
"(",
")",
":",
"match",
"=",
"cregex",
".",
"match",
"(",
"line",
")",
"if",
"match",
":",
"return",
"match",
"raise",
"Exception",
"(",
"'No \"{0}\" line in {1}.cpp'",
".",
"format",
"(",
"regex_to_error_msg",
"(",
"regex",
")",
",",
"self",
".",
"name",
")",
")"
] | Find the first line matching regex and return the match object | [
"Find",
"the",
"first",
"line",
"matching",
"regex",
"and",
"return",
"the",
"match",
"object"
] | 74514c3f99e25b46f22c6e02977fe3da69221c2e | https://github.com/apple/turicreate/blob/74514c3f99e25b46f22c6e02977fe3da69221c2e/deps/src/boost_1_68_0/libs/metaparse/tools/benchmark/generate.py#L163-L173 | train |
apple/turicreate | src/external/coremltools_wrap/coremltools/coremltools/models/pipeline.py | Pipeline.add_model | def add_model(self, spec):
"""
Add a protobuf spec or :py:class:`models.MLModel` instance to the pipeline.
All input features of this model must either match the input_features
of the pipeline, or match the outputs of a previous model.
Parameters
----------
spec: [MLModel, Model_pb2]
A protobuf spec or MLModel instance containing a model.
"""
if isinstance(spec, _model.MLModel):
spec = spec._spec
pipeline = self.spec.pipeline
step_spec = pipeline.models.add()
step_spec.CopyFrom(spec) | python | def add_model(self, spec):
"""
Add a protobuf spec or :py:class:`models.MLModel` instance to the pipeline.
All input features of this model must either match the input_features
of the pipeline, or match the outputs of a previous model.
Parameters
----------
spec: [MLModel, Model_pb2]
A protobuf spec or MLModel instance containing a model.
"""
if isinstance(spec, _model.MLModel):
spec = spec._spec
pipeline = self.spec.pipeline
step_spec = pipeline.models.add()
step_spec.CopyFrom(spec) | [
"def",
"add_model",
"(",
"self",
",",
"spec",
")",
":",
"if",
"isinstance",
"(",
"spec",
",",
"_model",
".",
"MLModel",
")",
":",
"spec",
"=",
"spec",
".",
"_spec",
"pipeline",
"=",
"self",
".",
"spec",
".",
"pipeline",
"step_spec",
"=",
"pipeline",
".",
"models",
".",
"add",
"(",
")",
"step_spec",
".",
"CopyFrom",
"(",
"spec",
")"
] | Add a protobuf spec or :py:class:`models.MLModel` instance to the pipeline.
All input features of this model must either match the input_features
of the pipeline, or match the outputs of a previous model.
Parameters
----------
spec: [MLModel, Model_pb2]
A protobuf spec or MLModel instance containing a model. | [
"Add",
"a",
"protobuf",
"spec",
"or",
":",
"py",
":",
"class",
":",
"models",
".",
"MLModel",
"instance",
"to",
"the",
"pipeline",
"."
] | 74514c3f99e25b46f22c6e02977fe3da69221c2e | https://github.com/apple/turicreate/blob/74514c3f99e25b46f22c6e02977fe3da69221c2e/src/external/coremltools_wrap/coremltools/coremltools/models/pipeline.py#L61-L79 | train |
apple/turicreate | src/external/coremltools_wrap/coremltools/deps/protobuf/python/setup.py | GetVersion | def GetVersion():
"""Gets the version from google/protobuf/__init__.py
Do not import google.protobuf.__init__ directly, because an installed
protobuf library may be loaded instead."""
with open(os.path.join('google', 'protobuf', '__init__.py')) as version_file:
exec(version_file.read(), globals())
return __version__ | python | def GetVersion():
"""Gets the version from google/protobuf/__init__.py
Do not import google.protobuf.__init__ directly, because an installed
protobuf library may be loaded instead."""
with open(os.path.join('google', 'protobuf', '__init__.py')) as version_file:
exec(version_file.read(), globals())
return __version__ | [
"def",
"GetVersion",
"(",
")",
":",
"with",
"open",
"(",
"os",
".",
"path",
".",
"join",
"(",
"'google'",
",",
"'protobuf'",
",",
"'__init__.py'",
")",
")",
"as",
"version_file",
":",
"exec",
"(",
"version_file",
".",
"read",
"(",
")",
",",
"globals",
"(",
")",
")",
"return",
"__version__"
] | Gets the version from google/protobuf/__init__.py
Do not import google.protobuf.__init__ directly, because an installed
protobuf library may be loaded instead. | [
"Gets",
"the",
"version",
"from",
"google",
"/",
"protobuf",
"/",
"__init__",
".",
"py"
] | 74514c3f99e25b46f22c6e02977fe3da69221c2e | https://github.com/apple/turicreate/blob/74514c3f99e25b46f22c6e02977fe3da69221c2e/src/external/coremltools_wrap/coremltools/deps/protobuf/python/setup.py#L39-L47 | train |
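`GetVersion` avoids importing an installed `google.protobuf` by `exec`ing the source `__init__.py`. The record's version passes `globals()` as the exec namespace, which injects names into the caller's module; a slightly safer sketch uses a private dict (behavior otherwise identical):

```python
import os
import tempfile

def get_version(init_py_path):
    """Read __version__ from a package __init__.py without importing it."""
    scope = {}
    with open(init_py_path) as version_file:
        exec(version_file.read(), scope)
    return scope['__version__']

with tempfile.TemporaryDirectory() as pkg_dir:
    init_path = os.path.join(pkg_dir, '__init__.py')
    with open(init_path, 'w') as out_f:
        out_f.write("__version__ = '3.6.1'\n")
    version = get_version(init_path)
```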
apple/turicreate | src/external/coremltools_wrap/coremltools/deps/protobuf/python/setup.py | generate_proto | def generate_proto(source, require = True):
"""Invokes the Protocol Compiler to generate a _pb2.py from the given
.proto file. Does nothing if the output already exists and is newer than
the input."""
if not require and not os.path.exists(source):
return
output = source.replace(".proto", "_pb2.py").replace("../src/", "")
if (not os.path.exists(output) or
(os.path.exists(source) and
os.path.getmtime(source) > os.path.getmtime(output))):
print("Generating %s..." % output)
if not os.path.exists(source):
sys.stderr.write("Can't find required file: %s\n" % source)
sys.exit(-1)
if protoc is None:
sys.stderr.write(
"protoc is not installed nor found in ../src. Please compile it "
"or install the binary package.\n")
sys.exit(-1)
protoc_command = [ protoc, "-I../src", "-I.", "--python_out=.", source ]
if subprocess.call(protoc_command) != 0:
sys.exit(-1) | python | def generate_proto(source, require = True):
"""Invokes the Protocol Compiler to generate a _pb2.py from the given
.proto file. Does nothing if the output already exists and is newer than
the input."""
if not require and not os.path.exists(source):
return
output = source.replace(".proto", "_pb2.py").replace("../src/", "")
if (not os.path.exists(output) or
(os.path.exists(source) and
os.path.getmtime(source) > os.path.getmtime(output))):
print("Generating %s..." % output)
if not os.path.exists(source):
sys.stderr.write("Can't find required file: %s\n" % source)
sys.exit(-1)
if protoc is None:
sys.stderr.write(
"protoc is not installed nor found in ../src. Please compile it "
"or install the binary package.\n")
sys.exit(-1)
protoc_command = [ protoc, "-I../src", "-I.", "--python_out=.", source ]
if subprocess.call(protoc_command) != 0:
sys.exit(-1) | [
"def",
"generate_proto",
"(",
"source",
",",
"require",
"=",
"True",
")",
":",
"if",
"not",
"require",
"and",
"not",
"os",
".",
"path",
".",
"exists",
"(",
"source",
")",
":",
"return",
"output",
"=",
"source",
".",
"replace",
"(",
"\".proto\"",
",",
"\"_pb2.py\"",
")",
".",
"replace",
"(",
"\"../src/\"",
",",
"\"\"",
")",
"if",
"(",
"not",
"os",
".",
"path",
".",
"exists",
"(",
"output",
")",
"or",
"(",
"os",
".",
"path",
".",
"exists",
"(",
"source",
")",
"and",
"os",
".",
"path",
".",
"getmtime",
"(",
"source",
")",
">",
"os",
".",
"path",
".",
"getmtime",
"(",
"output",
")",
")",
")",
":",
"print",
"(",
"\"Generating %s...\"",
"%",
"output",
")",
"if",
"not",
"os",
".",
"path",
".",
"exists",
"(",
"source",
")",
":",
"sys",
".",
"stderr",
".",
"write",
"(",
"\"Can't find required file: %s\\n\"",
"%",
"source",
")",
"sys",
".",
"exit",
"(",
"-",
"1",
")",
"if",
"protoc",
"is",
"None",
":",
"sys",
".",
"stderr",
".",
"write",
"(",
"\"protoc is not installed nor found in ../src. Please compile it \"",
"\"or install the binary package.\\n\"",
")",
"sys",
".",
"exit",
"(",
"-",
"1",
")",
"protoc_command",
"=",
"[",
"protoc",
",",
"\"-I../src\"",
",",
"\"-I.\"",
",",
"\"--python_out=.\"",
",",
"source",
"]",
"if",
"subprocess",
".",
"call",
"(",
"protoc_command",
")",
"!=",
"0",
":",
"sys",
".",
"exit",
"(",
"-",
"1",
")"
] | Invokes the Protocol Compiler to generate a _pb2.py from the given
.proto file. Does nothing if the output already exists and is newer than
the input. | [
"Invokes",
"the",
"Protocol",
"Compiler",
"to",
"generate",
"a",
"_pb2",
".",
"py",
"from",
"the",
"given",
".",
"proto",
"file",
".",
"Does",
"nothing",
"if",
"the",
"output",
"already",
"exists",
"and",
"is",
"newer",
"than",
"the",
"input",
"."
] | 74514c3f99e25b46f22c6e02977fe3da69221c2e | https://github.com/apple/turicreate/blob/74514c3f99e25b46f22c6e02977fe3da69221c2e/src/external/coremltools_wrap/coremltools/deps/protobuf/python/setup.py#L50-L77 | train |
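`generate_proto` only invokes `protoc` when the output is missing or older than the source. That freshness check can be isolated and exercised without `protoc`; the sketch below sets file mtimes explicitly with `os.utime` to keep the check deterministic:

```python
import os
import tempfile

def needs_regeneration(source, output):
    """True when the _pb2.py must be (re)built, per generate_proto's condition."""
    return (not os.path.exists(output) or
            (os.path.exists(source) and
             os.path.getmtime(source) > os.path.getmtime(output)))

with tempfile.TemporaryDirectory() as d:
    src = os.path.join(d, 'Model.proto')
    out = os.path.join(d, 'Model_pb2.py')
    open(src, 'w').close()
    missing_output = needs_regeneration(src, out)   # no output file yet
    open(out, 'w').close()
    os.utime(src, (1000, 1000))
    os.utime(out, (2000, 2000))
    up_to_date = needs_regeneration(src, out)       # output newer than source
    os.utime(src, (3000, 3000))
    source_edited = needs_regeneration(src, out)    # source newer again
```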
apple/turicreate | src/unity/python/turicreate/toolkits/_private_utils.py | _validate_row_label | def _validate_row_label(label, column_type_map):
"""
Validate a row label column.
Parameters
----------
label : str
Name of the row label column.
column_type_map : dict[str, type]
Dictionary mapping the name of each column in an SFrame to the type of
the values in the column.
"""
if not isinstance(label, str):
raise TypeError("The row label column name must be a string.")
if not label in column_type_map.keys():
raise ToolkitError("Row label column not found in the dataset.")
if not column_type_map[label] in (str, int):
raise TypeError("Row labels must be integers or strings.") | python | def _validate_row_label(label, column_type_map):
"""
Validate a row label column.
Parameters
----------
label : str
Name of the row label column.
column_type_map : dict[str, type]
Dictionary mapping the name of each column in an SFrame to the type of
the values in the column.
"""
if not isinstance(label, str):
raise TypeError("The row label column name must be a string.")
if not label in column_type_map.keys():
raise ToolkitError("Row label column not found in the dataset.")
if not column_type_map[label] in (str, int):
raise TypeError("Row labels must be integers or strings.") | [
"def",
"_validate_row_label",
"(",
"label",
",",
"column_type_map",
")",
":",
"if",
"not",
"isinstance",
"(",
"label",
",",
"str",
")",
":",
"raise",
"TypeError",
"(",
"\"The row label column name must be a string.\"",
")",
"if",
"not",
"label",
"in",
"column_type_map",
".",
"keys",
"(",
")",
":",
"raise",
"ToolkitError",
"(",
"\"Row label column not found in the dataset.\"",
")",
"if",
"not",
"column_type_map",
"[",
"label",
"]",
"in",
"(",
"str",
",",
"int",
")",
":",
"raise",
"TypeError",
"(",
"\"Row labels must be integers or strings.\"",
")"
] | Validate a row label column.
Parameters
----------
label : str
Name of the row label column.
column_type_map : dict[str, type]
Dictionary mapping the name of each column in an SFrame to the type of
the values in the column. | [
"Validate",
"a",
"row",
"label",
"column",
"."
] | 74514c3f99e25b46f22c6e02977fe3da69221c2e | https://github.com/apple/turicreate/blob/74514c3f99e25b46f22c6e02977fe3da69221c2e/src/unity/python/turicreate/toolkits/_private_utils.py#L13-L33 | train |
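`_validate_row_label` enforces a `str` column name, an existing column, and `str`/`int` label values. A sketch of the same checks, substituting `ValueError` for turicreate's `ToolkitError` (not importable here):

```python
def validate_row_label(label, column_type_map):
    """Validate a row-label column name against a column->type map."""
    if not isinstance(label, str):
        raise TypeError("The row label column name must be a string.")
    if label not in column_type_map:
        raise ValueError("Row label column not found in the dataset.")
    if column_type_map[label] not in (str, int):
        raise TypeError("Row labels must be integers or strings.")

column_types = {'row_id': int, 'rating': float}
validate_row_label('row_id', column_types)  # passes silently
try:
    validate_row_label('rating', column_types)  # float labels are rejected
    rejected = False
except TypeError:
    rejected = True
```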
apple/turicreate | src/unity/python/turicreate/toolkits/_private_utils.py | _robust_column_name | def _robust_column_name(base_name, column_names):
"""
Generate a new column name that is guaranteed not to conflict with an
existing set of column names.
Parameters
----------
base_name : str
The base of the new column name. Usually this does not conflict with
the existing column names, in which case this function simply returns
`base_name`.
column_names : list[str]
List of existing column names.
Returns
-------
robust_name : str
The new column name. If `base_name` isn't in `column_names`, then
`robust_name` is the same as `base_name`. If there are conflicts, a
numeric suffix is added to `base_name` until it no longer conflicts
with the column names.
"""
robust_name = base_name
i = 1
while robust_name in column_names:
robust_name = base_name + '.{}'.format(i)
i += 1
return robust_name | python | def _robust_column_name(base_name, column_names):
"""
Generate a new column name that is guaranteed not to conflict with an
existing set of column names.
Parameters
----------
base_name : str
The base of the new column name. Usually this does not conflict with
the existing column names, in which case this function simply returns
`base_name`.
column_names : list[str]
List of existing column names.
Returns
-------
robust_name : str
The new column name. If `base_name` isn't in `column_names`, then
`robust_name` is the same as `base_name`. If there are conflicts, a
numeric suffix is added to `base_name` until it no longer conflicts
with the column names.
"""
robust_name = base_name
i = 1
while robust_name in column_names:
robust_name = base_name + '.{}'.format(i)
i += 1
return robust_name | [
"def",
"_robust_column_name",
"(",
"base_name",
",",
"column_names",
")",
":",
"robust_name",
"=",
"base_name",
"i",
"=",
"1",
"while",
"robust_name",
"in",
"column_names",
":",
"robust_name",
"=",
"base_name",
"+",
"'.{}'",
".",
"format",
"(",
"i",
")",
"i",
"+=",
"1",
"return",
"robust_name"
] | Generate a new column name that is guaranteed not to conflict with an
existing set of column names.
Parameters
----------
base_name : str
The base of the new column name. Usually this does not conflict with
the existing column names, in which case this function simply returns
`base_name`.
column_names : list[str]
List of existing column names.
Returns
-------
robust_name : str
The new column name. If `base_name` isn't in `column_names`, then
`robust_name` is the same as `base_name`. If there are conflicts, a
numeric suffix is added to `base_name` until it no longer conflicts
with the column names. | [
"Generate",
"a",
"new",
"column",
"name",
"that",
"is",
"guaranteed",
"not",
"to",
"conflict",
"with",
"an",
"existing",
"set",
"of",
"column",
"names",
"."
] | 74514c3f99e25b46f22c6e02977fe3da69221c2e | https://github.com/apple/turicreate/blob/74514c3f99e25b46f22c6e02977fe3da69221c2e/src/unity/python/turicreate/toolkits/_private_utils.py#L36-L66 | train |
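The `_robust_column_name` function in the row above is small enough to exercise standalone. The sketch below transcribes its body (renamed without the leading underscore, an editorial choice; it is not the turicreate package itself) and shows the suffixing behavior:

```python
def robust_column_name(base_name, column_names):
    # Append '.1', '.2', ... to base_name until it no longer collides
    # with an existing column name.
    robust_name = base_name
    i = 1
    while robust_name in column_names:
        robust_name = base_name + '.{}'.format(i)
        i += 1
    return robust_name

print(robust_column_name('id', ['id', 'id.1', 'value']))  # -> id.2
print(robust_column_name('score', ['id', 'value']))       # -> score
```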
apple/turicreate | src/unity/python/turicreate/toolkits/_private_utils.py | _select_valid_features | def _select_valid_features(dataset, features, valid_feature_types,
target_column=None):
"""
Utility function for selecting columns of only valid feature types.
Parameters
----------
dataset: SFrame
The input SFrame containing columns of potential features.
features: list[str]
List of feature column names. If None, the candidate feature set is
taken to be all the columns in the dataset.
valid_feature_types: list[type]
List of Python types that represent valid features. If type is array.array,
then an extra check is done to ensure that the individual elements of the array
are of numeric type. If type is dict, then an extra check is done to ensure
that dictionary values are numeric.
target_column: str
Name of the target column. If not None, the target column is excluded
from the list of valid feature columns.
Returns
-------
out: list[str]
List of valid feature column names. Warnings are given for each candidate
feature column that is excluded.
Examples
--------
# Select all the columns of type `str` in sf, excluding the target column named
# 'rating'
>>> valid_columns = _select_valid_features(sf, None, [str], target_column='rating')
# Select the subset of columns 'X1', 'X2', 'X3' that have dictionary type or
# numeric array type
>>> valid_columns = _select_valid_features(sf, ['X1', 'X2', 'X3'], [dict, array.array])
"""
if features is not None:
if not hasattr(features, '__iter__'):
raise TypeError("Input 'features' must be an iterable type.")
if not all([isinstance(x, str) for x in features]):
raise TypeError("Input 'features' must contain only strings.")
## Extract the features and labels
if features is None:
features = dataset.column_names()
col_type_map = {
col_name: col_type for (col_name, col_type) in
zip(dataset.column_names(), dataset.column_types())}
valid_features = []
for col_name in features:
if col_name not in dataset.column_names():
_logging.warning("Column '{}' is not in the input dataset.".format(col_name))
elif col_name == target_column:
_logging.warning("Excluding target column " + target_column + " as a feature.")
elif col_type_map[col_name] not in valid_feature_types:
_logging.warning("Column '{}' is excluded as a ".format(col_name) +
"feature due to invalid column type.")
else:
valid_features.append(col_name)
if len(valid_features) == 0:
raise ValueError("The dataset does not contain any valid feature columns. " +
"Accepted feature types are " + str(valid_feature_types) + ".")
return valid_features | python | def _select_valid_features(dataset, features, valid_feature_types,
target_column=None):
"""
Utility function for selecting columns of only valid feature types.
Parameters
----------
dataset: SFrame
The input SFrame containing columns of potential features.
features: list[str]
List of feature column names. If None, the candidate feature set is
taken to be all the columns in the dataset.
valid_feature_types: list[type]
List of Python types that represent valid features. If type is array.array,
then an extra check is done to ensure that the individual elements of the array
are of numeric type. If type is dict, then an extra check is done to ensure
that dictionary values are numeric.
target_column: str
Name of the target column. If not None, the target column is excluded
from the list of valid feature columns.
Returns
-------
out: list[str]
List of valid feature column names. Warnings are given for each candidate
feature column that is excluded.
Examples
--------
# Select all the columns of type `str` in sf, excluding the target column named
# 'rating'
>>> valid_columns = _select_valid_features(sf, None, [str], target_column='rating')
# Select the subset of columns 'X1', 'X2', 'X3' that have dictionary type or
# numeric array type
>>> valid_columns = _select_valid_features(sf, ['X1', 'X2', 'X3'], [dict, array.array])
"""
if features is not None:
if not hasattr(features, '__iter__'):
raise TypeError("Input 'features' must be an iterable type.")
if not all([isinstance(x, str) for x in features]):
raise TypeError("Input 'features' must contain only strings.")
## Extract the features and labels
if features is None:
features = dataset.column_names()
col_type_map = {
col_name: col_type for (col_name, col_type) in
zip(dataset.column_names(), dataset.column_types())}
valid_features = []
for col_name in features:
if col_name not in dataset.column_names():
_logging.warning("Column '{}' is not in the input dataset.".format(col_name))
elif col_name == target_column:
_logging.warning("Excluding target column " + target_column + " as a feature.")
elif col_type_map[col_name] not in valid_feature_types:
_logging.warning("Column '{}' is excluded as a ".format(col_name) +
"feature due to invalid column type.")
else:
valid_features.append(col_name)
if len(valid_features) == 0:
raise ValueError("The dataset does not contain any valid feature columns. " +
"Accepted feature types are " + str(valid_feature_types) + ".")
return valid_features | [
"def",
"_select_valid_features",
"(",
"dataset",
",",
"features",
",",
"valid_feature_types",
",",
"target_column",
"=",
"None",
")",
":",
"if",
"features",
"is",
"not",
"None",
":",
"if",
"not",
"hasattr",
"(",
"features",
",",
"'__iter__'",
")",
":",
"raise",
"TypeError",
"(",
"\"Input 'features' must be an iterable type.\"",
")",
"if",
"not",
"all",
"(",
"[",
"isinstance",
"(",
"x",
",",
"str",
")",
"for",
"x",
"in",
"features",
"]",
")",
":",
"raise",
"TypeError",
"(",
"\"Input 'features' must contain only strings.\"",
")",
"## Extract the features and labels",
"if",
"features",
"is",
"None",
":",
"features",
"=",
"dataset",
".",
"column_names",
"(",
")",
"col_type_map",
"=",
"{",
"col_name",
":",
"col_type",
"for",
"(",
"col_name",
",",
"col_type",
")",
"in",
"zip",
"(",
"dataset",
".",
"column_names",
"(",
")",
",",
"dataset",
".",
"column_types",
"(",
")",
")",
"}",
"valid_features",
"=",
"[",
"]",
"for",
"col_name",
"in",
"features",
":",
"if",
"col_name",
"not",
"in",
"dataset",
".",
"column_names",
"(",
")",
":",
"_logging",
".",
"warning",
"(",
"\"Column '{}' is not in the input dataset.\"",
".",
"format",
"(",
"col_name",
")",
")",
"elif",
"col_name",
"==",
"target_column",
":",
"_logging",
".",
"warning",
"(",
"\"Excluding target column \"",
"+",
"target_column",
"+",
"\" as a feature.\"",
")",
"elif",
"col_type_map",
"[",
"col_name",
"]",
"not",
"in",
"valid_feature_types",
":",
"_logging",
".",
"warning",
"(",
"\"Column '{}' is excluded as a \"",
".",
"format",
"(",
"col_name",
")",
"+",
"\"feature due to invalid column type.\"",
")",
"else",
":",
"valid_features",
".",
"append",
"(",
"col_name",
")",
"if",
"len",
"(",
"valid_features",
")",
"==",
"0",
":",
"raise",
"ValueError",
"(",
"\"The dataset does not contain any valid feature columns. \"",
"+",
"\"Accepted feature types are \"",
"+",
"str",
"(",
"valid_feature_types",
")",
"+",
"\".\"",
")",
"return",
"valid_features"
] | Utility function for selecting columns of only valid feature types.
Parameters
----------
dataset: SFrame
The input SFrame containing columns of potential features.
features: list[str]
List of feature column names. If None, the candidate feature set is
taken to be all the columns in the dataset.
valid_feature_types: list[type]
List of Python types that represent valid features. If type is array.array,
then an extra check is done to ensure that the individual elements of the array
are of numeric type. If type is dict, then an extra check is done to ensure
that dictionary values are numeric.
target_column: str
Name of the target column. If not None, the target column is excluded
from the list of valid feature columns.
Returns
-------
out: list[str]
List of valid feature column names. Warnings are given for each candidate
feature column that is excluded.
Examples
--------
# Select all the columns of type `str` in sf, excluding the target column named
# 'rating'
>>> valid_columns = _select_valid_features(sf, None, [str], target_column='rating')
# Select the subset of columns 'X1', 'X2', 'X3' that have dictionary type or
# numeric array type
>>> valid_columns = _select_valid_features(sf, ['X1', 'X2', 'X3'], [dict, array.array]) | [
"Utility",
"function",
"for",
"selecting",
"columns",
"of",
"only",
"valid",
"feature",
"types",
"."
] | 74514c3f99e25b46f22c6e02977fe3da69221c2e | https://github.com/apple/turicreate/blob/74514c3f99e25b46f22c6e02977fe3da69221c2e/src/unity/python/turicreate/toolkits/_private_utils.py#L68-L143 | train |
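The selection loop in `_select_valid_features` can be sketched without turicreate by standing in a plain `{column_name: type}` dict for the SFrame's `column_names()`/`column_types()` pair. This is a hypothetical simplification, not the library's API (the real function also validates the `features` argument and reports the accepted types in its error message):

```python
import logging

def select_valid_features(col_type_map, features, valid_feature_types,
                          target_column=None):
    # col_type_map is a plain {column_name: type} dict standing in for
    # the SFrame used by the real function.
    if features is None:
        features = list(col_type_map)
    valid_features = []
    for col_name in features:
        if col_name not in col_type_map:
            logging.warning("Column '%s' is not in the input dataset.", col_name)
        elif col_name == target_column:
            logging.warning("Excluding target column %s as a feature.", col_name)
        elif col_type_map[col_name] not in valid_feature_types:
            logging.warning("Column '%s' is excluded due to invalid type.", col_name)
        else:
            valid_features.append(col_name)
    if not valid_features:
        raise ValueError("The dataset does not contain any valid feature columns.")
    return valid_features

cols = {'rating': int, 'text': str, 'tags': list}
print(select_valid_features(cols, None, [str], target_column='rating'))  # -> ['text']
```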
apple/turicreate | src/unity/python/turicreate/toolkits/_private_utils.py | _check_elements_equal | def _check_elements_equal(lst):
"""
Returns true if all of the elements in the list are equal.
"""
assert isinstance(lst, list), "Input value must be a list."
return not lst or lst.count(lst[0]) == len(lst) | python | def _check_elements_equal(lst):
"""
Returns true if all of the elements in the list are equal.
"""
assert isinstance(lst, list), "Input value must be a list."
return not lst or lst.count(lst[0]) == len(lst) | [
"def",
"_check_elements_equal",
"(",
"lst",
")",
":",
"assert",
"isinstance",
"(",
"lst",
",",
"list",
")",
",",
"\"Input value must be a list.\"",
"return",
"not",
"lst",
"or",
"lst",
".",
"count",
"(",
"lst",
"[",
"0",
"]",
")",
"==",
"len",
"(",
"lst",
")"
] | Returns true if all of the elements in the list are equal. | [
"Returns",
"true",
"if",
"all",
"of",
"the",
"elements",
"in",
"the",
"list",
"are",
"equal",
"."
] | 74514c3f99e25b46f22c6e02977fe3da69221c2e | https://github.com/apple/turicreate/blob/74514c3f99e25b46f22c6e02977fe3da69221c2e/src/unity/python/turicreate/toolkits/_private_utils.py#L145-L150 | train |
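The count-based equality check above is a one-liner worth seeing in action; this transcription (underscore dropped) demonstrates the vacuous-truth behavior on an empty list:

```python
def check_elements_equal(lst):
    assert isinstance(lst, list), "Input value must be a list."
    # An empty list passes vacuously; otherwise every element must
    # compare equal to the first one.
    return not lst or lst.count(lst[0]) == len(lst)

print(check_elements_equal([3, 3, 3]))  # -> True
print(check_elements_equal([]))         # -> True
print(check_elements_equal([1, 2, 1]))  # -> False
```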
apple/turicreate | src/unity/python/turicreate/toolkits/_private_utils.py | _validate_lists | def _validate_lists(sa, allowed_types=[str], require_same_type=True,
require_equal_length=False, num_to_check=10):
"""
For a list-typed SArray, check whether the first elements are lists that
- contain only the provided types
- all have the same lengths (optionally)
Parameters
----------
sa : SArray
An SArray containing lists.
allowed_types : list
A list of types that are allowed in each list.
require_same_type : bool
If true, the function returns false if more than one type of object
exists in the examined lists.
require_equal_length : bool
If true, the function returns false when the list lengths differ.
Returns
-------
out : bool
Returns true if the examined elements are lists satisfying the requested
length and type constraints. Otherwise returns false.
"""
if len(sa) == 0:
return True
first_elements = sa.head(num_to_check)
if first_elements.dtype != list:
raise ValueError("Expected an SArray of lists when type-checking lists.")
# Check list lengths
list_lengths = list(first_elements.item_length())
same_length = _check_elements_equal(list_lengths)
if require_equal_length and not same_length:
return False
# If list lengths are all zero, return True.
if len(first_elements[0]) == 0:
return True
# Check for matching types within each list
types = first_elements.apply(lambda xs: [str(type(x)) for x in xs])
same_type = [_check_elements_equal(x) for x in types]
all_same_type = _check_elements_equal(same_type)
if require_same_type and not all_same_type:
return False
# Check for matching types across lists
first_types = [t[0] for t in types if t]
all_same_type = _check_elements_equal(first_types)
if require_same_type and not all_same_type:
return False
# Check to make sure all elements have types that are allowed
allowed_type_strs = [str(x) for x in allowed_types]
for list_element_types in types:
for t in list_element_types:
if t not in allowed_type_strs:
return False
return True | python | def _validate_lists(sa, allowed_types=[str], require_same_type=True,
require_equal_length=False, num_to_check=10):
"""
For a list-typed SArray, check whether the first elements are lists that
- contain only the provided types
- all have the same lengths (optionally)
Parameters
----------
sa : SArray
An SArray containing lists.
allowed_types : list
A list of types that are allowed in each list.
require_same_type : bool
If true, the function returns false if more than one type of object
exists in the examined lists.
require_equal_length : bool
If true, the function returns false when the list lengths differ.
Returns
-------
out : bool
Returns true if the examined elements are lists satisfying the requested
length and type constraints. Otherwise returns false.
"""
if len(sa) == 0:
return True
first_elements = sa.head(num_to_check)
if first_elements.dtype != list:
raise ValueError("Expected an SArray of lists when type-checking lists.")
# Check list lengths
list_lengths = list(first_elements.item_length())
same_length = _check_elements_equal(list_lengths)
if require_equal_length and not same_length:
return False
# If list lengths are all zero, return True.
if len(first_elements[0]) == 0:
return True
# Check for matching types within each list
types = first_elements.apply(lambda xs: [str(type(x)) for x in xs])
same_type = [_check_elements_equal(x) for x in types]
all_same_type = _check_elements_equal(same_type)
if require_same_type and not all_same_type:
return False
# Check for matching types across lists
first_types = [t[0] for t in types if t]
all_same_type = _check_elements_equal(first_types)
if require_same_type and not all_same_type:
return False
# Check to make sure all elements have types that are allowed
allowed_type_strs = [str(x) for x in allowed_types]
for list_element_types in types:
for t in list_element_types:
if t not in allowed_type_strs:
return False
return True | [
"def",
"_validate_lists",
"(",
"sa",
",",
"allowed_types",
"=",
"[",
"str",
"]",
",",
"require_same_type",
"=",
"True",
",",
"require_equal_length",
"=",
"False",
",",
"num_to_check",
"=",
"10",
")",
":",
"if",
"len",
"(",
"sa",
")",
"==",
"0",
":",
"return",
"True",
"first_elements",
"=",
"sa",
".",
"head",
"(",
"num_to_check",
")",
"if",
"first_elements",
".",
"dtype",
"!=",
"list",
":",
"raise",
"ValueError",
"(",
"\"Expected an SArray of lists when type-checking lists.\"",
")",
"# Check list lengths",
"list_lengths",
"=",
"list",
"(",
"first_elements",
".",
"item_length",
"(",
")",
")",
"same_length",
"=",
"_check_elements_equal",
"(",
"list_lengths",
")",
"if",
"require_equal_length",
"and",
"not",
"same_length",
":",
"return",
"False",
"# If list lengths are all zero, return True.",
"if",
"len",
"(",
"first_elements",
"[",
"0",
"]",
")",
"==",
"0",
":",
"return",
"True",
"# Check for matching types within each list",
"types",
"=",
"first_elements",
".",
"apply",
"(",
"lambda",
"xs",
":",
"[",
"str",
"(",
"type",
"(",
"x",
")",
")",
"for",
"x",
"in",
"xs",
"]",
")",
"same_type",
"=",
"[",
"_check_elements_equal",
"(",
"x",
")",
"for",
"x",
"in",
"types",
"]",
"all_same_type",
"=",
"_check_elements_equal",
"(",
"same_type",
")",
"if",
"require_same_type",
"and",
"not",
"all_same_type",
":",
"return",
"False",
"# Check for matching types across lists",
"first_types",
"=",
"[",
"t",
"[",
"0",
"]",
"for",
"t",
"in",
"types",
"if",
"t",
"]",
"all_same_type",
"=",
"_check_elements_equal",
"(",
"first_types",
")",
"if",
"require_same_type",
"and",
"not",
"all_same_type",
":",
"return",
"False",
"# Check to make sure all elements have types that are allowed",
"allowed_type_strs",
"=",
"[",
"str",
"(",
"x",
")",
"for",
"x",
"in",
"allowed_types",
"]",
"for",
"list_element_types",
"in",
"types",
":",
"for",
"t",
"in",
"list_element_types",
":",
"if",
"t",
"not",
"in",
"allowed_type_strs",
":",
"return",
"False",
"return",
"True"
] | For a list-typed SArray, check whether the first elements are lists that
- contain only the provided types
- all have the same lengths (optionally)
Parameters
----------
sa : SArray
An SArray containing lists.
allowed_types : list
A list of types that are allowed in each list.
require_same_type : bool
If true, the function returns false if more than one type of object
exists in the examined lists.
require_equal_length : bool
If true, the function returns false when the list lengths differ.
Returns
-------
out : bool
Returns true if the examined elements are lists satisfying the requested
length and type constraints. Otherwise returns false.
"For",
"a",
"list",
"-",
"typed",
"SArray",
"check",
"whether",
"the",
"first",
"elements",
"are",
"lists",
"that",
"-",
"contain",
"only",
"the",
"provided",
"types",
"-",
"all",
"have",
"the",
"same",
"lengths",
"(",
"optionally",
")"
] | 74514c3f99e25b46f22c6e02977fe3da69221c2e | https://github.com/apple/turicreate/blob/74514c3f99e25b46f22c6e02977fe3da69221c2e/src/unity/python/turicreate/toolkits/_private_utils.py#L152-L217 | train |
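The length- and type-checking logic of `_validate_lists` can be mirrored over plain Python lists. This is a simplified analogue under stated assumptions: the real version samples the head of an SArray and compares stringified types per list, so corner-case behavior differs:

```python
def validate_lists(rows, allowed_types=(str,), require_same_type=True,
                   require_equal_length=False, num_to_check=10):
    # Plain-list analogue: `rows` is a list of lists instead of an SArray.
    rows = rows[:num_to_check]
    if not rows:
        return True
    lengths = {len(r) for r in rows}
    if require_equal_length and len(lengths) > 1:
        return False
    if len(rows[0]) == 0:
        return True
    element_types = {type(x) for r in rows for x in r}
    if require_same_type and len(element_types) > 1:
        return False
    return all(t in allowed_types for t in element_types)

print(validate_lists([['a', 'b'], ['c']]))                   # -> True
print(validate_lists([[1, 'a']], allowed_types=(int, str)))  # -> False (mixed types)
print(validate_lists([[1, 2], [3]], allowed_types=(int,),
                     require_equal_length=True))             # -> False (lengths differ)
```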
apple/turicreate | src/unity/python/turicreate/toolkits/_private_utils.py | _summarize_accessible_fields | def _summarize_accessible_fields(field_descriptions, width=40,
section_title='Accessible fields'):
"""
Create a summary string for the accessible fields in a model. Unlike
`_toolkit_repr_print`, this function does not look up the values of the
fields, it just formats the names and descriptions.
Parameters
----------
field_descriptions : dict{str: str}
Name of each field and its description, in a dictionary. Keys and
values should be strings.
width : int, optional
Width of the names. This is usually determined and passed by the
calling `__repr__` method.
section_title : str, optional
Name of the accessible fields section in the summary string.
Returns
-------
out : str
"""
key_str = "{:<{}}: {}"
items = []
items.append(section_title)
items.append("-" * len(section_title))
for field_name, field_desc in field_descriptions.items():
items.append(key_str.format(field_name, width, field_desc))
return "\n".join(items) | python | def _summarize_accessible_fields(field_descriptions, width=40,
section_title='Accessible fields'):
"""
Create a summary string for the accessible fields in a model. Unlike
`_toolkit_repr_print`, this function does not look up the values of the
fields, it just formats the names and descriptions.
Parameters
----------
field_descriptions : dict{str: str}
Name of each field and its description, in a dictionary. Keys and
values should be strings.
width : int, optional
Width of the names. This is usually determined and passed by the
calling `__repr__` method.
section_title : str, optional
Name of the accessible fields section in the summary string.
Returns
-------
out : str
"""
key_str = "{:<{}}: {}"
items = []
items.append(section_title)
items.append("-" * len(section_title))
for field_name, field_desc in field_descriptions.items():
items.append(key_str.format(field_name, width, field_desc))
return "\n".join(items) | [
"def",
"_summarize_accessible_fields",
"(",
"field_descriptions",
",",
"width",
"=",
"40",
",",
"section_title",
"=",
"'Accessible fields'",
")",
":",
"key_str",
"=",
"\"{:<{}}: {}\"",
"items",
"=",
"[",
"]",
"items",
".",
"append",
"(",
"section_title",
")",
"items",
".",
"append",
"(",
"\"-\"",
"*",
"len",
"(",
"section_title",
")",
")",
"for",
"field_name",
",",
"field_desc",
"in",
"field_descriptions",
".",
"items",
"(",
")",
":",
"items",
".",
"append",
"(",
"key_str",
".",
"format",
"(",
"field_name",
",",
"width",
",",
"field_desc",
")",
")",
"return",
"\"\\n\"",
".",
"join",
"(",
"items",
")"
] | Create a summary string for the accessible fields in a model. Unlike
`_toolkit_repr_print`, this function does not look up the values of the
fields, it just formats the names and descriptions.
Parameters
----------
field_descriptions : dict{str: str}
Name of each field and its description, in a dictionary. Keys and
values should be strings.
width : int, optional
Width of the names. This is usually determined and passed by the
calling `__repr__` method.
section_title : str, optional
Name of the accessible fields section in the summary string.
Returns
-------
out : str | [
"Create",
"a",
"summary",
"string",
"for",
"the",
"accessible",
"fields",
"in",
"a",
"model",
".",
"Unlike",
"_toolkit_repr_print",
"this",
"function",
"does",
"not",
"look",
"up",
"the",
"values",
"of",
"the",
"fields",
"it",
"just",
"formats",
"the",
"names",
"and",
"descriptions",
"."
] | 74514c3f99e25b46f22c6e02977fe3da69221c2e | https://github.com/apple/turicreate/blob/74514c3f99e25b46f22c6e02977fe3da69221c2e/src/unity/python/turicreate/toolkits/_private_utils.py#L219-L252 | train |
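The summary formatter above is pure string manipulation, so it runs as-is outside turicreate. This transcription shows the fixed-width alignment produced by the nested format spec `{:<{}}: {}` (the field descriptions below are illustrative, not real model fields):

```python
def summarize_accessible_fields(field_descriptions, width=40,
                                section_title='Accessible fields'):
    # Left-justify each field name to `width` characters, then append
    # ": <description>".
    key_str = "{:<{}}: {}"
    items = [section_title, "-" * len(section_title)]
    for field_name, field_desc in field_descriptions.items():
        items.append(key_str.format(field_name, width, field_desc))
    return "\n".join(items)

print(summarize_accessible_fields({
    'coefficients': 'Weights of the trained model.',
    'num_iterations': 'Number of training passes.'}))
```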
apple/turicreate | src/external/coremltools_wrap/coremltools/coremltools/models/datatypes.py | _is_valid_datatype | def _is_valid_datatype(datatype_instance):
"""
Returns true if datatype_instance is a valid datatype object and false otherwise.
"""
# Remap so we can still use the python types for the simple cases
global _simple_type_remap
if datatype_instance in _simple_type_remap:
return True
# Now set the protobuf from this interface.
if isinstance(datatype_instance, (Int64, Double, String, Array)):
return True
elif isinstance(datatype_instance, Dictionary):
kt = datatype_instance.key_type
if isinstance(kt, (Int64, String)):
return True
return False | python | def _is_valid_datatype(datatype_instance):
"""
Returns true if datatype_instance is a valid datatype object and false otherwise.
"""
# Remap so we can still use the python types for the simple cases
global _simple_type_remap
if datatype_instance in _simple_type_remap:
return True
# Now set the protobuf from this interface.
if isinstance(datatype_instance, (Int64, Double, String, Array)):
return True
elif isinstance(datatype_instance, Dictionary):
kt = datatype_instance.key_type
if isinstance(kt, (Int64, String)):
return True
return False | [
"def",
"_is_valid_datatype",
"(",
"datatype_instance",
")",
":",
"# Remap so we can still use the python types for the simple cases",
"global",
"_simple_type_remap",
"if",
"datatype_instance",
"in",
"_simple_type_remap",
":",
"return",
"True",
"# Now set the protobuf from this interface.",
"if",
"isinstance",
"(",
"datatype_instance",
",",
"(",
"Int64",
",",
"Double",
",",
"String",
",",
"Array",
")",
")",
":",
"return",
"True",
"elif",
"isinstance",
"(",
"datatype_instance",
",",
"Dictionary",
")",
":",
"kt",
"=",
"datatype_instance",
".",
"key_type",
"if",
"isinstance",
"(",
"kt",
",",
"(",
"Int64",
",",
"String",
")",
")",
":",
"return",
"True",
"return",
"False"
] | Returns true if datatype_instance is a valid datatype object and false otherwise. | [
"Returns",
"true",
"if",
"datatype_instance",
"is",
"a",
"valid",
"datatype",
"object",
"and",
"false",
"otherwise",
"."
] | 74514c3f99e25b46f22c6e02977fe3da69221c2e | https://github.com/apple/turicreate/blob/74514c3f99e25b46f22c6e02977fe3da69221c2e/src/external/coremltools_wrap/coremltools/coremltools/models/datatypes.py#L130-L150 | train |
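The validation logic above can be demonstrated with minimal stand-ins for the coremltools datatype classes. The class bodies and the `_simple_type_remap` table below are hypothetical placeholders (the real classes carry protobuf plumbing and the real remap table covers more spellings); only the branching pattern matches the function:

```python
# Minimal stand-ins for the coremltools datatype classes.
class Int64(object): pass
class Double(object): pass
class String(object): pass
class Array(object): pass
class Dictionary(object):
    def __init__(self, key_type):
        self.key_type = key_type

# Hypothetical equivalent of coremltools' _simple_type_remap table.
_simple_type_remap = {int: Int64(), float: Double(), str: String()}

def is_valid_datatype(datatype_instance):
    # Simple Python types are accepted via the remap table.
    if datatype_instance in _simple_type_remap:
        return True
    if isinstance(datatype_instance, (Int64, Double, String, Array)):
        return True
    # Dictionaries are valid only with Int64 or String keys.
    if isinstance(datatype_instance, Dictionary):
        return isinstance(datatype_instance.key_type, (Int64, String))
    return False

print(is_valid_datatype(int))                    # -> True  (remapped simple type)
print(is_valid_datatype(Dictionary(String())))   # -> True  (string keys allowed)
print(is_valid_datatype(Dictionary(Double())))   # -> False (keys must be Int64/String)
```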
apple/turicreate | src/external/coremltools_wrap/coremltools/coremltools/models/datatypes.py | _normalize_datatype | def _normalize_datatype(datatype_instance):
"""
Translates a user specified datatype to an instance of the ones defined above.
Valid data types are passed through, and the following type specifications
are translated to the proper instances:
str, "String" -> String()
int, "Int64" -> Int64()
float, "Double" -> Double()
If a data type is not recognized, then an error is raised.
"""
global _simple_type_remap
if datatype_instance in _simple_type_remap:
return _simple_type_remap[datatype_instance]
# Now set the protobuf from this interface.
if isinstance(datatype_instance, (Int64, Double, String, Array)):
return datatype_instance
elif isinstance(datatype_instance, Dictionary):
kt = datatype_instance.key_type
if isinstance(kt, (Int64, String)):
return datatype_instance
raise ValueError("Datatype instance not recognized.") | python | def _normalize_datatype(datatype_instance):
"""
Translates a user specified datatype to an instance of the ones defined above.
Valid data types are passed through, and the following type specifications
are translated to the proper instances:
str, "String" -> String()
int, "Int64" -> Int64()
float, "Double" -> Double()
If a data type is not recognized, then an error is raised.
"""
global _simple_type_remap
if datatype_instance in _simple_type_remap:
return _simple_type_remap[datatype_instance]
# Now set the protobuf from this interface.
if isinstance(datatype_instance, (Int64, Double, String, Array)):
return datatype_instance
elif isinstance(datatype_instance, Dictionary):
kt = datatype_instance.key_type
if isinstance(kt, (Int64, String)):
return datatype_instance
raise ValueError("Datatype instance not recognized.") | [
"def",
"_normalize_datatype",
"(",
"datatype_instance",
")",
":",
"global",
"_simple_type_remap",
"if",
"datatype_instance",
"in",
"_simple_type_remap",
":",
"return",
"_simple_type_remap",
"[",
"datatype_instance",
"]",
"# Now set the protobuf from this interface.",
"if",
"isinstance",
"(",
"datatype_instance",
",",
"(",
"Int64",
",",
"Double",
",",
"String",
",",
"Array",
")",
")",
":",
"return",
"datatype_instance",
"elif",
"isinstance",
"(",
"datatype_instance",
",",
"Dictionary",
")",
":",
"kt",
"=",
"datatype_instance",
".",
"key_type",
"if",
"isinstance",
"(",
"kt",
",",
"(",
"Int64",
",",
"String",
")",
")",
":",
"return",
"datatype_instance",
"raise",
"ValueError",
"(",
"\"Datatype instance not recognized.\"",
")"
] | Translates a user specified datatype to an instance of the ones defined above.
Valid data types are passed through, and the following type specifications
are translated to the proper instances:
str, "String" -> String()
int, "Int64" -> Int64()
float, "Double" -> Double()
If a data type is not recognized, then an error is raised. | [
"Translates",
"a",
"user",
"specified",
"datatype",
"to",
"an",
"instance",
"of",
"the",
"ones",
"defined",
"above",
"."
] | 74514c3f99e25b46f22c6e02977fe3da69221c2e | https://github.com/apple/turicreate/blob/74514c3f99e25b46f22c6e02977fe3da69221c2e/src/external/coremltools_wrap/coremltools/coremltools/models/datatypes.py#L152-L179 | train |
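The normalization companion follows the same remap-then-passthrough pattern. Again the classes and table are hypothetical stand-ins, sketched under the assumption that the remap table covers both the Python types and the string spellings named in the docstring:

```python
class Int64(object): pass
class Double(object): pass
class String(object): pass

# Hypothetical stand-in for _simple_type_remap, covering the Python types
# and string spellings listed in the docstring above.
_simple_type_remap = {int: Int64(), float: Double(), str: String(),
                      'Int64': Int64(), 'Double': Double(), 'String': String()}

def normalize_datatype(datatype_instance):
    # Simple specifications are translated to datatype instances;
    # already-valid instances pass through unchanged.
    if datatype_instance in _simple_type_remap:
        return _simple_type_remap[datatype_instance]
    if isinstance(datatype_instance, (Int64, Double, String)):
        return datatype_instance
    raise ValueError("Datatype instance not recognized.")

print(type(normalize_datatype(str)).__name__)      # -> String
print(type(normalize_datatype('Int64')).__name__)  # -> Int64
```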
apple/turicreate | src/external/coremltools_wrap/coremltools/coremltools/converters/sklearn/_gradient_boosting_classifier.py | convert | def convert(model, feature_names, target):
"""Convert a boosted tree model to protobuf format.
Parameters
----------
model : GradientBoostingClassifier
A trained scikit-learn tree model.
feature_names: [str]
Name of the input columns.
target: str
Name of the output column.
Returns
-------
model_spec: An object of type Model_pb.
Protobuf representation of the model
"""
if not(_HAS_SKLEARN):
raise RuntimeError('scikit-learn not found. scikit-learn conversion API is disabled.')
_sklearn_util.check_expected_type(model, _ensemble.GradientBoostingClassifier)
def is_gbr_model(m):
if len(m.estimators_) == 0:
return False
if hasattr(m, 'estimators_') and m.estimators_ is not None:
for t in m.estimators_.flatten():
if not hasattr(t, 'tree_') or t.tree_ is None:
return False
return True
else:
return False
_sklearn_util.check_fitted(model, is_gbr_model)
post_evaluation_transform = None
if model.n_classes_ == 2:
base_prediction = [model.init_.prior]
post_evaluation_transform = 'Regression_Logistic'
else:
base_prediction = list(model.init_.priors)
post_evaluation_transform = 'Classification_SoftMax'
return _MLModel(_convert_tree_ensemble(model, feature_names, target, mode = 'classifier',
base_prediction = base_prediction, class_labels = model.classes_,
post_evaluation_transform = post_evaluation_transform)) | python | def convert(model, feature_names, target):
"""Convert a boosted tree model to protobuf format.
Parameters
----------
model : GradientBoostingClassifier
A trained scikit-learn tree model.
feature_names: [str]
Name of the input columns.
target: str
Name of the output column.
Returns
-------
model_spec: An object of type Model_pb.
Protobuf representation of the model
"""
if not(_HAS_SKLEARN):
raise RuntimeError('scikit-learn not found. scikit-learn conversion API is disabled.')
_sklearn_util.check_expected_type(model, _ensemble.GradientBoostingClassifier)
def is_gbr_model(m):
if len(m.estimators_) == 0:
return False
if hasattr(m, 'estimators_') and m.estimators_ is not None:
for t in m.estimators_.flatten():
if not hasattr(t, 'tree_') or t.tree_ is None:
return False
return True
else:
return False
_sklearn_util.check_fitted(model, is_gbr_model)
post_evaluation_transform = None
if model.n_classes_ == 2:
base_prediction = [model.init_.prior]
post_evaluation_transform = 'Regression_Logistic'
else:
base_prediction = list(model.init_.priors)
post_evaluation_transform = 'Classification_SoftMax'
return _MLModel(_convert_tree_ensemble(model, feature_names, target, mode = 'classifier',
base_prediction = base_prediction, class_labels = model.classes_,
post_evaluation_transform = post_evaluation_transform)) | [
"def",
"convert",
"(",
"model",
",",
"feature_names",
",",
"target",
")",
":",
"if",
"not",
"(",
"_HAS_SKLEARN",
")",
":",
"raise",
"RuntimeError",
"(",
"'scikit-learn not found. scikit-learn conversion API is disabled.'",
")",
"_sklearn_util",
".",
"check_expected_type",
"(",
"model",
",",
"_ensemble",
".",
"GradientBoostingClassifier",
")",
"def",
"is_gbr_model",
"(",
"m",
")",
":",
"if",
"len",
"(",
"m",
".",
"estimators_",
")",
"==",
"0",
":",
"return",
"False",
"if",
"hasattr",
"(",
"m",
",",
"'estimators_'",
")",
"and",
"m",
".",
"estimators_",
"is",
"not",
"None",
":",
"for",
"t",
"in",
"m",
".",
"estimators_",
".",
"flatten",
"(",
")",
":",
"if",
"not",
"hasattr",
"(",
"t",
",",
"'tree_'",
")",
"or",
"t",
".",
"tree_",
"is",
"None",
":",
"return",
"False",
"return",
"True",
"else",
":",
"return",
"False",
"_sklearn_util",
".",
"check_fitted",
"(",
"model",
",",
"is_gbr_model",
")",
"post_evaluation_transform",
"=",
"None",
"if",
"model",
".",
"n_classes_",
"==",
"2",
":",
"base_prediction",
"=",
"[",
"model",
".",
"init_",
".",
"prior",
"]",
"post_evaluation_transform",
"=",
"'Regression_Logistic'",
"else",
":",
"base_prediction",
"=",
"list",
"(",
"model",
".",
"init_",
".",
"priors",
")",
"post_evaluation_transform",
"=",
"'Classification_SoftMax'",
"return",
"_MLModel",
"(",
"_convert_tree_ensemble",
"(",
"model",
",",
"feature_names",
",",
"target",
",",
"mode",
"=",
"'classifier'",
",",
"base_prediction",
"=",
"base_prediction",
",",
"class_labels",
"=",
"model",
".",
"classes_",
",",
"post_evaluation_transform",
"=",
"post_evaluation_transform",
")",
")"
] | Convert a boosted tree model to protobuf format.
Parameters
----------
model : GradientBoostingClassifier
A trained scikit-learn tree model.
feature_names: [str]
Name of the input columns.
target: str
Name of the output column.
Returns
-------
model_spec: An object of type Model_pb.
Protobuf representation of the model | [
"Convert",
"a",
"boosted",
"tree",
"model",
"to",
"protobuf",
"format",
"."
] | 74514c3f99e25b46f22c6e02977fe3da69221c2e | https://github.com/apple/turicreate/blob/74514c3f99e25b46f22c6e02977fe3da69221c2e/src/external/coremltools_wrap/coremltools/coremltools/converters/sklearn/_gradient_boosting_classifier.py#L19-L62 | train |
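The interesting branch in this converter, selecting the base prediction and output transform by class count, can be sketched without scikit-learn or coremltools. This is an illustrative simplification: the real code reads a scalar from `model.init_.prior` in the binary case and an array from `model.init_.priors` otherwise, whereas here both cases take a plain list:

```python
def choose_output_transform(n_classes, priors):
    # Binary classifiers keep a single prior and apply a logistic transform;
    # multiclass classifiers keep all class priors and apply softmax.
    if n_classes == 2:
        return [priors[0]], 'Regression_Logistic'
    return list(priors), 'Classification_SoftMax'

print(choose_output_transform(2, [0.3]))
# -> ([0.3], 'Regression_Logistic')
print(choose_output_transform(3, [0.2, 0.3, 0.5]))
# -> ([0.2, 0.3, 0.5], 'Classification_SoftMax')
```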
apple/turicreate | src/unity/python/turicreate/meta/asttools/visitors/symbol_visitor.py | get_symbols | def get_symbols(node, ctx_types=(ast.Load, ast.Store)):
'''
Returns all symbols defined in an ast node.
if ctx_types is given, then restrict the symbols to ones with that context.
:param node: ast node
:param ctx_types: type or tuple of types that may be found assigned to the `ctx` attribute of
an ast Name node.
'''
gen = SymbolVisitor(ctx_types)
    return gen.visit(node) | python | 74514c3f99e25b46f22c6e02977fe3da69221c2e | https://github.com/apple/turicreate/blob/74514c3f99e25b46f22c6e02977fe3da69221c2e/src/unity/python/turicreate/meta/asttools/visitors/symbol_visitor.py#L58-L70 | train |
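The row above documents `get_symbols`, which delegates to a `SymbolVisitor` class. A minimal standalone sketch of the same idea using only the stdlib `ast` module and an `ast.walk` traversal instead of the real visitor (the `_sketch` name is illustrative, not part of turicreate):

```python
import ast

# Illustrative stand-in for get_symbols: collect Name identifiers from an AST,
# optionally restricted by context type (ast.Load, ast.Store, ...).
def get_symbols_sketch(node, ctx_types=(ast.Load, ast.Store)):
    return {n.id for n in ast.walk(node)
            if isinstance(n, ast.Name) and isinstance(n.ctx, ctx_types)}

tree = ast.parse("x = y + 1")
```

Passing `ast.Store` alone yields only the assigned names, `ast.Load` only the read names.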
apple/turicreate | deps/src/boost_1_68_0/tools/build/src/util/order.py | Order.order | def order (self, objects):
""" Given a list of objects, reorder them so that the constains specified
by 'add_pair' are satisfied.
The algorithm was adopted from an awk script by Nikita Youshchenko
(yoush at cs dot msu dot su)
"""
# The algorithm used is the same is standard transitive closure,
# except that we're not keeping in-degree for all vertices, but
# rather removing edges.
result = []
if not objects:
return result
constraints = self.__eliminate_unused_constraits (objects)
# Find some library that nobody depends upon and add it to
# the 'result' array.
obj = None
while objects:
new_objects = []
while objects:
obj = objects [0]
if self.__has_no_dependents (obj, constraints):
# Emulate break ;
new_objects.extend (objects [1:])
objects = []
else:
new_objects.append (obj)
obj = None
objects = objects [1:]
if not obj:
raise BaseException ("Circular order dependencies")
# No problem with placing first.
result.append (obj)
# Remove all containts where 'obj' comes first,
# since they are already satisfied.
constraints = self.__remove_satisfied (constraints, obj)
# Add the remaining objects for further processing
# on the next iteration
objects = new_objects
        return result | python | 74514c3f99e25b46f22c6e02977fe3da69221c2e | https://github.com/apple/turicreate/blob/74514c3f99e25b46f22c6e02977fe3da69221c2e/deps/src/boost_1_68_0/tools/build/src/util/order.py#L37-L86 | train |
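`Order.order` above is a topological sort by edge removal: it repeatedly emits an object that no remaining constraint forces something before. A self-contained sketch of the same algorithm, with constraints given as `(first, second)` pairs meaning "first must come before second" (names are illustrative, not the real `Order` API):

```python
def order_objects(objects, constraints):
    # Keep only constraints whose endpoints are both being ordered,
    # mirroring __eliminate_unused_constraits.
    constraints = [c for c in constraints if c[0] in objects and c[1] in objects]
    objects = list(objects)
    result = []
    while objects:
        for i, obj in enumerate(objects):
            # An object is ready when no constraint puts something before it.
            if not any(second == obj for _first, second in constraints):
                break
        else:
            raise Exception("Circular order dependencies")
        result.append(obj)
        del objects[i]
        # Constraints with 'obj' first are now satisfied; drop them.
        constraints = [c for c in constraints if c[0] != obj]
    return result
```

With no constraints the input order is preserved; a cycle raises, matching the "Circular order dependencies" error above.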
apple/turicreate | deps/src/boost_1_68_0/tools/build/src/util/order.py | Order.__eliminate_unused_constraits | def __eliminate_unused_constraits (self, objects):
""" Eliminate constraints which mention objects not in 'objects'.
In graph-theory terms, this is finding subgraph induced by
ordered vertices.
"""
result = []
for c in self.constraints_:
if c [0] in objects and c [1] in objects:
result.append (c)
        return result | python | 74514c3f99e25b46f22c6e02977fe3da69221c2e | https://github.com/apple/turicreate/blob/74514c3f99e25b46f22c6e02977fe3da69221c2e/deps/src/boost_1_68_0/tools/build/src/util/order.py#L88-L98 | train |
apple/turicreate | deps/src/boost_1_68_0/tools/build/src/util/order.py | Order.__has_no_dependents | def __has_no_dependents (self, obj, constraints):
""" Returns true if there's no constraint in 'constraints' where
'obj' comes second.
"""
failed = False
while constraints and not failed:
c = constraints [0]
if c [1] == obj:
failed = True
constraints = constraints [1:]
        return not failed | python | 74514c3f99e25b46f22c6e02977fe3da69221c2e | https://github.com/apple/turicreate/blob/74514c3f99e25b46f22c6e02977fe3da69221c2e/deps/src/boost_1_68_0/tools/build/src/util/order.py#L100-L113 | train |
apple/turicreate | deps/src/boost_1_68_0/tools/build/src/build/property.py | path_order | def path_order (x, y):
""" Helper for as_path, below. Orders properties with the implicit ones
first, and within the two sections in alphabetical order of feature
name.
"""
if x == y:
return 0
xg = get_grist (x)
yg = get_grist (y)
if yg and not xg:
return -1
elif xg and not yg:
return 1
else:
if not xg:
x = feature.expand_subfeatures([x])
y = feature.expand_subfeatures([y])
if x < y:
return -1
elif x > y:
return 1
else:
            return 0 | python | 74514c3f99e25b46f22c6e02977fe3da69221c2e | https://github.com/apple/turicreate/blob/74514c3f99e25b46f22c6e02977fe3da69221c2e/deps/src/boost_1_68_0/tools/build/src/build/property.py#L244-L271 | train |
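`path_order` is a cmp-style comparator, so it pairs naturally with `functools.cmp_to_key` when sorting a property list. A standalone sketch with a crude grist check standing in for the real `get_grist` (implicit, ungristed values sort before gristed `<feature>value` ones, alphabetically within each group; not the real Boost.Build implementation):

```python
import functools

def path_order_sketch(x, y):
    if x == y:
        return 0
    xg = x.startswith("<")   # simplified grist test, illustrative only
    yg = y.startswith("<")
    if yg and not xg:
        return -1
    if xg and not yg:
        return 1
    return -1 if x < y else 1

props = ["<toolset>gcc", "debug", "<inlining>full", "release"]
ordered = sorted(props, key=functools.cmp_to_key(path_order_sketch))
```

The implicit values `debug` and `release` come first, then the gristed properties in alphabetical order.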
apple/turicreate | deps/src/boost_1_68_0/tools/build/src/build/property.py | refine | def refine (properties, requirements):
""" Refines 'properties' by overriding any non-free properties
for which a different value is specified in 'requirements'.
Conditional requirements are just added without modification.
Returns the resulting list of properties.
"""
assert is_iterable_typed(properties, Property)
assert is_iterable_typed(requirements, Property)
# The result has no duplicates, so we store it in a set
result = set()
# Records all requirements.
required = {}
# All the elements of requirements should be present in the result
# Record them so that we can handle 'properties'.
for r in requirements:
# Don't consider conditional requirements.
if not r.condition:
required[r.feature] = r
for p in properties:
# Skip conditional properties
if p.condition:
result.add(p)
# No processing for free properties
elif p.feature.free:
result.add(p)
else:
if p.feature in required:
result.add(required[p.feature])
else:
result.add(p)
    return sequence.unique(list(result) + requirements) | python | 74514c3f99e25b46f22c6e02977fe3da69221c2e | https://github.com/apple/turicreate/blob/74514c3f99e25b46f22c6e02977fe3da69221c2e/deps/src/boost_1_68_0/tools/build/src/build/property.py#L277-L311 | train |
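The core of `refine` is: per feature, a requirement's value overrides the property's value, and every requirement must appear in the result. A minimal sketch of those semantics using plain `(feature, value)` tuples instead of Boost.Build `Property` objects, with the free/conditional special cases omitted (illustrative only):

```python
def refine_sketch(properties, requirements):
    # Per-feature override table built from the requirements.
    required = dict(requirements)
    seen = []
    for feature, value in properties:
        refined = (feature, required.get(feature, value))
        if refined not in seen:          # keep the result duplicate-free
            seen.append(refined)
    # Every requirement must be present in the result.
    for req in requirements:
        if req not in seen:
            seen.append(req)
    return seen
```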
apple/turicreate | deps/src/boost_1_68_0/tools/build/src/build/property.py | translate_paths | def translate_paths (properties, path):
""" Interpret all path properties in 'properties' as relative to 'path'
The property values are assumed to be in system-specific form, and
will be translated into normalized form.
"""
assert is_iterable_typed(properties, Property)
result = []
for p in properties:
if p.feature.path:
values = __re_two_ampersands.split(p.value)
new_value = "&&".join(os.path.normpath(os.path.join(path, v)) for v in values)
if new_value != p.value:
result.append(Property(p.feature, new_value, p.condition))
else:
result.append(p)
else:
result.append (p)
    return result | python | 74514c3f99e25b46f22c6e02977fe3da69221c2e | https://github.com/apple/turicreate/blob/74514c3f99e25b46f22c6e02977fe3da69221c2e/deps/src/boost_1_68_0/tools/build/src/build/property.py#L313-L336 | train |
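The value rewriting inside `translate_paths` can be shown in isolation. This sketch assumes the module-level `__re_two_ampersands` splits on a literal `&&` and that paths are POSIX-style; it joins each part onto the base directory and normalizes it:

```python
import os
import re

# Assumed reconstruction of __re_two_ampersands: split on a literal "&&".
two_ampersands = re.compile("&&")

def translate_path_value(value, base):
    parts = two_ampersands.split(value)
    return "&&".join(os.path.normpath(os.path.join(base, p)) for p in parts)
```

For example, each `&&`-separated segment is resolved independently, so relative segments like `a/../b` collapse after joining.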
apple/turicreate | deps/src/boost_1_68_0/tools/build/src/build/property.py | translate_indirect | def translate_indirect(properties, context_module):
"""Assumes that all feature values that start with '@' are
names of rules, used in 'context-module'. Such rules can be
either local to the module or global. Qualified local rules
with the name of the module."""
assert is_iterable_typed(properties, Property)
assert isinstance(context_module, basestring)
result = []
for p in properties:
if p.value[0] == '@':
q = qualify_jam_action(p.value[1:], context_module)
get_manager().engine().register_bjam_action(q)
result.append(Property(p.feature, '@' + q, p.condition))
else:
result.append(p)
    return result | python | 74514c3f99e25b46f22c6e02977fe3da69221c2e | https://github.com/apple/turicreate/blob/74514c3f99e25b46f22c6e02977fe3da69221c2e/deps/src/boost_1_68_0/tools/build/src/build/property.py#L338-L354 | train |
apple/turicreate | deps/src/boost_1_68_0/tools/build/src/build/property.py | validate | def validate (properties):
""" Exit with error if any of the properties is not valid.
properties may be a single property or a sequence of properties.
"""
if isinstance(properties, Property):
properties = [properties]
assert is_iterable_typed(properties, Property)
for p in properties:
        __validate1(p) | python | 74514c3f99e25b46f22c6e02977fe3da69221c2e | https://github.com/apple/turicreate/blob/74514c3f99e25b46f22c6e02977fe3da69221c2e/deps/src/boost_1_68_0/tools/build/src/build/property.py#L356-L364 | train |
apple/turicreate | deps/src/boost_1_68_0/tools/build/src/build/property.py | split_conditional | def split_conditional (property):
""" If 'property' is conditional property, returns
condition and the property, e.g
<variant>debug,<toolset>gcc:<inlining>full will become
<variant>debug,<toolset>gcc <inlining>full.
Otherwise, returns empty string.
"""
assert isinstance(property, basestring)
m = __re_split_conditional.match (property)
if m:
return (m.group (1), '<' + m.group (2))
    return None | python | 74514c3f99e25b46f22c6e02977fe3da69221c2e | https://github.com/apple/turicreate/blob/74514c3f99e25b46f22c6e02977fe3da69221c2e/deps/src/boost_1_68_0/tools/build/src/build/property.py#L394-L407 | train |
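The module-level `__re_split_conditional` pattern is not shown in this excerpt. A hypothetical reconstruction consistent with the docstring's example splits at the `:` that precedes a gristed property, returning the condition and the property with its leading `<` restored:

```python
import re

# Assumed pattern: condition, then ':' followed by a gristed '<...' property.
re_split_conditional = re.compile(r"(.+):<(.+)")

def split_conditional_sketch(prop):
    m = re_split_conditional.match(prop)
    if m:
        return (m.group(1), "<" + m.group(2))
    return None
```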
apple/turicreate | deps/src/boost_1_68_0/tools/build/src/build/property.py | select | def select (features, properties):
""" Selects properties which correspond to any of the given features.
"""
assert is_iterable_typed(properties, basestring)
result = []
# add any missing angle brackets
features = add_grist (features)
    return [p for p in properties if get_grist(p) in features] | python | 74514c3f99e25b46f22c6e02977fe3da69221c2e | https://github.com/apple/turicreate/blob/74514c3f99e25b46f22c6e02977fe3da69221c2e/deps/src/boost_1_68_0/tools/build/src/build/property.py#L410-L419 | train |
apple/turicreate | deps/src/boost_1_68_0/tools/build/src/build/property.py | evaluate_conditionals_in_context | def evaluate_conditionals_in_context (properties, context):
""" Removes all conditional properties which conditions are not met
For those with met conditions, removes the condition. Properies
in conditions are looked up in 'context'
"""
if __debug__:
from .property_set import PropertySet
assert is_iterable_typed(properties, Property)
assert isinstance(context, PropertySet)
base = []
conditional = []
for p in properties:
if p.condition:
conditional.append (p)
else:
base.append (p)
result = base[:]
for p in conditional:
# Evaluate condition
# FIXME: probably inefficient
if all(x in context for x in p.condition):
result.append(Property(p.feature, p.value))
    return result | python | 74514c3f99e25b46f22c6e02977fe3da69221c2e | https://github.com/apple/turicreate/blob/74514c3f99e25b46f22c6e02977fe3da69221c2e/deps/src/boost_1_68_0/tools/build/src/build/property.py#L428-L454 | train |
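The conditional-evaluation logic can be sketched without the `Property`/`PropertySet` classes. Here a property is a `(condition, feature, value)` triple, where the condition is either `None` (unconditional) or a tuple of plain properties that must all be present in the context set (names and representation are illustrative):

```python
def evaluate_conditionals_sketch(properties, context):
    # Unconditional properties pass through first, as in the original.
    result = [(f, v) for cond, f, v in properties if cond is None]
    # A conditional property survives, minus its condition, only when
    # every element of the condition is found in the context.
    for cond, f, v in properties:
        if cond is not None and all(c in context for c in cond):
            result.append((f, v))
    return result

props = [
    (None, "threading", "multi"),
    ((("toolset", "gcc"),), "cxxflags", "-fPIC"),
]
```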
apple/turicreate | deps/src/boost_1_68_0/tools/build/src/build/property.py | change | def change (properties, feature, value = None):
""" Returns a modified version of properties with all values of the
given feature replaced by the given value.
If 'value' is None the feature will be removed.
"""
assert is_iterable_typed(properties, basestring)
assert isinstance(feature, basestring)
assert isinstance(value, (basestring, type(None)))
result = []
feature = add_grist (feature)
for p in properties:
if get_grist (p) == feature:
if value:
result.append (replace_grist (value, feature))
else:
result.append (p)
    return result | python | 74514c3f99e25b46f22c6e02977fe3da69221c2e | https://github.com/apple/turicreate/blob/74514c3f99e25b46f22c6e02977fe3da69221c2e/deps/src/boost_1_68_0/tools/build/src/build/property.py#L457-L477 | train |
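A standalone sketch of `change()`: gristed properties look like `<feature>value`, matching properties are replaced, and passing no value drops the feature entirely. The grist helper is a simplified stand-in for the real Boost.Build one:

```python
def get_grist_sketch(prop):
    # "<variant>debug" -> "<variant>"; ungristed values have empty grist.
    return prop[:prop.index(">") + 1] if prop.startswith("<") else ""

def change_sketch(properties, feature, value=None):
    result = []
    for p in properties:
        if get_grist_sketch(p) == feature:
            if value is not None:
                result.append(feature + value)
            # value is None: the matching property is dropped.
        else:
            result.append(p)
    return result
```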
apple/turicreate | deps/src/boost_1_68_0/tools/build/src/build/property.py | __validate1 | def __validate1 (property):
""" Exit with error if property is not valid.
"""
assert isinstance(property, Property)
msg = None
if not property.feature.free:
        feature.validate_value_string (property.feature, property.value) | python | 74514c3f99e25b46f22c6e02977fe3da69221c2e | https://github.com/apple/turicreate/blob/74514c3f99e25b46f22c6e02977fe3da69221c2e/deps/src/boost_1_68_0/tools/build/src/build/property.py#L483-L490 | train |
apple/turicreate | deps/src/boost_1_68_0/tools/build/src/build/property.py | remove | def remove(attributes, properties):
"""Returns a property sets which include all the elements
in 'properties' that do not have attributes listed in 'attributes'."""
if isinstance(attributes, basestring):
attributes = [attributes]
assert is_iterable_typed(attributes, basestring)
assert is_iterable_typed(properties, basestring)
result = []
for e in properties:
attributes_new = feature.attributes(get_grist(e))
has_common_features = 0
for a in attributes_new:
if a in attributes:
has_common_features = 1
break
if not has_common_features:
result += e
return result | python | def remove(attributes, properties):
"""Returns a property sets which include all the elements
in 'properties' that do not have attributes listed in 'attributes'."""
if isinstance(attributes, basestring):
attributes = [attributes]
assert is_iterable_typed(attributes, basestring)
assert is_iterable_typed(properties, basestring)
result = []
for e in properties:
attributes_new = feature.attributes(get_grist(e))
has_common_features = 0
for a in attributes_new:
if a in attributes:
has_common_features = 1
break
if not has_common_features:
result += e
return result | [
"def",
"remove",
"(",
"attributes",
",",
"properties",
")",
":",
"if",
"isinstance",
"(",
"attributes",
",",
"basestring",
")",
":",
"attributes",
"=",
"[",
"attributes",
"]",
"assert",
"is_iterable_typed",
"(",
"attributes",
",",
"basestring",
")",
"assert",
"is_iterable_typed",
"(",
"properties",
",",
"basestring",
")",
"result",
"=",
"[",
"]",
"for",
"e",
"in",
"properties",
":",
"attributes_new",
"=",
"feature",
".",
"attributes",
"(",
"get_grist",
"(",
"e",
")",
")",
"has_common_features",
"=",
"0",
"for",
"a",
"in",
"attributes_new",
":",
"if",
"a",
"in",
"attributes",
":",
"has_common_features",
"=",
"1",
"break",
"if",
"not",
"has_common_features",
":",
"result",
"+=",
"e",
"return",
"result"
] | Returns a property set which includes all the elements
in 'properties' that do not have attributes listed in 'attributes'. | [
"Returns",
"a",
"property",
"sets",
"which",
"include",
"all",
"the",
"elements",
"in",
"properties",
"that",
"do",
"not",
"have",
"attributes",
"listed",
"in",
"attributes",
"."
] | 74514c3f99e25b46f22c6e02977fe3da69221c2e | https://github.com/apple/turicreate/blob/74514c3f99e25b46f22c6e02977fe3da69221c2e/deps/src/boost_1_68_0/tools/build/src/build/property.py#L520-L539 | train |
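The collection loop in `remove` uses `result += e`, which for a Python list is an in-place extend: when `e` is a property *string*, it is spliced in character by character, unlike the `result.append(e)` that `take` uses. A plain-Python sketch of the difference:

```python
# Demonstrates why `result += e` and `result.append(e)` differ when `e`
# is a string: `+=` on a list is in-place extend, which iterates over
# the string and splices in its individual characters.
def collect_with_extend(items):
    result = []
    for e in items:
        result += e          # extends character by character
    return result

def collect_with_append(items):
    result = []
    for e in items:
        result.append(e)     # keeps each property string whole
    return result

props = ["<link>shared", "<threading>multi"]
print(collect_with_extend(props)[:3])   # ['<', 'l', 'i']
print(collect_with_append(props))       # ['<link>shared', '<threading>multi']
```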
apple/turicreate | deps/src/boost_1_68_0/tools/build/src/build/property.py | take | def take(attributes, properties):
"""Returns a property set which include all
properties in 'properties' that have any of 'attributes'."""
assert is_iterable_typed(attributes, basestring)
assert is_iterable_typed(properties, basestring)
result = []
for e in properties:
if b2.util.set.intersection(attributes, feature.attributes(get_grist(e))):
result.append(e)
return result | python | def take(attributes, properties):
"""Returns a property set which include all
properties in 'properties' that have any of 'attributes'."""
assert is_iterable_typed(attributes, basestring)
assert is_iterable_typed(properties, basestring)
result = []
for e in properties:
if b2.util.set.intersection(attributes, feature.attributes(get_grist(e))):
result.append(e)
return result | [
"def",
"take",
"(",
"attributes",
",",
"properties",
")",
":",
"assert",
"is_iterable_typed",
"(",
"attributes",
",",
"basestring",
")",
"assert",
"is_iterable_typed",
"(",
"properties",
",",
"basestring",
")",
"result",
"=",
"[",
"]",
"for",
"e",
"in",
"properties",
":",
"if",
"b2",
".",
"util",
".",
"set",
".",
"intersection",
"(",
"attributes",
",",
"feature",
".",
"attributes",
"(",
"get_grist",
"(",
"e",
")",
")",
")",
":",
"result",
".",
"append",
"(",
"e",
")",
"return",
"result"
] | Returns a property set which includes all
properties in 'properties' that have any of 'attributes'. | [
"Returns",
"a",
"property",
"set",
"which",
"include",
"all",
"properties",
"in",
"properties",
"that",
"have",
"any",
"of",
"attributes",
"."
] | 74514c3f99e25b46f22c6e02977fe3da69221c2e | https://github.com/apple/turicreate/blob/74514c3f99e25b46f22c6e02977fe3da69221c2e/deps/src/boost_1_68_0/tools/build/src/build/property.py#L542-L551 | train |
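`take` keeps a property when its feature's attributes intersect the requested ones. A minimal stand-in using plain sets and a hypothetical grist-to-attributes table (the real code queries `feature.attributes(get_grist(e))` from Boost.Build's feature registry, not a dict):

```python
# Minimal sketch of attribute-based filtering. ATTRIBUTES is an assumed
# lookup table standing in for Boost.Build's feature registry.
ATTRIBUTES = {
    "<link>": {"propagated"},
    "<define>": {"free"},
    "<warnings>": {"incidental", "propagated"},
}

def take(attributes, properties):
    result = []
    for e in properties:
        grist = e[:e.index(">") + 1]          # "<link>shared" -> "<link>"
        if set(attributes) & ATTRIBUTES.get(grist, set()):
            result.append(e)
    return result

props = ["<link>shared", "<define>NDEBUG", "<warnings>off"]
print(take(["free"], props))        # ['<define>NDEBUG']
print(take(["propagated"], props))  # ['<link>shared', '<warnings>off']
```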
apple/turicreate | deps/src/boost_1_68_0/tools/build/src/build/property.py | PropertyMap.insert | def insert (self, properties, value):
""" Associate value with properties.
"""
assert is_iterable_typed(properties, basestring)
assert isinstance(value, basestring)
self.__properties.append(properties)
self.__values.append(value) | python | def insert (self, properties, value):
""" Associate value with properties.
"""
assert is_iterable_typed(properties, basestring)
assert isinstance(value, basestring)
self.__properties.append(properties)
self.__values.append(value) | [
"def",
"insert",
"(",
"self",
",",
"properties",
",",
"value",
")",
":",
"assert",
"is_iterable_typed",
"(",
"properties",
",",
"basestring",
")",
"assert",
"isinstance",
"(",
"value",
",",
"basestring",
")",
"self",
".",
"__properties",
".",
"append",
"(",
"properties",
")",
"self",
".",
"__values",
".",
"append",
"(",
"value",
")"
] | Associate value with properties. | [
"Associate",
"value",
"with",
"properties",
"."
] | 74514c3f99e25b46f22c6e02977fe3da69221c2e | https://github.com/apple/turicreate/blob/74514c3f99e25b46f22c6e02977fe3da69221c2e/deps/src/boost_1_68_0/tools/build/src/build/property.py#L590-L596 | train |
apple/turicreate | deps/src/boost_1_68_0/libs/metaparse/tools/benchmark/benchmark.py | benchmark_command | def benchmark_command(cmd, progress):
"""Benchmark one command execution"""
full_cmd = '/usr/bin/time --format="%U %M" {0}'.format(cmd)
print '{0:6.2f}% Running {1}'.format(100.0 * progress, full_cmd)
(_, err) = subprocess.Popen(
['/bin/sh', '-c', full_cmd],
stdin=subprocess.PIPE,
stdout=subprocess.PIPE,
stderr=subprocess.PIPE
).communicate('')
values = err.strip().split(' ')
if len(values) == 2:
try:
return (float(values[0]), float(values[1]))
except: # pylint:disable=I0011,W0702
pass # Handled by the code after the "if"
print err
raise Exception('Error during benchmarking') | python | def benchmark_command(cmd, progress):
"""Benchmark one command execution"""
full_cmd = '/usr/bin/time --format="%U %M" {0}'.format(cmd)
print '{0:6.2f}% Running {1}'.format(100.0 * progress, full_cmd)
(_, err) = subprocess.Popen(
['/bin/sh', '-c', full_cmd],
stdin=subprocess.PIPE,
stdout=subprocess.PIPE,
stderr=subprocess.PIPE
).communicate('')
values = err.strip().split(' ')
if len(values) == 2:
try:
return (float(values[0]), float(values[1]))
except: # pylint:disable=I0011,W0702
pass # Handled by the code after the "if"
print err
raise Exception('Error during benchmarking') | [
"def",
"benchmark_command",
"(",
"cmd",
",",
"progress",
")",
":",
"full_cmd",
"=",
"'/usr/bin/time --format=\"%U %M\" {0}'",
".",
"format",
"(",
"cmd",
")",
"print",
"'{0:6.2f}% Running {1}'",
".",
"format",
"(",
"100.0",
"*",
"progress",
",",
"full_cmd",
")",
"(",
"_",
",",
"err",
")",
"=",
"subprocess",
".",
"Popen",
"(",
"[",
"'/bin/sh'",
",",
"'-c'",
",",
"full_cmd",
"]",
",",
"stdin",
"=",
"subprocess",
".",
"PIPE",
",",
"stdout",
"=",
"subprocess",
".",
"PIPE",
",",
"stderr",
"=",
"subprocess",
".",
"PIPE",
")",
".",
"communicate",
"(",
"''",
")",
"values",
"=",
"err",
".",
"strip",
"(",
")",
".",
"split",
"(",
"' '",
")",
"if",
"len",
"(",
"values",
")",
"==",
"2",
":",
"try",
":",
"return",
"(",
"float",
"(",
"values",
"[",
"0",
"]",
")",
",",
"float",
"(",
"values",
"[",
"1",
"]",
")",
")",
"except",
":",
"# pylint:disable=I0011,W0702",
"pass",
"# Handled by the code after the \"if\"",
"print",
"err",
"raise",
"Exception",
"(",
"'Error during benchmarking'",
")"
] | Benchmark one command execution | [
"Benchmark",
"one",
"command",
"execution"
] | 74514c3f99e25b46f22c6e02977fe3da69221c2e | https://github.com/apple/turicreate/blob/74514c3f99e25b46f22c6e02977fe3da69221c2e/deps/src/boost_1_68_0/libs/metaparse/tools/benchmark/benchmark.py#L26-L45 | train |
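`/usr/bin/time --format="%U %M"` prints user CPU seconds and peak resident memory in kilobytes to stderr; `benchmark_command` recovers both by splitting that output. The parsing step in isolation, as a sketch that needs no subprocess:

```python
def parse_time_output(err):
    """Parse '/usr/bin/time --format="%U %M"' stderr output into
    (user_seconds, max_rss_kb), or None if it does not match."""
    values = err.strip().split(' ')
    if len(values) == 2:
        try:
            return (float(values[0]), float(values[1]))
        except ValueError:
            return None
    return None

print(parse_time_output("0.42 10240\n"))   # (0.42, 10240.0)
print(parse_time_output("command failed")) # None
```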
apple/turicreate | deps/src/boost_1_68_0/libs/metaparse/tools/benchmark/benchmark.py | benchmark_file | def benchmark_file(
filename, compiler, include_dirs, (progress_from, progress_to),
iter_count, extra_flags = ''):
"""Benchmark one file"""
time_sum = 0
mem_sum = 0
for nth_run in xrange(0, iter_count):
(time_spent, mem_used) = benchmark_command(
'{0} -std=c++11 {1} -c {2} {3}'.format(
compiler,
' '.join('-I{0}'.format(i) for i in include_dirs),
filename,
extra_flags
),
(
progress_to * nth_run + progress_from * (iter_count - nth_run)
) / iter_count
)
os.remove(os.path.splitext(os.path.basename(filename))[0] + '.o')
time_sum = time_sum + time_spent
mem_sum = mem_sum + mem_used
return {
"time": time_sum / iter_count,
"memory": mem_sum / (iter_count * 1024)
} | python | def benchmark_file(
filename, compiler, include_dirs, (progress_from, progress_to),
iter_count, extra_flags = ''):
"""Benchmark one file"""
time_sum = 0
mem_sum = 0
for nth_run in xrange(0, iter_count):
(time_spent, mem_used) = benchmark_command(
'{0} -std=c++11 {1} -c {2} {3}'.format(
compiler,
' '.join('-I{0}'.format(i) for i in include_dirs),
filename,
extra_flags
),
(
progress_to * nth_run + progress_from * (iter_count - nth_run)
) / iter_count
)
os.remove(os.path.splitext(os.path.basename(filename))[0] + '.o')
time_sum = time_sum + time_spent
mem_sum = mem_sum + mem_used
return {
"time": time_sum / iter_count,
"memory": mem_sum / (iter_count * 1024)
} | [
"def",
"benchmark_file",
"(",
"filename",
",",
"compiler",
",",
"include_dirs",
",",
"(",
"progress_from",
",",
"progress_to",
")",
",",
"iter_count",
",",
"extra_flags",
"=",
"''",
")",
":",
"time_sum",
"=",
"0",
"mem_sum",
"=",
"0",
"for",
"nth_run",
"in",
"xrange",
"(",
"0",
",",
"iter_count",
")",
":",
"(",
"time_spent",
",",
"mem_used",
")",
"=",
"benchmark_command",
"(",
"'{0} -std=c++11 {1} -c {2} {3}'",
".",
"format",
"(",
"compiler",
",",
"' '",
".",
"join",
"(",
"'-I{0}'",
".",
"format",
"(",
"i",
")",
"for",
"i",
"in",
"include_dirs",
")",
",",
"filename",
",",
"extra_flags",
")",
",",
"(",
"progress_to",
"*",
"nth_run",
"+",
"progress_from",
"*",
"(",
"iter_count",
"-",
"nth_run",
")",
")",
"/",
"iter_count",
")",
"os",
".",
"remove",
"(",
"os",
".",
"path",
".",
"splitext",
"(",
"os",
".",
"path",
".",
"basename",
"(",
"filename",
")",
")",
"[",
"0",
"]",
"+",
"'.o'",
")",
"time_sum",
"=",
"time_sum",
"+",
"time_spent",
"mem_sum",
"=",
"mem_sum",
"+",
"mem_used",
"return",
"{",
"\"time\"",
":",
"time_sum",
"/",
"iter_count",
",",
"\"memory\"",
":",
"mem_sum",
"/",
"(",
"iter_count",
"*",
"1024",
")",
"}"
] | Benchmark one file | [
"Benchmark",
"one",
"file"
] | 74514c3f99e25b46f22c6e02977fe3da69221c2e | https://github.com/apple/turicreate/blob/74514c3f99e25b46f22c6e02977fe3da69221c2e/deps/src/boost_1_68_0/libs/metaparse/tools/benchmark/benchmark.py#L48-L73 | train |
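`benchmark_file` averages the per-run samples and converts the kilobyte memory figures to megabytes by dividing by 1024. The aggregation on its own (using Python 3 true division, unlike the integer division the Python 2 original performs on whole-number inputs):

```python
def aggregate_runs(runs):
    """Average (seconds, kilobytes) samples into the same dict shape
    benchmark_file returns: mean seconds and mean megabytes."""
    iter_count = len(runs)
    time_sum = sum(t for t, _ in runs)
    mem_sum = sum(m for _, m in runs)
    return {
        "time": time_sum / iter_count,
        "memory": mem_sum / (iter_count * 1024),  # KB -> MB
    }

print(aggregate_runs([(1.0, 2048), (3.0, 4096)]))
# {'time': 2.0, 'memory': 3.0}
```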
apple/turicreate | deps/src/boost_1_68_0/libs/metaparse/tools/benchmark/benchmark.py | compiler_info | def compiler_info(compiler):
"""Determine the name + version of the compiler"""
(out, err) = subprocess.Popen(
['/bin/sh', '-c', '{0} -v'.format(compiler)],
stdin=subprocess.PIPE,
stdout=subprocess.PIPE,
stderr=subprocess.PIPE
).communicate('')
gcc_clang = re.compile('(gcc|clang) version ([0-9]+(\\.[0-9]+)*)')
for line in (out + err).split('\n'):
mtch = gcc_clang.search(line)
if mtch:
return mtch.group(1) + ' ' + mtch.group(2)
return compiler | python | def compiler_info(compiler):
"""Determine the name + version of the compiler"""
(out, err) = subprocess.Popen(
['/bin/sh', '-c', '{0} -v'.format(compiler)],
stdin=subprocess.PIPE,
stdout=subprocess.PIPE,
stderr=subprocess.PIPE
).communicate('')
gcc_clang = re.compile('(gcc|clang) version ([0-9]+(\\.[0-9]+)*)')
for line in (out + err).split('\n'):
mtch = gcc_clang.search(line)
if mtch:
return mtch.group(1) + ' ' + mtch.group(2)
return compiler | [
"def",
"compiler_info",
"(",
"compiler",
")",
":",
"(",
"out",
",",
"err",
")",
"=",
"subprocess",
".",
"Popen",
"(",
"[",
"'/bin/sh'",
",",
"'-c'",
",",
"'{0} -v'",
".",
"format",
"(",
"compiler",
")",
"]",
",",
"stdin",
"=",
"subprocess",
".",
"PIPE",
",",
"stdout",
"=",
"subprocess",
".",
"PIPE",
",",
"stderr",
"=",
"subprocess",
".",
"PIPE",
")",
".",
"communicate",
"(",
"''",
")",
"gcc_clang",
"=",
"re",
".",
"compile",
"(",
"'(gcc|clang) version ([0-9]+(\\\\.[0-9]+)*)'",
")",
"for",
"line",
"in",
"(",
"out",
"+",
"err",
")",
".",
"split",
"(",
"'\\n'",
")",
":",
"mtch",
"=",
"gcc_clang",
".",
"search",
"(",
"line",
")",
"if",
"mtch",
":",
"return",
"mtch",
".",
"group",
"(",
"1",
")",
"+",
"' '",
"+",
"mtch",
".",
"group",
"(",
"2",
")",
"return",
"compiler"
] | Determine the name + version of the compiler | [
"Determine",
"the",
"name",
"+",
"version",
"of",
"the",
"compiler"
] | 74514c3f99e25b46f22c6e02977fe3da69221c2e | https://github.com/apple/turicreate/blob/74514c3f99e25b46f22c6e02977fe3da69221c2e/deps/src/boost_1_68_0/libs/metaparse/tools/benchmark/benchmark.py#L76-L92 | train |
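The regex in `compiler_info` pulls a `gcc`/`clang` name plus a dotted version number out of the verbose output. The same extraction applied to canned `-v` text so it can run without invoking a compiler:

```python
import re

GCC_CLANG = re.compile(r'(gcc|clang) version ([0-9]+(\.[0-9]+)*)')

def compiler_info_from_text(text, fallback="unknown"):
    """Scan compiler -v output line by line for 'gcc/clang version X.Y.Z'."""
    for line in text.split('\n'):
        mtch = GCC_CLANG.search(line)
        if mtch:
            return mtch.group(1) + ' ' + mtch.group(2)
    return fallback

sample = "Using built-in specs.\ngcc version 7.5.0 (Ubuntu 7.5.0-3)\n"
print(compiler_info_from_text(sample))  # gcc 7.5.0
```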
apple/turicreate | deps/src/boost_1_68_0/libs/metaparse/tools/benchmark/benchmark.py | files_in_dir | def files_in_dir(path, extension):
"""Enumartes the files in path with the given extension"""
ends = '.{0}'.format(extension)
return (f for f in os.listdir(path) if f.endswith(ends)) | python | def files_in_dir(path, extension):
"""Enumartes the files in path with the given extension"""
ends = '.{0}'.format(extension)
return (f for f in os.listdir(path) if f.endswith(ends)) | [
"def",
"files_in_dir",
"(",
"path",
",",
"extension",
")",
":",
"ends",
"=",
"'.{0}'",
".",
"format",
"(",
"extension",
")",
"return",
"(",
"f",
"for",
"f",
"in",
"os",
".",
"listdir",
"(",
"path",
")",
"if",
"f",
".",
"endswith",
"(",
"ends",
")",
")"
] | Enumerates the files in path with the given extension | [
"Enumartes",
"the",
"files",
"in",
"path",
"with",
"the",
"given",
"extension"
] | 74514c3f99e25b46f22c6e02977fe3da69221c2e | https://github.com/apple/turicreate/blob/74514c3f99e25b46f22c6e02977fe3da69221c2e/deps/src/boost_1_68_0/libs/metaparse/tools/benchmark/benchmark.py#L105-L108 | train |
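The same suffix filter as `files_in_dir`, parameterised over a name list so it can run without touching the filesystem (the original iterates `os.listdir(path)`):

```python
def names_with_extension(names, extension):
    """Yield the names ending in '.<extension>' -- the same filter
    files_in_dir applies to os.listdir(path)."""
    ends = '.{0}'.format(extension)
    return (f for f in names if f.endswith(ends))

names = ['a.cpp', 'b.hpp', 'c.cpp', 'notes.txt']
print(list(names_with_extension(names, 'cpp')))  # ['a.cpp', 'c.cpp']
```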
apple/turicreate | deps/src/boost_1_68_0/libs/metaparse/tools/benchmark/benchmark.py | format_time | def format_time(seconds):
"""Format a duration"""
minute = 60
hour = minute * 60
day = hour * 24
week = day * 7
result = []
for name, dur in [
('week', week), ('day', day), ('hour', hour),
('minute', minute), ('second', 1)
]:
if seconds > dur:
value = seconds // dur
result.append(
'{0} {1}{2}'.format(int(value), name, 's' if value > 1 else '')
)
seconds = seconds % dur
return ' '.join(result) | python | def format_time(seconds):
"""Format a duration"""
minute = 60
hour = minute * 60
day = hour * 24
week = day * 7
result = []
for name, dur in [
('week', week), ('day', day), ('hour', hour),
('minute', minute), ('second', 1)
]:
if seconds > dur:
value = seconds // dur
result.append(
'{0} {1}{2}'.format(int(value), name, 's' if value > 1 else '')
)
seconds = seconds % dur
return ' '.join(result) | [
"def",
"format_time",
"(",
"seconds",
")",
":",
"minute",
"=",
"60",
"hour",
"=",
"minute",
"*",
"60",
"day",
"=",
"hour",
"*",
"24",
"week",
"=",
"day",
"*",
"7",
"result",
"=",
"[",
"]",
"for",
"name",
",",
"dur",
"in",
"[",
"(",
"'week'",
",",
"week",
")",
",",
"(",
"'day'",
",",
"day",
")",
",",
"(",
"'hour'",
",",
"hour",
")",
",",
"(",
"'minute'",
",",
"minute",
")",
",",
"(",
"'second'",
",",
"1",
")",
"]",
":",
"if",
"seconds",
">",
"dur",
":",
"value",
"=",
"seconds",
"//",
"dur",
"result",
".",
"append",
"(",
"'{0} {1}{2}'",
".",
"format",
"(",
"int",
"(",
"value",
")",
",",
"name",
",",
"'s'",
"if",
"value",
">",
"1",
"else",
"''",
")",
")",
"seconds",
"=",
"seconds",
"%",
"dur",
"return",
"' '",
".",
"join",
"(",
"result",
")"
] | Format a duration | [
"Format",
"a",
"duration"
] | 74514c3f99e25b46f22c6e02977fe3da69221c2e | https://github.com/apple/turicreate/blob/74514c3f99e25b46f22c6e02977fe3da69221c2e/deps/src/boost_1_68_0/libs/metaparse/tools/benchmark/benchmark.py#L111-L129 | train |
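A direct Python 3 port of `format_time`. Note the strict `>` comparison: a duration that is an exact multiple of a unit (e.g. exactly 60 seconds) yields no component for that unit, a quirk of the original preserved here:

```python
def format_time(seconds):
    """Format a duration, e.g. 3725 -> '1 hour 2 minutes 5 seconds'."""
    minute = 60
    hour = minute * 60
    day = hour * 24
    week = day * 7
    result = []
    for name, dur in [('week', week), ('day', day), ('hour', hour),
                      ('minute', minute), ('second', 1)]:
        if seconds > dur:   # strict: an exact multiple yields no component
            value = seconds // dur
            result.append('{0} {1}{2}'.format(int(value), name,
                                              's' if value > 1 else ''))
            seconds = seconds % dur
    return ' '.join(result)

print(format_time(3725))  # 1 hour 2 minutes 5 seconds
```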
apple/turicreate | deps/src/boost_1_68_0/libs/metaparse/tools/benchmark/benchmark.py | benchmark | def benchmark(src_dir, compiler, include_dirs, iter_count):
"""Do the benchmarking"""
files = list(files_in_dir(src_dir, 'cpp'))
random.shuffle(files)
has_string_templates = True
string_template_file_cnt = sum(1 for file in files if 'bmp' in file)
file_count = len(files) + string_template_file_cnt
started_at = time.time()
result = {}
for filename in files:
progress = len(result)
result[filename] = benchmark_file(
os.path.join(src_dir, filename),
compiler,
include_dirs,
(float(progress) / file_count, float(progress + 1) / file_count),
iter_count
)
if 'bmp' in filename and has_string_templates:
try:
temp_result = benchmark_file(
os.path.join(src_dir, filename),
compiler,
include_dirs,
(float(progress + 1) / file_count, float(progress + 2) / file_count),
iter_count,
'-Xclang -fstring-literal-templates'
)
result[filename.replace('bmp', 'slt')] = temp_result
except:
has_string_templates = False
file_count -= string_template_file_cnt
print 'Stopping the benchmarking of string literal templates'
elapsed = time.time() - started_at
total = float(file_count * elapsed) / len(result)
print 'Elapsed time: {0}, Remaining time: {1}'.format(
format_time(elapsed),
format_time(total - elapsed)
)
return result | python | def benchmark(src_dir, compiler, include_dirs, iter_count):
"""Do the benchmarking"""
files = list(files_in_dir(src_dir, 'cpp'))
random.shuffle(files)
has_string_templates = True
string_template_file_cnt = sum(1 for file in files if 'bmp' in file)
file_count = len(files) + string_template_file_cnt
started_at = time.time()
result = {}
for filename in files:
progress = len(result)
result[filename] = benchmark_file(
os.path.join(src_dir, filename),
compiler,
include_dirs,
(float(progress) / file_count, float(progress + 1) / file_count),
iter_count
)
if 'bmp' in filename and has_string_templates:
try:
temp_result = benchmark_file(
os.path.join(src_dir, filename),
compiler,
include_dirs,
(float(progress + 1) / file_count, float(progress + 2) / file_count),
iter_count,
'-Xclang -fstring-literal-templates'
)
result[filename.replace('bmp', 'slt')] = temp_result
except:
has_string_templates = False
file_count -= string_template_file_cnt
print 'Stopping the benchmarking of string literal templates'
elapsed = time.time() - started_at
total = float(file_count * elapsed) / len(result)
print 'Elapsed time: {0}, Remaining time: {1}'.format(
format_time(elapsed),
format_time(total - elapsed)
)
return result | [
"def",
"benchmark",
"(",
"src_dir",
",",
"compiler",
",",
"include_dirs",
",",
"iter_count",
")",
":",
"files",
"=",
"list",
"(",
"files_in_dir",
"(",
"src_dir",
",",
"'cpp'",
")",
")",
"random",
".",
"shuffle",
"(",
"files",
")",
"has_string_templates",
"=",
"True",
"string_template_file_cnt",
"=",
"sum",
"(",
"1",
"for",
"file",
"in",
"files",
"if",
"'bmp'",
"in",
"file",
")",
"file_count",
"=",
"len",
"(",
"files",
")",
"+",
"string_template_file_cnt",
"started_at",
"=",
"time",
".",
"time",
"(",
")",
"result",
"=",
"{",
"}",
"for",
"filename",
"in",
"files",
":",
"progress",
"=",
"len",
"(",
"result",
")",
"result",
"[",
"filename",
"]",
"=",
"benchmark_file",
"(",
"os",
".",
"path",
".",
"join",
"(",
"src_dir",
",",
"filename",
")",
",",
"compiler",
",",
"include_dirs",
",",
"(",
"float",
"(",
"progress",
")",
"/",
"file_count",
",",
"float",
"(",
"progress",
"+",
"1",
")",
"/",
"file_count",
")",
",",
"iter_count",
")",
"if",
"'bmp'",
"in",
"filename",
"and",
"has_string_templates",
":",
"try",
":",
"temp_result",
"=",
"benchmark_file",
"(",
"os",
".",
"path",
".",
"join",
"(",
"src_dir",
",",
"filename",
")",
",",
"compiler",
",",
"include_dirs",
",",
"(",
"float",
"(",
"progress",
"+",
"1",
")",
"/",
"file_count",
",",
"float",
"(",
"progress",
"+",
"2",
")",
"/",
"file_count",
")",
",",
"iter_count",
",",
"'-Xclang -fstring-literal-templates'",
")",
"result",
"[",
"filename",
".",
"replace",
"(",
"'bmp'",
",",
"'slt'",
")",
"]",
"=",
"temp_result",
"except",
":",
"has_string_templates",
"=",
"False",
"file_count",
"-=",
"string_template_file_cnt",
"print",
"'Stopping the benchmarking of string literal templates'",
"elapsed",
"=",
"time",
".",
"time",
"(",
")",
"-",
"started_at",
"total",
"=",
"float",
"(",
"file_count",
"*",
"elapsed",
")",
"/",
"len",
"(",
"result",
")",
"print",
"'Elapsed time: {0}, Remaining time: {1}'",
".",
"format",
"(",
"format_time",
"(",
"elapsed",
")",
",",
"format_time",
"(",
"total",
"-",
"elapsed",
")",
")",
"return",
"result"
] | Do the benchmarking | [
"Do",
"the",
"benchmarking"
] | 74514c3f99e25b46f22c6e02977fe3da69221c2e | https://github.com/apple/turicreate/blob/74514c3f99e25b46f22c6e02977fe3da69221c2e/deps/src/boost_1_68_0/libs/metaparse/tools/benchmark/benchmark.py#L132-L174 | train |
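`benchmark` projects the total runtime linearly from progress so far (`total = file_count * elapsed / done`) and reports the difference as the remaining time. The ETA arithmetic in isolation:

```python
def remaining_seconds(file_count, done, elapsed):
    """Linear ETA: projected total minus elapsed, as benchmark() computes
    after finishing `done` of `file_count` units in `elapsed` seconds."""
    total = float(file_count * elapsed) / done
    return total - elapsed

print(remaining_seconds(10, 4, 120.0))  # 180.0
```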
apple/turicreate | deps/src/boost_1_68_0/libs/metaparse/tools/benchmark/benchmark.py | plot | def plot(values, mode_names, title, (xlabel, ylabel), out_file):
"""Plot a diagram"""
matplotlib.pyplot.clf()
for mode, mode_name in mode_names.iteritems():
vals = values[mode]
matplotlib.pyplot.plot(
[x for x, _ in vals],
[y for _, y in vals],
label=mode_name
)
matplotlib.pyplot.title(title)
matplotlib.pyplot.xlabel(xlabel)
matplotlib.pyplot.ylabel(ylabel)
if len(mode_names) > 1:
matplotlib.pyplot.legend()
matplotlib.pyplot.savefig(out_file) | python | def plot(values, mode_names, title, (xlabel, ylabel), out_file):
"""Plot a diagram"""
matplotlib.pyplot.clf()
for mode, mode_name in mode_names.iteritems():
vals = values[mode]
matplotlib.pyplot.plot(
[x for x, _ in vals],
[y for _, y in vals],
label=mode_name
)
matplotlib.pyplot.title(title)
matplotlib.pyplot.xlabel(xlabel)
matplotlib.pyplot.ylabel(ylabel)
if len(mode_names) > 1:
matplotlib.pyplot.legend()
matplotlib.pyplot.savefig(out_file) | [
"def",
"plot",
"(",
"values",
",",
"mode_names",
",",
"title",
",",
"(",
"xlabel",
",",
"ylabel",
")",
",",
"out_file",
")",
":",
"matplotlib",
".",
"pyplot",
".",
"clf",
"(",
")",
"for",
"mode",
",",
"mode_name",
"in",
"mode_names",
".",
"iteritems",
"(",
")",
":",
"vals",
"=",
"values",
"[",
"mode",
"]",
"matplotlib",
".",
"pyplot",
".",
"plot",
"(",
"[",
"x",
"for",
"x",
",",
"_",
"in",
"vals",
"]",
",",
"[",
"y",
"for",
"_",
",",
"y",
"in",
"vals",
"]",
",",
"label",
"=",
"mode_name",
")",
"matplotlib",
".",
"pyplot",
".",
"title",
"(",
"title",
")",
"matplotlib",
".",
"pyplot",
".",
"xlabel",
"(",
"xlabel",
")",
"matplotlib",
".",
"pyplot",
".",
"ylabel",
"(",
"ylabel",
")",
"if",
"len",
"(",
"mode_names",
")",
">",
"1",
":",
"matplotlib",
".",
"pyplot",
".",
"legend",
"(",
")",
"matplotlib",
".",
"pyplot",
".",
"savefig",
"(",
"out_file",
")"
] | Plot a diagram | [
"Plot",
"a",
"diagram"
] | 74514c3f99e25b46f22c6e02977fe3da69221c2e | https://github.com/apple/turicreate/blob/74514c3f99e25b46f22c6e02977fe3da69221c2e/deps/src/boost_1_68_0/libs/metaparse/tools/benchmark/benchmark.py#L177-L192 | train |
apple/turicreate | deps/src/boost_1_68_0/libs/metaparse/tools/benchmark/benchmark.py | configs_in | def configs_in(src_dir):
"""Enumerate all configs in src_dir"""
for filename in files_in_dir(src_dir, 'json'):
with open(os.path.join(src_dir, filename), 'rb') as in_f:
yield json.load(in_f) | python | def configs_in(src_dir):
"""Enumerate all configs in src_dir"""
for filename in files_in_dir(src_dir, 'json'):
with open(os.path.join(src_dir, filename), 'rb') as in_f:
yield json.load(in_f) | [
"def",
"configs_in",
"(",
"src_dir",
")",
":",
"for",
"filename",
"in",
"files_in_dir",
"(",
"src_dir",
",",
"'json'",
")",
":",
"with",
"open",
"(",
"os",
".",
"path",
".",
"join",
"(",
"src_dir",
",",
"filename",
")",
",",
"'rb'",
")",
"as",
"in_f",
":",
"yield",
"json",
".",
"load",
"(",
"in_f",
")"
] | Enumerate all configs in src_dir | [
"Enumerate",
"all",
"configs",
"in",
"src_dir"
] | 74514c3f99e25b46f22c6e02977fe3da69221c2e | https://github.com/apple/turicreate/blob/74514c3f99e25b46f22c6e02977fe3da69221c2e/deps/src/boost_1_68_0/libs/metaparse/tools/benchmark/benchmark.py#L203-L207 | train |
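`configs_in` yields one parsed dict per `.json` file in the source directory. A self-contained Python 3 version exercised against a temporary directory:

```python
import json
import os
import tempfile

def configs_in(src_dir):
    """Yield one parsed config per .json file in src_dir."""
    for filename in os.listdir(src_dir):
        if filename.endswith('.json'):
            with open(os.path.join(src_dir, filename)) as in_f:
                yield json.load(in_f)

with tempfile.TemporaryDirectory() as d:
    with open(os.path.join(d, 'bench.json'), 'w') as f:
        json.dump({'name': 'string', 'modes': {'man': 'manual'}}, f)
    configs = list(configs_in(d))

print(configs[0]['name'])  # string
```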
apple/turicreate | deps/src/boost_1_68_0/libs/metaparse/tools/benchmark/benchmark.py | join_images | def join_images(img_files, out_file):
"""Join the list of images into the out file"""
images = [PIL.Image.open(f) for f in img_files]
joined = PIL.Image.new(
'RGB',
(sum(i.size[0] for i in images), max(i.size[1] for i in images))
)
left = 0
for img in images:
joined.paste(im=img, box=(left, 0))
left = left + img.size[0]
joined.save(out_file) | python | def join_images(img_files, out_file):
"""Join the list of images into the out file"""
images = [PIL.Image.open(f) for f in img_files]
joined = PIL.Image.new(
'RGB',
(sum(i.size[0] for i in images), max(i.size[1] for i in images))
)
left = 0
for img in images:
joined.paste(im=img, box=(left, 0))
left = left + img.size[0]
joined.save(out_file) | [
"def",
"join_images",
"(",
"img_files",
",",
"out_file",
")",
":",
"images",
"=",
"[",
"PIL",
".",
"Image",
".",
"open",
"(",
"f",
")",
"for",
"f",
"in",
"img_files",
"]",
"joined",
"=",
"PIL",
".",
"Image",
".",
"new",
"(",
"'RGB'",
",",
"(",
"sum",
"(",
"i",
".",
"size",
"[",
"0",
"]",
"for",
"i",
"in",
"images",
")",
",",
"max",
"(",
"i",
".",
"size",
"[",
"1",
"]",
"for",
"i",
"in",
"images",
")",
")",
")",
"left",
"=",
"0",
"for",
"img",
"in",
"images",
":",
"joined",
".",
"paste",
"(",
"im",
"=",
"img",
",",
"box",
"=",
"(",
"left",
",",
"0",
")",
")",
"left",
"=",
"left",
"+",
"img",
".",
"size",
"[",
"0",
"]",
"joined",
".",
"save",
"(",
"out_file",
")"
] | Join the list of images into the out file | [
"Join",
"the",
"list",
"of",
"images",
"into",
"the",
"out",
"file"
] | 74514c3f99e25b46f22c6e02977fe3da69221c2e | https://github.com/apple/turicreate/blob/74514c3f99e25b46f22c6e02977fe3da69221c2e/deps/src/boost_1_68_0/libs/metaparse/tools/benchmark/benchmark.py#L215-L226 | train |
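`join_images` allocates a canvas as wide as the sum of the image widths and as tall as the tallest image, then pastes each image at a running horizontal offset. The geometry alone, without Pillow:

```python
def joined_layout(sizes):
    """Return ((total_w, max_h), [left offsets]) for horizontal joining,
    mirroring the PIL.Image.new size and paste boxes in join_images."""
    canvas = (sum(w for w, _ in sizes), max(h for _, h in sizes))
    offsets, left = [], 0
    for w, _ in sizes:
        offsets.append(left)
        left += w
    return canvas, offsets

print(joined_layout([(640, 480), (800, 600)]))
# ((1440, 600), [0, 640])
```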
apple/turicreate | deps/src/boost_1_68_0/libs/metaparse/tools/benchmark/benchmark.py | plot_temp_diagrams | def plot_temp_diagrams(config, results, temp_dir):
"""Plot temporary diagrams"""
display_name = {
'time': 'Compilation time (s)',
'memory': 'Compiler memory usage (MB)',
}
files = config['files']
img_files = []
if any('slt' in result for result in results) and 'bmp' in files.values()[0]:
config['modes']['slt'] = 'Using BOOST_METAPARSE_STRING with string literal templates'
for f in files.values():
f['slt'] = f['bmp'].replace('bmp', 'slt')
for measured in ['time', 'memory']:
mpts = sorted(int(k) for k in files.keys())
img_files.append(os.path.join(temp_dir, '_{0}.png'.format(measured)))
plot(
{
m: [(x, results[files[str(x)][m]][measured]) for x in mpts]
for m in config['modes'].keys()
},
config['modes'],
display_name[measured],
(config['x_axis_label'], display_name[measured]),
img_files[-1]
)
return img_files | python | def plot_temp_diagrams(config, results, temp_dir):
"""Plot temporary diagrams"""
display_name = {
'time': 'Compilation time (s)',
'memory': 'Compiler memory usage (MB)',
}
files = config['files']
img_files = []
if any('slt' in result for result in results) and 'bmp' in files.values()[0]:
config['modes']['slt'] = 'Using BOOST_METAPARSE_STRING with string literal templates'
for f in files.values():
f['slt'] = f['bmp'].replace('bmp', 'slt')
for measured in ['time', 'memory']:
mpts = sorted(int(k) for k in files.keys())
img_files.append(os.path.join(temp_dir, '_{0}.png'.format(measured)))
plot(
{
m: [(x, results[files[str(x)][m]][measured]) for x in mpts]
for m in config['modes'].keys()
},
config['modes'],
display_name[measured],
(config['x_axis_label'], display_name[measured]),
img_files[-1]
)
return img_files | [
"def",
"plot_temp_diagrams",
"(",
"config",
",",
"results",
",",
"temp_dir",
")",
":",
"display_name",
"=",
"{",
"'time'",
":",
"'Compilation time (s)'",
",",
"'memory'",
":",
"'Compiler memory usage (MB)'",
",",
"}",
"files",
"=",
"config",
"[",
"'files'",
"]",
"img_files",
"=",
"[",
"]",
"if",
"any",
"(",
"'slt'",
"in",
"result",
"for",
"result",
"in",
"results",
")",
"and",
"'bmp'",
"in",
"files",
".",
"values",
"(",
")",
"[",
"0",
"]",
":",
"config",
"[",
"'modes'",
"]",
"[",
"'slt'",
"]",
"=",
"'Using BOOST_METAPARSE_STRING with string literal templates'",
"for",
"f",
"in",
"files",
".",
"values",
"(",
")",
":",
"f",
"[",
"'slt'",
"]",
"=",
"f",
"[",
"'bmp'",
"]",
".",
"replace",
"(",
"'bmp'",
",",
"'slt'",
")",
"for",
"measured",
"in",
"[",
"'time'",
",",
"'memory'",
"]",
":",
"mpts",
"=",
"sorted",
"(",
"int",
"(",
"k",
")",
"for",
"k",
"in",
"files",
".",
"keys",
"(",
")",
")",
"img_files",
".",
"append",
"(",
"os",
".",
"path",
".",
"join",
"(",
"temp_dir",
",",
"'_{0}.png'",
".",
"format",
"(",
"measured",
")",
")",
")",
"plot",
"(",
"{",
"m",
":",
"[",
"(",
"x",
",",
"results",
"[",
"files",
"[",
"str",
"(",
"x",
")",
"]",
"[",
"m",
"]",
"]",
"[",
"measured",
"]",
")",
"for",
"x",
"in",
"mpts",
"]",
"for",
"m",
"in",
"config",
"[",
"'modes'",
"]",
".",
"keys",
"(",
")",
"}",
",",
"config",
"[",
"'modes'",
"]",
",",
"display_name",
"[",
"measured",
"]",
",",
"(",
"config",
"[",
"'x_axis_label'",
"]",
",",
"display_name",
"[",
"measured",
"]",
")",
",",
"img_files",
"[",
"-",
"1",
"]",
")",
"return",
"img_files"
] | Plot temporary diagrams | [
"Plot",
"temporary",
"diagrams"
] | 74514c3f99e25b46f22c6e02977fe3da69221c2e | https://github.com/apple/turicreate/blob/74514c3f99e25b46f22c6e02977fe3da69221c2e/deps/src/boost_1_68_0/libs/metaparse/tools/benchmark/benchmark.py#L229-L257 | train |
apple/turicreate | deps/src/boost_1_68_0/libs/metaparse/tools/benchmark/benchmark.py | plot_diagram | def plot_diagram(config, results, images_dir, out_filename):
"""Plot one diagram"""
img_files = plot_temp_diagrams(config, results, images_dir)
join_images(img_files, out_filename)
for img_file in img_files:
os.remove(img_file) | python | def plot_diagram(config, results, images_dir, out_filename):
"""Plot one diagram"""
img_files = plot_temp_diagrams(config, results, images_dir)
join_images(img_files, out_filename)
for img_file in img_files:
os.remove(img_file) | [
"def",
"plot_diagram",
"(",
"config",
",",
"results",
",",
"images_dir",
",",
"out_filename",
")",
":",
"img_files",
"=",
"plot_temp_diagrams",
"(",
"config",
",",
"results",
",",
"images_dir",
")",
"join_images",
"(",
"img_files",
",",
"out_filename",
")",
"for",
"img_file",
"in",
"img_files",
":",
"os",
".",
"remove",
"(",
"img_file",
")"
] | Plot one diagram | [
"Plot",
"one",
"diagram"
] | 74514c3f99e25b46f22c6e02977fe3da69221c2e | https://github.com/apple/turicreate/blob/74514c3f99e25b46f22c6e02977fe3da69221c2e/deps/src/boost_1_68_0/libs/metaparse/tools/benchmark/benchmark.py#L260-L265 | train |
apple/turicreate | deps/src/boost_1_68_0/libs/metaparse/tools/benchmark/benchmark.py | plot_diagrams | def plot_diagrams(results, configs, compiler, out_dir):
"""Plot all diagrams specified by the configs"""
compiler_fn = make_filename(compiler)
total = psutil.virtual_memory().total # pylint:disable=I0011,E1101
memory = int(math.ceil(byte_to_gb(total)))
images_dir = os.path.join(out_dir, 'images')
for config in configs:
out_prefix = '{0}_{1}'.format(config['name'], compiler_fn)
plot_diagram(
config,
results,
images_dir,
os.path.join(images_dir, '{0}.png'.format(out_prefix))
)
with open(
os.path.join(out_dir, '{0}.qbk'.format(out_prefix)),
'wb'
) as out_f:
qbk_content = """{0}
Measured on a {2} host with {3} GB memory. Compiler used: {4}.
[$images/metaparse/{1}.png [width 100%]]
""".format(config['desc'], out_prefix, platform.platform(), memory, compiler)
out_f.write(qbk_content) | python | def plot_diagrams(results, configs, compiler, out_dir):
"""Plot all diagrams specified by the configs"""
compiler_fn = make_filename(compiler)
total = psutil.virtual_memory().total # pylint:disable=I0011,E1101
memory = int(math.ceil(byte_to_gb(total)))
images_dir = os.path.join(out_dir, 'images')
for config in configs:
out_prefix = '{0}_{1}'.format(config['name'], compiler_fn)
plot_diagram(
config,
results,
images_dir,
os.path.join(images_dir, '{0}.png'.format(out_prefix))
)
with open(
os.path.join(out_dir, '{0}.qbk'.format(out_prefix)),
'wb'
) as out_f:
qbk_content = """{0}
Measured on a {2} host with {3} GB memory. Compiler used: {4}.
[$images/metaparse/{1}.png [width 100%]]
""".format(config['desc'], out_prefix, platform.platform(), memory, compiler)
out_f.write(qbk_content) | [
"def",
"plot_diagrams",
"(",
"results",
",",
"configs",
",",
"compiler",
",",
"out_dir",
")",
":",
"compiler_fn",
"=",
"make_filename",
"(",
"compiler",
")",
"total",
"=",
"psutil",
".",
"virtual_memory",
"(",
")",
".",
"total",
"# pylint:disable=I0011,E1101",
"memory",
"=",
"int",
"(",
"math",
".",
"ceil",
"(",
"byte_to_gb",
"(",
"total",
")",
")",
")",
"images_dir",
"=",
"os",
".",
"path",
".",
"join",
"(",
"out_dir",
",",
"'images'",
")",
"for",
"config",
"in",
"configs",
":",
"out_prefix",
"=",
"'{0}_{1}'",
".",
"format",
"(",
"config",
"[",
"'name'",
"]",
",",
"compiler_fn",
")",
"plot_diagram",
"(",
"config",
",",
"results",
",",
"images_dir",
",",
"os",
".",
"path",
".",
"join",
"(",
"images_dir",
",",
"'{0}.png'",
".",
"format",
"(",
"out_prefix",
")",
")",
")",
"with",
"open",
"(",
"os",
".",
"path",
".",
"join",
"(",
"out_dir",
",",
"'{0}.qbk'",
".",
"format",
"(",
"out_prefix",
")",
")",
",",
"'wb'",
")",
"as",
"out_f",
":",
"qbk_content",
"=",
"\"\"\"{0}\nMeasured on a {2} host with {3} GB memory. Compiler used: {4}.\n\n[$images/metaparse/{1}.png [width 100%]]\n\"\"\"",
".",
"format",
"(",
"config",
"[",
"'desc'",
"]",
",",
"out_prefix",
",",
"platform",
".",
"platform",
"(",
")",
",",
"memory",
",",
"compiler",
")",
"out_f",
".",
"write",
"(",
"qbk_content",
")"
] | Plot all diagrams specified by the configs | [
"Plot",
"all",
"diagrams",
"specified",
"by",
"the",
"configs"
] | 74514c3f99e25b46f22c6e02977fe3da69221c2e | https://github.com/apple/turicreate/blob/74514c3f99e25b46f22c6e02977fe3da69221c2e/deps/src/boost_1_68_0/libs/metaparse/tools/benchmark/benchmark.py#L268-L295 | train |
apple/turicreate | deps/src/boost_1_68_0/libs/metaparse/tools/benchmark/benchmark.py | main | def main():
"""The main function of the script"""
desc = 'Benchmark the files generated by generate.py'
parser = argparse.ArgumentParser(description=desc)
parser.add_argument(
'--src',
dest='src_dir',
default='generated',
help='The directory containing the sources to benchmark'
)
parser.add_argument(
'--out',
dest='out_dir',
default='../../doc',
help='The output directory'
)
parser.add_argument(
'--include',
dest='include',
default='include',
        help='The directory containing the headers for the benchmark'
)
parser.add_argument(
'--boost_headers',
dest='boost_headers',
default='../../../..',
help='The directory containing the Boost headers (the boost directory)'
)
parser.add_argument(
'--compiler',
dest='compiler',
default='g++',
help='The compiler to do the benchmark with'
)
parser.add_argument(
'--repeat_count',
dest='repeat_count',
type=int,
default=5,
help='How many times a measurement should be repeated.'
)
args = parser.parse_args()
compiler = compiler_info(args.compiler)
results = benchmark(
args.src_dir,
args.compiler,
[args.include, args.boost_headers],
args.repeat_count
)
plot_diagrams(results, configs_in(args.src_dir), compiler, args.out_dir) | python | def main():
"""The main function of the script"""
desc = 'Benchmark the files generated by generate.py'
parser = argparse.ArgumentParser(description=desc)
parser.add_argument(
'--src',
dest='src_dir',
default='generated',
help='The directory containing the sources to benchmark'
)
parser.add_argument(
'--out',
dest='out_dir',
default='../../doc',
help='The output directory'
)
parser.add_argument(
'--include',
dest='include',
default='include',
        help='The directory containing the headers for the benchmark'
)
parser.add_argument(
'--boost_headers',
dest='boost_headers',
default='../../../..',
help='The directory containing the Boost headers (the boost directory)'
)
parser.add_argument(
'--compiler',
dest='compiler',
default='g++',
help='The compiler to do the benchmark with'
)
parser.add_argument(
'--repeat_count',
dest='repeat_count',
type=int,
default=5,
help='How many times a measurement should be repeated.'
)
args = parser.parse_args()
compiler = compiler_info(args.compiler)
results = benchmark(
args.src_dir,
args.compiler,
[args.include, args.boost_headers],
args.repeat_count
)
plot_diagrams(results, configs_in(args.src_dir), compiler, args.out_dir) | [
"def",
"main",
"(",
")",
":",
"desc",
"=",
"'Benchmark the files generated by generate.py'",
"parser",
"=",
"argparse",
".",
"ArgumentParser",
"(",
"description",
"=",
"desc",
")",
"parser",
".",
"add_argument",
"(",
"'--src'",
",",
"dest",
"=",
"'src_dir'",
",",
"default",
"=",
"'generated'",
",",
"help",
"=",
"'The directory containing the sources to benchmark'",
")",
"parser",
".",
"add_argument",
"(",
"'--out'",
",",
"dest",
"=",
"'out_dir'",
",",
"default",
"=",
"'../../doc'",
",",
"help",
"=",
"'The output directory'",
")",
"parser",
".",
"add_argument",
"(",
"'--include'",
",",
"dest",
"=",
"'include'",
",",
"default",
"=",
"'include'",
",",
"help",
"=",
"'The directory containing the headers for the benchmark'",
")",
"parser",
".",
"add_argument",
"(",
"'--boost_headers'",
",",
"dest",
"=",
"'boost_headers'",
",",
"default",
"=",
"'../../../..'",
",",
"help",
"=",
"'The directory containing the Boost headers (the boost directory)'",
")",
"parser",
".",
"add_argument",
"(",
"'--compiler'",
",",
"dest",
"=",
"'compiler'",
",",
"default",
"=",
"'g++'",
",",
"help",
"=",
"'The compiler to do the benchmark with'",
")",
"parser",
".",
"add_argument",
"(",
"'--repeat_count'",
",",
"dest",
"=",
"'repeat_count'",
",",
"type",
"=",
"int",
",",
"default",
"=",
"5",
",",
"help",
"=",
"'How many times a measurement should be repeated.'",
")",
"args",
"=",
"parser",
".",
"parse_args",
"(",
")",
"compiler",
"=",
"compiler_info",
"(",
"args",
".",
"compiler",
")",
"results",
"=",
"benchmark",
"(",
"args",
".",
"src_dir",
",",
"args",
".",
"compiler",
",",
"[",
"args",
".",
"include",
",",
"args",
".",
"boost_headers",
"]",
",",
"args",
".",
"repeat_count",
")",
"plot_diagrams",
"(",
"results",
",",
"configs_in",
"(",
"args",
".",
"src_dir",
")",
",",
"compiler",
",",
"args",
".",
"out_dir",
")"
] | The main function of the script | [
"The",
"main",
"function",
"of",
"the",
"script"
] | 74514c3f99e25b46f22c6e02977fe3da69221c2e | https://github.com/apple/turicreate/blob/74514c3f99e25b46f22c6e02977fe3da69221c2e/deps/src/boost_1_68_0/libs/metaparse/tools/benchmark/benchmark.py#L298-L350 | train |
apple/turicreate | src/unity/python/turicreate/toolkits/_model.py | load_model | def load_model(location):
"""
Load any Turi Create model that was previously saved.
This function assumes the model (can be any model) was previously saved in
Turi Create model format with model.save(filename).
Parameters
----------
location : string
Location of the model to load. Can be a local path or a remote URL.
Because models are saved as directories, there is no file extension.
Examples
----------
>>> model.save('my_model_file')
>>> loaded_model = tc.load_model('my_model_file')
"""
# Check if the location is a dir_archive, if not, use glunpickler to load
# as pure python model
# If the location is a http location, skip the check, and directly proceed
# to load model as dir_archive. This is because
# 1) exists() does not work with http protocol, and
# 2) GLUnpickler does not support http
protocol = file_util.get_protocol(location)
dir_archive_exists = False
if protocol == '':
model_path = file_util.expand_full_path(location)
dir_archive_exists = file_util.exists(os.path.join(model_path, 'dir_archive.ini'))
else:
model_path = location
if protocol in ['http', 'https']:
dir_archive_exists = True
else:
import posixpath
dir_archive_exists = file_util.exists(posixpath.join(model_path, 'dir_archive.ini'))
if not dir_archive_exists:
raise IOError("Directory %s does not exist" % location)
_internal_url = _make_internal_url(location)
saved_state = glconnect.get_unity().load_model(_internal_url)
saved_state = _wrap_function_return(saved_state)
    # The archive version key could be either bytes or unicode
key = u'archive_version'
archive_version = saved_state[key] if key in saved_state else saved_state[key.encode()]
if archive_version < 0:
raise ToolkitError("File does not appear to be a Turi Create model.")
elif archive_version > 1:
raise ToolkitError("Unable to load model.\n\n"
"This model looks to have been saved with a future version of Turi Create.\n"
"Please upgrade Turi Create before attempting to load this model file.")
elif archive_version == 1:
        name = saved_state['model_name']
if name in MODEL_NAME_MAP:
cls = MODEL_NAME_MAP[name]
if 'model' in saved_state:
# this is a native model
return cls(saved_state['model'])
else:
# this is a CustomModel
model_data = saved_state['side_data']
model_version = model_data['model_version']
del model_data['model_version']
return cls._load_version(model_data, model_version)
elif hasattr(_extensions, name):
return saved_state["model"]
else:
raise ToolkitError("Unable to load model of name '%s'; model name not registered." % name)
else:
# very legacy model format. Attempt pickle loading
import sys
sys.stderr.write("This model was saved in a legacy model format. Compatibility cannot be guaranteed in future versions.\n")
if _six.PY3:
raise ToolkitError("Unable to load legacy model in Python 3.\n\n"
"To migrate a model, try loading it using Turi Create 4.0 or\n"
"later in Python 2 and then re-save it. The re-saved model should\n"
"work in Python 3.")
if 'graphlab' not in sys.modules:
sys.modules['graphlab'] = sys.modules['turicreate']
# backward compatibility. Otherwise old pickles will not load
sys.modules["turicreate_util"] = sys.modules['turicreate.util']
sys.modules["graphlab_util"] = sys.modules['turicreate.util']
# More backwards compatibility with the turicreate namespace code.
for k, v in list(sys.modules.items()):
if 'turicreate' in k:
sys.modules[k.replace('turicreate', 'graphlab')] = v
#legacy loader
import pickle
model_wrapper = pickle.loads(saved_state[b'model_wrapper'])
return model_wrapper(saved_state[b'model_base']) | python | def load_model(location):
"""
Load any Turi Create model that was previously saved.
This function assumes the model (can be any model) was previously saved in
Turi Create model format with model.save(filename).
Parameters
----------
location : string
Location of the model to load. Can be a local path or a remote URL.
Because models are saved as directories, there is no file extension.
Examples
----------
>>> model.save('my_model_file')
>>> loaded_model = tc.load_model('my_model_file')
"""
# Check if the location is a dir_archive, if not, use glunpickler to load
# as pure python model
# If the location is a http location, skip the check, and directly proceed
# to load model as dir_archive. This is because
# 1) exists() does not work with http protocol, and
# 2) GLUnpickler does not support http
protocol = file_util.get_protocol(location)
dir_archive_exists = False
if protocol == '':
model_path = file_util.expand_full_path(location)
dir_archive_exists = file_util.exists(os.path.join(model_path, 'dir_archive.ini'))
else:
model_path = location
if protocol in ['http', 'https']:
dir_archive_exists = True
else:
import posixpath
dir_archive_exists = file_util.exists(posixpath.join(model_path, 'dir_archive.ini'))
if not dir_archive_exists:
raise IOError("Directory %s does not exist" % location)
_internal_url = _make_internal_url(location)
saved_state = glconnect.get_unity().load_model(_internal_url)
saved_state = _wrap_function_return(saved_state)
    # The archive version key could be either bytes or unicode
key = u'archive_version'
archive_version = saved_state[key] if key in saved_state else saved_state[key.encode()]
if archive_version < 0:
raise ToolkitError("File does not appear to be a Turi Create model.")
elif archive_version > 1:
raise ToolkitError("Unable to load model.\n\n"
"This model looks to have been saved with a future version of Turi Create.\n"
"Please upgrade Turi Create before attempting to load this model file.")
elif archive_version == 1:
        name = saved_state['model_name']
if name in MODEL_NAME_MAP:
cls = MODEL_NAME_MAP[name]
if 'model' in saved_state:
# this is a native model
return cls(saved_state['model'])
else:
# this is a CustomModel
model_data = saved_state['side_data']
model_version = model_data['model_version']
del model_data['model_version']
return cls._load_version(model_data, model_version)
elif hasattr(_extensions, name):
return saved_state["model"]
else:
raise ToolkitError("Unable to load model of name '%s'; model name not registered." % name)
else:
# very legacy model format. Attempt pickle loading
import sys
sys.stderr.write("This model was saved in a legacy model format. Compatibility cannot be guaranteed in future versions.\n")
if _six.PY3:
raise ToolkitError("Unable to load legacy model in Python 3.\n\n"
"To migrate a model, try loading it using Turi Create 4.0 or\n"
"later in Python 2 and then re-save it. The re-saved model should\n"
"work in Python 3.")
if 'graphlab' not in sys.modules:
sys.modules['graphlab'] = sys.modules['turicreate']
# backward compatibility. Otherwise old pickles will not load
sys.modules["turicreate_util"] = sys.modules['turicreate.util']
sys.modules["graphlab_util"] = sys.modules['turicreate.util']
# More backwards compatibility with the turicreate namespace code.
for k, v in list(sys.modules.items()):
if 'turicreate' in k:
sys.modules[k.replace('turicreate', 'graphlab')] = v
#legacy loader
import pickle
model_wrapper = pickle.loads(saved_state[b'model_wrapper'])
return model_wrapper(saved_state[b'model_base']) | [
"def",
"load_model",
"(",
"location",
")",
":",
"# Check if the location is a dir_archive, if not, use glunpickler to load",
"# as pure python model",
"# If the location is a http location, skip the check, and directly proceed",
"# to load model as dir_archive. This is because",
"# 1) exists() does not work with http protocol, and",
"# 2) GLUnpickler does not support http",
"protocol",
"=",
"file_util",
".",
"get_protocol",
"(",
"location",
")",
"dir_archive_exists",
"=",
"False",
"if",
"protocol",
"==",
"''",
":",
"model_path",
"=",
"file_util",
".",
"expand_full_path",
"(",
"location",
")",
"dir_archive_exists",
"=",
"file_util",
".",
"exists",
"(",
"os",
".",
"path",
".",
"join",
"(",
"model_path",
",",
"'dir_archive.ini'",
")",
")",
"else",
":",
"model_path",
"=",
"location",
"if",
"protocol",
"in",
"[",
"'http'",
",",
"'https'",
"]",
":",
"dir_archive_exists",
"=",
"True",
"else",
":",
"import",
"posixpath",
"dir_archive_exists",
"=",
"file_util",
".",
"exists",
"(",
"posixpath",
".",
"join",
"(",
"model_path",
",",
"'dir_archive.ini'",
")",
")",
"if",
"not",
"dir_archive_exists",
":",
"raise",
"IOError",
"(",
"\"Directory %s does not exist\"",
"%",
"location",
")",
"_internal_url",
"=",
"_make_internal_url",
"(",
"location",
")",
"saved_state",
"=",
"glconnect",
".",
"get_unity",
"(",
")",
".",
"load_model",
"(",
"_internal_url",
")",
"saved_state",
"=",
"_wrap_function_return",
"(",
"saved_state",
")",
"# The archive version could be both bytes/unicode",
"key",
"=",
"u'archive_version'",
"archive_version",
"=",
"saved_state",
"[",
"key",
"]",
"if",
"key",
"in",
"saved_state",
"else",
"saved_state",
"[",
"key",
".",
"encode",
"(",
")",
"]",
"if",
"archive_version",
"<",
"0",
":",
"raise",
"ToolkitError",
"(",
"\"File does not appear to be a Turi Create model.\"",
")",
"elif",
"archive_version",
">",
"1",
":",
"raise",
"ToolkitError",
"(",
"\"Unable to load model.\\n\\n\"",
"\"This model looks to have been saved with a future version of Turi Create.\\n\"",
"\"Please upgrade Turi Create before attempting to load this model file.\"",
")",
"elif",
"archive_version",
"==",
"1",
":",
"name",
"=",
"saved_state",
"[",
"'model_name'",
"]",
"if",
"name",
"in",
"MODEL_NAME_MAP",
":",
"cls",
"=",
"MODEL_NAME_MAP",
"[",
"name",
"]",
"if",
"'model'",
"in",
"saved_state",
":",
"# this is a native model",
"return",
"cls",
"(",
"saved_state",
"[",
"'model'",
"]",
")",
"else",
":",
"# this is a CustomModel",
"model_data",
"=",
"saved_state",
"[",
"'side_data'",
"]",
"model_version",
"=",
"model_data",
"[",
"'model_version'",
"]",
"del",
"model_data",
"[",
"'model_version'",
"]",
"return",
"cls",
".",
"_load_version",
"(",
"model_data",
",",
"model_version",
")",
"elif",
"hasattr",
"(",
"_extensions",
",",
"name",
")",
":",
"return",
"saved_state",
"[",
"\"model\"",
"]",
"else",
":",
"raise",
"ToolkitError",
"(",
"\"Unable to load model of name '%s'; model name not registered.\"",
"%",
"name",
")",
"else",
":",
"# very legacy model format. Attempt pickle loading",
"import",
"sys",
"sys",
".",
"stderr",
".",
"write",
"(",
"\"This model was saved in a legacy model format. Compatibility cannot be guaranteed in future versions.\\n\"",
")",
"if",
"_six",
".",
"PY3",
":",
"raise",
"ToolkitError",
"(",
"\"Unable to load legacy model in Python 3.\\n\\n\"",
"\"To migrate a model, try loading it using Turi Create 4.0 or\\n\"",
"\"later in Python 2 and then re-save it. The re-saved model should\\n\"",
"\"work in Python 3.\"",
")",
"if",
"'graphlab'",
"not",
"in",
"sys",
".",
"modules",
":",
"sys",
".",
"modules",
"[",
"'graphlab'",
"]",
"=",
"sys",
".",
"modules",
"[",
"'turicreate'",
"]",
"# backward compatibility. Otherwise old pickles will not load",
"sys",
".",
"modules",
"[",
"\"turicreate_util\"",
"]",
"=",
"sys",
".",
"modules",
"[",
"'turicreate.util'",
"]",
"sys",
".",
"modules",
"[",
"\"graphlab_util\"",
"]",
"=",
"sys",
".",
"modules",
"[",
"'turicreate.util'",
"]",
"# More backwards compatibility with the turicreate namespace code.",
"for",
"k",
",",
"v",
"in",
"list",
"(",
"sys",
".",
"modules",
".",
"items",
"(",
")",
")",
":",
"if",
"'turicreate'",
"in",
"k",
":",
"sys",
".",
"modules",
"[",
"k",
".",
"replace",
"(",
"'turicreate'",
",",
"'graphlab'",
")",
"]",
"=",
"v",
"#legacy loader",
"import",
"pickle",
"model_wrapper",
"=",
"pickle",
".",
"loads",
"(",
"saved_state",
"[",
"b'model_wrapper'",
"]",
")",
"return",
"model_wrapper",
"(",
"saved_state",
"[",
"b'model_base'",
"]",
")"
] | Load any Turi Create model that was previously saved.
This function assumes the model (can be any model) was previously saved in
Turi Create model format with model.save(filename).
Parameters
----------
location : string
Location of the model to load. Can be a local path or a remote URL.
Because models are saved as directories, there is no file extension.
Examples
----------
>>> model.save('my_model_file')
>>> loaded_model = tc.load_model('my_model_file') | [
"Load",
"any",
"Turi",
"Create",
"model",
"that",
"was",
"previously",
"saved",
"."
] | 74514c3f99e25b46f22c6e02977fe3da69221c2e | https://github.com/apple/turicreate/blob/74514c3f99e25b46f22c6e02977fe3da69221c2e/src/unity/python/turicreate/toolkits/_model.py#L30-L124 | train |
apple/turicreate | src/unity/python/turicreate/toolkits/_model.py | _get_default_options_wrapper | def _get_default_options_wrapper(unity_server_model_name,
module_name='',
python_class_name='',
sdk_model = False):
"""
Internal function to return a get_default_options function.
Parameters
----------
unity_server_model_name: str
Name of the class/toolkit as registered with the unity server
module_name: str, optional
Name of the module.
python_class_name: str, optional
Name of the Python class.
sdk_model : bool, optional (default False)
True if the SDK interface was used for the model. False otherwise.
Examples
----------
get_default_options = _get_default_options_wrapper('classifier_svm',
'svm', 'SVMClassifier')
"""
def get_default_options_for_model(output_type = 'sframe'):
"""
Get the default options for the toolkit
:class:`~turicreate.{module_name}.{python_class_name}`.
Parameters
----------
output_type : str, optional
The output can be of the following types.
- `sframe`: A table description each option used in the model.
- `json`: A list of option dictionaries suitable for JSON serialization.
| Each dictionary/row in the dictionary/SFrame object describes the
following parameters of the given model.
+------------------+-------------------------------------------------------+
| Name | Description |
+==================+=======================================================+
| name | Name of the option used in the model. |
+------------------+---------+---------------------------------------------+
| description | A detailed description of the option used. |
+------------------+-------------------------------------------------------+
| type | Option type (REAL, BOOL, INTEGER or CATEGORICAL) |
+------------------+-------------------------------------------------------+
| default_value | The default value for the option. |
+------------------+-------------------------------------------------------+
| possible_values | List of acceptable values (CATEGORICAL only) |
+------------------+-------------------------------------------------------+
| lower_bound | Smallest acceptable value for this option (REAL only) |
+------------------+-------------------------------------------------------+
| upper_bound | Largest acceptable value for this option (REAL only) |
+------------------+-------------------------------------------------------+
Returns
-------
out : dict/SFrame
See Also
--------
turicreate.{module_name}.{python_class_name}.get_current_options
Examples
--------
.. sourcecode:: python
>>> import turicreate
# SFrame formatted output.
>>> out_sframe = turicreate.{module_name}.get_default_options()
# dict formatted output suitable for JSON serialization.
>>> out_json = turicreate.{module_name}.get_default_options('json')
"""
if sdk_model:
response = _tc.extensions._toolkits_sdk_get_default_options(
unity_server_model_name)
else:
response = _tc.extensions._toolkits_get_default_options(
unity_server_model_name)
if output_type == 'json':
return response
else:
json_list = [{'name': k, '': v} for k,v in response.items()]
return _SFrame(json_list).unpack('X1', column_name_prefix='')\
.unpack('X1', column_name_prefix='')
# Change the doc string before returning.
get_default_options_for_model.__doc__ = get_default_options_for_model.\
__doc__.format(python_class_name = python_class_name,
module_name = module_name)
return get_default_options_for_model | python | def _get_default_options_wrapper(unity_server_model_name,
module_name='',
python_class_name='',
sdk_model = False):
"""
Internal function to return a get_default_options function.
Parameters
----------
unity_server_model_name: str
Name of the class/toolkit as registered with the unity server
module_name: str, optional
Name of the module.
python_class_name: str, optional
Name of the Python class.
sdk_model : bool, optional (default False)
True if the SDK interface was used for the model. False otherwise.
Examples
----------
get_default_options = _get_default_options_wrapper('classifier_svm',
'svm', 'SVMClassifier')
"""
def get_default_options_for_model(output_type = 'sframe'):
"""
Get the default options for the toolkit
:class:`~turicreate.{module_name}.{python_class_name}`.
Parameters
----------
output_type : str, optional
The output can be of the following types.
        - `sframe`: A table describing each option used in the model.
- `json`: A list of option dictionaries suitable for JSON serialization.
| Each dictionary/row in the dictionary/SFrame object describes the
following parameters of the given model.
+------------------+-------------------------------------------------------+
| Name | Description |
+==================+=======================================================+
| name | Name of the option used in the model. |
+------------------+---------+---------------------------------------------+
| description | A detailed description of the option used. |
+------------------+-------------------------------------------------------+
| type | Option type (REAL, BOOL, INTEGER or CATEGORICAL) |
+------------------+-------------------------------------------------------+
| default_value | The default value for the option. |
+------------------+-------------------------------------------------------+
| possible_values | List of acceptable values (CATEGORICAL only) |
+------------------+-------------------------------------------------------+
| lower_bound | Smallest acceptable value for this option (REAL only) |
+------------------+-------------------------------------------------------+
| upper_bound | Largest acceptable value for this option (REAL only) |
+------------------+-------------------------------------------------------+
Returns
-------
out : dict/SFrame
See Also
--------
turicreate.{module_name}.{python_class_name}.get_current_options
Examples
--------
.. sourcecode:: python
>>> import turicreate
# SFrame formatted output.
>>> out_sframe = turicreate.{module_name}.get_default_options()
# dict formatted output suitable for JSON serialization.
>>> out_json = turicreate.{module_name}.get_default_options('json')
"""
if sdk_model:
response = _tc.extensions._toolkits_sdk_get_default_options(
unity_server_model_name)
else:
response = _tc.extensions._toolkits_get_default_options(
unity_server_model_name)
if output_type == 'json':
return response
else:
json_list = [{'name': k, '': v} for k,v in response.items()]
return _SFrame(json_list).unpack('X1', column_name_prefix='')\
.unpack('X1', column_name_prefix='')
# Change the doc string before returning.
get_default_options_for_model.__doc__ = get_default_options_for_model.\
__doc__.format(python_class_name = python_class_name,
module_name = module_name)
return get_default_options_for_model | [
"def",
"_get_default_options_wrapper",
"(",
"unity_server_model_name",
",",
"module_name",
"=",
"''",
",",
"python_class_name",
"=",
"''",
",",
"sdk_model",
"=",
"False",
")",
":",
"def",
"get_default_options_for_model",
"(",
"output_type",
"=",
"'sframe'",
")",
":",
"\"\"\"\n Get the default options for the toolkit\n :class:`~turicreate.{module_name}.{python_class_name}`.\n\n Parameters\n ----------\n output_type : str, optional\n\n The output can be of the following types.\n\n - `sframe`: A table description each option used in the model.\n - `json`: A list of option dictionaries suitable for JSON serialization.\n\n | Each dictionary/row in the dictionary/SFrame object describes the\n following parameters of the given model.\n\n +------------------+-------------------------------------------------------+\n | Name | Description |\n +==================+=======================================================+\n | name | Name of the option used in the model. |\n +------------------+---------+---------------------------------------------+\n | description | A detailed description of the option used. |\n +------------------+-------------------------------------------------------+\n | type | Option type (REAL, BOOL, INTEGER or CATEGORICAL) |\n +------------------+-------------------------------------------------------+\n | default_value | The default value for the option. |\n +------------------+-------------------------------------------------------+\n | possible_values | List of acceptable values (CATEGORICAL only) |\n +------------------+-------------------------------------------------------+\n | lower_bound | Smallest acceptable value for this option (REAL only) |\n +------------------+-------------------------------------------------------+\n | upper_bound | Largest acceptable value for this option (REAL only) |\n +------------------+-------------------------------------------------------+\n\n Returns\n -------\n out : dict/SFrame\n\n See Also\n --------\n turicreate.{module_name}.{python_class_name}.get_current_options\n\n Examples\n --------\n .. 
sourcecode:: python\n\n >>> import turicreate\n\n # SFrame formatted output.\n >>> out_sframe = turicreate.{module_name}.get_default_options()\n\n # dict formatted output suitable for JSON serialization.\n >>> out_json = turicreate.{module_name}.get_default_options('json')\n \"\"\"",
"if",
"sdk_model",
":",
"response",
"=",
"_tc",
".",
"extensions",
".",
"_toolkits_sdk_get_default_options",
"(",
"unity_server_model_name",
")",
"else",
":",
"response",
"=",
"_tc",
".",
"extensions",
".",
"_toolkits_get_default_options",
"(",
"unity_server_model_name",
")",
"if",
"output_type",
"==",
"'json'",
":",
"return",
"response",
"else",
":",
"json_list",
"=",
"[",
"{",
"'name'",
":",
"k",
",",
"''",
":",
"v",
"}",
"for",
"k",
",",
"v",
"in",
"response",
".",
"items",
"(",
")",
"]",
"return",
"_SFrame",
"(",
"json_list",
")",
".",
"unpack",
"(",
"'X1'",
",",
"column_name_prefix",
"=",
"''",
")",
".",
"unpack",
"(",
"'X1'",
",",
"column_name_prefix",
"=",
"''",
")",
"# Change the doc string before returning.",
"get_default_options_for_model",
".",
"__doc__",
"=",
"get_default_options_for_model",
".",
"__doc__",
".",
"format",
"(",
"python_class_name",
"=",
"python_class_name",
",",
"module_name",
"=",
"module_name",
")",
"return",
"get_default_options_for_model"
] | Internal function to return a get_default_options function.
Parameters
----------
unity_server_model_name: str
Name of the class/toolkit as registered with the unity server
module_name: str, optional
Name of the module.
python_class_name: str, optional
Name of the Python class.
sdk_model : bool, optional (default False)
True if the SDK interface was used for the model. False otherwise.
Examples
----------
get_default_options = _get_default_options_wrapper('classifier_svm',
'svm', 'SVMClassifier') | [
"Internal",
"function",
"to",
"return",
"a",
"get_default_options",
"function",
"."
] | 74514c3f99e25b46f22c6e02977fe3da69221c2e | https://github.com/apple/turicreate/blob/74514c3f99e25b46f22c6e02977fe3da69221c2e/src/unity/python/turicreate/toolkits/_model.py#L127-L226 | train |
apple/turicreate | deps/src/boost_1_68_0/tools/build/src/tools/common.py | reset | def reset ():
""" Clear the module state. This is mainly for testing purposes.
Note that this must be called _after_ resetting the module 'feature'.
"""
global __had_unspecified_value, __had_value, __declared_subfeature
global __init_loc
global __all_signatures, __debug_configuration, __show_configuration
# Stores toolsets without specified initialization values.
__had_unspecified_value = {}
# Stores toolsets with specified initialization values.
__had_value = {}
# Stores toolsets with declared subfeatures.
__declared_subfeature = {}
# Stores all signatures of the toolsets.
__all_signatures = {}
# Stores the initialization locations of each toolset
__init_loc = {}
__debug_configuration = '--debug-configuration' in bjam.variable('ARGV')
__show_configuration = '--show-configuration' in bjam.variable('ARGV')
global __executable_path_variable
OS = bjam.call("peek", [], "OS")[0]
if OS == "NT":
# On Windows the case and capitalization of PATH is not always predictable, so
# let's find out what variable name was really set.
for n in os.environ:
if n.lower() == "path":
__executable_path_variable = n
break
else:
__executable_path_variable = "PATH"
m = {"NT": __executable_path_variable,
"CYGWIN": "PATH",
"MACOSX": "DYLD_LIBRARY_PATH",
"AIX": "LIBPATH",
"HAIKU": "LIBRARY_PATH"}
global __shared_library_path_variable
__shared_library_path_variable = m.get(OS, "LD_LIBRARY_PATH") | python | def reset ():
""" Clear the module state. This is mainly for testing purposes.
Note that this must be called _after_ resetting the module 'feature'.
"""
global __had_unspecified_value, __had_value, __declared_subfeature
global __init_loc
global __all_signatures, __debug_configuration, __show_configuration
# Stores toolsets without specified initialization values.
__had_unspecified_value = {}
# Stores toolsets with specified initialization values.
__had_value = {}
# Stores toolsets with declared subfeatures.
__declared_subfeature = {}
# Stores all signatures of the toolsets.
__all_signatures = {}
# Stores the initialization locations of each toolset
__init_loc = {}
__debug_configuration = '--debug-configuration' in bjam.variable('ARGV')
__show_configuration = '--show-configuration' in bjam.variable('ARGV')
global __executable_path_variable
OS = bjam.call("peek", [], "OS")[0]
if OS == "NT":
# On Windows the case and capitalization of PATH is not always predictable, so
# let's find out what variable name was really set.
for n in os.environ:
if n.lower() == "path":
__executable_path_variable = n
break
else:
__executable_path_variable = "PATH"
m = {"NT": __executable_path_variable,
"CYGWIN": "PATH",
"MACOSX": "DYLD_LIBRARY_PATH",
"AIX": "LIBPATH",
"HAIKU": "LIBRARY_PATH"}
global __shared_library_path_variable
__shared_library_path_variable = m.get(OS, "LD_LIBRARY_PATH") | [
"def",
"reset",
"(",
")",
":",
"global",
"__had_unspecified_value",
",",
"__had_value",
",",
"__declared_subfeature",
"global",
"__init_loc",
"global",
"__all_signatures",
",",
"__debug_configuration",
",",
"__show_configuration",
"# Stores toolsets without specified initialization values.",
"__had_unspecified_value",
"=",
"{",
"}",
"# Stores toolsets with specified initialization values.",
"__had_value",
"=",
"{",
"}",
"# Stores toolsets with declared subfeatures.",
"__declared_subfeature",
"=",
"{",
"}",
"# Stores all signatures of the toolsets.",
"__all_signatures",
"=",
"{",
"}",
"# Stores the initialization locations of each toolset",
"__init_loc",
"=",
"{",
"}",
"__debug_configuration",
"=",
"'--debug-configuration'",
"in",
"bjam",
".",
"variable",
"(",
"'ARGV'",
")",
"__show_configuration",
"=",
"'--show-configuration'",
"in",
"bjam",
".",
"variable",
"(",
"'ARGV'",
")",
"global",
"__executable_path_variable",
"OS",
"=",
"bjam",
".",
"call",
"(",
"\"peek\"",
",",
"[",
"]",
",",
"\"OS\"",
")",
"[",
"0",
"]",
"if",
"OS",
"==",
"\"NT\"",
":",
"# On Windows the case and capitalization of PATH is not always predictable, so",
"# let's find out what variable name was really set.",
"for",
"n",
"in",
"os",
".",
"environ",
":",
"if",
"n",
".",
"lower",
"(",
")",
"==",
"\"path\"",
":",
"__executable_path_variable",
"=",
"n",
"break",
"else",
":",
"__executable_path_variable",
"=",
"\"PATH\"",
"m",
"=",
"{",
"\"NT\"",
":",
"__executable_path_variable",
",",
"\"CYGWIN\"",
":",
"\"PATH\"",
",",
"\"MACOSX\"",
":",
"\"DYLD_LIBRARY_PATH\"",
",",
"\"AIX\"",
":",
"\"LIBPATH\"",
",",
"\"HAIKU\"",
":",
"\"LIBRARY_PATH\"",
"}",
"global",
"__shared_library_path_variable",
"__shared_library_path_variable",
"=",
"m",
".",
"get",
"(",
"OS",
",",
"\"LD_LIBRARY_PATH\"",
")"
] | Clear the module state. This is mainly for testing purposes.
Note that this must be called _after_ resetting the module 'feature'. | [
"Clear",
"the",
"module",
"state",
".",
"This",
"is",
"mainly",
"for",
"testing",
"purposes",
".",
"Note",
"that",
"this",
"must",
"be",
"called",
"_after_",
"resetting",
"the",
"module",
"feature",
"."
] | 74514c3f99e25b46f22c6e02977fe3da69221c2e | https://github.com/apple/turicreate/blob/74514c3f99e25b46f22c6e02977fe3da69221c2e/deps/src/boost_1_68_0/tools/build/src/tools/common.py#L28-L72 | train |
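The `reset` function above ends by choosing which environment variable controls shared-library lookup per operating system, falling back to `LD_LIBRARY_PATH`. A minimal Python 3 sketch of that mapping (the function name is illustrative; the original is module-level Python 2 code driven by `bjam` variables):

```python
def shared_library_path_variable(os_name, executable_path_variable="PATH"):
    """Return the environment variable that governs shared-library lookup
    for a given OS name, defaulting to LD_LIBRARY_PATH as reset() does.
    On NT the (case-insensitively detected) executable PATH variable is
    reused, since Windows searches PATH for DLLs as well."""
    mapping = {
        "NT": executable_path_variable,
        "CYGWIN": "PATH",
        "MACOSX": "DYLD_LIBRARY_PATH",
        "AIX": "LIBPATH",
        "HAIKU": "LIBRARY_PATH",
    }
    return mapping.get(os_name, "LD_LIBRARY_PATH")
```

Using `dict.get` with `LD_LIBRARY_PATH` as the default keeps Linux and the other ELF platforms on the common path without listing each one explicitly.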
apple/turicreate | deps/src/boost_1_68_0/tools/build/src/tools/common.py | check_init_parameters | def check_init_parameters(toolset, requirement, *args):
""" The rule for checking toolset parameters. Trailing parameters should all be
parameter name/value pairs. The rule will check that each parameter either has
a value in each invocation or has no value in each invocation. Also, the rule
will check that the combination of all parameter values is unique in all
invocations.
Each parameter name corresponds to a subfeature. This rule will declare a
subfeature the first time a non-empty parameter value is passed and will
extend it with all the values.
The return value from this rule is a condition to be used for flags settings.
"""
assert isinstance(toolset, basestring)
assert is_iterable_typed(requirement, basestring) or requirement is None
from b2.build import toolset as b2_toolset
if requirement is None:
requirement = []
sig = toolset
condition = replace_grist(toolset, '<toolset>')
subcondition = []
for arg in args:
assert(isinstance(arg, tuple))
assert(len(arg) == 2)
name = arg[0]
value = arg[1]
assert(isinstance(name, str))
assert(isinstance(value, str) or value is None)
str_toolset_name = str((toolset, name))
# FIXME: is this the correct translation?
### if $(value)-is-not-empty
if value is not None:
condition = condition + '-' + value
if str_toolset_name in __had_unspecified_value:
raise BaseException("'%s' initialization: parameter '%s' inconsistent\n" \
"no value was specified in earlier initialization\n" \
"an explicit value is specified now" % (toolset, name))
# The logic below is for intel compiler. It calls this rule
# with 'intel-linux' and 'intel-win' as toolset, so we need to
# get the base part of toolset name.
            # We can't pass 'intel' as toolset, because in that case it will
            # be impossible to register versionless intel-linux and
            # intel-win of a specific version.
t = toolset
m = __re__before_first_dash.match(toolset)
if m:
t = m.group(1)
if str_toolset_name not in __had_value:
if str((t, name)) not in __declared_subfeature:
feature.subfeature('toolset', t, name, [], ['propagated'])
__declared_subfeature[str((t, name))] = True
__had_value[str_toolset_name] = True
feature.extend_subfeature('toolset', t, name, [value])
subcondition += ['<toolset-' + t + ':' + name + '>' + value ]
else:
if str_toolset_name in __had_value:
raise BaseException ("'%s' initialization: parameter '%s' inconsistent\n" \
"an explicit value was specified in an earlier initialization\n" \
"no value is specified now" % (toolset, name))
__had_unspecified_value[str_toolset_name] = True
if value == None: value = ''
sig = sig + value + '-'
# if a requirement is specified, the signature should be unique
# with that requirement
if requirement:
sig += '-' + '-'.join(requirement)
if sig in __all_signatures:
message = "duplicate initialization of '%s' with the following parameters: " % toolset
for arg in args:
name = arg[0]
value = arg[1]
if value == None: value = '<unspecified>'
message += "'%s' = '%s'\n" % (name, value)
raise BaseException(message)
__all_signatures[sig] = True
# FIXME
__init_loc[sig] = "User location unknown" #[ errors.nearest-user-location ] ;
# If we have a requirement, this version should only be applied under that
# condition. To accomplish this we add a toolset requirement that imposes
# the toolset subcondition, which encodes the version.
if requirement:
r = ['<toolset>' + toolset] + requirement
r = ','.join(r)
b2_toolset.add_requirements([r + ':' + c for c in subcondition])
# We add the requirements, if any, to the condition to scope the toolset
# variables and options to this specific version.
condition = [condition]
if requirement:
condition += requirement
if __show_configuration:
print "notice:", condition
return ['/'.join(condition)] | python | def check_init_parameters(toolset, requirement, *args):
""" The rule for checking toolset parameters. Trailing parameters should all be
parameter name/value pairs. The rule will check that each parameter either has
a value in each invocation or has no value in each invocation. Also, the rule
will check that the combination of all parameter values is unique in all
invocations.
Each parameter name corresponds to a subfeature. This rule will declare a
subfeature the first time a non-empty parameter value is passed and will
extend it with all the values.
The return value from this rule is a condition to be used for flags settings.
"""
assert isinstance(toolset, basestring)
assert is_iterable_typed(requirement, basestring) or requirement is None
from b2.build import toolset as b2_toolset
if requirement is None:
requirement = []
sig = toolset
condition = replace_grist(toolset, '<toolset>')
subcondition = []
for arg in args:
assert(isinstance(arg, tuple))
assert(len(arg) == 2)
name = arg[0]
value = arg[1]
assert(isinstance(name, str))
assert(isinstance(value, str) or value is None)
str_toolset_name = str((toolset, name))
# FIXME: is this the correct translation?
### if $(value)-is-not-empty
if value is not None:
condition = condition + '-' + value
if str_toolset_name in __had_unspecified_value:
raise BaseException("'%s' initialization: parameter '%s' inconsistent\n" \
"no value was specified in earlier initialization\n" \
"an explicit value is specified now" % (toolset, name))
# The logic below is for intel compiler. It calls this rule
# with 'intel-linux' and 'intel-win' as toolset, so we need to
# get the base part of toolset name.
            # We can't pass 'intel' as toolset, because in that case it will
            # be impossible to register versionless intel-linux and
            # intel-win of a specific version.
t = toolset
m = __re__before_first_dash.match(toolset)
if m:
t = m.group(1)
if str_toolset_name not in __had_value:
if str((t, name)) not in __declared_subfeature:
feature.subfeature('toolset', t, name, [], ['propagated'])
__declared_subfeature[str((t, name))] = True
__had_value[str_toolset_name] = True
feature.extend_subfeature('toolset', t, name, [value])
subcondition += ['<toolset-' + t + ':' + name + '>' + value ]
else:
if str_toolset_name in __had_value:
raise BaseException ("'%s' initialization: parameter '%s' inconsistent\n" \
"an explicit value was specified in an earlier initialization\n" \
"no value is specified now" % (toolset, name))
__had_unspecified_value[str_toolset_name] = True
if value == None: value = ''
sig = sig + value + '-'
# if a requirement is specified, the signature should be unique
# with that requirement
if requirement:
sig += '-' + '-'.join(requirement)
if sig in __all_signatures:
message = "duplicate initialization of '%s' with the following parameters: " % toolset
for arg in args:
name = arg[0]
value = arg[1]
if value == None: value = '<unspecified>'
message += "'%s' = '%s'\n" % (name, value)
raise BaseException(message)
__all_signatures[sig] = True
# FIXME
__init_loc[sig] = "User location unknown" #[ errors.nearest-user-location ] ;
# If we have a requirement, this version should only be applied under that
# condition. To accomplish this we add a toolset requirement that imposes
# the toolset subcondition, which encodes the version.
if requirement:
r = ['<toolset>' + toolset] + requirement
r = ','.join(r)
b2_toolset.add_requirements([r + ':' + c for c in subcondition])
# We add the requirements, if any, to the condition to scope the toolset
# variables and options to this specific version.
condition = [condition]
if requirement:
condition += requirement
if __show_configuration:
print "notice:", condition
return ['/'.join(condition)] | [
"def",
"check_init_parameters",
"(",
"toolset",
",",
"requirement",
",",
"*",
"args",
")",
":",
"assert",
"isinstance",
"(",
"toolset",
",",
"basestring",
")",
"assert",
"is_iterable_typed",
"(",
"requirement",
",",
"basestring",
")",
"or",
"requirement",
"is",
"None",
"from",
"b2",
".",
"build",
"import",
"toolset",
"as",
"b2_toolset",
"if",
"requirement",
"is",
"None",
":",
"requirement",
"=",
"[",
"]",
"sig",
"=",
"toolset",
"condition",
"=",
"replace_grist",
"(",
"toolset",
",",
"'<toolset>'",
")",
"subcondition",
"=",
"[",
"]",
"for",
"arg",
"in",
"args",
":",
"assert",
"(",
"isinstance",
"(",
"arg",
",",
"tuple",
")",
")",
"assert",
"(",
"len",
"(",
"arg",
")",
"==",
"2",
")",
"name",
"=",
"arg",
"[",
"0",
"]",
"value",
"=",
"arg",
"[",
"1",
"]",
"assert",
"(",
"isinstance",
"(",
"name",
",",
"str",
")",
")",
"assert",
"(",
"isinstance",
"(",
"value",
",",
"str",
")",
"or",
"value",
"is",
"None",
")",
"str_toolset_name",
"=",
"str",
"(",
"(",
"toolset",
",",
"name",
")",
")",
"# FIXME: is this the correct translation?",
"### if $(value)-is-not-empty",
"if",
"value",
"is",
"not",
"None",
":",
"condition",
"=",
"condition",
"+",
"'-'",
"+",
"value",
"if",
"str_toolset_name",
"in",
"__had_unspecified_value",
":",
"raise",
"BaseException",
"(",
"\"'%s' initialization: parameter '%s' inconsistent\\n\"",
"\"no value was specified in earlier initialization\\n\"",
"\"an explicit value is specified now\"",
"%",
"(",
"toolset",
",",
"name",
")",
")",
"# The logic below is for intel compiler. It calls this rule",
"# with 'intel-linux' and 'intel-win' as toolset, so we need to",
"# get the base part of toolset name.",
"# We can't pass 'intel' as toolset, because in that case it will",
"# be impossible to register versionless intel-linux and",
"# intel-win of a specific version.",
"t",
"=",
"toolset",
"m",
"=",
"__re__before_first_dash",
".",
"match",
"(",
"toolset",
")",
"if",
"m",
":",
"t",
"=",
"m",
".",
"group",
"(",
"1",
")",
"if",
"str_toolset_name",
"not",
"in",
"__had_value",
":",
"if",
"str",
"(",
"(",
"t",
",",
"name",
")",
")",
"not",
"in",
"__declared_subfeature",
":",
"feature",
".",
"subfeature",
"(",
"'toolset'",
",",
"t",
",",
"name",
",",
"[",
"]",
",",
"[",
"'propagated'",
"]",
")",
"__declared_subfeature",
"[",
"str",
"(",
"(",
"t",
",",
"name",
")",
")",
"]",
"=",
"True",
"__had_value",
"[",
"str_toolset_name",
"]",
"=",
"True",
"feature",
".",
"extend_subfeature",
"(",
"'toolset'",
",",
"t",
",",
"name",
",",
"[",
"value",
"]",
")",
"subcondition",
"+=",
"[",
"'<toolset-'",
"+",
"t",
"+",
"':'",
"+",
"name",
"+",
"'>'",
"+",
"value",
"]",
"else",
":",
"if",
"str_toolset_name",
"in",
"__had_value",
":",
"raise",
"BaseException",
"(",
"\"'%s' initialization: parameter '%s' inconsistent\\n\"",
"\"an explicit value was specified in an earlier initialization\\n\"",
"\"no value is specified now\"",
"%",
"(",
"toolset",
",",
"name",
")",
")",
"__had_unspecified_value",
"[",
"str_toolset_name",
"]",
"=",
"True",
"if",
"value",
"==",
"None",
":",
"value",
"=",
"''",
"sig",
"=",
"sig",
"+",
"value",
"+",
"'-'",
"# if a requirement is specified, the signature should be unique",
"# with that requirement",
"if",
"requirement",
":",
"sig",
"+=",
"'-'",
"+",
"'-'",
".",
"join",
"(",
"requirement",
")",
"if",
"sig",
"in",
"__all_signatures",
":",
"message",
"=",
"\"duplicate initialization of '%s' with the following parameters: \"",
"%",
"toolset",
"for",
"arg",
"in",
"args",
":",
"name",
"=",
"arg",
"[",
"0",
"]",
"value",
"=",
"arg",
"[",
"1",
"]",
"if",
"value",
"==",
"None",
":",
"value",
"=",
"'<unspecified>'",
"message",
"+=",
"\"'%s' = '%s'\\n\"",
"%",
"(",
"name",
",",
"value",
")",
"raise",
"BaseException",
"(",
"message",
")",
"__all_signatures",
"[",
"sig",
"]",
"=",
"True",
"# FIXME",
"__init_loc",
"[",
"sig",
"]",
"=",
"\"User location unknown\"",
"#[ errors.nearest-user-location ] ;",
"# If we have a requirement, this version should only be applied under that",
"# condition. To accomplish this we add a toolset requirement that imposes",
"# the toolset subcondition, which encodes the version.",
"if",
"requirement",
":",
"r",
"=",
"[",
"'<toolset>'",
"+",
"toolset",
"]",
"+",
"requirement",
"r",
"=",
"','",
".",
"join",
"(",
"r",
")",
"b2_toolset",
".",
"add_requirements",
"(",
"[",
"r",
"+",
"':'",
"+",
"c",
"for",
"c",
"in",
"subcondition",
"]",
")",
"# We add the requirements, if any, to the condition to scope the toolset",
"# variables and options to this specific version.",
"condition",
"=",
"[",
"condition",
"]",
"if",
"requirement",
":",
"condition",
"+=",
"requirement",
"if",
"__show_configuration",
":",
"print",
"\"notice:\"",
",",
"condition",
"return",
"[",
"'/'",
".",
"join",
"(",
"condition",
")",
"]"
] | The rule for checking toolset parameters. Trailing parameters should all be
parameter name/value pairs. The rule will check that each parameter either has
a value in each invocation or has no value in each invocation. Also, the rule
will check that the combination of all parameter values is unique in all
invocations.
Each parameter name corresponds to a subfeature. This rule will declare a
subfeature the first time a non-empty parameter value is passed and will
extend it with all the values.
The return value from this rule is a condition to be used for flags settings. | [
"The",
"rule",
"for",
"checking",
"toolset",
"parameters",
".",
"Trailing",
"parameters",
"should",
"all",
"be",
"parameter",
"name",
"/",
"value",
"pairs",
".",
"The",
"rule",
"will",
"check",
"that",
"each",
"parameter",
"either",
"has",
"a",
"value",
"in",
"each",
"invocation",
"or",
"has",
"no",
"value",
"in",
"each",
"invocation",
".",
"Also",
"the",
"rule",
"will",
"check",
"that",
"the",
"combination",
"of",
"all",
"parameter",
"values",
"is",
"unique",
"in",
"all",
"invocations",
"."
] | 74514c3f99e25b46f22c6e02977fe3da69221c2e | https://github.com/apple/turicreate/blob/74514c3f99e25b46f22c6e02977fe3da69221c2e/deps/src/boost_1_68_0/tools/build/src/tools/common.py#L171-L282 | train |
apple/turicreate | deps/src/boost_1_68_0/tools/build/src/tools/common.py | get_invocation_command_nodefault | def get_invocation_command_nodefault(
toolset, tool, user_provided_command=[], additional_paths=[], path_last=False):
"""
A helper rule to get the command to invoke some tool. If
'user-provided-command' is not given, tries to find binary named 'tool' in
PATH and in the passed 'additional-path'. Otherwise, verifies that the first
element of 'user-provided-command' is an existing program.
This rule returns the command to be used when invoking the tool. If we can't
find the tool, a warning is issued. If 'path-last' is specified, PATH is
checked after 'additional-paths' when searching for 'tool'.
"""
assert isinstance(toolset, basestring)
assert isinstance(tool, basestring)
assert is_iterable_typed(user_provided_command, basestring)
assert is_iterable_typed(additional_paths, basestring) or additional_paths is None
assert isinstance(path_last, (int, bool))
if not user_provided_command:
command = find_tool(tool, additional_paths, path_last)
if not command and __debug_configuration:
            print "warning: toolset", toolset, "initialization: can't find tool", tool
#FIXME
#print "warning: initialized from" [ errors.nearest-user-location ] ;
else:
command = check_tool(user_provided_command)
if not command and __debug_configuration:
print "warning: toolset", toolset, "initialization:"
print "warning: can't find user-provided command", user_provided_command
#FIXME
#ECHO "warning: initialized from" [ errors.nearest-user-location ]
command = []
if command:
command = ' '.join(command)
return command | python | def get_invocation_command_nodefault(
toolset, tool, user_provided_command=[], additional_paths=[], path_last=False):
"""
A helper rule to get the command to invoke some tool. If
'user-provided-command' is not given, tries to find binary named 'tool' in
PATH and in the passed 'additional-path'. Otherwise, verifies that the first
element of 'user-provided-command' is an existing program.
This rule returns the command to be used when invoking the tool. If we can't
find the tool, a warning is issued. If 'path-last' is specified, PATH is
checked after 'additional-paths' when searching for 'tool'.
"""
assert isinstance(toolset, basestring)
assert isinstance(tool, basestring)
assert is_iterable_typed(user_provided_command, basestring)
assert is_iterable_typed(additional_paths, basestring) or additional_paths is None
assert isinstance(path_last, (int, bool))
if not user_provided_command:
command = find_tool(tool, additional_paths, path_last)
if not command and __debug_configuration:
            print "warning: toolset", toolset, "initialization: can't find tool", tool
#FIXME
#print "warning: initialized from" [ errors.nearest-user-location ] ;
else:
command = check_tool(user_provided_command)
if not command and __debug_configuration:
print "warning: toolset", toolset, "initialization:"
print "warning: can't find user-provided command", user_provided_command
#FIXME
#ECHO "warning: initialized from" [ errors.nearest-user-location ]
command = []
if command:
command = ' '.join(command)
return command | [
"def",
"get_invocation_command_nodefault",
"(",
"toolset",
",",
"tool",
",",
"user_provided_command",
"=",
"[",
"]",
",",
"additional_paths",
"=",
"[",
"]",
",",
"path_last",
"=",
"False",
")",
":",
"assert",
"isinstance",
"(",
"toolset",
",",
"basestring",
")",
"assert",
"isinstance",
"(",
"tool",
",",
"basestring",
")",
"assert",
"is_iterable_typed",
"(",
"user_provided_command",
",",
"basestring",
")",
"assert",
"is_iterable_typed",
"(",
"additional_paths",
",",
"basestring",
")",
"or",
"additional_paths",
"is",
"None",
"assert",
"isinstance",
"(",
"path_last",
",",
"(",
"int",
",",
"bool",
")",
")",
"if",
"not",
"user_provided_command",
":",
"command",
"=",
"find_tool",
"(",
"tool",
",",
"additional_paths",
",",
"path_last",
")",
"if",
"not",
"command",
"and",
"__debug_configuration",
":",
"print",
"\"warning: toolset\"",
",",
"toolset",
",",
"\"initialization: can't find tool\"",
",",
"tool",
"#FIXME",
"#print \"warning: initialized from\" [ errors.nearest-user-location ] ;",
"else",
":",
"command",
"=",
"check_tool",
"(",
"user_provided_command",
")",
"if",
"not",
"command",
"and",
"__debug_configuration",
":",
"print",
"\"warning: toolset\"",
",",
"toolset",
",",
"\"initialization:\"",
"print",
"\"warning: can't find user-provided command\"",
",",
"user_provided_command",
"#FIXME",
"#ECHO \"warning: initialized from\" [ errors.nearest-user-location ]",
"command",
"=",
"[",
"]",
"if",
"command",
":",
"command",
"=",
"' '",
".",
"join",
"(",
"command",
")",
"return",
"command"
] | A helper rule to get the command to invoke some tool. If
'user-provided-command' is not given, tries to find binary named 'tool' in
PATH and in the passed 'additional-path'. Otherwise, verifies that the first
element of 'user-provided-command' is an existing program.
This rule returns the command to be used when invoking the tool. If we can't
find the tool, a warning is issued. If 'path-last' is specified, PATH is
checked after 'additional-paths' when searching for 'tool'. | [
"A",
"helper",
"rule",
"to",
"get",
"the",
"command",
"to",
"invoke",
"some",
"tool",
".",
"If",
"user",
"-",
"provided",
"-",
"command",
"is",
"not",
"given",
"tries",
"to",
"find",
"binary",
"named",
"tool",
"in",
"PATH",
"and",
"in",
"the",
"passed",
"additional",
"-",
"path",
".",
"Otherwise",
"verifies",
"that",
"the",
"first",
"element",
"of",
"user",
"-",
"provided",
"-",
"command",
"is",
"an",
"existing",
"program",
"."
] | 74514c3f99e25b46f22c6e02977fe3da69221c2e | https://github.com/apple/turicreate/blob/74514c3f99e25b46f22c6e02977fe3da69221c2e/deps/src/boost_1_68_0/tools/build/src/tools/common.py#L285-L320 | train |
apple/turicreate | deps/src/boost_1_68_0/tools/build/src/tools/common.py | get_invocation_command | def get_invocation_command(toolset, tool, user_provided_command = [],
additional_paths = [], path_last = False):
""" Same as get_invocation_command_nodefault, except that if no tool is found,
returns either the user-provided-command, if present, or the 'tool' parameter.
"""
assert isinstance(toolset, basestring)
assert isinstance(tool, basestring)
assert is_iterable_typed(user_provided_command, basestring)
assert is_iterable_typed(additional_paths, basestring) or additional_paths is None
assert isinstance(path_last, (int, bool))
result = get_invocation_command_nodefault(toolset, tool,
user_provided_command,
additional_paths,
path_last)
if not result:
if user_provided_command:
result = user_provided_command[0]
else:
result = tool
assert(isinstance(result, str))
return result | python | def get_invocation_command(toolset, tool, user_provided_command = [],
additional_paths = [], path_last = False):
""" Same as get_invocation_command_nodefault, except that if no tool is found,
returns either the user-provided-command, if present, or the 'tool' parameter.
"""
assert isinstance(toolset, basestring)
assert isinstance(tool, basestring)
assert is_iterable_typed(user_provided_command, basestring)
assert is_iterable_typed(additional_paths, basestring) or additional_paths is None
assert isinstance(path_last, (int, bool))
result = get_invocation_command_nodefault(toolset, tool,
user_provided_command,
additional_paths,
path_last)
if not result:
if user_provided_command:
result = user_provided_command[0]
else:
result = tool
assert(isinstance(result, str))
return result | [
"def",
"get_invocation_command",
"(",
"toolset",
",",
"tool",
",",
"user_provided_command",
"=",
"[",
"]",
",",
"additional_paths",
"=",
"[",
"]",
",",
"path_last",
"=",
"False",
")",
":",
"assert",
"isinstance",
"(",
"toolset",
",",
"basestring",
")",
"assert",
"isinstance",
"(",
"tool",
",",
"basestring",
")",
"assert",
"is_iterable_typed",
"(",
"user_provided_command",
",",
"basestring",
")",
"assert",
"is_iterable_typed",
"(",
"additional_paths",
",",
"basestring",
")",
"or",
"additional_paths",
"is",
"None",
"assert",
"isinstance",
"(",
"path_last",
",",
"(",
"int",
",",
"bool",
")",
")",
"result",
"=",
"get_invocation_command_nodefault",
"(",
"toolset",
",",
"tool",
",",
"user_provided_command",
",",
"additional_paths",
",",
"path_last",
")",
"if",
"not",
"result",
":",
"if",
"user_provided_command",
":",
"result",
"=",
"user_provided_command",
"[",
"0",
"]",
"else",
":",
"result",
"=",
"tool",
"assert",
"(",
"isinstance",
"(",
"result",
",",
"str",
")",
")",
"return",
"result"
] | Same as get_invocation_command_nodefault, except that if no tool is found,
returns either the user-provided-command, if present, or the 'tool' parameter. | [
"Same",
"as",
"get_invocation_command_nodefault",
"except",
"that",
"if",
"no",
"tool",
"is",
"found",
"returns",
"either",
"the",
"user",
"-",
"provided",
"-",
"command",
"if",
"present",
"or",
"the",
"tool",
"parameter",
"."
] | 74514c3f99e25b46f22c6e02977fe3da69221c2e | https://github.com/apple/turicreate/blob/74514c3f99e25b46f22c6e02977fe3da69221c2e/deps/src/boost_1_68_0/tools/build/src/tools/common.py#L323-L347 | train |
apple/turicreate | deps/src/boost_1_68_0/tools/build/src/tools/common.py | get_absolute_tool_path | def get_absolute_tool_path(command):
"""
Given an invocation command,
    return the absolute path to the command. This works even if the command
    has no path element and is present in PATH.
"""
assert isinstance(command, basestring)
if os.path.dirname(command):
return os.path.dirname(command)
else:
programs = path.programs_path()
m = path.glob(programs, [command, command + '.exe' ])
if not len(m):
if __debug_configuration:
print "Could not find:", command, "in", programs
return None
return os.path.dirname(m[0]) | python | def get_absolute_tool_path(command):
"""
Given an invocation command,
    return the absolute path to the command. This works even if the command
    has no path element and is present in PATH.
"""
assert isinstance(command, basestring)
if os.path.dirname(command):
return os.path.dirname(command)
else:
programs = path.programs_path()
m = path.glob(programs, [command, command + '.exe' ])
if not len(m):
if __debug_configuration:
print "Could not find:", command, "in", programs
return None
return os.path.dirname(m[0]) | [
"def",
"get_absolute_tool_path",
"(",
"command",
")",
":",
"assert",
"isinstance",
"(",
"command",
",",
"basestring",
")",
"if",
"os",
".",
"path",
".",
"dirname",
"(",
"command",
")",
":",
"return",
"os",
".",
"path",
".",
"dirname",
"(",
"command",
")",
"else",
":",
"programs",
"=",
"path",
".",
"programs_path",
"(",
")",
"m",
"=",
"path",
".",
"glob",
"(",
"programs",
",",
"[",
"command",
",",
"command",
"+",
"'.exe'",
"]",
")",
"if",
"not",
"len",
"(",
"m",
")",
":",
"if",
"__debug_configuration",
":",
"print",
"\"Could not find:\"",
",",
"command",
",",
"\"in\"",
",",
"programs",
"return",
"None",
"return",
"os",
".",
"path",
".",
"dirname",
"(",
"m",
"[",
"0",
"]",
")"
] | Given an invocation command,
return the absolute path to the command. This works even if the command
has no path element and is present in PATH. | [
"Given",
"an",
"invocation",
"command",
"return",
"the",
"absolute",
"path",
"to",
"the",
"command",
".",
"This",
"works",
"even",
"if",
"command",
"has",
"no",
"path",
"element",
"and",
"is",
"present",
"in",
"PATH",
"."
] | 74514c3f99e25b46f22c6e02977fe3da69221c2e | https://github.com/apple/turicreate/blob/74514c3f99e25b46f22c6e02977fe3da69221c2e/deps/src/boost_1_68_0/tools/build/src/tools/common.py#L350-L366 | train |
apple/turicreate | deps/src/boost_1_68_0/tools/build/src/tools/common.py | find_tool | def find_tool(name, additional_paths = [], path_last = False):
""" Attempts to find tool (binary) named 'name' in PATH and in
'additional-paths'. If found in path, returns 'name'. If
found in additional paths, returns full name. If the tool
is found in several directories, returns the first path found.
Otherwise, returns the empty string. If 'path_last' is specified,
path is checked after 'additional_paths'.
"""
assert isinstance(name, basestring)
assert is_iterable_typed(additional_paths, basestring)
assert isinstance(path_last, (int, bool))
programs = path.programs_path()
match = path.glob(programs, [name, name + '.exe'])
additional_match = path.glob(additional_paths, [name, name + '.exe'])
result = []
if path_last:
result = additional_match
if not result and match:
result = match
else:
if match:
result = match
elif additional_match:
result = additional_match
if result:
return path.native(result[0])
else:
return '' | python | def find_tool(name, additional_paths = [], path_last = False):
""" Attempts to find tool (binary) named 'name' in PATH and in
'additional-paths'. If found in path, returns 'name'. If
found in additional paths, returns full name. If the tool
is found in several directories, returns the first path found.
Otherwise, returns the empty string. If 'path_last' is specified,
path is checked after 'additional_paths'.
"""
assert isinstance(name, basestring)
assert is_iterable_typed(additional_paths, basestring)
assert isinstance(path_last, (int, bool))
programs = path.programs_path()
match = path.glob(programs, [name, name + '.exe'])
additional_match = path.glob(additional_paths, [name, name + '.exe'])
result = []
if path_last:
result = additional_match
if not result and match:
result = match
else:
if match:
result = match
elif additional_match:
result = additional_match
if result:
return path.native(result[0])
else:
return '' | [
"def",
"find_tool",
"(",
"name",
",",
"additional_paths",
"=",
"[",
"]",
",",
"path_last",
"=",
"False",
")",
":",
"assert",
"isinstance",
"(",
"name",
",",
"basestring",
")",
"assert",
"is_iterable_typed",
"(",
"additional_paths",
",",
"basestring",
")",
"assert",
"isinstance",
"(",
"path_last",
",",
"(",
"int",
",",
"bool",
")",
")",
"programs",
"=",
"path",
".",
"programs_path",
"(",
")",
"match",
"=",
"path",
".",
"glob",
"(",
"programs",
",",
"[",
"name",
",",
"name",
"+",
"'.exe'",
"]",
")",
"additional_match",
"=",
"path",
".",
"glob",
"(",
"additional_paths",
",",
"[",
"name",
",",
"name",
"+",
"'.exe'",
"]",
")",
"result",
"=",
"[",
"]",
"if",
"path_last",
":",
"result",
"=",
"additional_match",
"if",
"not",
"result",
"and",
"match",
":",
"result",
"=",
"match",
"else",
":",
"if",
"match",
":",
"result",
"=",
"match",
"elif",
"additional_match",
":",
"result",
"=",
"additional_match",
"if",
"result",
":",
"return",
"path",
".",
"native",
"(",
"result",
"[",
"0",
"]",
")",
"else",
":",
"return",
"''"
] | Attempts to find tool (binary) named 'name' in PATH and in
'additional-paths'. If found in path, returns 'name'. If
found in additional paths, returns full name. If the tool
is found in several directories, returns the first path found.
Otherwise, returns the empty string. If 'path_last' is specified,
path is checked after 'additional_paths'. | [
"Attempts",
"to",
"find",
"tool",
"(",
"binary",
")",
"named",
"name",
"in",
"PATH",
"and",
"in",
"additional",
"-",
"paths",
".",
"If",
"found",
"in",
"path",
"returns",
"name",
".",
"If",
"found",
"in",
"additional",
"paths",
"returns",
"full",
"name",
".",
"If",
"the",
"tool",
"is",
"found",
"in",
"several",
"directories",
"returns",
"the",
"first",
"path",
"found",
".",
"Otherwise",
"returns",
"the",
"empty",
"string",
".",
"If",
"path_last",
"is",
"specified",
"path",
"is",
"checked",
"after",
"additional_paths",
"."
] | 74514c3f99e25b46f22c6e02977fe3da69221c2e | https://github.com/apple/turicreate/blob/74514c3f99e25b46f22c6e02977fe3da69221c2e/deps/src/boost_1_68_0/tools/build/src/tools/common.py#L369-L401 | train |
apple/turicreate | deps/src/boost_1_68_0/tools/build/src/tools/common.py | check_tool_aux | def check_tool_aux(command):
""" Checks if 'command' can be found either in path
or is a full name to an existing file.
"""
assert isinstance(command, basestring)
dirname = os.path.dirname(command)
if dirname:
if os.path.exists(command):
return command
# Both NT and Cygwin will run .exe files by their unqualified names.
elif on_windows() and os.path.exists(command + '.exe'):
return command
# Only NT will run .bat files by their unqualified names.
elif os_name() == 'NT' and os.path.exists(command + '.bat'):
return command
else:
paths = path.programs_path()
if path.glob(paths, [command]):
return command | python | def check_tool_aux(command):
""" Checks if 'command' can be found either in path
        or is a full path to an existing file.
"""
assert isinstance(command, basestring)
dirname = os.path.dirname(command)
if dirname:
if os.path.exists(command):
return command
# Both NT and Cygwin will run .exe files by their unqualified names.
elif on_windows() and os.path.exists(command + '.exe'):
return command
# Only NT will run .bat files by their unqualified names.
elif os_name() == 'NT' and os.path.exists(command + '.bat'):
return command
else:
paths = path.programs_path()
if path.glob(paths, [command]):
return command | [
"def",
"check_tool_aux",
"(",
"command",
")",
":",
"assert",
"isinstance",
"(",
"command",
",",
"basestring",
")",
"dirname",
"=",
"os",
".",
"path",
".",
"dirname",
"(",
"command",
")",
"if",
"dirname",
":",
"if",
"os",
".",
"path",
".",
"exists",
"(",
"command",
")",
":",
"return",
"command",
"# Both NT and Cygwin will run .exe files by their unqualified names.",
"elif",
"on_windows",
"(",
")",
"and",
"os",
".",
"path",
".",
"exists",
"(",
"command",
"+",
"'.exe'",
")",
":",
"return",
"command",
"# Only NT will run .bat files by their unqualified names.",
"elif",
"os_name",
"(",
")",
"==",
"'NT'",
"and",
"os",
".",
"path",
".",
"exists",
"(",
"command",
"+",
"'.bat'",
")",
":",
"return",
"command",
"else",
":",
"paths",
"=",
"path",
".",
"programs_path",
"(",
")",
"if",
"path",
".",
"glob",
"(",
"paths",
",",
"[",
"command",
"]",
")",
":",
"return",
"command"
] | Checks if 'command' can be found either in path
or is a full path to an existing file. | [
"Checks",
"if",
"command",
"can",
"be",
"found",
"either",
"in",
"path",
"or",
"is",
"a",
"full",
"name",
"to",
"an",
"existing",
"file",
"."
] | 74514c3f99e25b46f22c6e02977fe3da69221c2e | https://github.com/apple/turicreate/blob/74514c3f99e25b46f22c6e02977fe3da69221c2e/deps/src/boost_1_68_0/tools/build/src/tools/common.py#L404-L422 | train |
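On modern Python the per-extension checks in `check_tool_aux` are largely covered by `shutil.which`; the sketch below is an assumed simplification, not the Boost Build code:

```python
import os
import shutil

def check_tool_aux_sketch(command):
    """Simplified take on the check above: a command with a directory
    component must exist as given; a bare name must be found on PATH
    (shutil.which also handles the .exe/.bat lookups that the original
    does by hand on Windows)."""
    if os.path.dirname(command):
        return command if os.path.exists(command) else None
    return command if shutil.which(command) else None
```

Unlike the original, this returns None explicitly instead of falling off the end of the function, and it delegates the Windows extension handling to the standard library.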
apple/turicreate | deps/src/boost_1_68_0/tools/build/src/tools/common.py | check_tool | def check_tool(command):
""" Checks that a tool can be invoked by 'command'.
If command is not an absolute path, checks if it can be found in 'path'.
        If command is an absolute path, check that it exists. Returns 'command'
if ok and empty string otherwise.
"""
assert is_iterable_typed(command, basestring)
#FIXME: why do we check the first and last elements????
if check_tool_aux(command[0]) or check_tool_aux(command[-1]):
return command | python | def check_tool(command):
""" Checks that a tool can be invoked by 'command'.
If command is not an absolute path, checks if it can be found in 'path'.
        If command is an absolute path, check that it exists. Returns 'command'
if ok and empty string otherwise.
"""
assert is_iterable_typed(command, basestring)
#FIXME: why do we check the first and last elements????
if check_tool_aux(command[0]) or check_tool_aux(command[-1]):
return command | [
"def",
"check_tool",
"(",
"command",
")",
":",
"assert",
"is_iterable_typed",
"(",
"command",
",",
"basestring",
")",
"#FIXME: why do we check the first and last elements????",
"if",
"check_tool_aux",
"(",
"command",
"[",
"0",
"]",
")",
"or",
"check_tool_aux",
"(",
"command",
"[",
"-",
"1",
"]",
")",
":",
"return",
"command"
] | Checks that a tool can be invoked by 'command'.
If command is not an absolute path, checks if it can be found in 'path'.
        If command is an absolute path, check that it exists. Returns 'command'
if ok and empty string otherwise. | [
"Checks",
"that",
"a",
"tool",
"can",
"be",
"invoked",
"by",
"command",
".",
"If",
"command",
"is",
"not",
"an",
"absolute",
"path",
"checks",
"if",
"it",
"can",
"be",
"found",
"in",
"path",
".",
"If",
"command",
"is",
"absolute",
"path",
"check",
"that",
"it",
"exists",
".",
"Returns",
"command",
"if",
"ok",
"and",
"empty",
"string",
"otherwise",
"."
] | 74514c3f99e25b46f22c6e02977fe3da69221c2e | https://github.com/apple/turicreate/blob/74514c3f99e25b46f22c6e02977fe3da69221c2e/deps/src/boost_1_68_0/tools/build/src/tools/common.py#L425-L434 | train |
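`handle_options` in the next row pulls per-feature values such as `<cflags>` out of a property list via `feature.get_values`. A hypothetical minimal version (the real one lives in `b2.build.feature`) behaves roughly like this:

```python
def get_values_sketch(feature, properties):
    """Assumed behaviour of feature.get_values: given properties such as
    '<cflags>-O2', return the values attached to one feature name."""
    return [p[len(feature):] for p in properties if p.startswith(feature)]

# e.g. splitting compile vs. link flags out of one property list:
props = ['<cflags>-O2', '<cxxflags>-std=c++11', '<linkflags>-s']
```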
apple/turicreate | deps/src/boost_1_68_0/tools/build/src/tools/common.py | handle_options | def handle_options(tool, condition, command, options):
""" Handle common options for toolset, specifically sets the following
flag variables:
- CONFIG_COMMAND to 'command'
        - OPTIONS for compile to the value of <compileflags> in options
- OPTIONS for compile.c to the value of <cflags> in options
- OPTIONS for compile.c++ to the value of <cxxflags> in options
- OPTIONS for compile.fortran to the value of <fflags> in options
- OPTIONs for link to the value of <linkflags> in options
"""
from b2.build import toolset
assert isinstance(tool, basestring)
assert is_iterable_typed(condition, basestring)
assert command and isinstance(command, basestring)
assert is_iterable_typed(options, basestring)
toolset.flags(tool, 'CONFIG_COMMAND', condition, [command])
toolset.flags(tool + '.compile', 'OPTIONS', condition, feature.get_values('<compileflags>', options))
toolset.flags(tool + '.compile.c', 'OPTIONS', condition, feature.get_values('<cflags>', options))
toolset.flags(tool + '.compile.c++', 'OPTIONS', condition, feature.get_values('<cxxflags>', options))
toolset.flags(tool + '.compile.fortran', 'OPTIONS', condition, feature.get_values('<fflags>', options))
toolset.flags(tool + '.link', 'OPTIONS', condition, feature.get_values('<linkflags>', options)) | python | def handle_options(tool, condition, command, options):
""" Handle common options for toolset, specifically sets the following
flag variables:
- CONFIG_COMMAND to 'command'
        - OPTIONS for compile to the value of <compileflags> in options
- OPTIONS for compile.c to the value of <cflags> in options
- OPTIONS for compile.c++ to the value of <cxxflags> in options
- OPTIONS for compile.fortran to the value of <fflags> in options
- OPTIONs for link to the value of <linkflags> in options
"""
from b2.build import toolset
assert isinstance(tool, basestring)
assert is_iterable_typed(condition, basestring)
assert command and isinstance(command, basestring)
assert is_iterable_typed(options, basestring)
toolset.flags(tool, 'CONFIG_COMMAND', condition, [command])
toolset.flags(tool + '.compile', 'OPTIONS', condition, feature.get_values('<compileflags>', options))
toolset.flags(tool + '.compile.c', 'OPTIONS', condition, feature.get_values('<cflags>', options))
toolset.flags(tool + '.compile.c++', 'OPTIONS', condition, feature.get_values('<cxxflags>', options))
toolset.flags(tool + '.compile.fortran', 'OPTIONS', condition, feature.get_values('<fflags>', options))
toolset.flags(tool + '.link', 'OPTIONS', condition, feature.get_values('<linkflags>', options)) | [
"def",
"handle_options",
"(",
"tool",
",",
"condition",
",",
"command",
",",
"options",
")",
":",
"from",
"b2",
".",
"build",
"import",
"toolset",
"assert",
"isinstance",
"(",
"tool",
",",
"basestring",
")",
"assert",
"is_iterable_typed",
"(",
"condition",
",",
"basestring",
")",
"assert",
"command",
"and",
"isinstance",
"(",
"command",
",",
"basestring",
")",
"assert",
"is_iterable_typed",
"(",
"options",
",",
"basestring",
")",
"toolset",
".",
"flags",
"(",
"tool",
",",
"'CONFIG_COMMAND'",
",",
"condition",
",",
"[",
"command",
"]",
")",
"toolset",
".",
"flags",
"(",
"tool",
"+",
"'.compile'",
",",
"'OPTIONS'",
",",
"condition",
",",
"feature",
".",
"get_values",
"(",
"'<compileflags>'",
",",
"options",
")",
")",
"toolset",
".",
"flags",
"(",
"tool",
"+",
"'.compile.c'",
",",
"'OPTIONS'",
",",
"condition",
",",
"feature",
".",
"get_values",
"(",
"'<cflags>'",
",",
"options",
")",
")",
"toolset",
".",
"flags",
"(",
"tool",
"+",
"'.compile.c++'",
",",
"'OPTIONS'",
",",
"condition",
",",
"feature",
".",
"get_values",
"(",
"'<cxxflags>'",
",",
"options",
")",
")",
"toolset",
".",
"flags",
"(",
"tool",
"+",
"'.compile.fortran'",
",",
"'OPTIONS'",
",",
"condition",
",",
"feature",
".",
"get_values",
"(",
"'<fflags>'",
",",
"options",
")",
")",
"toolset",
".",
"flags",
"(",
"tool",
"+",
"'.link'",
",",
"'OPTIONS'",
",",
"condition",
",",
"feature",
".",
"get_values",
"(",
"'<linkflags>'",
",",
"options",
")",
")"
] | Handle common options for toolset, specifically sets the following
flag variables:
- CONFIG_COMMAND to 'command'
- OPTIONS for compile to the value of <compileflags> in options
- OPTIONS for compile.c to the value of <cflags> in options
- OPTIONS for compile.c++ to the value of <cxxflags> in options
- OPTIONS for compile.fortran to the value of <fflags> in options
- OPTIONS for link to the value of <linkflags> in options
"Handle",
"common",
"options",
"for",
"toolset",
"specifically",
"sets",
"the",
"following",
"flag",
"variables",
":",
"-",
"CONFIG_COMMAND",
"to",
"command",
"-",
"OPTIONS",
"for",
"compile",
"to",
"the",
"value",
"of",
"<compileflags",
">",
"in",
"options",
"-",
"OPTIONS",
"for",
"compile",
".",
"c",
"to",
"the",
"value",
"of",
"<cflags",
">",
"in",
"options",
"-",
"OPTIONS",
"for",
"compile",
".",
"c",
"++",
"to",
"the",
"value",
"of",
"<cxxflags",
">",
"in",
"options",
"-",
"OPTIONS",
"for",
"compile",
".",
"fortran",
"to",
"the",
"value",
"of",
"<fflags",
">",
"in",
"options",
"-",
"OPTIONs",
"for",
"link",
"to",
"the",
"value",
"of",
"<linkflags",
">",
"in",
"options"
] | 74514c3f99e25b46f22c6e02977fe3da69221c2e | https://github.com/apple/turicreate/blob/74514c3f99e25b46f22c6e02977fe3da69221c2e/deps/src/boost_1_68_0/tools/build/src/tools/common.py#L437-L458 | train |
apple/turicreate | deps/src/boost_1_68_0/tools/build/src/tools/common.py | get_program_files_dir | def get_program_files_dir():
    """ Returns the location of the "Program Files" directory on a Windows
platform
"""
ProgramFiles = bjam.variable("ProgramFiles")
if ProgramFiles:
ProgramFiles = ' '.join(ProgramFiles)
else:
ProgramFiles = "c:\\Program Files"
return ProgramFiles | python | def get_program_files_dir():
    """ Returns the location of the "Program Files" directory on a Windows
platform
"""
ProgramFiles = bjam.variable("ProgramFiles")
if ProgramFiles:
ProgramFiles = ' '.join(ProgramFiles)
else:
ProgramFiles = "c:\\Program Files"
return ProgramFiles | [
"def",
"get_program_files_dir",
"(",
")",
":",
"ProgramFiles",
"=",
"bjam",
".",
"variable",
"(",
"\"ProgramFiles\"",
")",
"if",
"ProgramFiles",
":",
"ProgramFiles",
"=",
"' '",
".",
"join",
"(",
"ProgramFiles",
")",
"else",
":",
"ProgramFiles",
"=",
"\"c:\\\\Program Files\"",
"return",
"ProgramFiles"
] | Returns the location of the "Program Files" directory on a Windows
platform | [
"returns",
"the",
"location",
"of",
"the",
"program",
"files",
"directory",
"on",
"a",
"windows",
"platform"
] | 74514c3f99e25b46f22c6e02977fe3da69221c2e | https://github.com/apple/turicreate/blob/74514c3f99e25b46f22c6e02977fe3da69221c2e/deps/src/boost_1_68_0/tools/build/src/tools/common.py#L461-L470 | train |
apple/turicreate | deps/src/boost_1_68_0/tools/build/src/tools/common.py | variable_setting_command | def variable_setting_command(variable, value):
"""
Returns the command needed to set an environment variable on the current
platform. The variable setting persists through all following commands and is
visible in the environment seen by subsequently executed commands. In other
words, on Unix systems, the variable is exported, which is consistent with the
only possible behavior on Windows systems.
"""
assert isinstance(variable, basestring)
assert isinstance(value, basestring)
if os_name() == 'NT':
return "set " + variable + "=" + value + os.linesep
else:
# (todo)
# The following does not work on CYGWIN and needs to be fixed. On
# CYGWIN the $(nl) variable holds a Windows new-line \r\n sequence that
# messes up the executed export command which then reports that the
# passed variable name is incorrect. This is most likely due to the
# extra \r character getting interpreted as a part of the variable name.
#
# Several ideas pop to mind on how to fix this:
# * One way would be to separate the commands using the ; shell
# command separator. This seems like the quickest possible
# solution but I do not know whether this would break code on any
        # platforms I have no access to.
# * Another would be to not use the terminating $(nl) but that would
# require updating all the using code so it does not simply
# prepend this variable to its own commands.
# * I guess the cleanest solution would be to update Boost Jam to
# allow explicitly specifying \n & \r characters in its scripts
# instead of always relying only on the 'current OS native newline
# sequence'.
#
# Some code found to depend on this behaviour:
# * This Boost Build module.
# * __test__ rule.
# * path-variable-setting-command rule.
# * python.jam toolset.
# * xsltproc.jam toolset.
# * fop.jam toolset.
# (todo) (07.07.2008.) (Jurko)
#
# I think that this works correctly in python -- Steven Watanabe
return variable + "=" + value + os.linesep + "export " + variable + os.linesep | python | def variable_setting_command(variable, value):
"""
Returns the command needed to set an environment variable on the current
platform. The variable setting persists through all following commands and is
visible in the environment seen by subsequently executed commands. In other
words, on Unix systems, the variable is exported, which is consistent with the
only possible behavior on Windows systems.
"""
assert isinstance(variable, basestring)
assert isinstance(value, basestring)
if os_name() == 'NT':
return "set " + variable + "=" + value + os.linesep
else:
# (todo)
# The following does not work on CYGWIN and needs to be fixed. On
# CYGWIN the $(nl) variable holds a Windows new-line \r\n sequence that
# messes up the executed export command which then reports that the
# passed variable name is incorrect. This is most likely due to the
# extra \r character getting interpreted as a part of the variable name.
#
# Several ideas pop to mind on how to fix this:
# * One way would be to separate the commands using the ; shell
# command separator. This seems like the quickest possible
# solution but I do not know whether this would break code on any
        # platforms I have no access to.
# * Another would be to not use the terminating $(nl) but that would
# require updating all the using code so it does not simply
# prepend this variable to its own commands.
# * I guess the cleanest solution would be to update Boost Jam to
# allow explicitly specifying \n & \r characters in its scripts
# instead of always relying only on the 'current OS native newline
# sequence'.
#
# Some code found to depend on this behaviour:
# * This Boost Build module.
# * __test__ rule.
# * path-variable-setting-command rule.
# * python.jam toolset.
# * xsltproc.jam toolset.
# * fop.jam toolset.
# (todo) (07.07.2008.) (Jurko)
#
# I think that this works correctly in python -- Steven Watanabe
return variable + "=" + value + os.linesep + "export " + variable + os.linesep | [
"def",
"variable_setting_command",
"(",
"variable",
",",
"value",
")",
":",
"assert",
"isinstance",
"(",
"variable",
",",
"basestring",
")",
"assert",
"isinstance",
"(",
"value",
",",
"basestring",
")",
"if",
"os_name",
"(",
")",
"==",
"'NT'",
":",
"return",
"\"set \"",
"+",
"variable",
"+",
"\"=\"",
"+",
"value",
"+",
"os",
".",
"linesep",
"else",
":",
"# (todo)",
"# The following does not work on CYGWIN and needs to be fixed. On",
"# CYGWIN the $(nl) variable holds a Windows new-line \\r\\n sequence that",
"# messes up the executed export command which then reports that the",
"# passed variable name is incorrect. This is most likely due to the",
"# extra \\r character getting interpreted as a part of the variable name.",
"#",
"# Several ideas pop to mind on how to fix this:",
"# * One way would be to separate the commands using the ; shell",
"# command separator. This seems like the quickest possible",
"# solution but I do not know whether this would break code on any",
"# platforms I have no access to.",
"# * Another would be to not use the terminating $(nl) but that would",
"# require updating all the using code so it does not simply",
"# prepend this variable to its own commands.",
"# * I guess the cleanest solution would be to update Boost Jam to",
"# allow explicitly specifying \\n & \\r characters in its scripts",
"# instead of always relying only on the 'current OS native newline",
"# sequence'.",
"#",
"# Some code found to depend on this behaviour:",
"# * This Boost Build module.",
"# * __test__ rule.",
"# * path-variable-setting-command rule.",
"# * python.jam toolset.",
"# * xsltproc.jam toolset.",
"# * fop.jam toolset.",
"# (todo) (07.07.2008.) (Jurko)",
"#",
"# I think that this works correctly in python -- Steven Watanabe",
"return",
"variable",
"+",
"\"=\"",
"+",
"value",
"+",
"os",
".",
"linesep",
"+",
"\"export \"",
"+",
"variable",
"+",
"os",
".",
"linesep"
] | Returns the command needed to set an environment variable on the current
platform. The variable setting persists through all following commands and is
visible in the environment seen by subsequently executed commands. In other
words, on Unix systems, the variable is exported, which is consistent with the
only possible behavior on Windows systems. | [
"Returns",
"the",
"command",
"needed",
"to",
"set",
"an",
"environment",
"variable",
"on",
"the",
"current",
"platform",
".",
"The",
"variable",
"setting",
"persists",
"through",
"all",
"following",
"commands",
"and",
"is",
"visible",
"in",
"the",
"environment",
"seen",
"by",
"subsequently",
"executed",
"commands",
".",
"In",
"other",
"words",
"on",
"Unix",
"systems",
"the",
"variable",
"is",
"exported",
"which",
"is",
"consistent",
"with",
"the",
"only",
"possible",
"behavior",
"on",
"Windows",
"systems",
"."
] | 74514c3f99e25b46f22c6e02977fe3da69221c2e | https://github.com/apple/turicreate/blob/74514c3f99e25b46f22c6e02977fe3da69221c2e/deps/src/boost_1_68_0/tools/build/src/tools/common.py#L481-L525 | train |
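The platform split described above can be exercised directly. The `windows` parameter below is an addition for testability (the original decides via `os_name()`), so treat this as a sketch rather than the Boost Build routine:

```python
import os

def variable_setting_command_sketch(variable, value, windows=None):
    """Emit the same shell snippet as above: `set VAR=value` for cmd.exe,
    `VAR=value` followed by `export VAR` for POSIX shells."""
    if windows is None:
        windows = os.name == "nt"
    if windows:
        return "set %s=%s%s" % (variable, value, os.linesep)
    return "%s=%s%sexport %s%s" % (variable, value, os.linesep, variable, os.linesep)
```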
apple/turicreate | deps/src/boost_1_68_0/tools/build/src/tools/common.py | path_variable_setting_command | def path_variable_setting_command(variable, paths):
"""
    Returns a command to set a named shell path variable to the given NATIVE
paths on the current platform.
"""
assert isinstance(variable, basestring)
assert is_iterable_typed(paths, basestring)
sep = os.path.pathsep
return variable_setting_command(variable, sep.join(paths)) | python | def path_variable_setting_command(variable, paths):
"""
    Returns a command to set a named shell path variable to the given NATIVE
paths on the current platform.
"""
assert isinstance(variable, basestring)
assert is_iterable_typed(paths, basestring)
sep = os.path.pathsep
return variable_setting_command(variable, sep.join(paths)) | [
"def",
"path_variable_setting_command",
"(",
"variable",
",",
"paths",
")",
":",
"assert",
"isinstance",
"(",
"variable",
",",
"basestring",
")",
"assert",
"is_iterable_typed",
"(",
"paths",
",",
"basestring",
")",
"sep",
"=",
"os",
".",
"path",
".",
"pathsep",
"return",
"variable_setting_command",
"(",
"variable",
",",
"sep",
".",
"join",
"(",
"paths",
")",
")"
] | Returns a command to set a named shell path variable to the given NATIVE
paths on the current platform. | [
"Returns",
"a",
"command",
"to",
"set",
"a",
"named",
"shell",
"path",
"variable",
"to",
"the",
"given",
"NATIVE",
"paths",
"on",
"the",
"current",
"platform",
"."
] | 74514c3f99e25b46f22c6e02977fe3da69221c2e | https://github.com/apple/turicreate/blob/74514c3f99e25b46f22c6e02977fe3da69221c2e/deps/src/boost_1_68_0/tools/build/src/tools/common.py#L527-L535 | train |
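`path_variable_setting_command` only adds the platform path separator on top of the previous routine. A self-contained sketch (the command-emitting part is duplicated here so the example stands alone, and the `windows` flag is an added parameter):

```python
import os

def path_variable_setting_command_sketch(variable, paths, windows=None):
    """Join native paths with os.pathsep (':' on POSIX, ';' on Windows)
    and emit the platform's variable-setting shell snippet."""
    if windows is None:
        windows = os.name == "nt"
    joined = os.pathsep.join(paths)
    if windows:
        return "set %s=%s%s" % (variable, joined, os.linesep)
    return "%s=%s%sexport %s%s" % (variable, joined, os.linesep, variable, os.linesep)
```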