| repo_id | file_path | content | __index_level_0__ |
|---|---|---|---|
rapidsai_public_repos/cugraph/python/cugraph/cugraph/datasets | rapidsai_public_repos/cugraph/python/cugraph/cugraph/datasets/metadata/karate_asymmetric.yaml | name: karate-asymmetric
file_type: .csv
description:
An undirected, asymmetric variant of the Karate dataset. The original dataset
on which this is based was created by Wayne Zachary in 1977.
author: Nvidia
refs:
W. W. Zachary, An information flow model for conflict and fission in small
groups, Journal of Anthropological Research 33, 452-473 (1977).
delim: " "
header: None
col_names:
- src
- dst
- wgt
col_types:
- int32
- int32
- float32
has_loop: false
is_directed: false
is_multigraph: false
is_symmetric: true
number_of_edges: 78
number_of_nodes: 34
url: https://data.rapids.ai/cugraph/datasets/karate-asymmetric.csv
| 0 |
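The metadata records in this directory share a fixed schema: delimiter, header flag, column names, and column dtypes. As an illustration only — `metadata_to_reader_kwargs` is a hypothetical helper, not part of cugraph — a record like the one above can be mapped onto the keyword arguments of a `read_csv`-style loader (cudf and pandas accept the same keywords):

```python
def metadata_to_reader_kwargs(meta):
    """Translate a dataset metadata record into read_csv-style kwargs."""
    return {
        "sep": meta["delim"],
        # In the YAML files, header "None" means the CSV has no header row.
        "header": None if meta["header"] == "None" else meta["header"],
        "names": meta["col_names"],
        "dtype": dict(zip(meta["col_names"], meta["col_types"])),
    }

# The karate-asymmetric record above, reduced to the fields the loader needs.
karate_asymmetric = {
    "name": "karate-asymmetric",
    "delim": " ",
    "header": "None",
    "col_names": ["src", "dst", "wgt"],
    "col_types": ["int32", "int32", "float32"],
}

kwargs = metadata_to_reader_kwargs(karate_asymmetric)
print(kwargs["dtype"]["wgt"])  # → float32
```

The resulting dict could then be splatted into `cudf.read_csv(url, **kwargs)`; the exact loader call is left out here since it depends on the runtime environment.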
rapidsai_public_repos/cugraph/python/cugraph/cugraph/datasets | rapidsai_public_repos/cugraph/python/cugraph/cugraph/datasets/metadata/soc-livejournal1.yaml | name: soc-LiveJournal1
file_type: .csv
description: A graph of the LiveJournal social network.
author: L. Backstrom, D. Huttenlocher, J. Kleinberg, X. Lan
refs:
L. Backstrom, D. Huttenlocher, J. Kleinberg, X. Lan. Group Formation in
Large Social Networks: Membership, Growth, and Evolution. KDD, 2006.
delim: " "
header: None
col_names:
- src
- dst
col_types:
- int32
- int32
has_loop: true
is_directed: true
is_multigraph: false
is_symmetric: false
number_of_edges: 68993773
number_of_nodes: 4847571
url: https://data.rapids.ai/cugraph/datasets/soc-LiveJournal1.csv | 0 |
rapidsai_public_repos/cugraph/python/cugraph/cugraph/datasets | rapidsai_public_repos/cugraph/python/cugraph/cugraph/datasets/metadata/karate.yaml | name: karate
file_type: .csv
description:
The graph "karate" contains the network of friendships between the 34 members
of a karate club at a US university, as described by Wayne Zachary in 1977.
author: Zachary W.
refs:
W. W. Zachary, An information flow model for conflict and fission in small groups,
Journal of Anthropological Research 33, 452-473 (1977).
delim: " "
header: None
col_names:
- src
- dst
- wgt
col_types:
- int32
- int32
- float32
has_loop: false
is_directed: true
is_multigraph: false
is_symmetric: true
number_of_edges: 156
number_of_nodes: 34
url: https://data.rapids.ai/cugraph/datasets/karate.csv
| 0 |
rapidsai_public_repos/cugraph/python/cugraph/cugraph/datasets | rapidsai_public_repos/cugraph/python/cugraph/cugraph/datasets/metadata/toy_graph.yaml | name: toy_graph
file_type: .csv
description:
The `toy_graph` dataset was created by Nvidia for testing and demonstration
purposes, and consists of a small (6 nodes) directed graph.
author: null
refs: null
delim: " "
header: None
col_names:
- src
- dst
- wgt
col_types:
- int32
- int32
- float32
has_loop: false
is_directed: true
is_multigraph: false
is_symmetric: true
number_of_edges: 16
number_of_nodes: 6
url: https://data.rapids.ai/cugraph/datasets/toy_graph.csv
| 0 |
rapidsai_public_repos/cugraph/python/cugraph/cugraph/datasets | rapidsai_public_repos/cugraph/python/cugraph/cugraph/datasets/metadata/netscience.yaml | name: netscience
file_type: .csv
description:
The graph netscience contains a coauthorship network of scientists working
on network theory and experiment, as compiled by M. Newman in May 2006.
author: Newman, Mark E.J.
refs: M. E. J. Newman. Finding community structure in networks using the eigenvectors of matrices. Phys. Rev. E 74, 036104 (2006).
delim: " "
header: None
col_names:
- src
- dst
- wgt
col_types:
- int32
- int32
- float32
has_loop: false
is_directed: true
is_multigraph: false
is_symmetric: true
number_of_edges: 5484
number_of_nodes: 1461
url: https://data.rapids.ai/cugraph/datasets/netscience.csv
| 0 |
rapidsai_public_repos/cugraph/python/cugraph/cugraph/datasets | rapidsai_public_repos/cugraph/python/cugraph/cugraph/datasets/metadata/europe_osm.yaml | name: europe_osm
file_type: .csv
description: A graph of OpenStreetMap data for Europe.
author: M. Kobitzsh / Geofabrik GmbH
refs:
Rossi, Ryan and Ahmed, Nesreen. The Network Data Repository with Interactive Graph Analytics and Visualization. AAAI, 2015.
delim: " "
header: None
col_names:
- src
- dst
col_types:
- int32
- int32
has_loop: false
is_directed: false
is_multigraph: false
is_symmetric: true
number_of_edges: 54054660
number_of_nodes: 50912018
url: https://data.rapids.ai/cugraph/datasets/europe_osm.csv | 0 |
rapidsai_public_repos/cugraph/python/cugraph/cugraph/datasets | rapidsai_public_repos/cugraph/python/cugraph/cugraph/datasets/metadata/toy_graph_undirected.yaml | name: toy_graph_undirected
file_type: .csv
description:
The `toy_graph_undirected` dataset was created by Nvidia for testing and
demonstration purposes, and consists of a small (6 nodes) undirected graph.
author: Nvidia
refs: null
delim: " "
header: None
col_names:
- src
- dst
- wgt
col_types:
- int32
- int32
- float32
has_loop: false
is_directed: false
is_multigraph: false
is_symmetric: true
number_of_edges: 8
number_of_nodes: 6
url: https://data.rapids.ai/cugraph/datasets/toy_graph_undirected.csv
| 0 |
rapidsai_public_repos/cugraph/python/cugraph/cugraph/datasets | rapidsai_public_repos/cugraph/python/cugraph/cugraph/datasets/metadata/dining_prefs.yaml | name: dining_prefs
file_type: .csv
description: A classic social network dataset describing dining preferences in a dormitory in New York state.
author: J.L. Moreno
refs:
J. L. Moreno (1960). The Sociometry Reader. The Free Press, Glencoe, Illinois, pg.35
delim: " "
header: None
col_names:
- src
- dst
- wgt
col_types:
- string
- string
- int
has_loop: false
is_directed: false
is_multigraph: false
is_symmetric: true
number_of_edges: 42
number_of_nodes: 26
url: https://data.rapids.ai/cugraph/datasets/dining_prefs.csv | 0 |
rapidsai_public_repos/cugraph/python/cugraph/cugraph/datasets | rapidsai_public_repos/cugraph/python/cugraph/cugraph/datasets/metadata/email_Eu_core.yaml | name: email-Eu-core
file_type: .csv
description:
The network was generated using anonymized email data from a large European
research institution. There is an edge (u, v) in the network if person u sent
person v at least one email. The e-mails only represent communication between
institution members (the core), and the dataset does not contain incoming messages
from or outgoing messages to the rest of the world.
author: Jure Leskovec
refs:
- Hao Yin, Austin R. Benson, Jure Leskovec, and David F. Gleich. 'Local Higher-order Graph Clustering.' In Proceedings of the 23rd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. 2017.
- J. Leskovec, J. Kleinberg and C. Faloutsos. Graph Evolution: Densification and Shrinking Diameters. ACM Transactions on Knowledge Discovery from Data (ACM TKDD), 1(1), 2007.
delim: " "
header: None
col_names:
- src
- dst
- wgt
col_types:
- int32
- int32
- float32
has_loop: true
is_directed: true
is_multigraph: false
is_symmetric: false
number_of_edges: 25571
number_of_nodes: 1005
url: https://data.rapids.ai/cugraph/datasets/email-Eu-core.csv
| 0 |
rapidsai_public_repos/cugraph/python/cugraph/cugraph/datasets | rapidsai_public_repos/cugraph/python/cugraph/cugraph/datasets/metadata/__init__.py | # Copyright (c) 2023, NVIDIA CORPORATION.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
| 0 |
rapidsai_public_repos/cugraph/python/cugraph/cugraph/datasets | rapidsai_public_repos/cugraph/python/cugraph/cugraph/datasets/metadata/soc-twitter-2010.yaml | name: soc-twitter-2010
file_type: .csv
description: A network of follower relationships from a snapshot of Twitter in 2010, where an edge from i to j indicates that j is a follower of i.
author: H. Kwak, C. Lee, H. Park, S. Moon
refs:
J. Yang, J. Leskovec. Temporal Variation in Online Media. ACM Intl.
Conf. on Web Search and Data Mining (WSDM '11), 2011.
delim: " "
header: None
col_names:
- src
- dst
col_types:
- int32
- int32
has_loop: false
is_directed: false
is_multigraph: false
is_symmetric: false
number_of_edges: 530051354
number_of_nodes: 21297772
url: https://data.rapids.ai/cugraph/datasets/soc-twitter-2010.csv | 0 |
rapidsai_public_repos/cugraph/python/cugraph/cugraph/datasets | rapidsai_public_repos/cugraph/python/cugraph/cugraph/datasets/metadata/karate_disjoint.yaml | name: karate-disjoint
file_type: .csv
description:
A disjoint variant of the Karate dataset. The original dataset on which
this is based was created by Wayne Zachary in 1977.
author: Nvidia
refs:
W. W. Zachary, An information flow model for conflict and fission in small groups,
Journal of Anthropological Research 33, 452-473 (1977).
delim: " "
header: None
col_names:
- src
- dst
- wgt
col_types:
- int32
- int32
- float32
has_loop: false
is_directed: true
is_multigraph: false
is_symmetric: true
number_of_edges: 312
number_of_nodes: 68
url: https://data.rapids.ai/cugraph/datasets/karate-disjoint.csv
| 0 |
rapidsai_public_repos/cugraph/python/cugraph/cugraph/datasets | rapidsai_public_repos/cugraph/python/cugraph/cugraph/datasets/metadata/cit-patents.yaml | name: cit-Patents
file_type: .csv
description: A citation graph that includes all citations made by patents granted between 1975 and 1999, totaling 16,522,438 citations.
author: NBER
refs:
J. Leskovec, J. Kleinberg and C. Faloutsos. Graphs over Time: Densification Laws, Shrinking Diameters and Possible Explanations.
ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (KDD), 2005.
delim: " "
header: None
col_names:
- src
- dst
col_types:
- int32
- int32
has_loop: true
is_directed: true
is_multigraph: false
is_symmetric: false
number_of_edges: 16518948
number_of_nodes: 3774768
url: https://data.rapids.ai/cugraph/datasets/cit-Patents.csv | 0 |
rapidsai_public_repos/cugraph/python/cugraph/cugraph | rapidsai_public_repos/cugraph/python/cugraph/cugraph/components/CMakeLists.txt | # =============================================================================
# Copyright (c) 2022, NVIDIA CORPORATION.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except
# in compliance with the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software distributed under the License
# is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express
# or implied. See the License for the specific language governing permissions and limitations under
# the License.
# =============================================================================
set(cython_sources connectivity_wrapper.pyx)
set(linked_libraries cugraph::cugraph)
rapids_cython_create_modules(
CXX
SOURCE_FILES "${cython_sources}"
LINKED_LIBRARIES "${linked_libraries}" MODULE_PREFIX components_
ASSOCIATED_TARGETS cugraph
)
| 0 |
rapidsai_public_repos/cugraph/python/cugraph/cugraph | rapidsai_public_repos/cugraph/python/cugraph/cugraph/components/connectivity_wrapper.pyx | # Copyright (c) 2019-2022, NVIDIA CORPORATION.
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# cython: profile=False
# distutils: language = c++
# cython: embedsignature = True
# cython: language_level = 3
from cugraph.components.connectivity cimport *
from cugraph.structure.graph_primtypes cimport *
from cugraph.structure.graph_utilities cimport *
from cugraph.structure import utils_wrapper
from cugraph.structure import graph_primtypes_wrapper
from libc.stdint cimport uintptr_t
from cugraph.structure.symmetrize import symmetrize
from cugraph.structure.graph_classes import Graph as type_Graph
import cudf
import numpy as np
def strongly_connected_components(input_graph):
"""
Call connected_components
"""
if not input_graph.adjlist:
input_graph.view_adj_list()
[offsets, indices] = graph_primtypes_wrapper.datatype_cast([input_graph.adjlist.offsets, input_graph.adjlist.indices], [np.int32])
num_verts = input_graph.number_of_vertices()
num_edges = input_graph.number_of_edges(directed_edges=True)
df = cudf.DataFrame()
df['vertex'] = cudf.Series(np.zeros(num_verts, dtype=np.int32))
df['labels'] = cudf.Series(np.zeros(num_verts, dtype=np.int32))
cdef uintptr_t c_offsets = offsets.__cuda_array_interface__['data'][0]
cdef uintptr_t c_indices = indices.__cuda_array_interface__['data'][0]
cdef uintptr_t c_identifier = df['vertex'].__cuda_array_interface__['data'][0]
cdef uintptr_t c_labels_val = df['labels'].__cuda_array_interface__['data'][0]
cdef GraphCSRView[int,int,float] g
g = GraphCSRView[int,int,float](<int*>c_offsets, <int*>c_indices, <float*>NULL, num_verts, num_edges)
cdef cugraph_cc_t connect_type=CUGRAPH_STRONG
connected_components(g, <cugraph_cc_t>connect_type, <int *>c_labels_val)
g.get_vertex_identifiers(<int*>c_identifier)
return df
| 0 |
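The wrapper above hands a CSR view (`offsets`, `indices`) to the C++ `connected_components` kernel with `CUGRAPH_STRONG`. As a CPU-side sketch of what that kernel computes — not cugraph's actual GPU implementation — strongly connected component labels over the same CSR layout can be derived with Kosaraju's two-pass algorithm:

```python
def scc_labels(offsets, indices):
    """Label SCCs of a directed graph in CSR form (Kosaraju, iterative)."""
    n = len(offsets) - 1
    # Pass 1: record vertices in order of DFS finish time.
    order, seen = [], [False] * n
    for s in range(n):
        if seen[s]:
            continue
        seen[s] = True
        stack = [(s, iter(indices[offsets[s]:offsets[s + 1]]))]
        while stack:
            v, it = stack[-1]
            advanced = False
            for w in it:
                if not seen[w]:
                    seen[w] = True
                    stack.append((w, iter(indices[offsets[w]:offsets[w + 1]])))
                    advanced = True
                    break
            if not advanced:  # all neighbors of v explored
                order.append(v)
                stack.pop()
    # Pass 2: DFS on the reversed graph in decreasing finish-time order;
    # each tree found is one strongly connected component.
    radj = [[] for _ in range(n)]
    for v in range(n):
        for w in indices[offsets[v]:offsets[v + 1]]:
            radj[w].append(v)
    labels = [-1] * n
    for root in reversed(order):
        if labels[root] != -1:
            continue
        labels[root] = root
        stack = [root]
        while stack:
            v = stack.pop()
            for w in radj[v]:
                if labels[w] == -1:
                    labels[w] = root
                    stack.append(w)
    return labels

# Cycle 0 -> 1 -> 2 -> 0, plus a tail edge 2 -> 3:
labels = scc_labels([0, 1, 2, 4, 4], [1, 2, 0, 3])
```

Here vertices 0, 1, and 2 share one label and vertex 3 gets its own, matching the `df['labels']` column the wrapper returns (modulo the arbitrary choice of label values).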
rapidsai_public_repos/cugraph/python/cugraph/cugraph | rapidsai_public_repos/cugraph/python/cugraph/cugraph/components/connectivity.pxd | # Copyright (c) 2019-2022, NVIDIA CORPORATION.
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# cython: profile=False
# distutils: language = c++
# cython: embedsignature = True
# cython: language_level = 3
from cugraph.structure.graph_primtypes cimport *
from cugraph.structure.graph_utilities cimport *
cdef extern from "cugraph/algorithms.hpp" namespace "cugraph":
ctypedef enum cugraph_cc_t:
CUGRAPH_STRONG "cugraph::cugraph_cc_t::CUGRAPH_STRONG"
cdef void connected_components[VT,ET,WT](
const GraphCSRView[VT,ET,WT] &graph,
cugraph_cc_t connect_type,
VT *labels) except +
| 0 |
rapidsai_public_repos/cugraph/python/cugraph/cugraph | rapidsai_public_repos/cugraph/python/cugraph/cugraph/components/connectivity.py | # Copyright (c) 2019-2023, NVIDIA CORPORATION.
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from cugraph.utilities import (
df_score_to_dictionary,
ensure_cugraph_obj,
is_matrix_type,
is_cp_matrix_type,
is_nx_graph_type,
cupy_package as cp,
)
from cugraph.structure import Graph
from cugraph.components import connectivity_wrapper
import cudf
from pylibcugraph import weakly_connected_components as pylibcugraph_wcc
from pylibcugraph import ResourceHandle
def _ensure_args(api_name, G, directed, connection, return_labels):
"""
Ensures the args passed in are usable for the API api_name and returns the
args with proper defaults if not specified, or raises TypeError or
ValueError if incorrectly specified.
"""
G_type = type(G)
# Check for Graph-type inputs and set defaults if unset
if (G_type in [Graph]) or is_nx_graph_type(G_type):
exc_value = "'%s' cannot be specified for a Graph-type input"
if directed is not None:
raise TypeError(exc_value % "directed")
if return_labels is not None:
raise TypeError(exc_value % "return_labels")
directed = True
return_labels = True
# Check for non-Graph-type inputs and set defaults if unset
else:
directed = True if (directed is None) else directed
return_labels = True if (return_labels is None) else return_labels
# Handle connection type, based on API being called
if api_name == "strongly_connected_components":
if (connection is not None) and (connection != "strong"):
raise TypeError("'connection' must be 'strong' for " f"{api_name}()")
connection = "strong"
elif api_name == "weakly_connected_components":
if (connection is not None) and (connection != "weak"):
raise TypeError("'connection' must be 'weak' for " f"{api_name}()")
connection = "weak"
else:
raise RuntimeError("invalid API name specified (internal): " f"{api_name}")
return (directed, connection, return_labels)
def _convert_df_to_output_type(df, input_type, return_labels):
"""
Given a cudf.DataFrame df, convert it to a new type appropriate for the
graph algos in this module, based on input_type.
return_labels is only used for return values from cupy/scipy input types.
"""
if input_type in [Graph]:
return df
elif is_nx_graph_type(input_type):
return df_score_to_dictionary(df, "labels", "vertex")
elif is_matrix_type(input_type):
# Convert DF of 2 columns (labels, vertices) to the SciPy-style return
# value:
# n_components: int
# The number of connected components (number of unique labels).
# labels: ndarray
# The length-N array of labels of the connected components.
n_components = df["labels"].nunique()
sorted_df = df.sort_values("vertex")
if return_labels:
if is_cp_matrix_type(input_type):
labels = cp.from_dlpack(sorted_df["labels"].to_dlpack())
else:
labels = sorted_df["labels"].to_numpy()
return (n_components, labels)
else:
return n_components
else:
raise TypeError(f"input type {input_type} is not a supported type.")
def weakly_connected_components(G, directed=None, connection=None, return_labels=None):
"""
Generate the Weakly Connected Components and attach a component label to
each vertex.
Parameters
----------
G : cugraph.Graph, networkx.Graph, CuPy or SciPy sparse matrix
Graph or matrix object, which should contain the connectivity
information (edge weights are not used for this algorithm). If using a
graph object, the graph must be undirected where an
undirected edge is represented by a directed edge in both directions.
The adjacency list will be computed if not already present. The number
of vertices should fit into a 32b int.
directed : bool, optional (default=None)
NOTE
For non-Graph-type (eg. sparse matrix) values of G only.
Raises TypeError if used with a Graph object.
If True, then convert the input matrix to a Graph(directed=True)
and only move from point i to point j along paths csgraph[i, j]. If
False, then find the shortest path on an undirected graph: the
algorithm can progress from point i to j along csgraph[i, j] or
csgraph[j, i].
connection : str, optional (default=None)
Added for SciPy compatibility, can only be specified for non-Graph-type
(eg. sparse matrix) values of G only (raises TypeError if used with a
Graph object), and can only be set to "weak" for this API.
return_labels : bool, optional (default=True)
NOTE
For non-Graph-type (eg. sparse matrix) values of G only. Raises
TypeError if used with a Graph object.
If True, then return the labels for each of the connected
components.
Returns
-------
Return value type is based on the input type. If G is a cugraph.Graph,
returns:
cudf.DataFrame
GPU data frame containing two cudf.Series of size V: the vertex
identifiers and the corresponding component identifier.
df['vertex']
Contains the vertex identifier
df['labels']
The component identifier
If G is a networkx.Graph, returns:
python dictionary, where keys are vertices and values are the component
identifiers.
If G is a CuPy or SciPy matrix, returns:
CuPy ndarray (if CuPy matrix input) or Numpy ndarray (if SciPy matrix
input) of shape (<num vertices>, 2), where column 0 contains component
identifiers and column 1 contains vertices.
Examples
--------
>>> from cugraph.datasets import karate
>>> G = karate.get_graph(download=True)
>>> df = cugraph.weakly_connected_components(G)
"""
(directed, connection, return_labels) = _ensure_args(
"weakly_connected_components", G, directed, connection, return_labels
)
# FIXME: allow nx_weight_attr to be specified
(G, input_type) = ensure_cugraph_obj(
G, nx_weight_attr="weight", matrix_graph_type=Graph(directed=directed)
)
if G.is_directed():
raise ValueError("input graph must be undirected")
vertex, labels = pylibcugraph_wcc(
resource_handle=ResourceHandle(),
graph=G._plc_graph,
offsets=None,
indices=None,
weights=None,
labels=None,
do_expensive_check=False,
)
df = cudf.DataFrame()
df["vertex"] = vertex
df["labels"] = labels
if G.renumbered:
df = G.unrenumber(df, "vertex")
return _convert_df_to_output_type(df, input_type, return_labels)
def strongly_connected_components(
G, directed=None, connection=None, return_labels=None
):
"""
Generate the Strongly Connected Components and attach a component label to
each vertex.
Parameters
----------
G : cugraph.Graph, networkx.Graph, CuPy or SciPy sparse matrix
Graph or matrix object, which should contain the connectivity
information (edge weights are not used for this algorithm). If using a
graph object, the graph can be either directed or undirected where an
undirected edge is represented by a directed edge in both directions.
The adjacency list will be computed if not already present. The number
of vertices should fit into a 32b int.
directed : bool, optional (default=True)
NOTE
For non-Graph-type (eg. sparse matrix) values of G only.
Raises TypeError if used with a Graph object.
If True, then convert the input matrix to a Graph(directed=True)
and only move from point i to point j along paths csgraph[i, j]. If
False, then find the shortest path on an undirected graph: the
algorithm can progress from point i to j along csgraph[i, j] or
csgraph[j, i].
connection : str, optional (default=None)
Added for SciPy compatibility, can only be specified for non-Graph-type
(eg. sparse matrix) values of G only (raises TypeError if used with a
Graph object), and can only be set to "strong" for this API.
return_labels : bool, optional (default=True)
NOTE
For non-Graph-type (eg. sparse matrix) values of G only. Raises
TypeError if used with a Graph object.
If True, then return the labels for each of the connected
components.
Returns
-------
Return value type is based on the input type. If G is a cugraph.Graph,
returns:
cudf.DataFrame
GPU data frame containing two cudf.Series of size V: the vertex
identifiers and the corresponding component identifier.
df['vertex']
Contains the vertex identifier
df['labels']
The component identifier
If G is a networkx.Graph, returns:
python dictionary, where keys are vertices and values are the component
identifiers.
If G is a CuPy or SciPy matrix, returns:
CuPy ndarray (if CuPy matrix input) or Numpy ndarray (if SciPy matrix
input) of shape (<num vertices>, 2), where column 0 contains component
identifiers and column 1 contains vertices.
Examples
--------
>>> from cugraph.datasets import karate
>>> G = karate.get_graph(download=True)
>>> df = cugraph.strongly_connected_components(G)
"""
(directed, connection, return_labels) = _ensure_args(
"strongly_connected_components", G, directed, connection, return_labels
)
# FIXME: allow nx_weight_attr to be specified
(G, input_type) = ensure_cugraph_obj(
G, nx_weight_attr="weight", matrix_graph_type=Graph(directed=directed)
)
# Renumber the vertices so that they are contiguous (required)
# FIXME: Remove 'renumbering' once the algo leverage the CAPI graph
if not G.renumbered:
edgelist = G.edgelist.edgelist_df
renumbered_edgelist_df, renumber_map = G.renumber_map.renumber(
edgelist, ["src"], ["dst"]
)
renumbered_src_col_name = renumber_map.renumbered_src_col_name
renumbered_dst_col_name = renumber_map.renumbered_dst_col_name
G.edgelist.edgelist_df = renumbered_edgelist_df.rename(
columns={renumbered_src_col_name: "src", renumbered_dst_col_name: "dst"}
)
G.properties.renumbered = True
G.renumber_map = renumber_map
df = connectivity_wrapper.strongly_connected_components(G)
if G.renumbered:
df = G.unrenumber(df, "vertex")
return _convert_df_to_output_type(df, input_type, return_labels)
def connected_components(G, directed=None, connection="weak", return_labels=None):
"""
Generate either the strongly or weakly connected components and attach a
component label to each vertex.
Parameters
----------
G : cugraph.Graph, networkx.Graph, CuPy or SciPy sparse matrix
Graph or matrix object, which should contain the connectivity
information (edge weights are not used for this algorithm). If using a
graph object, the graph can be either directed or undirected where an
undirected edge is represented by a directed edge in both directions.
The adjacency list will be computed if not already present. The number
of vertices should fit into a 32b int.
directed : bool, optional (default=True)
NOTE
For non-Graph-type (eg. sparse matrix) values of G only. Raises
TypeError if used with a Graph object.
If True, then convert the input matrix to a Graph(directed=True)
and only move from point i to point j along paths csgraph[i, j]. If
False, then find the shortest path on an undirected graph: the
algorithm can progress from point i to j along csgraph[i, j] or
csgraph[j, i].
connection : str, optional (default='weak')
NOTE
For Graph-type values of G, weak components are only
supported for undirected graphs.
['weak'|'strong']. Return either weakly or strongly connected
components.
return_labels : bool, optional (default=True)
NOTE
For non-Graph-type (eg. sparse matrix) values of G only. Raises
TypeError if used with a Graph object.
If True, then return the labels for each of the connected
components.
Returns
-------
Return value type is based on the input type. If G is a cugraph.Graph,
returns:
cudf.DataFrame
GPU data frame containing two cudf.Series of size V: the vertex
identifiers and the corresponding component identifier.
df['vertex']
Contains the vertex identifier
df['labels']
The component identifier
If G is a networkx.Graph, returns:
python dictionary, where keys are vertices and values are the component
identifiers.
If G is a CuPy or SciPy matrix, returns:
CuPy ndarray (if CuPy matrix input) or Numpy ndarray (if SciPy matrix
input) of shape (<num vertices>, 2), where column 0 contains component
identifiers and column 1 contains vertices.
Examples
--------
>>> from cugraph.datasets import karate
>>> G = karate.get_graph(download=True)
>>> df = cugraph.connected_components(G, connection="weak")
"""
if connection == "weak":
return weakly_connected_components(G, directed, connection, return_labels)
elif connection == "strong":
return strongly_connected_components(G, directed, connection, return_labels)
else:
raise ValueError(
f"invalid connection type: {connection}, "
"must be either 'strong' or 'weak'"
)
| 0 |
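For SciPy-style inputs, `_convert_df_to_output_type` above returns `(n_components, labels)`. A minimal CPU analogue of the weak-connectivity computation — a union-find sketch for illustration, not the pylibcugraph implementation — produces the same return shape:

```python
def weak_cc(num_vertices, edges):
    """Weakly connected components over an edge list via union-find."""
    parent = list(range(num_vertices))

    def find(v):
        # Find the root of v with path halving.
        while parent[v] != v:
            parent[v] = parent[parent[v]]
            v = parent[v]
        return v

    # Edge direction is ignored: (u, v) merges both endpoints' components.
    for u, v in edges:
        ru, rv = find(u), find(v)
        if ru != rv:
            parent[ru] = rv

    labels = [find(v) for v in range(num_vertices)]
    return len(set(labels)), labels

n_components, labels = weak_cc(5, [(0, 1), (1, 2), (3, 4)])
print(n_components)  # → 2
```

The pair mirrors the documented matrix-input return: an integer component count, plus a per-vertex label array when `return_labels` is requested.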
rapidsai_public_repos/cugraph/python/cugraph/cugraph | rapidsai_public_repos/cugraph/python/cugraph/cugraph/components/__init__.py | # Copyright (c) 2019-2021, NVIDIA CORPORATION.
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from cugraph.components.connectivity import connected_components
from cugraph.components.connectivity import weakly_connected_components
from cugraph.components.connectivity import strongly_connected_components
| 0 |
rapidsai_public_repos/cugraph/python/cugraph/cugraph | rapidsai_public_repos/cugraph/python/cugraph/cugraph/testing/mg_utils.py | # Copyright (c) 2022-2023, NVIDIA CORPORATION.
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import os
import tempfile
from pprint import pformat
import time
from dask.distributed import wait, default_client
from dask import persist
from dask.distributed import Client
from dask.base import is_dask_collection
from dask_cuda import LocalCUDACluster
from dask_cuda.initialize import initialize
from cugraph.dask.comms import comms as Comms
from cugraph.dask.common.mg_utils import get_visible_devices
from cugraph.generators import rmat
import numpy as np
def start_dask_client(
protocol=None,
rmm_async=False,
rmm_pool_size=None,
dask_worker_devices=None,
jit_unspill=False,
device_memory_limit=0.8,
):
"""
Creates a new dask client, and possibly also a cluster, and returns them as
a tuple (client, cluster).
If the env var SCHEDULER_FILE is set, it is assumed to contain the path to
a JSON file generated by a running dask scheduler that can be used to
configure the new dask client (the new client object returned will be a
client to that scheduler), and the value of cluster will be None. If
SCHEDULER_FILE is not set, a new LocalCUDACluster will be created and
returned as the value of cluster.
If the env var DASK_WORKER_DEVICES is set, it will be assumed to be a list
of comma-separated GPU devices (ex. "0,1,2" for those 3 devices) for the
LocalCUDACluster to use when setting up individual workers (1 worker per
device). If not set, the parameter dask_worker_devices will be used the
same way instead. If neither are set, the new LocalCUDACluster instance
will default to one worker per device visible to this process.
If the env var DASK_LOCAL_DIRECTORY is set, it will be used as the
"local_directory" arg to LocalCUDACluster, for all temp files generated.
Upon successful creation of a client (either to a LocalCUDACluster or
otherwise), the cugraph.dask.comms.comms singleton is initialized using
"p2p=True".
Parameters
----------
protocol : str or None, default None
The "protocol" arg to LocalCUDACluster (ex. "tcp"), see docs for
dask_cuda.LocalCUDACluster for details. This parameter is ignored if
the env var SCHEDULER_FILE is set which implies the dask cluster has
already been created.
rmm_pool_size : int, str or None, default None
The "rmm_pool_size" arg to LocalCUDACluster (ex. "20GB"), see docs for
dask_cuda.LocalCUDACluster for details. This parameter is ignored if
the env var SCHEDULER_FILE is set which implies the dask cluster has
already been created.
dask_worker_devices : str, list of int, or None, default None
GPUs to restrict activity to. Can be a string (like ``"0,1,2,3"``),
list (like ``[0, 1, 2, 3]``), or ``None`` to use all available GPUs.
This parameter is overridden by the value of env var
DASK_WORKER_DEVICES. This parameter is ignored if the env var
SCHEDULER_FILE is set which implies the dask cluster has already been
created.
jit_unspill : bool or None, default None
The "jit_unspill" arg to LocalCUDACluster to enable just-in-time
spilling, see docs for dask_cuda.LocalCUDACluster for details. This
parameter is ignored if the env var SCHEDULER_FILE is set which implies
the dask cluster has already been created.
device_memory_limit : int, float, str, or None, default 0.8
The "device_memory_limit" arg to LocalCUDACluster to determine when
workers start spilling to host memory, see docs for
dask_cuda.LocalCUDACluster for details. This parameter is ignored if
the env var SCHEDULER_FILE is set which implies the dask cluster has
already been created.
"""
dask_scheduler_file = os.environ.get("SCHEDULER_FILE")
dask_local_directory = os.getenv("DASK_LOCAL_DIRECTORY")
# Allow the DASK_WORKER_DEVICES env var to override a value passed in. If
# neither are set, this will be None.
dask_worker_devices = os.getenv("DASK_WORKER_DEVICES", dask_worker_devices)
cluster = None
client = None
tempdir_object = None
if dask_scheduler_file:
if protocol is not None:
print(
f"WARNING: {protocol=} is ignored in start_dask_client() when using "
"dask SCHEDULER_FILE"
)
if rmm_pool_size is not None:
print(
f"WARNING: {rmm_pool_size=} is ignored in start_dask_client() when "
"using dask SCHEDULER_FILE"
)
if dask_worker_devices is not None:
print(
f"WARNING: {dask_worker_devices=} is ignored in start_dask_client() "
"when using dask SCHEDULER_FILE"
)
initialize()
client = Client(scheduler_file=dask_scheduler_file)
# FIXME: use proper logging, INFO or DEBUG level
        print(f"\nDask client created using {dask_scheduler_file}")
else:
if dask_local_directory is None:
# The tempdir created by tempdir_object should be cleaned up once
# tempdir_object is deleted.
tempdir_object = tempfile.TemporaryDirectory()
local_directory = tempdir_object.name
else:
local_directory = dask_local_directory
cluster = LocalCUDACluster(
local_directory=local_directory,
protocol=protocol,
rmm_pool_size=rmm_pool_size,
rmm_async=rmm_async,
CUDA_VISIBLE_DEVICES=dask_worker_devices,
jit_unspill=jit_unspill,
device_memory_limit=device_memory_limit,
)
client = Client(cluster)
if dask_worker_devices is None:
num_workers = len(get_visible_devices())
else:
if isinstance(dask_worker_devices, list):
num_workers = len(dask_worker_devices)
else:
# FIXME: this assumes a properly formatted string with commas
num_workers = len(dask_worker_devices.split(","))
client.wait_for_workers(num_workers)
# Add a reference to tempdir_object to the client to prevent it from
# being deleted when this function returns. This will be deleted in
# stop_dask_client()
client.tempdir_object = tempdir_object
# FIXME: use proper logging, INFO or DEBUG level
print("\nDask client/cluster created using LocalCUDACluster")
Comms.initialize(p2p=True)
return (client, cluster)
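For illustration, the worker-count fallback used above can be exercised standalone in plain Python (a sketch; ``count_workers`` and ``visible_devices`` are hypothetical names, not part of the cugraph API):

```python
def count_workers(devices, visible_devices=()):
    """Number of dask workers implied by a device spec.

    devices may be None (one worker per device visible to this process),
    a list of device ids, or a comma-separated string such as "0,1,2".
    """
    if devices is None:
        # fall back to one worker per visible device
        return len(visible_devices)
    if isinstance(devices, list):
        return len(devices)
    # assumes a properly formatted comma-separated string, e.g. "0,1,2"
    return len(str(devices).split(","))

assert count_workers("0,1,2") == 3
assert count_workers([0, 1]) == 2
assert count_workers(None, visible_devices=("0",)) == 1
```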
def stop_dask_client(client, cluster=None):
"""
Shutdown/cleanup a client and possibly cluster object returned from
start_dask_client(). This also stops the cugraph.dask.comms.comms
singleton.
"""
Comms.destroy()
client.close()
if cluster:
cluster.close()
# Remove a TemporaryDirectory object that may have been assigned to the
# client, which should remove it and all the contents from disk.
if hasattr(client, "tempdir_object"):
del client.tempdir_object
# FIXME: use proper logging, INFO or DEBUG level
print("\nDask client closed.")
def restart_client(client):
"""
Restart the Dask client
"""
Comms.destroy()
client.restart()
    client.run(enable_spilling)  # re-enable spilling on all workers after restart
Comms.initialize(p2p=True)
def enable_spilling():
import cudf
cudf.set_option("spill", True)
def generate_edgelist_rmat(
scale,
edgefactor,
seed=None,
unweighted=False,
mg=True,
):
"""
Returns a cudf/dask_cudf DataFrame created using the R-MAT graph generator.
    The resulting graph is weighted with random values drawn from a uniform
    distribution over the interval [0, 1).
Args:
scale:
scale is used to determine the number of vertices to be generated (num_verts
= 2^scale), which is also used to determine the data type for the vertex ID
values in the DataFrame.
edgefactor:
        edgefactor determines the number of edges (num_edges = num_verts * edgefactor)
seed:
seed, if specified, will be used as the seed to the RNG.
unweighted:
        unweighted determines if the resulting edgelist will include a column of
        randomly-generated weights with values in [0, 1). If True, the weight
        column is omitted and an edgelist with only 2 columns is returned.
mg:
mg determines if the resulting edgelist will be a multi-GPU edgelist.
If True, returns a dask_cudf.DataFrame and
if False, returns a cudf.DataFrame.
"""
ddf = rmat(
scale,
(2**scale) * edgefactor,
0.57, # from Graph500
0.19, # from Graph500
0.19, # from Graph500
seed or 42,
clip_and_flip=False,
scramble_vertex_ids=True,
create_using=None, # return edgelist instead of Graph instance
mg=mg,
)
    if not unweighted:
        rng = np.random.default_rng(seed)
        if mg:
            ddf["weight"] = ddf.map_partitions(lambda df: rng.random(size=len(df)))
        else:
            # a cudf.DataFrame has no map_partitions; assign directly
            ddf["weight"] = rng.random(size=len(ddf))
return ddf
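The size relationships described in the docstring can be sketched in plain Python (``rmat_sizes`` is a hypothetical helper for illustration, not part of the cugraph API):

```python
def rmat_sizes(scale, edgefactor):
    # num_verts = 2**scale and num_edges = num_verts * edgefactor,
    # matching the (2**scale) * edgefactor argument passed to rmat above
    num_verts = 2 ** scale
    num_edges = num_verts * edgefactor
    return num_verts, num_edges

# e.g. a small scale-4 graph with edgefactor 16
assert rmat_sizes(4, 16) == (16, 256)
```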
def set_statistics_adaptor():
"""
Sets the current device resource to a StatisticsResourceAdaptor
"""
import rmm
rmm.mr.set_current_device_resource(
rmm.mr.StatisticsResourceAdaptor(rmm.mr.get_current_device_resource())
)
def _get_allocation_counts():
    """
    Returns the allocation counts from the current device resource,
    walking the chain of upstream resources if necessary. Returns -1 if
    no resource in the chain tracks allocation counts.
    """
    import rmm
    mr = rmm.mr.get_current_device_resource()
    while not hasattr(mr, "allocation_counts"):
        if hasattr(mr, "upstream_mr"):
            mr = mr.upstream_mr
        else:
            return -1
    return mr.allocation_counts
def persist_dask_object(arg):
"""
Persist if it is a dask object
"""
if is_dask_collection(arg) or hasattr(arg, "persist"):
arg = persist(arg)
wait(arg)
arg = arg[0]
return arg
# Function to convert bytes into human readable format
def sizeof_fmt(num, suffix="B"):
if isinstance(num, str):
if num[-2:] == "GB":
return num[:-2] + "G"
elif num[-2:] == "MB":
return num[:-2] + "M"
elif num[-2:] == "KB":
return num[:-2] + "K"
else:
raise ValueError("unknown unit")
for unit in ["", "K", "M", "G", "T", "P", "E", "Z"]:
if abs(num) < 1024.0:
return "%3.1f%s%s" % (num, unit, suffix)
num /= 1024.0
    return "%.1f%s%s" % (num, "Y", suffix)
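The numeric conversion path of ``sizeof_fmt`` behaves as sketched below (a self-contained copy for illustration; ``human_bytes`` is a hypothetical name):

```python
def human_bytes(num, suffix="B"):
    # 1024-based conversion mirroring the numeric path of sizeof_fmt above
    for unit in ["", "K", "M", "G", "T", "P", "E", "Z"]:
        if abs(num) < 1024.0:
            return "%3.1f%s%s" % (num, unit, suffix)
        num /= 1024.0
    return "%.1f%s%s" % (num, "Y", suffix)

assert human_bytes(1536) == "1.5KB"
assert human_bytes(20 * 1024 ** 3) == "20.0GB"
```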
def _parse_allocation_counts(allocation_counts):
"""
Parses the allocation counts from the current device resource
into human readable format
"""
return {k: sizeof_fmt(v) for k, v in allocation_counts.items() if "bytes" in k}
# Decorator to set the statistics adaptor
# and calls the allocation_counts function
def get_allocation_counts_dask_lazy(return_allocations=False, logging=True):
def decorator(func):
def wrapper(*args, **kwargs):
client = default_client()
client.run(set_statistics_adaptor)
st = time.time()
return_val = func(*args, **kwargs)
et = time.time()
allocation_counts = client.run(_get_allocation_counts)
if logging:
_print_allocation_statistics(
func, args, kwargs, et - st, allocation_counts
)
client.run(set_statistics_adaptor)
if return_allocations:
return return_val, allocation_counts
else:
return return_val
return wrapper
return decorator
def get_allocation_counts_dask_persist(return_allocations=False, logging=True):
def decorator(func):
def wrapper(*args, **kwargs):
args = [persist_dask_object(a) for a in args]
kwargs = {k: persist_dask_object(v) for k, v in kwargs.items()}
client = default_client()
client.run(set_statistics_adaptor)
st = time.time()
return_val = func(*args, **kwargs)
return_val = persist_dask_object(return_val)
if isinstance(return_val, (list, tuple)):
return_val = [persist_dask_object(d) for d in return_val]
et = time.time()
allocation_counts = client.run(_get_allocation_counts)
if logging:
_print_allocation_statistics(
func, args, kwargs, et - st, allocation_counts
)
client.run(set_statistics_adaptor)
if return_allocations:
return return_val, allocation_counts
else:
return return_val
return wrapper
return decorator
def _get_allocation_stats_string(func, args, kwargs, execution_time, allocation_counts):
allocation_counts_parsed = {
worker_id: _parse_allocation_counts(worker_allocations)
for worker_id, worker_allocations in allocation_counts.items()
}
return (
f"function: {func.__name__}\n"
+ f"function args: {args} kwargs: {kwargs}\n"
+ f"execution_time: {execution_time}\n"
+ "allocation_counts:\n"
+ f"{pformat(allocation_counts_parsed, indent=4, width=1, compact=True)}"
)
def _print_allocation_statistics(func, args, kwargs, execution_time, allocation_counts):
print(
_get_allocation_stats_string(
func, args, kwargs, execution_time, allocation_counts
)
)
def get_peak_output_ratio_across_workers(allocation_counts):
peak_ratio = -1
    for w_allocations in allocation_counts.values():
        current_bytes = w_allocations["current_bytes"]
        if current_bytes == 0:
            # skip workers with no live allocations to avoid ZeroDivisionError
            continue
        w_peak_ratio = w_allocations["peak_bytes"] / current_bytes
        peak_ratio = max(w_peak_ratio, peak_ratio)
return peak_ratio
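This ratio computation is easy to verify with plain dictionaries (a self-contained sketch; ``peak_output_ratio`` is a hypothetical name):

```python
def peak_output_ratio(allocation_counts):
    # highest peak-to-current byte ratio across workers; workers with no
    # live allocations are skipped to avoid dividing by zero
    peak_ratio = -1
    for w in allocation_counts.values():
        if w["current_bytes"]:
            peak_ratio = max(w["peak_bytes"] / w["current_bytes"], peak_ratio)
    return peak_ratio

assert peak_output_ratio(
    {"w0": {"peak_bytes": 300, "current_bytes": 100},
     "w1": {"peak_bytes": 100, "current_bytes": 100}}
) == 3.0
```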
| 0 |
rapidsai_public_repos/cugraph/python/cugraph/cugraph | rapidsai_public_repos/cugraph/python/cugraph/cugraph/testing/resultset.py | # Copyright (c) 2023, NVIDIA CORPORATION.
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import tarfile
import urllib.request
import cudf
from cugraph.datasets.dataset import (
DefaultDownloadDir,
default_download_dir,
)
# results_dir_path = utils.RAPIDS_DATASET_ROOT_DIR_PATH / "tests" / "resultsets"
class Resultset:
"""
A Resultset Object, used to store golden results to easily run tests that
need to access said results without the overhead of running an algorithm
to get the results.
Parameters
----------
data_dictionary : dict
The existing algorithm output, expected as a dictionary
"""
def __init__(self, data_dictionary):
self._data_dictionary = data_dictionary
def get_cudf_dataframe(self):
"""
Converts the existing algorithm output from a dictionary to
a cudf.DataFrame before writing the DataFrame to output into a csv
"""
return cudf.DataFrame(self._data_dictionary)
_resultsets = {}
def get_resultset(resultset_name, **kwargs):
"""
Returns the golden results for a specific test.
Parameters
----------
resultset_name : String
Name of the test's module (currently just 'traversal' is supported)
kwargs :
All distinct test details regarding the choice of algorithm, dataset,
and graph
"""
arg_dict = dict(kwargs)
arg_dict["resultset_name"] = resultset_name
# Example:
# {'a': 1, 'z': 9, 'c': 5, 'b': 2} becomes 'a-1-b-2-c-5-z-9'
resultset_key = "-".join(
[
str(val)
for arg_dict_pair in sorted(arg_dict.items())
for val in arg_dict_pair
]
)
uuid = _resultsets.get(resultset_key)
if uuid is None:
raise KeyError(f"results for {arg_dict} not found")
results_dir_path = default_resultset_download_dir.path
results_filename = results_dir_path / (uuid + ".csv")
return cudf.read_csv(results_filename)
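The key-flattening scheme described in the comment above ({'a': 1, 'z': 9, 'c': 5, 'b': 2} becomes 'a-1-b-2-c-5-z-9') can be sketched standalone (``make_resultset_key`` is a hypothetical helper, not part of the cugraph API):

```python
def make_resultset_key(**kwargs):
    # flatten sorted (key, value) pairs into "k1-v1-k2-v2-..."
    return "-".join(
        str(val) for pair in sorted(kwargs.items()) for val in pair
    )

assert make_resultset_key(a=1, z=9, c=5, b=2) == "a-1-b-2-c-5-z-9"
```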
default_resultset_download_dir = DefaultDownloadDir(subdir="tests/resultsets")
def load_resultset(resultset_name, resultset_download_url):
"""
Read a mapping file (<resultset_name>.csv) in the _results_dir and save the
mappings between each unique set of args/identifiers to UUIDs to the
_resultsets dictionary. If <resultset_name>.csv does not exist in
_results_dir, use resultset_download_url to download a file to
install/unpack/etc. to _results_dir first.
"""
# curr_resultset_download_dir = get_resultset_download_dir()
curr_resultset_download_dir = default_resultset_download_dir.path
# curr_download_dir = path
curr_download_dir = default_download_dir.path
mapping_file_path = curr_resultset_download_dir / (resultset_name + "_mappings.csv")
if not mapping_file_path.exists():
# Downloads a tar gz from s3 bucket, then unpacks the results files
compressed_file_dir = curr_download_dir / "tests"
compressed_file_path = compressed_file_dir / "resultsets.tar.gz"
if not curr_resultset_download_dir.exists():
curr_resultset_download_dir.mkdir(parents=True, exist_ok=True)
if not compressed_file_path.exists():
urllib.request.urlretrieve(resultset_download_url, compressed_file_path)
        with tarfile.open(str(compressed_file_path), "r:gz") as tar:
            tar.extractall(str(curr_resultset_download_dir))
# FIXME: This assumes separator is " ", but should this be configurable?
sep = " "
with open(mapping_file_path) as mapping_file:
for line in mapping_file.readlines():
if line.startswith("#"):
continue
(uuid, *row_args) = line.split(sep)
if (len(row_args) % 2) != 0:
raise ValueError(
f'bad row in {mapping_file_path}: "{line}", must '
"contain UUID followed by an even number of items"
)
row_keys = row_args[::2]
row_vals = row_args[1::2]
row_keys = " ".join(row_keys).split()
row_vals = " ".join(row_vals).split()
arg_dict = dict(zip(row_keys, row_vals))
arg_dict["resultset_name"] = resultset_name
# Create a unique string key for the _resultsets dict based on
# sorted row_keys. Looking up results based on args will also have
            # to sort, but this will ensure results can be looked up without
# requiring maintaining a specific order. Example:
# {'a': 1, 'z': 9, 'c': 5, 'b': 2} becomes 'a-1-b-2-c-5-z-9'
resultset_key = "-".join(
[
str(val)
for arg_dict_pair in sorted(arg_dict.items())
for val in arg_dict_pair
]
)
_resultsets[resultset_key] = uuid
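The mapping-file row format parsed above ("<uuid> key1 val1 key2 val2 ...") can be exercised on its own (a sketch; ``parse_mapping_line`` is a hypothetical helper, not part of the cugraph API):

```python
def parse_mapping_line(line, sep=" "):
    # "<uuid> key1 val1 key2 val2 ..." -> (uuid, {key1: val1, ...})
    uuid, *row_args = line.split(sep)
    if len(row_args) % 2 != 0:
        raise ValueError(
            "line must contain a UUID followed by an even number of items"
        )
    # re-join and re-split to strip stray whitespace/newlines, as above
    row_keys = " ".join(row_args[::2]).split()
    row_vals = " ".join(row_args[1::2]).split()
    return uuid, dict(zip(row_keys, row_vals))

uuid, args = parse_mapping_line("123abc algo bfs dataset karate\n")
assert uuid == "123abc"
assert args == {"algo": "bfs", "dataset": "karate"}
```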
| 0 |
rapidsai_public_repos/cugraph/python/cugraph/cugraph | rapidsai_public_repos/cugraph/python/cugraph/cugraph/testing/generate_resultsets.py | # Copyright (c) 2023, NVIDIA CORPORATION.
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from tempfile import NamedTemporaryFile
import random
import numpy as np
import networkx as nx
import cudf
import cugraph
from cugraph.datasets import dolphins, netscience, karate_disjoint, karate
# from cugraph.testing import utils, Resultset, SMALL_DATASETS, results_dir_path
from cugraph.testing import (
utils,
Resultset,
SMALL_DATASETS,
default_resultset_download_dir,
)
_resultsets = {}
def add_resultset(result_data_dictionary, **kwargs):
rs = Resultset(result_data_dictionary)
hashable_dict_repr = tuple((k, kwargs[k]) for k in sorted(kwargs.keys()))
_resultsets[hashable_dict_repr] = rs
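The sorted-tuple construction above turns keyword arguments into a hashable dictionary key; a minimal sketch (``hashable_args`` is a hypothetical name, not part of the cugraph API):

```python
def hashable_args(**kwargs):
    # sorted (key, value) tuple usable as a dict key, as built by
    # add_resultset above
    return tuple((k, kwargs[k]) for k in sorted(kwargs.keys()))

assert hashable_args(b=2, a=1) == (("a", 1), ("b", 2))
# usable as a dictionary key regardless of keyword order
assert {hashable_args(x="1"): "result"}[hashable_args(x="1")] == "result"
```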
if __name__ == "__main__":
# =============================================================================
# Parameters
# =============================================================================
SEEDS = [42]
DIRECTED_GRAPH_OPTIONS = [True, False]
DEPTH_LIMITS = [None, 1, 5, 18]
DATASETS = [dolphins, netscience, karate_disjoint]
# =============================================================================
# tests/traversal/test_bfs.py
# =============================================================================
test_bfs_results = {}
for ds in DATASETS + [karate]:
for seed in SEEDS:
for depth_limit in DEPTH_LIMITS:
for dirctd in DIRECTED_GRAPH_OPTIONS:
# this is used for get_cu_graph_golden_results_and_params
Gnx = utils.generate_nx_graph_from_file(
ds.get_path(), directed=dirctd
)
random.seed(seed)
start_vertex = random.sample(list(Gnx.nodes()), 1)[0]
golden_values = nx.single_source_shortest_path_length(
Gnx, start_vertex, cutoff=depth_limit
)
vertices = cudf.Series(golden_values.keys())
distances = cudf.Series(golden_values.values())
add_resultset(
{"vertex": vertices, "distance": distances},
graph_dataset=ds.metadata["name"],
graph_directed=str(dirctd),
algo="single_source_shortest_path_length",
start_vertex=str(start_vertex),
cutoff=str(depth_limit),
)
# these are pandas dataframes
for dirctd in DIRECTED_GRAPH_OPTIONS:
Gnx = utils.generate_nx_graph_from_file(karate.get_path(), directed=dirctd)
golden_result = cugraph.bfs_edges(Gnx, source=7)
cugraph_df = cudf.from_pandas(golden_result)
add_resultset(
cugraph_df,
graph_dataset="karate",
graph_directed=str(dirctd),
algo="bfs_edges",
source="7",
)
# =============================================================================
# tests/traversal/test_sssp.py
# =============================================================================
test_sssp_results = {}
SOURCES = [1]
for ds in SMALL_DATASETS:
for source in SOURCES:
Gnx = utils.generate_nx_graph_from_file(ds.get_path(), directed=True)
golden_paths = nx.single_source_dijkstra_path_length(Gnx, source)
vertices = cudf.Series(golden_paths.keys())
distances = cudf.Series(golden_paths.values())
add_resultset(
{"vertex": vertices, "distance": distances},
graph_dataset=ds.metadata["name"],
graph_directed="True",
algo="single_source_dijkstra_path_length",
source=str(source),
)
M = utils.read_csv_for_nx(ds.get_path(), read_weights_in_sp=True)
edge_attr = "weight"
Gnx = nx.from_pandas_edgelist(
M,
source="0",
target="1",
edge_attr=edge_attr,
create_using=nx.DiGraph(),
)
M["weight"] = M["weight"].astype(np.int32)
Gnx = nx.from_pandas_edgelist(
M,
source="0",
target="1",
edge_attr="weight",
create_using=nx.DiGraph(),
)
golden_paths_datatypeconv = nx.single_source_dijkstra_path_length(
Gnx, source
)
vertices_datatypeconv = cudf.Series(golden_paths_datatypeconv.keys())
distances_datatypeconv = cudf.Series(golden_paths_datatypeconv.values())
add_resultset(
{"vertex": vertices_datatypeconv, "distance": distances_datatypeconv},
graph_dataset=ds.metadata["name"],
graph_directed="True",
algo="single_source_dijkstra_path_length",
test="data_type_conversion",
source=str(source),
)
for dirctd in DIRECTED_GRAPH_OPTIONS:
for source in SOURCES:
Gnx = utils.generate_nx_graph_from_file(
karate.get_path(), directed=dirctd, edgevals=True
)
add_resultset(
cugraph.sssp(Gnx, source),
graph_dataset="karate",
graph_directed=str(dirctd),
algo="sssp_nonnative",
source=str(source),
)
Gnx = nx.Graph()
Gnx.add_edge(0, 1, other=10)
Gnx.add_edge(1, 2, other=20)
df = cugraph.sssp(Gnx, 0, edge_attr="other")
add_resultset(df, algo="sssp_nonnative", test="network_edge_attr")
# =============================================================================
# tests/traversal/test_paths.py
# =============================================================================
CONNECTED_GRAPH = """1,5,3
1,4,1
1,2,1
1,6,2
1,7,2
4,5,1
2,3,1
7,6,2
"""
DISCONNECTED_GRAPH = CONNECTED_GRAPH + "8,9,4"
paths = [("1", "1"), ("1", "5"), ("1", "3"), ("1", "6")]
invalid_paths = {
"connected": [("-1", "1"), ("0", "42")],
"disconnected": [("1", "10"), ("1", "8")],
}
with NamedTemporaryFile(mode="w+", suffix=".csv") as graph_tf:
graph_tf.writelines(DISCONNECTED_GRAPH)
graph_tf.seek(0)
Gnx_DIS = nx.read_weighted_edgelist(graph_tf.name, delimiter=",")
res1 = nx.shortest_path_length(Gnx_DIS, source="1", weight="weight")
vertices = cudf.Series(res1.keys())
distances = cudf.Series(res1.values())
add_resultset(
{"vertex": vertices, "distance": distances},
algo="shortest_path_length",
graph_dataset="DISCONNECTED",
graph_directed="True",
source="1",
weight="weight",
)
# NOTE: Currently, only traversal result files are generated
random.seed(24)
traversal_mappings = cudf.DataFrame(
columns=[
"#UUID",
"arg0",
"arg0val",
"arg1",
"arg1val",
"arg2",
"arg2val",
"arg3",
"arg3val",
"arg4",
"arg4val",
"arg5",
"arg5val",
"arg6",
"arg6val",
"arg7",
"arg7val",
"arg8",
"arg8val",
"arg9",
"arg9val",
]
)
# Generating ALL results files
results_dir_path = default_resultset_download_dir.path
if not results_dir_path.exists():
results_dir_path.mkdir(parents=True, exist_ok=True)
for temp in _resultsets:
res = _resultsets[temp].get_cudf_dataframe()
temp_filename = str(random.getrandbits(50))
temp_dict = dict(temp)
        argnames = list(temp_dict.keys())
        argvals = list(temp_dict.values())
single_mapping = np.empty(21, dtype=object)
dict_length = len(argnames)
single_mapping[0] = temp_filename
for i in np.arange(dict_length):
single_mapping[2 * i + 1] = argnames[i]
single_mapping[2 * i + 2] = argvals[i]
temp_mapping = cudf.DataFrame(
[single_mapping],
columns=[
"#UUID",
"arg0",
"arg0val",
"arg1",
"arg1val",
"arg2",
"arg2val",
"arg3",
"arg3val",
"arg4",
"arg4val",
"arg5",
"arg5val",
"arg6",
"arg6val",
"arg7",
"arg7val",
"arg8",
"arg8val",
"arg9",
"arg9val",
],
)
traversal_mappings = cudf.concat(
[traversal_mappings, temp_mapping], axis=0, ignore_index=True
)
res.to_csv(results_dir_path / (temp_filename + ".csv"), index=False)
traversal_mappings.to_csv(
results_dir_path / "traversal_mappings.csv", index=False, sep=" "
)
| 0 |
rapidsai_public_repos/cugraph/python/cugraph/cugraph | rapidsai_public_repos/cugraph/python/cugraph/cugraph/testing/__init__.py | # Copyright (c) 2023, NVIDIA CORPORATION.
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from cugraph.testing.utils import (
RAPIDS_DATASET_ROOT_DIR_PATH,
RAPIDS_DATASET_ROOT_DIR,
)
from cugraph.testing.resultset import (
Resultset,
load_resultset,
get_resultset,
default_resultset_download_dir,
)
from cugraph.datasets import (
cyber,
dining_prefs,
dolphins,
karate,
karate_disjoint,
polbooks,
netscience,
small_line,
small_tree,
email_Eu_core,
toy_graph,
toy_graph_undirected,
soc_livejournal,
cit_patents,
europe_osm,
hollywood,
# twitter,
)
#
# Moved Dataset Batches
#
UNDIRECTED_DATASETS = [karate, dolphins]
SMALL_DATASETS = [karate, dolphins, polbooks]
WEIGHTED_DATASETS = [
dining_prefs,
dolphins,
karate,
karate_disjoint,
netscience,
polbooks,
small_line,
small_tree,
]
ALL_DATASETS = [
dining_prefs,
dolphins,
karate,
karate_disjoint,
polbooks,
netscience,
small_line,
small_tree,
email_Eu_core,
toy_graph,
toy_graph_undirected,
]
DEFAULT_DATASETS = [dolphins, netscience, karate_disjoint]
BENCHMARKING_DATASETS = [soc_livejournal, cit_patents, europe_osm, hollywood]
| 0 |
rapidsai_public_repos/cugraph/python/cugraph/cugraph | rapidsai_public_repos/cugraph/python/cugraph/cugraph/testing/utils.py | # Copyright (c) 2020-2023, NVIDIA CORPORATION.
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import os
# Assume test environment has the following dependencies installed
import pytest
import pandas as pd
import networkx as nx
import numpy as np
import cupy as cp
from cupyx.scipy.sparse import coo_matrix as cp_coo_matrix
from cupyx.scipy.sparse import csr_matrix as cp_csr_matrix
from cupyx.scipy.sparse import csc_matrix as cp_csc_matrix
from scipy.sparse import coo_matrix as sp_coo_matrix
from scipy.sparse import csr_matrix as sp_csr_matrix
from scipy.sparse import csc_matrix as sp_csc_matrix
from pathlib import Path
import cudf
import dask_cudf
import cugraph
from cugraph.dask.common.mg_utils import get_client
CUPY_MATRIX_TYPES = [cp_coo_matrix, cp_csr_matrix, cp_csc_matrix]
SCIPY_MATRIX_TYPES = [sp_coo_matrix, sp_csr_matrix, sp_csc_matrix]
RAPIDS_DATASET_ROOT_DIR = os.getenv(
"RAPIDS_DATASET_ROOT_DIR", os.path.join(os.path.dirname(__file__), "../datasets")
)
RAPIDS_DATASET_ROOT_DIR_PATH = Path(RAPIDS_DATASET_ROOT_DIR)
#
# Datasets
#
DATASETS_UNDIRECTED = [
RAPIDS_DATASET_ROOT_DIR_PATH / f for f in ["karate.csv", "dolphins.csv"]
]
DATASETS_UNDIRECTED_WEIGHTS = [RAPIDS_DATASET_ROOT_DIR_PATH / "netscience.csv"]
DATASETS_UNRENUMBERED = [Path(RAPIDS_DATASET_ROOT_DIR) / "karate-disjoint.csv"]
DATASETS = [
RAPIDS_DATASET_ROOT_DIR_PATH / f
for f in ["karate-disjoint.csv", "dolphins.csv", "netscience.csv"]
]
DATASETS_MULTI_EDGES = [
RAPIDS_DATASET_ROOT_DIR_PATH / f
for f in ["karate_multi_edge.csv", "dolphins_multi_edge.csv"]
]
DATASETS_STR_ISLT_V = [
RAPIDS_DATASET_ROOT_DIR_PATH / f for f in ["karate_mod.mtx", "karate_str.mtx"]
]
DATASETS_SELF_LOOPS = [
RAPIDS_DATASET_ROOT_DIR_PATH / f
for f in ["karate_s_loop.csv", "dolphins_s_loop.csv"]
]
# '../datasets/email-Eu-core.csv']
STRONGDATASETS = [
RAPIDS_DATASET_ROOT_DIR_PATH / f
for f in ["dolphins.csv", "netscience.csv", "email-Eu-core.csv"]
]
DATASETS_KTRUSS = [
(
RAPIDS_DATASET_ROOT_DIR_PATH / "polbooks.csv",
RAPIDS_DATASET_ROOT_DIR_PATH / "ref/ktruss/polbooks.csv",
)
]
DATASETS_TSPLIB = [
(RAPIDS_DATASET_ROOT_DIR_PATH / f,) + (d,)
for (f, d) in [
("gil262.tsp", 2378),
("eil51.tsp", 426),
("kroA100.tsp", 21282),
("tsp225.tsp", 3916),
]
]
DATASETS_SMALL = [
RAPIDS_DATASET_ROOT_DIR_PATH / f
for f in ["karate.csv", "dolphins.csv", "polbooks.csv"]
]
MATRIX_INPUT_TYPES = [
pytest.param(cp_coo_matrix, marks=pytest.mark.matrix_types, id="CuPy.coo_matrix"),
pytest.param(cp_csr_matrix, marks=pytest.mark.matrix_types, id="CuPy.csr_matrix"),
pytest.param(cp_csc_matrix, marks=pytest.mark.matrix_types, id="CuPy.csc_matrix"),
]
NX_INPUT_TYPES = [
pytest.param(nx.Graph, marks=pytest.mark.nx_types, id="nx.Graph"),
]
NX_DIR_INPUT_TYPES = [
    pytest.param(nx.DiGraph, marks=pytest.mark.nx_types, id="nx.DiGraph"),
]
CUGRAPH_INPUT_TYPES = [
pytest.param(cugraph.Graph(), marks=pytest.mark.cugraph_types, id="cugraph.Graph"),
]
CUGRAPH_DIR_INPUT_TYPES = [
pytest.param(
cugraph.Graph(directed=True),
marks=pytest.mark.cugraph_types,
id="cugraph.Graph(directed=True)",
),
]
def read_csv_for_nx(csv_file, read_weights_in_sp=True, read_weights=True):
if read_weights:
if read_weights_in_sp is True:
df = pd.read_csv(
csv_file,
delimiter=" ",
header=None,
names=["0", "1", "weight"],
dtype={"0": "int32", "1": "int32", "weight": "float32"},
)
else:
df = pd.read_csv(
csv_file,
delimiter=" ",
header=None,
names=["0", "1", "weight"],
dtype={"0": "int32", "1": "int32", "weight": "float64"},
)
else:
df = pd.read_csv(
csv_file,
delimiter=" ",
header=None,
names=["0", "1"],
usecols=["0", "1"],
dtype={"0": "int32", "1": "int32"},
)
return df
def create_obj_from_csv(
csv_file_name, obj_type, csv_has_weights=True, edgevals=False, directed=False
):
"""
Return an object based on obj_type populated with the contents of
csv_file_name
"""
if obj_type in [cugraph.Graph]:
return generate_cugraph_graph_from_file(
csv_file_name,
directed=directed,
edgevals=edgevals,
)
elif isinstance(obj_type, cugraph.Graph):
return generate_cugraph_graph_from_file(
csv_file_name,
directed=directed,
edgevals=edgevals,
)
elif obj_type in SCIPY_MATRIX_TYPES + CUPY_MATRIX_TYPES:
# FIXME: assuming float32
if csv_has_weights:
(rows, cols, weights) = np.genfromtxt(
csv_file_name, delimiter=" ", dtype=np.float32, unpack=True
)
else:
(rows, cols) = np.genfromtxt(
csv_file_name, delimiter=" ", dtype=np.float32, unpack=True
)
if (csv_has_weights is False) or (edgevals is False):
# COO matrices must have a value array. Also if edgevals are to be
# ignored (False), reset all weights to 1.
weights = np.array([1] * len(rows))
if obj_type in CUPY_MATRIX_TYPES:
coo = cp_coo_matrix(
(cp.asarray(weights), (cp.asarray(rows), cp.asarray(cols))),
dtype=np.float32,
)
else:
coo = sp_coo_matrix(
(weights, (np.array(rows, dtype=int), np.array(cols, dtype=int))),
)
if obj_type in [cp_csr_matrix, sp_csr_matrix]:
return coo.tocsr(copy=False)
elif obj_type in [cp_csc_matrix, sp_csc_matrix]:
return coo.tocsc(copy=False)
else:
return coo
elif obj_type in [nx.Graph, nx.DiGraph]:
return generate_nx_graph_from_file(
csv_file_name, directed=(obj_type is nx.DiGraph), edgevals=edgevals
)
else:
raise TypeError(f"unsupported type: {obj_type}")
def read_csv_file(csv_file, read_weights_in_sp=True):
if read_weights_in_sp is True:
return cudf.read_csv(
csv_file,
delimiter=" ",
dtype={"0": "int32", "1": "int32", "2": "float32"},
header=None,
)
else:
return cudf.read_csv(
csv_file,
delimiter=" ",
dtype={"0": "int32", "1": "int32", "2": "float64"},
header=None,
)
def read_dask_cudf_csv_file(csv_file, read_weights_in_sp=True, single_partition=True):
if read_weights_in_sp is True:
if single_partition:
chunksize = os.path.getsize(csv_file)
return dask_cudf.read_csv(
csv_file,
chunksize=chunksize,
delimiter=" ",
names=["src", "dst", "weight"],
dtype=["int32", "int32", "float32"],
header=None,
)
else:
return dask_cudf.read_csv(
csv_file,
delimiter=" ",
names=["src", "dst", "weight"],
dtype=["int32", "int32", "float32"],
header=None,
)
else:
if single_partition:
chunksize = os.path.getsize(csv_file)
return dask_cudf.read_csv(
csv_file,
chunksize=chunksize,
delimiter=" ",
names=["src", "dst", "weight"],
                dtype=["int32", "int32", "float64"],
header=None,
)
else:
return dask_cudf.read_csv(
csv_file,
delimiter=" ",
names=["src", "dst", "weight"],
dtype=["int32", "int32", "float64"],
header=None,
)
def generate_nx_graph_from_file(graph_file, directed=True, edgevals=False):
M = read_csv_for_nx(graph_file, read_weights_in_sp=edgevals)
edge_attr = "weight" if edgevals else None
Gnx = nx.from_pandas_edgelist(
M,
create_using=(nx.DiGraph() if directed else nx.Graph()),
source="0",
target="1",
edge_attr=edge_attr,
)
return Gnx
def generate_cugraph_graph_from_file(graph_file, directed=True, edgevals=False):
cu_M = read_csv_file(graph_file)
G = cugraph.Graph(directed=directed)
if edgevals:
G.from_cudf_edgelist(cu_M, source="0", destination="1", edge_attr="2")
else:
G.from_cudf_edgelist(cu_M, source="0", destination="1")
return G
def generate_mg_batch_cugraph_graph_from_file(graph_file, directed=True):
client = get_client()
_ddf = read_dask_cudf_csv_file(graph_file)
ddf = client.persist(_ddf)
G = cugraph.Graph(directed=directed)
G.from_dask_cudf_edgelist(ddf)
return G
def build_cu_and_nx_graphs(graph_file, directed=True, edgevals=False):
G = generate_cugraph_graph_from_file(
graph_file, directed=directed, edgevals=edgevals
)
Gnx = generate_nx_graph_from_file(graph_file, directed=directed, edgevals=edgevals)
return G, Gnx
def build_mg_batch_cu_and_nx_graphs(graph_file, directed=True):
G = generate_mg_batch_cugraph_graph_from_file(graph_file, directed=directed)
Gnx = generate_nx_graph_from_file(graph_file, directed=directed)
return G, Gnx
def random_edgelist(
e=1024,
ef=16,
dtypes={"src": np.int32, "dst": np.int32, "val": float},
drop_duplicates=True,
seed=None,
):
"""Create a random edge list
Parameters
----------
e : int
Number of edges
ef : int
Edge factor (average number of edges per vertex)
dtypes : dict
Mapping of column names to types.
Supported type is {"src": int, "dst": int, "val": float}
drop_duplicates
Drop duplicates
seed : int (optional)
Randomstate seed
Examples
--------
>>> from cugraph.testing import utils
    >>> # generates 20 DataFrames with 100M edges each and writes them to disk
>>> for x in range(20):
>>> df = utils.random_edgelist(e=100000000, ef=64,
>>> dtypes={'src':np.int32, 'dst':np.int32},
>>> seed=x)
>>> df.to_csv('df'+str(x), header=False, index=False)
>>> #df.to_parquet('files_parquet/df'+str(x), index=False)
"""
state = np.random.RandomState(seed)
columns = dict((k, make[dt](e // ef, e, state)) for k, dt in dtypes.items())
df = pd.DataFrame(columns)
if drop_duplicates:
df = df.drop_duplicates(subset=["src", "dst"])
print("Generated " + str(df.shape[0]) + " edges")
return df
def make_int32(v, e, rstate):
return rstate.randint(low=0, high=v, size=e, dtype=np.int32)
def make_int64(v, e, rstate):
return rstate.randint(low=0, high=v, size=e, dtype=np.int64)
def make_float(v, e, rstate):
return rstate.rand(e)
make = {float: make_float, np.int32: make_int32, np.int64: make_int64}
# shared between min and max spanning tree tests
def compare_mst(mst_cugraph, mst_nx):
mst_nx_df = nx.to_pandas_edgelist(mst_nx)
edgelist_df = mst_cugraph.view_edge_list()
assert len(mst_nx_df) == len(edgelist_df)
# check cycles
Gnx = nx.from_pandas_edgelist(
edgelist_df.to_pandas(),
create_using=nx.Graph(),
source="src",
target="dst",
)
    try:
        lc = nx.find_cycle(Gnx, source=None, orientation="ignore")
        raise AssertionError(f"MST result contains a cycle: {lc}")
    except nx.NetworkXNoCycle:
        pass
# check total weight
cg_sum = edgelist_df[mst_cugraph.weight_column].sum()
nx_sum = mst_nx_df["weight"].sum()
print(cg_sum)
print(nx_sum)
assert np.isclose(cg_sum, nx_sum)
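compare_mst relies on networkx to detect cycles; the invariant it checks (a spanning tree on V vertices has exactly V - 1 edges and no cycle) can be sketched with a stdlib union-find, assuming nothing beyond plain Python:

```python
def has_cycle(num_vertices, edges):
    """Return True if the undirected edge list contains a cycle."""
    parent = list(range(num_vertices))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    for u, v in edges:
        ru, rv = find(u), find(v)
        if ru == rv:
            return True  # this edge closes a cycle
        parent[ru] = rv
    return False

tree_edges = [(0, 1), (1, 2), (2, 3)]  # a spanning tree of 4 vertices
cyclic_edges = tree_edges + [(3, 0)]   # adding (3, 0) closes a cycle
```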

# File: rapidsai_public_repos/cugraph/python/cugraph/cugraph/dask/__init__.py
# Copyright (c) 2020-2023, NVIDIA CORPORATION.
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from .link_analysis.pagerank import pagerank
from .link_analysis.hits import hits
from .traversal.bfs import bfs
from .traversal.sssp import sssp
from .common.read_utils import get_chunksize
from .common.read_utils import get_n_workers
from .community.louvain import louvain
from .community.triangle_count import triangle_count
from .community.egonet import ego_graph
from .community.induced_subgraph import induced_subgraph
from .centrality.katz_centrality import katz_centrality
from .components.connectivity import weakly_connected_components
from .sampling.uniform_neighbor_sample import uniform_neighbor_sample
from .sampling.random_walks import random_walks
from .centrality.eigenvector_centrality import eigenvector_centrality
from .cores.core_number import core_number
from .centrality.betweenness_centrality import betweenness_centrality
from .centrality.betweenness_centrality import edge_betweenness_centrality
from .cores.k_core import k_core
from .link_prediction.jaccard import jaccard
from .link_prediction.sorensen import sorensen
from .link_prediction.overlap import overlap
from .community.leiden import leiden

# File: rapidsai_public_repos/cugraph/python/cugraph/cugraph/dask/components/CMakeLists.txt
# =============================================================================
# Copyright (c) 2022, NVIDIA CORPORATION.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except
# in compliance with the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software distributed under the License
# is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express
# or implied. See the License for the specific language governing permissions and limitations under
# the License.
# =============================================================================
set(cython_sources mg_connectivity_wrapper.pyx)
set(linked_libraries cugraph::cugraph)
rapids_cython_create_modules(
CXX
SOURCE_FILES "${cython_sources}"
LINKED_LIBRARIES "${linked_libraries}" MODULE_PREFIX components_
ASSOCIATED_TARGETS cugraph
)

# File: rapidsai_public_repos/cugraph/python/cugraph/cugraph/dask/components/connectivity.py
# Copyright (c) 2021-2023, NVIDIA CORPORATION.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
from dask.distributed import wait, default_client
import cugraph.dask.comms.comms as Comms
import dask_cudf
import cudf
from pylibcugraph import ResourceHandle
from pylibcugraph import weakly_connected_components as pylibcugraph_wcc
def convert_to_cudf(cp_arrays):
"""
Creates a cudf DataFrame from cupy arrays from pylibcugraph wrapper
"""
cupy_vertex, cupy_labels = cp_arrays
df = cudf.DataFrame()
df["vertex"] = cupy_vertex
df["labels"] = cupy_labels
return df
def _call_plc_wcc(sID, mg_graph_x, do_expensive_check):
return pylibcugraph_wcc(
resource_handle=ResourceHandle(Comms.get_handle(sID).getHandle()),
graph=mg_graph_x,
offsets=None,
indices=None,
weights=None,
labels=None,
do_expensive_check=do_expensive_check,
)
def weakly_connected_components(input_graph):
"""
Generate the Weakly Connected Components and attach a component label to
each vertex.
Parameters
----------
input_graph : cugraph.Graph
The graph descriptor should contain the connectivity information
and weights. The adjacency list will be computed if not already
present.
The current implementation only supports undirected graphs.
Returns
-------
result : dask_cudf.DataFrame
GPU distributed data frame containing 2 dask_cudf.Series
ddf['vertex']: dask_cudf.Series
Contains the vertex identifiers
ddf['labels']: dask_cudf.Series
Contains the wcc labels
Examples
--------
>>> import cugraph.dask as dcg
>>> import dask_cudf
>>> # ... Init a DASK Cluster
>>> # see https://docs.rapids.ai/api/cugraph/stable/dask-cugraph.html
>>> # Download dataset from https://github.com/rapidsai/cugraph/datasets/..
>>> chunksize = dcg.get_chunksize(datasets_path / "karate.csv")
>>> ddf = dask_cudf.read_csv(datasets_path / "karate.csv",
... chunksize=chunksize, delimiter=" ",
... names=["src", "dst", "value"],
... dtype=["int32", "int32", "float32"])
>>> dg = cugraph.Graph(directed=False)
>>> dg.from_dask_cudf_edgelist(ddf, source='src', destination='dst',
... edge_attr='value')
>>> result = dcg.weakly_connected_components(dg)
"""
if input_graph.is_directed():
raise ValueError("input graph must be undirected")
# Initialize dask client
client = default_client()
do_expensive_check = False
result = [
client.submit(
_call_plc_wcc,
Comms.get_session_id(),
input_graph._plc_graph[w],
do_expensive_check,
workers=[w],
allow_other_workers=False,
)
for w in Comms.get_workers()
]
wait(result)
cudf_result = [client.submit(convert_to_cudf, cp_arrays) for cp_arrays in result]
wait(cudf_result)
ddf = dask_cudf.from_delayed(cudf_result).persist()
wait(ddf)
# Wait until the inactive futures are released
wait([(r.release(), c_r.release()) for r, c_r in zip(result, cudf_result)])
if input_graph.renumbered:
ddf = input_graph.unrenumber(ddf, "vertex")
return ddf
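To make the contract of the returned labels concrete: every vertex receives a label shared by all vertices reachable from it when edge direction is ignored. The real computation happens in pylibcugraph across workers; the following is a hedged single-machine sketch using stdlib BFS, with illustrative names only:

```python
from collections import defaultdict, deque

def wcc_sketch(edges, vertices):
    """Label each vertex with the BFS root of its weakly connected component."""
    adj = defaultdict(set)
    for u, v in edges:     # ignore direction: insert both orientations
        adj[u].add(v)
        adj[v].add(u)
    labels = {}
    for start in vertices:
        if start in labels:
            continue
        labels[start] = start          # use the BFS root as the component label
        queue = deque([start])
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if v not in labels:
                    labels[v] = start
                    queue.append(v)
    return labels

labels = wcc_sketch([(0, 1), (1, 2), (3, 4)], vertices=range(5))
```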

# File: rapidsai_public_repos/cugraph/python/cugraph/cugraph/dask/components/__init__.py
# Copyright (c) 2021-2023, NVIDIA CORPORATION.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

# File: rapidsai_public_repos/cugraph/python/cugraph/cugraph/dask/community/triangle_count.py
# Copyright (c) 2022-2023, NVIDIA CORPORATION.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
from dask.distributed import wait, default_client
import cugraph.dask.comms.comms as Comms
import dask_cudf
import cudf
from pylibcugraph import ResourceHandle, triangle_count as pylibcugraph_triangle_count
def _call_triangle_count(
sID,
mg_graph_x,
start_list,
do_expensive_check,
):
return pylibcugraph_triangle_count(
resource_handle=ResourceHandle(Comms.get_handle(sID).getHandle()),
graph=mg_graph_x,
start_list=start_list,
do_expensive_check=do_expensive_check,
)
def convert_to_cudf(cp_arrays):
"""
Creates a cudf DataFrame from cupy arrays from pylibcugraph wrapper
"""
cupy_vertices, cupy_counts = cp_arrays
df = cudf.DataFrame()
df["vertex"] = cupy_vertices
df["counts"] = cupy_counts
return df
def triangle_count(input_graph, start_list=None):
"""
Computes the number of triangles (cycles of length three) and the number
per vertex in the input graph.
Parameters
----------
input_graph : cugraph.graph
cuGraph graph descriptor, should contain the connectivity information,
(edge weights are not used in this algorithm).
The current implementation only supports undirected graphs.
    start_list : int, list or cudf.Series, optional (default=None)
        List of vertices for triangle counting. If None, the entire set of
        vertices in the graph is processed.
Returns
-------
result : dask_cudf.DataFrame
GPU distributed data frame containing 2 dask_cudf.Series
ddf['vertex']: dask_cudf.Series
Contains the triangle counting vertices
ddf['counts']: dask_cudf.Series
Contains the triangle counting counts
"""
if input_graph.is_directed():
raise ValueError("input graph must be undirected")
# Initialize dask client
client = default_client()
if start_list is not None:
if isinstance(start_list, int):
start_list = [start_list]
if isinstance(start_list, list):
start_list = cudf.Series(start_list)
if not isinstance(start_list, cudf.Series):
        raise TypeError(
            f"'start_list' must be either a list or a cudf.Series, "
            f"got: {type(start_list)}"
        )
# start_list uses "external" vertex IDs, but since the graph has been
# renumbered, the start vertex IDs must also be renumbered.
if input_graph.renumbered:
start_list = input_graph.lookup_internal_vertex_id(start_list).compute()
do_expensive_check = False
result = [
client.submit(
_call_triangle_count,
Comms.get_session_id(),
input_graph._plc_graph[w],
start_list,
do_expensive_check,
workers=[w],
allow_other_workers=False,
)
for w in Comms.get_workers()
]
wait(result)
cudf_result = [client.submit(convert_to_cudf, cp_arrays) for cp_arrays in result]
wait(cudf_result)
ddf = dask_cudf.from_delayed(cudf_result).persist()
wait(ddf)
# Wait until the inactive futures are released
wait([(r.release(), c_r.release()) for r, c_r in zip(result, cudf_result)])
if input_graph.renumbered:
ddf = input_graph.unrenumber(ddf, "vertex")
return ddf
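The per-vertex counts returned above can be pictured with a hedged stdlib sketch: for each vertex, count the pairs of its neighbors that are themselves adjacent, so each triangle contributes 1 to each of its three vertices (illustrative code, not the pylibcugraph implementation):

```python
from itertools import combinations

def triangle_counts_sketch(edges):
    """Per-vertex triangle counts for an undirected edge list."""
    adj = {}
    for u, v in edges:
        adj.setdefault(u, set()).add(v)
        adj.setdefault(v, set()).add(u)
    counts = {}
    for v, nbrs in adj.items():
        # count neighbor pairs (a, b) that are themselves connected
        counts[v] = sum(1 for a, b in combinations(sorted(nbrs), 2)
                        if b in adj[a])
    return counts

# K3 plus a pendant vertex: the triangle touches vertices 0, 1, 2 only.
counts = triangle_counts_sketch([(0, 1), (1, 2), (0, 2), (2, 3)])
```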

# File: rapidsai_public_repos/cugraph/python/cugraph/cugraph/dask/community/induced_subgraph.py
# Copyright (c) 2022-2023, NVIDIA CORPORATION.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
from dask.distributed import wait, default_client
import cugraph.dask.comms.comms as Comms
import dask_cudf
import cudf
import cupy as cp
from cugraph.dask.common.input_utils import get_distributed_data
from typing import Union, Tuple
from pylibcugraph import (
ResourceHandle,
induced_subgraph as pylibcugraph_induced_subgraph,
)
def _call_induced_subgraph(
sID: bytes,
mg_graph_x,
vertices: cudf.Series,
offsets: cudf.Series,
do_expensive_check: bool,
) -> Tuple[cp.ndarray, cp.ndarray, cp.ndarray, cp.ndarray]:
return pylibcugraph_induced_subgraph(
resource_handle=ResourceHandle(Comms.get_handle(sID).getHandle()),
graph=mg_graph_x,
subgraph_vertices=vertices,
subgraph_offsets=offsets,
do_expensive_check=do_expensive_check,
)
def consolidate_results(df: cudf.DataFrame, offsets: cudf.Series) -> cudf.DataFrame:
"""
Each rank returns its induced_subgraph dataframe with its corresponding
offsets array. This is ideal if the user operates on distributed memory
but when attempting to bring the result into a single machine,
the induced_subgraph dataframes generated from each seed cannot be extracted
    using the offsets array. This function consolidates the final result by
performing segmented copies.
Returns: consolidated induced_subgraph dataframe
"""
for i in range(len(offsets) - 1):
df_tmp = df[offsets[i] : offsets[i + 1]]
df_tmp["labels"] = i
if i == 0:
df_consolidate = df_tmp
else:
df_consolidate = cudf.concat([df_consolidate, df_tmp])
return df_consolidate
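The segmented copy above can be pictured with plain Python lists: given offsets [o0, ..., ok] into a flat result, each slice is tagged with its seed index so rows can later be regrouped by seed. A hedged sketch, not the cudf implementation:

```python
def consolidate_sketch(rows, offsets):
    """Tag each slice rows[offsets[i]:offsets[i+1]] with seed index i."""
    labeled = []
    for i in range(len(offsets) - 1):
        for row in rows[offsets[i]:offsets[i + 1]]:
            labeled.append((row, i))  # i plays the role of the "labels" column
    return labeled

rows = ["e0", "e1", "e2", "e3", "e4"]
labeled = consolidate_sketch(rows, offsets=[0, 2, 5])
```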
def convert_to_cudf(cp_arrays: cp.ndarray) -> cudf.DataFrame:
cp_src, cp_dst, cp_weight, cp_offsets = cp_arrays
df = cudf.DataFrame()
df["src"] = cp_src
df["dst"] = cp_dst
df["weight"] = cp_weight
offsets = cudf.Series(cp_offsets)
return consolidate_results(df, offsets)
def induced_subgraph(
input_graph,
vertices: Union[cudf.Series, cudf.DataFrame],
offsets: Union[list, cudf.Series] = None,
) -> Tuple[dask_cudf.DataFrame, dask_cudf.Series]:
"""
Compute a subgraph of the existing graph including only the specified
vertices. This algorithm works with both directed and undirected graphs
and does not actually traverse the edges, but instead simply pulls out any
edges that are incident on vertices that are both contained in the vertices
list.
If no subgraph can be extracted from the vertices provided, a 'None' value
will be returned.
Parameters
----------
input_graph : cugraph.Graph
Graph or matrix object, which should contain the connectivity
information. Edge weights, if present, should be single or double
precision floating point values.
vertices : cudf.Series or cudf.DataFrame
Specifies the vertices of the induced subgraph. For multi-column
vertices, vertices should be provided as a cudf.DataFrame
offsets : list or cudf.Series, optional
Specifies the subgraph offsets into the subgraph vertices.
If no offsets array is provided, a default array [0, len(vertices)]
will be used.
Returns
-------
ego_edge_lists : dask_cudf.DataFrame
Distributed GPU data frame containing all induced sources identifiers,
destination identifiers, edge weights
seeds_offsets: dask_cudf.Series
Distributed Series containing the starting offset in the returned edge list
for each seed.
"""
# Initialize dask client
client = default_client()
if isinstance(vertices, (int, list)):
vertices = cudf.Series(vertices)
elif not isinstance(
vertices, (cudf.Series, dask_cudf.Series, cudf.DataFrame, dask_cudf.DataFrame)
):
raise TypeError(
f"'vertices' must be either an integer or a list or a "
f"cudf or dask_cudf Series or DataFrame, got: {type(vertices)}"
)
if isinstance(offsets, list):
offsets = cudf.Series(offsets)
if offsets is None:
offsets = cudf.Series([0, len(vertices)])
if not isinstance(offsets, cudf.Series):
raise TypeError(
f"'offsets' must be either 'None', a list or a "
f"cudf Series, got: {type(offsets)}"
)
# vertices uses "external" vertex IDs, but since the graph has been
# renumbered, the node ID must also be renumbered.
if input_graph.renumbered:
vertices = input_graph.lookup_internal_vertex_id(vertices)
vertices_type = input_graph.edgelist.edgelist_df.dtypes[0]
else:
vertices_type = input_graph.input_df.dtypes[0]
if isinstance(vertices, (cudf.Series, cudf.DataFrame)):
vertices = dask_cudf.from_cudf(
vertices, npartitions=min(input_graph._npartitions, len(vertices))
)
vertices = vertices.astype(vertices_type)
vertices = get_distributed_data(vertices)
wait(vertices)
vertices = vertices.worker_to_parts
do_expensive_check = False
result = [
client.submit(
_call_induced_subgraph,
Comms.get_session_id(),
input_graph._plc_graph[w],
vertices[w][0],
offsets,
do_expensive_check,
workers=[w],
allow_other_workers=False,
)
for w in Comms.get_workers()
]
wait(result)
cudf_result = [client.submit(convert_to_cudf, cp_arrays) for cp_arrays in result]
wait(cudf_result)
ddf = dask_cudf.from_delayed(cudf_result).persist()
wait(ddf)
if len(ddf) == 0:
return None, None
wait([(r.release(), c_r.release()) for r, c_r in zip(result, cudf_result)])
ddf = ddf.sort_values("labels")
# extract offsets from segmented induced_subgraph dataframes
offsets = ddf["labels"].value_counts().compute().sort_index()
offsets = cudf.concat([cudf.Series(0), offsets])
offsets = (
dask_cudf.from_cudf(
offsets, npartitions=min(input_graph._npartitions, len(vertices))
)
.cumsum()
.astype(vertices_type)
)
ddf = ddf.drop(columns="labels")
if input_graph.renumbered:
ddf = input_graph.unrenumber(ddf, "src")
ddf = input_graph.unrenumber(ddf, "dst")
return ddf, offsets

# File: rapidsai_public_repos/cugraph/python/cugraph/cugraph/dask/community/louvain.py
# Copyright (c) 2022-2023, NVIDIA CORPORATION.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
from __future__ import annotations
from dask.distributed import wait, default_client
import cugraph.dask.comms.comms as Comms
import dask_cudf
import dask
from dask import delayed
import cudf
import cupy as cp
import numpy
from pylibcugraph import ResourceHandle
from pylibcugraph import louvain as pylibcugraph_louvain
from typing import Tuple, TYPE_CHECKING
import warnings
if TYPE_CHECKING:
from cugraph import Graph
def convert_to_cudf(result: cp.ndarray) -> Tuple[cudf.DataFrame, float]:
"""
Creates a cudf DataFrame from cupy arrays from pylibcugraph wrapper
"""
cupy_vertex, cupy_partition, modularity = result
df = cudf.DataFrame()
df["vertex"] = cupy_vertex
df["partition"] = cupy_partition
return df, modularity
def _call_plc_louvain(
sID: bytes,
mg_graph_x,
max_level: int,
threshold: float,
resolution: float,
do_expensive_check: bool,
) -> Tuple[cp.ndarray, cp.ndarray, float]:
return pylibcugraph_louvain(
resource_handle=ResourceHandle(Comms.get_handle(sID).getHandle()),
graph=mg_graph_x,
max_level=max_level,
threshold=threshold,
resolution=resolution,
do_expensive_check=do_expensive_check,
)
# FIXME: max_level should default to 100 once max_iter is removed
def louvain(
input_graph: Graph,
max_level: int = None,
max_iter: int = None,
resolution: float = 1.0,
threshold: float = 1e-7,
) -> Tuple[dask_cudf.DataFrame, float]:
"""
Compute the modularity optimizing partition of the input graph using the
Louvain method
It uses the Louvain method described in:
VD Blondel, J-L Guillaume, R Lambiotte and E Lefebvre: Fast unfolding of
community hierarchies in large networks, J Stat Mech P10008 (2008),
http://arxiv.org/abs/0803.0476
Parameters
----------
    input_graph : cugraph.Graph
The graph descriptor should contain the connectivity information
and weights. The adjacency list will be computed if not already
present.
The current implementation only supports undirected graphs.
max_level : integer, optional (default=100)
This controls the maximum number of levels of the Louvain
algorithm. When specified the algorithm will terminate after no more
than the specified number of levels. No error occurs when the
algorithm terminates early in this manner.
max_iter : integer, optional (default=None)
This parameter is deprecated in favor of max_level. Previously
it was used to control the maximum number of levels of the Louvain
algorithm.
resolution: float, optional (default=1.0)
Called gamma in the modularity formula, this changes the size
        of the communities. Higher resolutions lead to more, smaller
        communities; lower resolutions lead to fewer, larger communities.
threshold: float, optional (default=1e-7)
Modularity gain threshold for each level. If the gain of
modularity between 2 levels of the algorithm is less than the
given threshold then the algorithm stops and returns the
resulting communities.
Returns
-------
parts : dask_cudf.DataFrame
GPU data frame of size V containing two columns the vertex id and the
partition id it is assigned to.
ddf['vertex'] : cudf.Series
Contains the vertex identifiers
ddf['partition'] : cudf.Series
Contains the partition assigned to the vertices
modularity_score : float
a floating point number containing the global modularity score of the
partitioning.
Examples
--------
>>> from cugraph.datasets import karate
>>> G = karate.get_graph(fetch=True)
    >>> parts, modularity_score = cugraph.louvain(G)
"""
if input_graph.is_directed():
raise ValueError("input graph must be undirected")
# FIXME: This max_iter logic and the max_level defaulting can be deleted
    # in favor of defaulting max_level in the function signature once max_iter
    # is removed
if max_iter:
if max_level:
raise ValueError(
"max_iter is deprecated. Cannot specify both max_iter and max_level"
)
warning_msg = (
"max_iter has been renamed max_level. Use of max_iter is "
"deprecated and will no longer be supported in the next releases. "
)
warnings.warn(warning_msg, FutureWarning)
max_level = max_iter
if max_level is None:
max_level = 100
# Initialize dask client
client = default_client()
do_expensive_check = False
result = [
client.submit(
_call_plc_louvain,
Comms.get_session_id(),
input_graph._plc_graph[w],
max_level,
threshold,
resolution,
do_expensive_check,
workers=[w],
allow_other_workers=False,
)
for w in Comms.get_workers()
]
wait(result)
part_mod_score = [client.submit(convert_to_cudf, r) for r in result]
wait(part_mod_score)
vertex_dtype = input_graph.edgelist.edgelist_df.dtypes[0]
empty_df = cudf.DataFrame(
{
"vertex": numpy.empty(shape=0, dtype=vertex_dtype),
"partition": numpy.empty(shape=0, dtype="int32"),
}
)
part_mod_score = [delayed(lambda x: x, nout=2)(r) for r in part_mod_score]
ddf = dask_cudf.from_delayed(
[r[0] for r in part_mod_score], meta=empty_df, verify_meta=False
).persist()
mod_score = dask.array.from_delayed(
part_mod_score[0][1], shape=(1,), dtype=float
).compute()
wait(ddf)
wait(mod_score)
wait([r.release() for r in part_mod_score])
if input_graph.renumbered:
ddf = input_graph.unrenumber(ddf, "vertex")
return ddf, mod_score
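The modularity score returned alongside the partition can be written, for an unweighted undirected graph with m edges, degree k_i and community c_i, as Q = (1/2m) * sum_ij (A_ij - k_i k_j / 2m) * delta(c_i, c_j). A hedged stdlib sketch, purely to make the returned score concrete (O(V^2 E), fine for toy graphs only):

```python
def modularity_sketch(edges, partition):
    """Newman modularity Q of a partition of an unweighted undirected graph."""
    m = len(edges)
    deg = {}
    for u, v in edges:
        deg[u] = deg.get(u, 0) + 1
        deg[v] = deg.get(v, 0) + 1
    q = 0.0
    nodes = list(deg)
    for i in nodes:
        for j in nodes:
            # A_ij: 1 if edge {i, j} exists (either stored orientation)
            a_ij = sum(1 for e in edges if e in ((i, j), (j, i)))
            if partition[i] == partition[j]:
                q += a_ij - deg[i] * deg[j] / (2 * m)
    return q / (2 * m)

# Two disjoint triangles, each its own community: a strongly modular split.
edges = [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5)]
q = modularity_sketch(edges, {0: 0, 1: 0, 2: 0, 3: 1, 4: 1, 5: 1})
```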

# File: rapidsai_public_repos/cugraph/python/cugraph/cugraph/dask/community/__init__.py
# Copyright (c) 2020-2023, NVIDIA CORPORATION.
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from .louvain import louvain
from .triangle_count import triangle_count
from .induced_subgraph import induced_subgraph
from .leiden import leiden

# File: rapidsai_public_repos/cugraph/python/cugraph/cugraph/dask/community/leiden.py
# Copyright (c) 2022-2023, NVIDIA CORPORATION.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
from __future__ import annotations
from dask.distributed import wait, default_client
import cugraph.dask.comms.comms as Comms
import dask_cudf
import dask
from dask import delayed
import cudf
from pylibcugraph import ResourceHandle
from pylibcugraph import leiden as pylibcugraph_leiden
import numpy
import cupy as cp
from typing import Tuple, TYPE_CHECKING
if TYPE_CHECKING:
from cugraph import Graph
def convert_to_cudf(result: cp.ndarray) -> Tuple[cudf.DataFrame, float]:
"""
Creates a cudf DataFrame from cupy arrays from pylibcugraph wrapper
"""
cupy_vertex, cupy_partition, modularity = result
df = cudf.DataFrame()
df["vertex"] = cupy_vertex
df["partition"] = cupy_partition
return df, modularity
def _call_plc_leiden(
sID: bytes,
mg_graph_x,
max_iter: int,
    resolution: float,
    random_state: int,
    theta: float,
do_expensive_check: bool,
) -> Tuple[cp.ndarray, cp.ndarray, float]:
return pylibcugraph_leiden(
resource_handle=ResourceHandle(Comms.get_handle(sID).getHandle()),
random_state=random_state,
graph=mg_graph_x,
max_level=max_iter,
resolution=resolution,
theta=theta,
do_expensive_check=do_expensive_check,
)
def leiden(
input_graph: Graph,
max_iter: int = 100,
    resolution: float = 1.0,
    random_state: int = None,
    theta: float = 1.0,
) -> Tuple[dask_cudf.DataFrame, float]:
"""
Compute the modularity optimizing partition of the input graph using the
Leiden method
Traag, V. A., Waltman, L., & van Eck, N. J. (2019). From Louvain to Leiden:
guaranteeing well-connected communities. Scientific reports, 9(1), 5233.
doi: 10.1038/s41598-019-41695-z
Parameters
----------
    input_graph : cugraph.Graph
The graph descriptor should contain the connectivity information
and weights. The adjacency list will be computed if not already
present.
The current implementation only supports undirected graphs.
max_iter : integer, optional (default=100)
This controls the maximum number of levels/iterations of the Leiden
algorithm. When specified the algorithm will terminate after no more
than the specified number of iterations. No error occurs when the
algorithm terminates early in this manner.
resolution: float, optional (default=1.0)
Called gamma in the modularity formula, this changes the size
        of the communities. Higher resolutions lead to more, smaller
        communities; lower resolutions lead to fewer, larger communities.
Defaults to 1.
random_state: int, optional(default=None)
Random state to use when generating samples. Optional argument,
defaults to a hash of process id, time, and hostname.
theta: float, optional (default=1.0)
Called theta in the Leiden algorithm, this is used to scale
modularity gain in Leiden refinement phase, to compute
        the probability of joining a random Leiden community.
Returns
-------
parts : dask_cudf.DataFrame
GPU data frame of size V containing two columns the vertex id and the
partition id it is assigned to.
ddf['vertex'] : cudf.Series
Contains the vertex identifiers
ddf['partition'] : cudf.Series
Contains the partition assigned to the vertices
modularity_score : float
a floating point number containing the global modularity score of the
partitioning.
Examples
--------
>>> from cugraph.datasets import karate
>>> G = karate.get_graph(fetch=True)
>>> parts, modularity_score = cugraph.leiden(G)
"""
if input_graph.is_directed():
raise ValueError("input graph must be undirected")
# Return a client if one has started
client = default_client()
do_expensive_check = False
result = [
client.submit(
_call_plc_leiden,
Comms.get_session_id(),
input_graph._plc_graph[w],
max_iter,
resolution,
random_state,
theta,
do_expensive_check,
workers=[w],
allow_other_workers=False,
)
for w in Comms.get_workers()
]
wait(result)
part_mod_score = [client.submit(convert_to_cudf, r) for r in result]
wait(part_mod_score)
vertex_dtype = input_graph.edgelist.edgelist_df.dtypes[0]
empty_df = cudf.DataFrame(
{
"vertex": numpy.empty(shape=0, dtype=vertex_dtype),
"partition": numpy.empty(shape=0, dtype="int32"),
}
)
part_mod_score = [delayed(lambda x: x, nout=2)(r) for r in part_mod_score]
ddf = dask_cudf.from_delayed(
[r[0] for r in part_mod_score], meta=empty_df, verify_meta=False
).persist()
mod_score = dask.array.from_delayed(
part_mod_score[0][1], shape=(1,), dtype=float
).compute()
wait(ddf)
wait(mod_score)
wait([r.release() for r in part_mod_score])
if input_graph.renumbered:
ddf = input_graph.unrenumber(ddf, "vertex")
return ddf, mod_score
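The multi-GPU call pattern used throughout this module (submit one task per worker, wait, convert, concatenate the partial results) can be sketched with stdlib concurrent.futures; here dask's client.submit and dask_cudf.from_delayed are played by a thread pool and a plain list concatenation, and the task body is an illustrative stand-in, not a real pylibcugraph call:

```python
from concurrent.futures import ThreadPoolExecutor, wait as fwait

def per_worker_task(worker_id):
    # stand-in for e.g. _call_plc_leiden on one worker's graph partition
    return [(worker_id, worker_id * 10)]

workers = [0, 1, 2]
with ThreadPoolExecutor(max_workers=len(workers)) as pool:
    futures = [pool.submit(per_worker_task, w) for w in workers]
    fwait(futures)  # mirrors dask.distributed.wait(result)
    # mirrors dask_cudf.from_delayed: concatenate per-worker partial results
    combined = [row for f in futures for row in f.result()]
```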

# File: rapidsai_public_repos/cugraph/python/cugraph/cugraph/dask/community/egonet.py
# Copyright (c) 2022-2023, NVIDIA CORPORATION.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
from dask.distributed import wait, default_client
import cugraph.dask.comms.comms as Comms
import dask_cudf
import cudf
from cugraph.dask.common.input_utils import get_distributed_data
from pylibcugraph import ResourceHandle, ego_graph as pylibcugraph_ego_graph
def _call_ego_graph(
sID,
mg_graph_x,
n,
radius,
do_expensive_check,
):
return pylibcugraph_ego_graph(
resource_handle=ResourceHandle(Comms.get_handle(sID).getHandle()),
graph=mg_graph_x,
source_vertices=n,
radius=radius,
do_expensive_check=do_expensive_check,
)
def consolidate_results(df, offsets):
"""
Each rank returns its ego_graph dataframe with its corresponding
offsets array. This is ideal if the user operates on distributed memory
but when attempting to bring the result into a single machine,
the ego_graph dataframes generated from each seed cannot be extracted
    using the offsets array. This function consolidates the final result by
performing segmented copies.
Returns: consolidated ego_graph dataframe
"""
for i in range(len(offsets) - 1):
df_tmp = df[offsets[i] : offsets[i + 1]]
df_tmp["labels"] = i
if i == 0:
df_consolidate = df_tmp
else:
df_consolidate = cudf.concat([df_consolidate, df_tmp])
return df_consolidate
def convert_to_cudf(cp_arrays):
cp_src, cp_dst, cp_weight, cp_offsets = cp_arrays
df = cudf.DataFrame()
df["src"] = cp_src
df["dst"] = cp_dst
if cp_weight is None:
df["weight"] = None
else:
df["weight"] = cp_weight
offsets = cudf.Series(cp_offsets)
return consolidate_results(df, offsets)
def ego_graph(input_graph, n, radius=1, center=True):
"""
Compute the induced subgraph of neighbors centered at node n,
within a given radius.
Parameters
----------
input_graph : cugraph.Graph
Graph or matrix object, which should contain the connectivity
information. Edge weights, if present, should be single or double
precision floating point values.
    n : int, list, cudf Series or DataFrame, dask_cudf Series or DataFrame
A node or a list or cudf.Series of nodes or a cudf.DataFrame if nodes
are represented with multiple columns. If a cudf.DataFrame is provided,
only the first row is taken as the node input.
radius: integer, optional (default=1)
Include all neighbors of distance<=radius from n.
    center: bool, optional (default=True)
        False is not currently supported.
Returns
-------
    ego_edge_lists : dask_cudf.DataFrame
        Distributed GPU data frame containing the induced subgraphs' source
        identifiers, destination identifiers, and edge weights.
seeds_offsets: dask_cudf.Series
Distributed Series containing the starting offset in the returned edge list
for each seed.
"""
# Initialize dask client
client = default_client()
if isinstance(n, (int, list)):
n = cudf.Series(n)
elif not isinstance(
n, (cudf.Series, dask_cudf.Series, cudf.DataFrame, dask_cudf.DataFrame)
):
raise TypeError(
f"'n' must be either an integer or a list or a "
f"cudf or dask_cudf Series or DataFrame, got: {type(n)}"
)
# n uses "external" vertex IDs, but since the graph has been
# renumbered, the node ID must also be renumbered.
if input_graph.renumbered:
n = input_graph.lookup_internal_vertex_id(n)
n_type = input_graph.edgelist.edgelist_df.dtypes[0]
else:
n_type = input_graph.input_df.dtypes[0]
if isinstance(n, (cudf.Series, cudf.DataFrame)):
n = dask_cudf.from_cudf(n, npartitions=min(input_graph._npartitions, len(n)))
n = n.astype(n_type)
n = get_distributed_data(n)
wait(n)
n = n.worker_to_parts
do_expensive_check = False
result = [
client.submit(
_call_ego_graph,
Comms.get_session_id(),
input_graph._plc_graph[w],
n[w][0],
radius,
do_expensive_check,
workers=[w],
allow_other_workers=False,
)
for w in Comms.get_workers()
]
wait(result)
cudf_result = [client.submit(convert_to_cudf, cp_arrays) for cp_arrays in result]
wait(cudf_result)
ddf = dask_cudf.from_delayed(cudf_result).persist()
wait(ddf)
wait([(r.release(), c_r.release()) for r, c_r in zip(result, cudf_result)])
ddf = ddf.sort_values("labels")
# extract offsets from segmented ego_graph dataframes
offsets = ddf["labels"].value_counts().compute().sort_index()
offsets = cudf.concat([cudf.Series(0), offsets])
offsets = (
dask_cudf.from_cudf(offsets, npartitions=min(input_graph._npartitions, len(n)))
.cumsum()
.astype(n_type)
)
ddf = ddf.drop(columns="labels")
if input_graph.renumbered:
ddf = input_graph.unrenumber(ddf, "src")
ddf = input_graph.unrenumber(ddf, "dst")
return ddf, offsets
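The segmented-copy step performed by `consolidate_results` above can be sketched in pure Python. The helper name `label_segments` is hypothetical (not part of cugraph): each seed's ego subgraph occupies one slice of the concatenated edge list, and a label column records which seed produced each edge.

```python
# Pure-Python sketch of the segmented copies in consolidate_results():
# slice [offsets[i], offsets[i + 1]) of the concatenated edge list holds
# the i-th seed's ego subgraph, and each edge is tagged with that seed's
# index so the per-seed subgraphs survive being gathered on one machine.
def label_segments(edges, offsets):
    labeled = []
    for i in range(len(offsets) - 1):
        for src, dst in edges[offsets[i]:offsets[i + 1]]:
            labeled.append((src, dst, i))  # i plays the role of "labels"
    return labeled

edges = [(0, 1), (0, 2), (1, 2), (3, 4), (4, 5)]
offsets = [0, 3, 5]  # seed 0 owns edges 0..2, seed 1 owns edges 3..4
print(label_segments(edges, offsets))
# [(0, 1, 0), (0, 2, 0), (1, 2, 0), (3, 4, 1), (4, 5, 1)]
```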
| 0 |
rapidsai_public_repos/cugraph/python/cugraph/cugraph/dask | rapidsai_public_repos/cugraph/python/cugraph/cugraph/dask/comms/comms_wrapper.pyx | # Copyright (c) 2020-2023, NVIDIA CORPORATION.
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# cython: profile=False
# distutils: language = c++
# cython: embedsignature = True
# cython: language_level = 3
from pylibraft.common.handle cimport *
from cugraph.dask.comms.comms cimport init_subcomm as c_init_subcomm
def init_subcomms(handle, row_comm_size):
cdef size_t handle_size_t = <size_t>handle.getHandle()
handle_ = <handle_t*>handle_size_t
c_init_subcomm(handle_[0], row_comm_size)
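To make the `row_comm_size` argument of `init_subcomms` concrete, here is an illustrative row-major layout; this is an assumption for illustration only, since the actual grouping happens in C++ inside `cugraph::partition_manager::init_subcomm`.

```python
# Illustrative row-major mapping from a global rank to 2D grid
# coordinates; each row subcommunicator then groups row_comm_size
# consecutive ranks. A sketch only, not the C++ implementation.
def rank_to_grid(rank, row_comm_size):
    return rank // row_comm_size, rank % row_comm_size

print([rank_to_grid(r, 4) for r in range(8)])
# [(0, 0), (0, 1), (0, 2), (0, 3), (1, 0), (1, 1), (1, 2), (1, 3)]
```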
| 0 |
rapidsai_public_repos/cugraph/python/cugraph/cugraph/dask | rapidsai_public_repos/cugraph/python/cugraph/cugraph/dask/comms/CMakeLists.txt | # =============================================================================
# Copyright (c) 2022, NVIDIA CORPORATION.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except
# in compliance with the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software distributed under the License
# is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express
# or implied. See the License for the specific language governing permissions and limitations under
# the License.
# =============================================================================
set(cython_sources comms_wrapper.pyx)
set(linked_libraries cugraph::cugraph)
rapids_cython_create_modules(
CXX
SOURCE_FILES "${cython_sources}"
LINKED_LIBRARIES "${linked_libraries}" MODULE_PREFIX comms_
ASSOCIATED_TARGETS cugraph
)
| 0 |
rapidsai_public_repos/cugraph/python/cugraph/cugraph/dask | rapidsai_public_repos/cugraph/python/cugraph/cugraph/dask/comms/comms.py | # Copyright (c) 2018-2023, NVIDIA CORPORATION.
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# FIXME: these raft imports break the library if ucx-py is
# not available. They are necessary only when doing MG work.
from cugraph.dask.common.read_utils import MissingUCXPy
try:
from raft_dask.common.comms import Comms as raftComms
from raft_dask.common.comms import get_raft_comm_state
except ImportError as err:
# FIXME: Generalize since err.name is arr when
# libnuma.so.1 is not available
if err.name == "ucp" or err.name == "arr":
raftComms = MissingUCXPy()
get_raft_comm_state = MissingUCXPy()
else:
raise
from pylibraft.common.handle import Handle
from cugraph.dask.comms.comms_wrapper import init_subcomms as c_init_subcomms
from dask.distributed import default_client, get_worker
from cugraph.dask.common import read_utils
import math
__instance = None
__default_handle = None
__subcomm = None
def __get_2D_div(ngpus):
prows = int(math.sqrt(ngpus))
while ngpus % prows != 0:
prows = prows - 1
return prows, int(ngpus / prows)
def subcomm_init(prows, pcols, partition_type):
sID = get_session_id()
ngpus = get_n_workers()
if prows is None and pcols is None:
if partition_type == 1:
pcols, prows = __get_2D_div(ngpus)
else:
prows, pcols = __get_2D_div(ngpus)
else:
if prows is not None and pcols is not None:
if ngpus != prows * pcols:
                raise Exception(
                    "prows*pcols should be equal to the "
                    "number of processes"
                )
elif prows is not None:
if ngpus % prows != 0:
                raise Exception(
                    "prows must be a factor of the "
                    "number of processes"
                )
pcols = int(ngpus / prows)
elif pcols is not None:
if ngpus % pcols != 0:
                raise Exception(
                    "pcols must be a factor of the "
                    "number of processes"
                )
prows = int(ngpus / pcols)
client = default_client()
client.run(_subcomm_init, sID, pcols)
global __subcomm
__subcomm = (prows, pcols, partition_type)
def _subcomm_init(sID, partition_row_size, dask_worker=None):
handle = get_handle(sID, dask_worker)
c_init_subcomms(handle, partition_row_size)
def initialize(comms=None, p2p=False, prows=None, pcols=None, partition_type=1):
"""
Initialize a communicator for multi-node/multi-gpu communications. It is
expected to be called right after client initialization for running
multi-GPU algorithms (this wraps raft comms that manages underlying NCCL
and UCX comms handles across the workers of a Dask cluster).
It is recommended to also call `destroy()` when the comms are no longer
needed so the underlying resources can be cleaned up.
Parameters
----------
comms : raft Comms, optional (default=None)
A pre-initialized raft communicator. If provided, this is used for mnmg
communications. If not provided, default comms are initialized as per
client information.
p2p : bool, optional (default=False)
Initialize UCX endpoints if True.
prows : int, optional (default=None)
Specifies the number of rows when performing a 2D partitioning of the
input graph. If specified, this must be a factor of the total number of
parallel processes. When specified with pcols, prows*pcols should be
equal to the total number of parallel processes.
pcols : int, optional (default=None)
Specifies the number of columns when performing a 2D partitioning of
the input graph. If specified, this must be a factor of the total
number of parallel processes. When specified with prows, prows*pcols
should be equal to the total number of parallel processes.
    partition_type : int, optional (default=1)
        A value of 1 (the default) represents a partitioning resulting in
        prows*pcols partitions. Any other int value currently results in a
        partitioning of p*pcols partitions, where p is the number of GPUs.
Examples
--------
>>> from dask.distributed import Client
>>> from dask_cuda import LocalCUDACluster
>>> import cugraph.dask.comms as Comms
>>> cluster = LocalCUDACluster()
>>> client = Client(cluster)
>>> Comms.initialize(p2p=True)
>>> # DO WORK HERE
>>> # All done, clean up
>>> Comms.destroy()
>>> client.close()
>>> cluster.close()
"""
global __instance
if __instance is None:
global __default_handle
__default_handle = None
if comms is None:
# Initialize communicator
if not p2p:
raise Exception("Set p2p to True for running mnmg algorithms")
__instance = raftComms(comms_p2p=p2p)
__instance.init()
# Initialize subcommunicator
subcomm_init(prows, pcols, partition_type)
else:
__instance = comms
else:
raise Exception("Communicator is already initialized")
def is_initialized():
"""
Returns True if comms was initialized, False otherwise.
"""
global __instance
if __instance is not None:
return True
else:
return False
def get_comms():
"""
Returns raft Comms instance
"""
global __instance
return __instance
def get_workers():
"""
Returns the workers in the Comms instance, or None if Comms is not
initialized.
"""
if is_initialized():
global __instance
return __instance.worker_addresses
def get_session_id():
"""
    Returns the sessionId used to look up the session state of workers, or
    None if Comms is not initialized.
"""
if is_initialized():
global __instance
return __instance.sessionId
def get_2D_partition():
"""
Returns a tuple representing the 2D partition information: (prows, pcols,
partition_type)
"""
global __subcomm
if __subcomm is not None:
return __subcomm
def destroy():
"""
Shuts down initialized comms and cleans up resources.
"""
global __instance
if is_initialized():
__instance.destroy()
__instance = None
def get_default_handle():
"""
    Returns the default handle. This does not perform NCCL initialization.
"""
global __default_handle
if __default_handle is None:
__default_handle = Handle()
return __default_handle
# Functions to be called from within workers
def get_handle(sID, dask_worker=None):
"""
    Returns the handle from within the worker, using the session state.
"""
if dask_worker is None:
dask_worker = get_worker()
sessionstate = get_raft_comm_state(sID, dask_worker)
return sessionstate["handle"]
def get_worker_id(sID, dask_worker=None):
"""
    Returns the worker's id (rank) from within the worker, using the
    session state.
"""
if dask_worker is None:
dask_worker = get_worker()
sessionstate = get_raft_comm_state(sID, dask_worker)
return sessionstate["wid"]
# FIXME: There are several similar instances of utility functions for getting
# the number of workers, including:
# * get_n_workers() (from cugraph.dask.common.read_utils)
# * len(get_visible_devices())
# * len(numba.cuda.gpus)
# Consider consolidating these or emphasizing why different
# functions/techniques are needed.
def get_n_workers(sID=None, dask_worker=None):
if sID is None:
return read_utils.get_n_workers()
else:
if dask_worker is None:
dask_worker = get_worker()
sessionstate = get_raft_comm_state(sID, dask_worker)
return sessionstate["nworkers"]
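The grid-shape heuristic in `__get_2D_div` above is self-contained enough to run standalone (reproduced here under the non-mangled name `get_2d_div`): it walks down from floor(sqrt(ngpus)) to the nearest factor of ngpus and uses it as the row count, so the process grid is as close to square as possible.

```python
import math

# Standalone copy of comms.py's __get_2D_div heuristic: the largest
# factor of ngpus that is <= sqrt(ngpus) becomes the number of rows,
# giving a near-square prows x pcols grid.
def get_2d_div(ngpus):
    prows = int(math.sqrt(ngpus))
    while ngpus % prows != 0:
        prows -= 1
    return prows, ngpus // prows

print(get_2d_div(8))   # (2, 4)
print(get_2d_div(16))  # (4, 4)
print(get_2d_div(7))   # (1, 7) -- a prime GPU count degenerates to one row
```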
| 0 |
rapidsai_public_repos/cugraph/python/cugraph/cugraph/dask | rapidsai_public_repos/cugraph/python/cugraph/cugraph/dask/comms/comms.pxd | # Copyright (c) 2020-2023, NVIDIA CORPORATION.
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# cython: profile=False
# distutils: language = c++
# cython: embedsignature = True
# cython: language_level = 3
from pylibraft.common.handle cimport *
cdef extern from "cugraph/partition_manager.hpp" namespace "cugraph::partition_manager":
cdef void init_subcomm(handle_t &handle,
size_t row_comm_size)
| 0 |
rapidsai_public_repos/cugraph/python/cugraph/cugraph/dask | rapidsai_public_repos/cugraph/python/cugraph/cugraph/dask/comms/__init__.py | # Copyright (c) 2022-2023, NVIDIA CORPORATION.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
| 0 |
rapidsai_public_repos/cugraph/python/cugraph/cugraph/dask | rapidsai_public_repos/cugraph/python/cugraph/cugraph/dask/sampling/random_walks.py | # Copyright (c) 2022-2023, NVIDIA CORPORATION.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from dask.distributed import wait, default_client
import dask_cudf
import cudf
import operator as op
from pylibcugraph import ResourceHandle
from pylibcugraph import (
uniform_random_walks as pylibcugraph_uniform_random_walks,
)
from cugraph.dask.comms import comms as Comms
from cugraph.dask.common.input_utils import get_distributed_data
def convert_to_cudf(cp_paths, number_map=None, is_vertex_paths=False):
"""
Creates cudf Series from cupy arrays from pylibcugraph wrapper
"""
if is_vertex_paths and len(cp_paths) > 0:
if number_map.implementation.numbered:
df_ = cudf.DataFrame()
df_["vertex_paths"] = cp_paths
df_ = number_map.unrenumber(
df_, "vertex_paths", preserve_order=True
).compute()
vertex_paths = cudf.Series(df_["vertex_paths"]).fillna(-1)
return vertex_paths
return cudf.Series(cp_paths)
def _call_plc_uniform_random_walks(sID, mg_graph_x, st_x, max_depth):
return pylibcugraph_uniform_random_walks(
resource_handle=ResourceHandle(Comms.get_handle(sID).getHandle()),
input_graph=mg_graph_x,
start_vertices=st_x,
max_length=max_depth,
)
def random_walks(
input_graph,
random_walks_type="uniform",
start_vertices=None,
max_depth=None,
use_padding=None,
legacy_result_type=None,
):
"""
    Compute random walks for each node in 'start_vertices' and return a
    padded result along with the maximum path length. Vertices with no
    outgoing edges will be padded with -1.

    Parameters
----------
input_graph : cuGraph.Graph
The graph can be either directed or undirected.
random_walks_type : str, optional (default='uniform')
Type of random walks: 'uniform', 'biased', 'node2vec'.
        Only 'uniform' random walks are currently supported.
    start_vertices : int, list, cudf.Series, or cudf.DataFrame
        A single node or a list or cudf.Series of nodes from which to run
        the random walks. Use a cudf.DataFrame in the case of multi-column
        vertices.
max_depth : int
The maximum depth of the random walks
    use_padding : bool
        This parameter exists for SG compatibility and is ignored.
    legacy_result_type : bool
        This parameter exists for SG compatibility and is ignored.
Returns
-------
vertex_paths : dask_cudf.Series or dask_cudf.DataFrame
Series containing the vertices of edges/paths in the random walk.
edge_weight_paths: dask_cudf.Series
Series containing the edge weights of edges represented by the
returned vertex_paths
max_path_length : int
The maximum path length
"""
if isinstance(start_vertices, int):
start_vertices = [start_vertices]
if isinstance(start_vertices, list):
start_vertices = cudf.Series(start_vertices)
# start_vertices uses "external" vertex IDs, but if the graph has been
# renumbered, the start vertex IDs must also be renumbered.
if input_graph.renumbered:
# FIXME: This should match start_vertices type to the renumbered df type
# but verify that. If not retrieve the type and cast it when creating
# the dask_cudf from a cudf
start_vertices = input_graph.lookup_internal_vertex_id(start_vertices).compute()
start_vertices_type = input_graph.edgelist.edgelist_df.dtypes[0]
else:
# FIXME: Get the 'src' column names instead and retrieve the type
start_vertices_type = input_graph.input_df.dtypes[0]
start_vertices = dask_cudf.from_cudf(
start_vertices, npartitions=min(input_graph._npartitions, len(start_vertices))
)
start_vertices = start_vertices.astype(start_vertices_type)
start_vertices = get_distributed_data(start_vertices)
wait(start_vertices)
start_vertices = start_vertices.worker_to_parts
client = default_client()
result = [
client.submit(
_call_plc_uniform_random_walks,
Comms.get_session_id(),
input_graph._plc_graph[w],
start_vertices[w][0],
max_depth,
workers=[w],
allow_other_workers=False,
)
for w in Comms.get_workers()
]
wait(result)
result_vertex_paths = [client.submit(op.getitem, f, 0) for f in result]
result_edge_wgt_paths = [client.submit(op.getitem, f, 1) for f in result]
max_path_length = [client.submit(op.getitem, f, 2) for f in result]
cudf_vertex_paths = [
client.submit(convert_to_cudf, cp_vertex_paths, input_graph.renumber_map, True)
for cp_vertex_paths in result_vertex_paths
]
cudf_edge_wgt_paths = [
client.submit(convert_to_cudf, cp_edge_wgt_paths)
for cp_edge_wgt_paths in result_edge_wgt_paths
]
wait([cudf_vertex_paths, cudf_edge_wgt_paths])
max_path_length = max_path_length[0].result()
ddf_vertex_paths = dask_cudf.from_delayed(cudf_vertex_paths).persist()
ddf_edge_wgt_paths = dask_cudf.from_delayed(cudf_edge_wgt_paths).persist()
wait([ddf_vertex_paths, ddf_edge_wgt_paths])
# Wait until the inactive futures are released
wait(
[
(r.release(), c_v.release(), c_e.release())
for r, c_v, c_e in zip(result, cudf_vertex_paths, cudf_edge_wgt_paths)
]
)
return ddf_vertex_paths, ddf_edge_wgt_paths, max_path_length
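The padded layout described in the `random_walks` docstring can be illustrated with a small helper. Both the name `unpack_walks` and the fixed per-seed stride are assumptions for illustration, not cugraph API.

```python
# Hypothetical unpacking of a padded random-walk result: assume each
# start vertex owns a fixed-size slice of the flat vertex_paths array,
# and walks that terminated early are padded with -1 as described in
# the docstring above.
def unpack_walks(vertex_paths, stride):
    walks = []
    for i in range(0, len(vertex_paths), stride):
        chunk = vertex_paths[i:i + stride]
        walks.append([v for v in chunk if v != -1])  # drop the padding
    return walks

# Two padded walks with stride 4: the second one stopped after one hop.
flat = [0, 1, 2, 3, 5, 6, -1, -1]
print(unpack_walks(flat, 4))  # [[0, 1, 2, 3], [5, 6]]
```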
| 0 |
rapidsai_public_repos/cugraph/python/cugraph/cugraph/dask | rapidsai_public_repos/cugraph/python/cugraph/cugraph/dask/sampling/uniform_neighbor_sample.py | # Copyright (c) 2022-2023, NVIDIA CORPORATION.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from __future__ import annotations
import warnings
import numpy
from dask import delayed
from dask.distributed import Lock, get_client, wait
import dask_cudf
import cudf
import cupy as cp
from pylibcugraph import ResourceHandle
from pylibcugraph import uniform_neighbor_sample as pylibcugraph_uniform_neighbor_sample
from cugraph.dask.comms import comms as Comms
from cugraph.dask import get_n_workers
from typing import Sequence, List, Union, Tuple
from typing import TYPE_CHECKING
from cugraph.dask.common.part_utils import (
get_persisted_df_worker_map,
persist_dask_df_equal_parts_per_worker,
)
if TYPE_CHECKING:
from cugraph import Graph
src_n = "sources"
dst_n = "destinations"
indices_n = "indices"
weight_n = "weight"
edge_id_n = "edge_id"
edge_type_n = "edge_type"
batch_id_n = "batch_id"
offsets_n = "offsets"
hop_id_n = "hop_id"
map_n = "map"
map_offsets_n = "renumber_map_offsets"
start_col_name = "_START_"
batch_col_name = "_BATCH_"
def create_empty_df(indices_t, weight_t):
df = cudf.DataFrame(
{
src_n: numpy.empty(shape=0, dtype=indices_t),
dst_n: numpy.empty(shape=0, dtype=indices_t),
indices_n: numpy.empty(shape=0, dtype=weight_t),
}
)
return df
def create_empty_df_with_edge_props(
indices_t,
weight_t,
return_offsets=False,
renumber=False,
use_legacy_names=True,
include_hop_column=True,
compression="COO",
):
if compression != "COO":
majors_name = "major_offsets"
else:
majors_name = src_n if use_legacy_names else "majors"
minors_name = dst_n if use_legacy_names else "minors"
if renumber:
empty_df_renumber = cudf.DataFrame(
{
map_n: numpy.empty(shape=0, dtype=indices_t),
map_offsets_n: numpy.empty(shape=0, dtype="int32"),
}
)
if return_offsets:
df = cudf.DataFrame(
{
majors_name: numpy.empty(shape=0, dtype=indices_t),
minors_name: numpy.empty(shape=0, dtype=indices_t),
weight_n: numpy.empty(shape=0, dtype=weight_t),
edge_id_n: numpy.empty(shape=0, dtype=indices_t),
edge_type_n: numpy.empty(shape=0, dtype="int32"),
}
)
if include_hop_column:
df[hop_id_n] = numpy.empty(shape=0, dtype="int32")
empty_df_offsets = cudf.DataFrame(
{
offsets_n: numpy.empty(shape=0, dtype="int32"),
batch_id_n: numpy.empty(shape=0, dtype="int32"),
}
)
if renumber:
return df, empty_df_offsets, empty_df_renumber
else:
return df, empty_df_offsets
else:
df = cudf.DataFrame(
{
majors_name: numpy.empty(shape=0, dtype=indices_t),
minors_name: numpy.empty(shape=0, dtype=indices_t),
weight_n: numpy.empty(shape=0, dtype=weight_t),
edge_id_n: numpy.empty(shape=0, dtype=indices_t),
edge_type_n: numpy.empty(shape=0, dtype="int32"),
batch_id_n: numpy.empty(shape=0, dtype="int32"),
hop_id_n: numpy.empty(shape=0, dtype="int32"),
}
)
if renumber:
return df, empty_df_renumber
else:
return df
def __get_label_to_output_comm_rank(min_batch_id, max_batch_id, n_workers):
num_batches = max_batch_id - min_batch_id + 1
num_batches = int(num_batches)
z = cp.zeros(num_batches, dtype="int32")
s = cp.array_split(cp.arange(num_batches), n_workers)
for i, t in enumerate(s):
z[t] = i
return z
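The chunking in `__get_label_to_output_comm_rank` above can be sketched without CuPy (hypothetical helper `label_to_rank`), using the same semantics as `array_split`: the first `num_batches % n_workers` chunks receive one extra label, and every label in chunk i is routed to worker rank i.

```python
# Pure-Python sketch of __get_label_to_output_comm_rank(): split
# num_batches labels into n_workers contiguous chunks (the first chunks
# absorb the remainder, matching array_split) and map chunk i to rank i.
def label_to_rank(num_batches, n_workers):
    base, extra = divmod(num_batches, n_workers)
    ranks = []
    for i in range(n_workers):
        ranks.extend([i] * (base + (1 if i < extra else 0)))
    return ranks

print(label_to_rank(5, 2))  # [0, 0, 0, 1, 1]
```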
def _call_plc_uniform_neighbor_sample(
sID,
mg_graph_x,
st_x,
keep_batches_together,
n_workers,
min_batch_id,
max_batch_id,
fanout_vals,
with_replacement,
weight_t,
with_edge_properties,
random_state=None,
return_offsets=False,
return_hops=True,
prior_sources_behavior=None,
deduplicate_sources=False,
renumber=False,
use_legacy_names=True,
include_hop_column=True,
compress_per_hop=False,
compression="COO",
):
st_x = st_x[0]
start_list_x = st_x[start_col_name]
batch_id_list_x = st_x[batch_col_name] if batch_col_name in st_x else None
label_list = None
label_to_output_comm_rank = None
if keep_batches_together:
label_list = cp.arange(min_batch_id, max_batch_id + 1, dtype="int32")
label_to_output_comm_rank = __get_label_to_output_comm_rank(
min_batch_id, max_batch_id, n_workers
)
cupy_array_dict = pylibcugraph_uniform_neighbor_sample(
resource_handle=ResourceHandle(Comms.get_handle(sID).getHandle()),
input_graph=mg_graph_x,
start_list=start_list_x,
label_list=label_list,
label_to_output_comm_rank=label_to_output_comm_rank,
h_fan_out=fanout_vals,
with_replacement=with_replacement,
do_expensive_check=False,
with_edge_properties=with_edge_properties,
batch_id_list=batch_id_list_x,
random_state=random_state,
prior_sources_behavior=prior_sources_behavior,
deduplicate_sources=deduplicate_sources,
return_hops=return_hops,
renumber=renumber,
compression=compression,
compress_per_hop=compress_per_hop,
return_dict=True,
)
# have to import here due to circular import issue
from cugraph.sampling.sampling_utilities import (
sampling_results_from_cupy_array_dict,
)
return sampling_results_from_cupy_array_dict(
cupy_array_dict,
weight_t,
len(fanout_vals),
with_edge_properties=with_edge_properties,
return_offsets=return_offsets,
renumber=renumber,
use_legacy_names=use_legacy_names,
include_hop_column=include_hop_column,
)
def _mg_call_plc_uniform_neighbor_sample(
client,
session_id,
input_graph,
ddf,
keep_batches_together,
min_batch_id,
max_batch_id,
fanout_vals,
with_replacement,
weight_t,
indices_t,
with_edge_properties,
random_state,
return_offsets=False,
return_hops=True,
prior_sources_behavior=None,
deduplicate_sources=False,
renumber=False,
use_legacy_names=True,
include_hop_column=True,
compress_per_hop=False,
compression="COO",
):
n_workers = None
if keep_batches_together:
n_workers = get_n_workers()
if hasattr(min_batch_id, "compute"):
min_batch_id = min_batch_id.compute()
if hasattr(max_batch_id, "compute"):
max_batch_id = max_batch_id.compute()
result = [
client.submit(
_call_plc_uniform_neighbor_sample,
session_id,
input_graph._plc_graph[w],
starts,
keep_batches_together,
n_workers,
min_batch_id,
max_batch_id,
fanout_vals,
with_replacement,
weight_t=weight_t,
with_edge_properties=with_edge_properties,
# FIXME accept and properly transmute a numpy/cupy random state.
random_state=hash((random_state, w)),
return_offsets=return_offsets,
return_hops=return_hops,
prior_sources_behavior=prior_sources_behavior,
deduplicate_sources=deduplicate_sources,
renumber=renumber,
use_legacy_names=use_legacy_names, # remove in 23.12
include_hop_column=include_hop_column, # remove in 23.12
compress_per_hop=compress_per_hop,
compression=compression,
allow_other_workers=False,
pure=False,
)
for w, starts in ddf.items()
]
del ddf
empty_df = (
create_empty_df_with_edge_props(
indices_t,
weight_t,
return_offsets=return_offsets,
renumber=renumber,
use_legacy_names=use_legacy_names,
compression=compression,
include_hop_column=include_hop_column,
)
if with_edge_properties
else create_empty_df(indices_t, weight_t)
)
if not isinstance(empty_df, (list, tuple)):
empty_df = [empty_df]
wait(result)
nout = 1
if return_offsets:
nout += 1
if renumber:
nout += 1
result_split = [delayed(lambda x: x, nout=nout)(r) for r in result]
ddf = dask_cudf.from_delayed(
[r[0] for r in result_split], meta=empty_df[0], verify_meta=False
).persist()
return_dfs = [ddf]
if return_offsets:
ddf_offsets = dask_cudf.from_delayed(
[r[1] for r in result_split], meta=empty_df[1], verify_meta=False
).persist()
return_dfs.append(ddf_offsets)
if renumber:
ddf_renumber = dask_cudf.from_delayed(
[r[-1] for r in result_split], meta=empty_df[-1], verify_meta=False
).persist()
return_dfs.append(ddf_renumber)
wait(return_dfs)
wait([r.release() for r in result_split])
wait([r.release() for r in result])
del result
if len(return_dfs) == 1:
return return_dfs[0]
else:
return tuple(return_dfs)
def uniform_neighbor_sample(
input_graph: Graph,
start_list: Sequence,
fanout_vals: List[int],
*,
with_replacement: bool = True,
with_edge_properties: bool = False, # deprecated
with_batch_ids: bool = False,
keep_batches_together=False,
min_batch_id=None,
max_batch_id=None,
random_state: int = None,
return_offsets: bool = False,
return_hops: bool = True,
include_hop_column: bool = True, # deprecated
prior_sources_behavior: str = None,
deduplicate_sources: bool = False,
renumber: bool = False,
use_legacy_names=True, # deprecated
compress_per_hop=False,
compression="COO",
_multiple_clients: bool = False,
) -> Union[dask_cudf.DataFrame, Tuple[dask_cudf.DataFrame, dask_cudf.DataFrame]]:
"""
    Performs neighborhood sampling, which samples nodes from a graph based
    on the current node's neighbors, with a corresponding fanout value at
    each hop.
Parameters
----------
input_graph : cugraph.Graph
cuGraph graph, which contains connectivity information as dask cudf
edge list dataframe
start_list : int, list, cudf.Series, or dask_cudf.Series (int32 or int64)
a list of starting vertices for sampling
fanout_vals : list
List of branching out (fan-out) degrees per starting vertex for each
hop level.
with_replacement: bool, optional (default=True)
Flag to specify if the random sampling is done with replacement
with_edge_properties: bool, optional (default=False)
Deprecated.
Flag to specify whether to return edge properties (weight, edge id,
edge type, batch id, hop id) with the sampled edges.
with_batch_ids: bool, optional (default=False)
Flag to specify whether batch ids are present in the start_list
keep_batches_together: bool (optional, default=False)
If True, will ensure that the returned samples for each batch are on the
same partition.
min_batch_id: int (optional, default=None)
Required for the keep_batches_together option. The minimum batch id.
max_batch_id: int (optional, default=None)
Required for the keep_batches_together option. The maximum batch id.
random_state: int, optional
Random seed to use when making sampling calls.
return_offsets: bool, optional (default=False)
Whether to return the sampling results with batch ids
included as one dataframe, or to instead return two
dataframes, one with sampling results and one with
batch ids and their start offsets per rank.
return_hops: bool, optional (default=True)
Whether to return the sampling results with hop ids
corresponding to the hop where the edge appeared.
Defaults to True.
include_hop_column: bool, optional (default=True)
Deprecated. Defaults to True.
If True, will include the hop column even if
return_offsets is True. This option will
be removed in release 23.12.
prior_sources_behavior: str (Optional)
        Options are "carryover" and "exclude".
Default will leave the source list as-is.
Carryover will carry over sources from previous hops to the
current hop.
Exclude will exclude sources from previous hops from reappearing
as sources in future hops.
deduplicate_sources: bool, optional (default=False)
Whether to first deduplicate the list of possible sources
from the previous destinations before performing next
hop.
renumber: bool, optional (default=False)
Whether to renumber on a per-batch basis. If True,
will return the renumber map and renumber map offsets
as an additional dataframe.
use_legacy_names: bool, optional (default=True)
Whether to use the legacy column names (sources, destinations).
If True, will use "sources" and "destinations" as the column names.
If False, will use "majors" and "minors" as the column names.
Deprecated. Will be removed in release 23.12 in favor of always
using the new names "majors" and "minors".
compress_per_hop: bool, optional (default=False)
Whether to compress globally (default), or to produce a separate
compressed edgelist per hop.
compression: str, optional (default=COO)
Sets the compression type for the output minibatches.
Valid options are COO (default), CSR, CSC, DCSR, and DCSC.
    _multiple_clients: bool, optional (default=False)
        Internal flag to ensure sampling works with multiple Dask clients.
        Set to True to prevent hangs in a multi-client environment.
Returns
-------
result : dask_cudf.DataFrame or Tuple[dask_cudf.DataFrame, dask_cudf.DataFrame]
GPU distributed data frame containing several dask_cudf.Series
If with_edge_properties=True:
ddf['sources']: dask_cudf.Series
Contains the source vertices from the sampling result
ddf['destinations']: dask_cudf.Series
Contains the destination vertices from the sampling result
ddf['indices']: dask_cudf.Series
Contains the indices from the sampling result for path
reconstruction
If with_edge_properties=False:
If return_offsets=False:
df['sources']: dask_cudf.Series
Contains the source vertices from the sampling result
df['destinations']: dask_cudf.Series
Contains the destination vertices from the sampling result
df['edge_weight']: dask_cudf.Series
Contains the edge weights from the sampling result
df['edge_id']: dask_cudf.Series
Contains the edge ids from the sampling result
df['edge_type']: dask_cudf.Series
Contains the edge types from the sampling result
df['batch_id']: dask_cudf.Series
Contains the batch ids from the sampling result
df['hop_id']: dask_cudf.Series
Contains the hop ids from the sampling result
If renumber=True:
(adds the following dataframe)
renumber_df['map']: dask_cudf.Series
Contains the renumber maps for each batch
renumber_df['offsets']: dask_cudf.Series
Contains the batch offsets for the renumber maps
If return_offsets=True:
df['sources']: dask_cudf.Series
Contains the source vertices from the sampling result
df['destinations']: dask_cudf.Series
Contains the destination vertices from the sampling result
df['edge_weight']: dask_cudf.Series
Contains the edge weights from the sampling result
df['edge_id']: dask_cudf.Series
Contains the edge ids from the sampling result
df['edge_type']: dask_cudf.Series
Contains the edge types from the sampling result
df['hop_id']: dask_cudf.Series
Contains the hop ids from the sampling result
offsets_df['batch_id']: dask_cudf.Series
Contains the batch ids from the sampling result
offsets_df['offsets']: dask_cudf.Series
Contains the offsets of each batch in the sampling result
If renumber=True:
(adds the following dataframe)
renumber_df['map']: dask_cudf.Series
Contains the renumber maps for each batch
renumber_df['offsets']: dask_cudf.Series
Contains the batch offsets for the renumber maps
"""
if compression not in ["COO", "CSR", "CSC", "DCSR", "DCSC"]:
raise ValueError("compression must be one of COO, CSR, CSC, DCSR, or DCSC")
if with_edge_properties:
warning_msg = (
"The with_edge_properties flag is deprecated"
" and will be removed in the next release."
)
warnings.warn(warning_msg, FutureWarning)
if (
(compression != "COO")
and (not compress_per_hop)
and prior_sources_behavior != "exclude"
):
raise ValueError(
"hop-agnostic compression is only supported with"
" the exclude prior sources behavior due to limitations "
"of the libcugraph C++ API"
)
if compress_per_hop and prior_sources_behavior != "carryover":
raise ValueError(
"Compressing the edgelist per hop is only supported "
"with the carryover prior sources behavior due to limitations"
" of the libcugraph C++ API"
)
if include_hop_column:
warning_msg = (
"The include_hop_column flag is deprecated and will be"
" removed in the next release in favor of always "
"excluding the hop column when return_offsets is True"
)
warnings.warn(warning_msg, FutureWarning)
if compression != "COO":
raise ValueError(
"Including the hop id column is only supported with COO compression."
)
if isinstance(start_list, int):
start_list = [start_list]
if isinstance(start_list, list):
start_list = cudf.Series(
start_list,
dtype=input_graph.edgelist.edgelist_df[
input_graph.renumber_map.renumbered_src_col_name
].dtype,
)
elif with_edge_properties and not with_batch_ids:
if isinstance(start_list, (cudf.DataFrame, dask_cudf.DataFrame)):
raise ValueError("expected 1d input for start list without batch ids")
start_list = start_list.to_frame()
if isinstance(start_list, dask_cudf.DataFrame):
start_list = start_list.map_partitions(
lambda df: df.assign(
**{batch_id_n: cudf.Series(cp.zeros(len(df), dtype="int32"))}
)
).persist()
else:
start_list = start_list.reset_index(drop=True).assign(
**{batch_id_n: cudf.Series(cp.zeros(len(start_list), dtype="int32"))}
)
if keep_batches_together and min_batch_id is None:
raise ValueError(
"must provide min_batch_id if using keep_batches_together option"
)
if keep_batches_together and max_batch_id is None:
raise ValueError(
"must provide max_batch_id if using keep_batches_together option"
)
if renumber and not keep_batches_together:
raise ValueError(
"mg uniform_neighbor_sample requires that keep_batches_together=True "
"when performing renumbering."
)
# fanout_vals must be passed to pylibcugraph as a host array
if isinstance(fanout_vals, numpy.ndarray):
fanout_vals = fanout_vals.astype("int32")
elif isinstance(fanout_vals, list):
fanout_vals = numpy.asarray(fanout_vals, dtype="int32")
elif isinstance(fanout_vals, cp.ndarray):
fanout_vals = fanout_vals.get().astype("int32")
elif isinstance(fanout_vals, cudf.Series):
fanout_vals = fanout_vals.values_host.astype("int32")
else:
raise TypeError("fanout_vals must be a sequence, " f"got: {type(fanout_vals)}")
if "value" in input_graph.edgelist.edgelist_df:
weight_t = input_graph.edgelist.edgelist_df["value"].dtype
else:
weight_t = "float32"
if "_SRC_" in input_graph.edgelist.edgelist_df:
indices_t = input_graph.edgelist.edgelist_df["_SRC_"].dtype
elif src_n in input_graph.edgelist.edgelist_df:
indices_t = input_graph.edgelist.edgelist_df[src_n].dtype
else:
indices_t = numpy.int32
if isinstance(start_list, (cudf.Series, dask_cudf.Series)):
start_list = start_list.rename(start_col_name)
ddf = start_list.to_frame()
else:
ddf = start_list
columns = ddf.columns
ddf = ddf.rename(
columns={columns[0]: start_col_name, columns[-1]: batch_col_name}
)
if input_graph.renumbered:
ddf = input_graph.lookup_internal_vertex_id(ddf, column_name=start_col_name)
client = get_client()
session_id = Comms.get_session_id()
n_workers = get_n_workers()
if isinstance(ddf, cudf.DataFrame):
ddf = dask_cudf.from_cudf(ddf, npartitions=n_workers)
ddf = ddf.repartition(npartitions=n_workers)
ddf = persist_dask_df_equal_parts_per_worker(ddf, client)
ddf = get_persisted_df_worker_map(ddf, client)
sample_call_kwargs = {
"client": client,
"session_id": session_id,
"input_graph": input_graph,
"ddf": ddf,
"keep_batches_together": keep_batches_together,
"min_batch_id": min_batch_id,
"max_batch_id": max_batch_id,
"fanout_vals": fanout_vals,
"with_replacement": with_replacement,
"weight_t": weight_t,
"indices_t": indices_t,
"with_edge_properties": with_edge_properties,
"random_state": random_state,
"return_offsets": return_offsets,
"return_hops": return_hops,
"prior_sources_behavior": prior_sources_behavior,
"deduplicate_sources": deduplicate_sources,
"renumber": renumber,
"use_legacy_names": use_legacy_names,
"include_hop_column": include_hop_column,
"compress_per_hop": compress_per_hop,
"compression": compression,
}
if _multiple_clients:
# Distributed centralized lock to allow
# two disconnected processes (clients) to coordinate a lock
# https://docs.dask.org/en/stable/futures.html?highlight=lock#distributed.Lock
lock = Lock("plc_graph_access")
if lock.acquire(timeout=100):
try:
ddf = _mg_call_plc_uniform_neighbor_sample(**sample_call_kwargs)
finally:
lock.release()
else:
raise RuntimeError(
"Failed to acquire lock(plc_graph_access) while trying to run sampling"
)
else:
ddf = _mg_call_plc_uniform_neighbor_sample(**sample_call_kwargs)
if return_offsets:
if renumber:
ddf, offsets_ddf, renumber_df = ddf
else:
ddf, offsets_ddf = ddf
else:
if renumber:
ddf, renumber_df = ddf
if input_graph.renumbered and not renumber:
if use_legacy_names:
ddf = input_graph.unrenumber(ddf, "sources", preserve_order=True)
ddf = input_graph.unrenumber(ddf, "destinations", preserve_order=True)
else:
ddf = input_graph.unrenumber(ddf, "majors", preserve_order=True)
ddf = input_graph.unrenumber(ddf, "minors", preserve_order=True)
if return_offsets:
if renumber:
return ddf, offsets_ddf, renumber_df
else:
return ddf, offsets_ddf
if renumber:
return ddf, renumber_df
return ddf
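The fanout handling above coerces several accepted input types into a host int32 array before the pylibcugraph call. A minimal standalone sketch of that coercion, using only numpy (the cupy and cudf branches of the real function are omitted here, so this is illustrative rather than the actual implementation):

```python
import numpy as np

def normalize_fanout(fanout_vals):
    """Coerce a fanout specification to a host int32 numpy array.

    Mirrors the host-array requirement of the pylibcugraph sampling API:
    lists and numpy arrays are accepted; anything else is rejected.
    (The real function also handles cupy arrays and cudf Series.)
    """
    if isinstance(fanout_vals, np.ndarray):
        return fanout_vals.astype("int32")
    elif isinstance(fanout_vals, list):
        return np.asarray(fanout_vals, dtype="int32")
    else:
        raise TypeError(f"fanout_vals must be a sequence, got: {type(fanout_vals)}")
```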
rapidsai_public_repos/cugraph/python/cugraph/cugraph/dask/sampling/__init__.py
# Copyright (c) 2022, NVIDIA CORPORATION.
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
rapidsai_public_repos/cugraph/python/cugraph/cugraph/dask/cores/core_number.py
# Copyright (c) 2022-2023, NVIDIA CORPORATION.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
from dask.distributed import wait, default_client
import cugraph.dask.comms.comms as Comms
import dask_cudf
import cudf
from pylibcugraph import ResourceHandle, core_number as pylibcugraph_core_number
def convert_to_cudf(cp_arrays):
"""
Creates a cudf DataFrame from cupy arrays from pylibcugraph wrapper
"""
cupy_vertices, cupy_core_number = cp_arrays
df = cudf.DataFrame()
df["vertex"] = cupy_vertices
df["core_number"] = cupy_core_number
return df
def _call_plc_core_number(sID, mg_graph_x, dt_x, do_expensive_check):
return pylibcugraph_core_number(
resource_handle=ResourceHandle(Comms.get_handle(sID).getHandle()),
graph=mg_graph_x,
degree_type=dt_x,
do_expensive_check=do_expensive_check,
)
def core_number(input_graph, degree_type="bidirectional"):
"""
Compute the core numbers for the nodes of the graph G. A k-core of a graph
is a maximal subgraph that contains nodes of degree k or more.
A node has a core number of k if it belongs to a k-core but not to a (k+1)-core.
This call does not support a graph with self-loops and parallel
edges.
Parameters
----------
input_graph : cugraph.graph
cuGraph graph descriptor, should contain the connectivity information,
(edge weights are not used in this algorithm).
The current implementation only supports undirected graphs.
degree_type: str, (default="bidirectional")
This option determines if the core number computation should be based
on input, output, or both directed edges, with valid values being
"incoming", "outgoing", and "bidirectional" respectively.
Returns
-------
result : dask_cudf.DataFrame
GPU distributed data frame containing 2 dask_cudf.Series
ddf['vertex']: dask_cudf.Series
Contains the core number vertices
ddf['core_number']: dask_cudf.Series
Contains the core number of vertices
"""
if input_graph.is_directed():
raise ValueError("input graph must be undirected")
if degree_type not in ["incoming", "outgoing", "bidirectional"]:
raise ValueError(
f"'degree_type' must be either incoming, "
f"outgoing or bidirectional, got: {degree_type}"
)
# Initialize dask client
client = default_client()
do_expensive_check = False
result = [
client.submit(
_call_plc_core_number,
Comms.get_session_id(),
input_graph._plc_graph[w],
degree_type,
do_expensive_check,
workers=[w],
allow_other_workers=False,
)
for w in Comms.get_workers()
]
wait(result)
cudf_result = [client.submit(convert_to_cudf, cp_arrays) for cp_arrays in result]
wait(cudf_result)
ddf = dask_cudf.from_delayed(cudf_result).persist()
wait(ddf)
# Wait until the inactive futures are released
wait([(r.release(), c_r.release()) for r, c_r in zip(result, cudf_result)])
if input_graph.renumbered:
ddf = input_graph.unrenumber(ddf, "vertex")
return ddf
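For intuition about what `core_number` returns, the core numbers of a small undirected graph can be computed with the textbook peeling procedure: repeatedly remove a vertex of minimum remaining degree, recording the running maximum of those degrees. This pure-Python sketch is illustrative only; the function above delegates the actual computation to pylibcugraph on GPU:

```python
def core_numbers(edges):
    """Compute core numbers of an undirected graph given as (u, v) pairs.

    Uses the classic peeling algorithm: repeatedly remove a vertex of
    minimum remaining degree; its core number is the largest minimum
    degree observed up to its removal.
    """
    adj = {}
    for u, v in edges:
        adj.setdefault(u, set()).add(v)
        adj.setdefault(v, set()).add(u)
    core = {}
    k = 0
    while adj:
        # pick a vertex of minimum remaining degree
        v = min(adj, key=lambda x: len(adj[x]))
        k = max(k, len(adj[v]))
        core[v] = k
        # remove v from the graph
        for nbr in adj.pop(v):
            adj[nbr].discard(v)
    return core
```

For example, in a triangle with a pendant vertex, the triangle vertices get core number 2 and the pendant gets 1.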
rapidsai_public_repos/cugraph/python/cugraph/cugraph/dask/cores/k_core.py
# Copyright (c) 2022-2023, NVIDIA CORPORATION.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
from dask.distributed import wait, default_client
import cugraph.dask.comms.comms as Comms
from cugraph.dask.common.input_utils import get_distributed_data
import dask_cudf
import cudf
import cugraph.dask as dcg
from pylibcugraph import ResourceHandle, k_core as pylibcugraph_k_core
def convert_to_cudf(cp_arrays):
"""
Creates a cudf DataFrame from cupy arrays from pylibcugraph wrapper
"""
cupy_src_vertices, cupy_dst_vertices, cupy_weights = cp_arrays
df = cudf.DataFrame()
df["src"] = cupy_src_vertices
df["dst"] = cupy_dst_vertices
df["weights"] = cupy_weights
return df
def _call_plc_k_core(sID, mg_graph_x, k, degree_type, core_result, do_expensive_check):
return pylibcugraph_k_core(
resource_handle=ResourceHandle(Comms.get_handle(sID).getHandle()),
graph=mg_graph_x,
k=k,
degree_type=degree_type,
core_result=core_result,
do_expensive_check=do_expensive_check,
)
def k_core(input_graph, k=None, core_number=None, degree_type="bidirectional"):
"""
Compute the k-core of the graph G based on the out degree of its nodes. A
k-core of a graph is a maximal subgraph that contains nodes of degree k or
more. This call does not support a graph with self-loops and parallel
edges.
Parameters
----------
input_graph : cuGraph.Graph
cuGraph graph descriptor with connectivity information. The graph
should contain undirected edges where undirected edges are represented
as directed edges in both directions. While this graph can contain edge
weights, they don't participate in the calculation of the k-core.
The current implementation only supports undirected graphs.
k : int, optional (default=None)
Order of the core. This value must not be negative. If set to None, the
main core is returned.
degree_type: str (default="bidirectional")
This option determines if the core number computation should be based
on input, output, or both directed edges, with valid values being
"incoming", "outgoing", and "bidirectional" respectively.
core_number : cudf.DataFrame or dask_cudf.DataFrame, optional (default=None)
Precomputed core number of the nodes of the graph G containing two
cudf.Series of size V: the vertex identifiers and the corresponding
core number values. If set to None, the core numbers of the nodes are
calculated internally.
core_number['vertex'] : cudf.Series or dask_cudf.Series
Contains the vertex identifiers
core_number['values'] : cudf.Series or dask_cudf.Series
Contains the core number of vertices
Returns
-------
result : dask_cudf.DataFrame
GPU distributed data frame containing the K Core of the input graph
ddf['src']: dask_cudf.Series
Contains sources of the K Core
ddf['dst']: dask_cudf.Series
Contains destinations of the K Core
and/or
ddf['weights']: dask_cudf.Series
Contains weights of the K Core
Examples
--------
>>> import cugraph.dask as dcg
>>> import dask_cudf
>>> # ... Init a DASK Cluster
>>> # see https://docs.rapids.ai/api/cugraph/stable/dask-cugraph.html
>>> # Download dataset from https://github.com/rapidsai/cugraph/datasets/..
>>> chunksize = dcg.get_chunksize(datasets_path / "karate.csv")
>>> ddf = dask_cudf.read_csv(datasets_path / "karate.csv",
... chunksize=chunksize, delimiter=" ",
... names=["src", "dst", "value"],
... dtype=["int32", "int32", "float32"])
>>> dg = cugraph.Graph(directed=False)
>>> dg.from_dask_cudf_edgelist(ddf, source='src', destination='dst',
... edge_attr='value')
>>> KCore_df = dcg.k_core(dg)
"""
if degree_type not in ["incoming", "outgoing", "bidirectional"]:
raise ValueError(
f"'degree_type' must be either incoming, "
f"outgoing or bidirectional, got: {degree_type}"
)
if input_graph.is_directed():
raise ValueError("input graph must be undirected")
if core_number is None:
core_number = dcg.core_number(input_graph)
if input_graph.renumbered is True:
if len(input_graph.renumber_map.implementation.col_names) > 1:
cols = core_number.columns[:-1].to_list()
else:
cols = "vertex"
core_number = input_graph.add_internal_vertex_id(
core_number, "vertex", cols
)
if not isinstance(core_number, dask_cudf.DataFrame):
if isinstance(core_number, cudf.DataFrame):
# convert to dask_cudf in order to distribute the edges
core_number = dask_cudf.from_cudf(core_number, input_graph._npartitions)
else:
raise TypeError(
"'core_number' must be either None or of "
f"type cudf/dask_cudf, got: {type(core_number)}"
)
core_number = core_number.rename(columns={"core_number": "values"})
if k is None:
k = core_number["values"].max().compute()
core_number = get_distributed_data(core_number)
wait(core_number)
core_number = core_number.worker_to_parts
client = default_client()
do_expensive_check = False
result = [
client.submit(
_call_plc_k_core,
Comms.get_session_id(),
input_graph._plc_graph[w],
k,
degree_type,
core_number[w][0],
do_expensive_check,
workers=[w],
allow_other_workers=False,
)
for w in Comms.get_workers()
]
wait(result)
cudf_result = [client.submit(convert_to_cudf, cp_arrays) for cp_arrays in result]
wait(cudf_result)
ddf = dask_cudf.from_delayed(cudf_result).persist()
wait(ddf)
# FIXME: Dask doesn't always release it fast enough.
# For instance if the algo is run several times with
# the same PLC graph, the current iteration might try to cache
# the past iteration's futures and this can cause a hang if some
# of those futures get released midway
del result
del cudf_result
if input_graph.renumbered:
ddf = input_graph.unrenumber(ddf, "src")
ddf = input_graph.unrenumber(ddf, "dst")
return ddf
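The precomputed `core_number` path above relies on the standard relationship that the k-core is the subgraph induced by vertices whose core number is at least k. A hypothetical pure-Python illustration of that filtering step (not the GPU code path, which runs in pylibcugraph):

```python
def k_core_edges(edges, core, k):
    """Return the edges of the k-core of an undirected graph.

    An edge survives only if both endpoints have core number >= k.
    `core` maps vertex -> precomputed core number, analogous to how
    dcg.k_core accepts a precomputed `core_number` argument.
    """
    return [(u, v) for u, v in edges if core[u] >= k and core[v] >= k]
```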
rapidsai_public_repos/cugraph/python/cugraph/cugraph/dask/cores/__init__.py
# Copyright (c) 2022, NVIDIA CORPORATION.
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from .core_number import core_number
from .k_core import k_core
rapidsai_public_repos/cugraph/python/cugraph/cugraph/dask/centrality/eigenvector_centrality.py
# Copyright (c) 2022-2023, NVIDIA CORPORATION.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
from dask.distributed import wait, default_client
from pylibcugraph import (
eigenvector_centrality as pylib_eigen,
ResourceHandle,
)
import cugraph.dask.comms.comms as Comms
import dask_cudf
import cudf
import warnings
def _call_plc_eigenvector_centrality(
sID,
mg_graph_x,
max_iterations,
epsilon,
do_expensive_check,
):
return pylib_eigen(
resource_handle=ResourceHandle(Comms.get_handle(sID).getHandle()),
graph=mg_graph_x,
epsilon=epsilon,
max_iterations=max_iterations,
do_expensive_check=do_expensive_check,
)
def convert_to_cudf(cp_arrays):
"""
create a cudf DataFrame from cupy arrays
"""
cupy_vertices, cupy_values = cp_arrays
df = cudf.DataFrame()
df["vertex"] = cupy_vertices
df["eigenvector_centrality"] = cupy_values
return df
def eigenvector_centrality(input_graph, max_iter=100, tol=1.0e-6):
"""
Compute the eigenvector centrality for a graph G.
Eigenvector centrality computes the centrality for a node based on the
centrality of its neighbors. The eigenvector centrality for node i is the
i-th element of the vector x defined by the eigenvector equation.
Parameters
----------
input_graph : cuGraph.Graph or networkx.Graph
cuGraph graph descriptor with connectivity information. The graph can
contain either directed or undirected edges.
max_iter : int, optional (default=100)
The maximum number of iterations before an answer is returned. This can
be used to limit the execution time and do an early exit before the
solver reaches the convergence tolerance.
tol : float, optional (default=1e-6)
Set the tolerance of the approximation; this parameter should be a small
magnitude value.
The lower the tolerance the better the approximation. If this value is
0.0f, cuGraph will use the default value which is 1.0e-6.
Setting too small a tolerance can lead to non-convergence due to
numerical roundoff. Usually values between 1e-2 and 1e-6 are
acceptable.
normalized : not supported
If True, normalize the resulting eigenvector centrality values.
Returns
-------
df : dask_cudf.DataFrame
GPU data frame containing two cudf.Series of size V: the vertex
identifiers and the corresponding eigenvector centrality values.
df['vertex'] : cudf.Series
Contains the vertex identifiers
df['eigenvector_centrality'] : cudf.Series
Contains the eigenvector centrality of vertices
Examples
--------
>>> import cugraph.dask as dcg
>>> import dask_cudf
>>> # ... Init a DASK Cluster
>>> # see https://docs.rapids.ai/api/cugraph/stable/dask-cugraph.html
>>> # Download dataset from https://github.com/rapidsai/cugraph/datasets/..
>>> chunksize = dcg.get_chunksize(datasets_path / "karate.csv")
>>> ddf = dask_cudf.read_csv(datasets_path / "karate.csv",
... chunksize=chunksize, delimiter=" ",
... names=["src", "dst", "value"],
... dtype=["int32", "int32", "float32"])
>>> dg = cugraph.Graph()
>>> dg.from_dask_cudf_edgelist(ddf, source='src', destination='dst',
... edge_attr='value')
>>> ec = dcg.eigenvector_centrality(dg)
"""
client = default_client()
if input_graph.store_transposed is False:
warning_msg = (
"Eigenvector centrality expects the 'store_transposed' "
"flag to be set to 'True' for optimal performance "
"during the graph creation"
)
warnings.warn(warning_msg, UserWarning)
# FIXME: should we add this parameter as an option?
do_expensive_check = False
cupy_result = [
client.submit(
_call_plc_eigenvector_centrality,
Comms.get_session_id(),
input_graph._plc_graph[w],
max_iter,
tol,
do_expensive_check,
workers=[w],
allow_other_workers=False,
)
for w in Comms.get_workers()
]
wait(cupy_result)
cudf_result = [
client.submit(
convert_to_cudf, cp_arrays, workers=client.who_has(cp_arrays)[cp_arrays.key]
)
for cp_arrays in cupy_result
]
wait(cudf_result)
ddf = dask_cudf.from_delayed(cudf_result).persist()
wait(ddf)
# Wait until the inactive futures are released
wait([(r.release(), c_r.release()) for r, c_r in zip(cupy_result, cudf_result)])
if input_graph.renumbered:
ddf = input_graph.unrenumber(ddf, "vertex")
return ddf
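Conceptually, eigenvector centrality is the principal eigenvector of the adjacency matrix, which an iterative solver approximates by power iteration for up to `max_iter` steps or until successive iterates differ by less than the tolerance. A small numpy sketch of that idea (a dense, CPU-only illustration, not the cuGraph solver):

```python
import numpy as np

def eigenvector_centrality_dense(A, max_iter=100, tol=1e-6):
    """Power iteration on a dense adjacency matrix A.

    Returns an L2-normalized approximation of the principal
    eigenvector; raises if the iteration does not converge.
    """
    n = A.shape[0]
    x = np.ones(n) / np.sqrt(n)           # uniform starting vector
    for _ in range(max_iter):
        x_new = A @ x                     # one multiplication step
        x_new /= np.linalg.norm(x_new)    # renormalize
        if np.linalg.norm(x_new - x) < tol:
            return x_new
        x = x_new
    raise RuntimeError("power iteration failed to converge")
```

On a complete graph every vertex is equally central, so the result is the uniform unit vector.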
rapidsai_public_repos/cugraph/python/cugraph/cugraph/dask/centrality/betweenness_centrality.py
# Copyright (c) 2022-2023, NVIDIA CORPORATION.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
from dask.distributed import wait, get_client
from pylibcugraph import (
ResourceHandle,
betweenness_centrality as pylibcugraph_betweenness_centrality,
edge_betweenness_centrality as pylibcugraph_edge_betweenness_centrality,
)
import cugraph.dask.comms.comms as Comms
from cugraph.dask.common.input_utils import get_distributed_data
import dask_cudf
import cudf
import cupy as cp
import warnings
import dask
from typing import Union
def convert_to_cudf(cp_arrays: cp.ndarray, edge_bc: bool) -> cudf.DataFrame:
"""
create a cudf DataFrame from cupy arrays
"""
df = cudf.DataFrame()
if edge_bc:
cupy_src_vertices, cupy_dst_vertices, cupy_values, cupy_edge_ids = cp_arrays
df["src"] = cupy_src_vertices
df["dst"] = cupy_dst_vertices
df["betweenness_centrality"] = cupy_values
if cupy_edge_ids is not None:
df["edge_id"] = cupy_edge_ids
else:
cupy_vertices, cupy_values = cp_arrays
df["vertex"] = cupy_vertices
df["betweenness_centrality"] = cupy_values
return df
def _call_plc_betweenness_centrality(
mg_graph_x,
sID: bytes,
k: Union[int, cudf.Series],
random_state: int,
normalized: bool,
endpoints: bool,
do_expensive_check: bool,
edge_bc: bool,
) -> cudf.DataFrame:
if edge_bc:
cp_arrays = pylibcugraph_edge_betweenness_centrality(
resource_handle=ResourceHandle(Comms.get_handle(sID).getHandle()),
graph=mg_graph_x,
k=k,
random_state=random_state,
normalized=normalized,
do_expensive_check=do_expensive_check,
)
else:
cp_arrays = pylibcugraph_betweenness_centrality(
resource_handle=ResourceHandle(Comms.get_handle(sID).getHandle()),
graph=mg_graph_x,
k=k,
random_state=random_state,
normalized=normalized,
include_endpoints=endpoints,
do_expensive_check=do_expensive_check,
)
return convert_to_cudf(cp_arrays, edge_bc)
def _mg_call_plc_betweenness_centrality(
input_graph,
client: dask.distributed.client.Client,
sID: bytes,
k: dict,
random_state: int,
normalized: bool,
do_expensive_check: bool,
endpoints: bool = False,
edge_bc: bool = False,
) -> dask_cudf.DataFrame:
result = [
client.submit(
_call_plc_betweenness_centrality,
input_graph._plc_graph[w],
sID,
k if isinstance(k, (int, type(None))) else k[w][0],
hash((random_state, i)),
normalized,
endpoints,
do_expensive_check,
edge_bc,
workers=[w],
allow_other_workers=False,
pure=False,
)
for i, w in enumerate(Comms.get_workers())
]
wait(result)
ddf = dask_cudf.from_delayed(result, verify_meta=False).persist()
wait(ddf)
wait([r.release() for r in result])
return ddf
def betweenness_centrality(
input_graph,
k: Union[
int, list, cudf.Series, cudf.DataFrame, dask_cudf.Series, dask_cudf.DataFrame
] = None,
normalized: bool = True,
weight: cudf.DataFrame = None,
endpoints: bool = False,
random_state: int = None,
) -> dask_cudf.DataFrame:
"""
Compute the betweenness centrality for all vertices of the graph G.
Betweenness centrality is a measure of the number of shortest paths that
pass through a vertex. A vertex with a high betweenness centrality score
has more paths passing through it and is therefore believed to be more
important.
To improve performance, rather than doing an all-pairs shortest path computation,
a sample of k starting vertices can be used.
CuGraph does not currently support 'weight' parameters.
Parameters
----------
input_graph: cuGraph.Graph
The graph can be either directed (Graph(directed=True)) or undirected.
The current implementation uses a parallel variation of the Brandes
Algorithm (2001) to compute exact or approximate betweenness.
If weights are provided in the edgelist, they will not be used.
k : int, list or (dask)cudf object or None, optional (default=None)
If k is not None, use k node samples to estimate betweenness. Higher
values give better approximation. If k is either a list, a cudf DataFrame,
or a dask_cudf DataFrame, then its contents are assumed to be vertex
identifiers to be used for estimation. If k is None (the default), all the
vertices are used to estimate betweenness. Vertices obtained through
sampling or defined as a list will be used as sources for traversals inside
the algorithm.
normalized : bool, optional (default=True)
If True, normalize the resulting betweenness centrality values by
__2 / ((n - 1) * (n - 2))__ for undirected Graphs, and
__1 / ((n - 1) * (n - 2))__ for directed Graphs
where n is the number of nodes in G.
Normalization will ensure that values are in [0, 1],
this normalization scales for the highest possible value where one
node is crossed by every single shortest path.
weight : (dask)cudf.DataFrame, optional (default=None)
Specifies the weights to be used for each edge.
Should contain a mapping between
edges and weights.
(Not Supported)
endpoints : bool, optional (default=False)
If true, include the endpoints in the shortest path counts.
random_state : int, optional (default=None)
if k is specified and k is an integer, use random_state to initialize the
random number generator.
Using None defaults to a hash of process id, time, and hostname.
If k is None, a list, or a cudf object, the random_state parameter is
ignored.
Returns
-------
betweenness_centrality : dask_cudf.DataFrame
GPU distributed data frame containing two dask_cudf.Series of size V:
the vertex identifiers and the corresponding betweenness centrality values.
ddf['vertex'] : dask_cudf.Series
Contains the vertex identifiers
ddf['betweenness_centrality'] : dask_cudf.Series
Contains the betweenness centrality of vertices
Examples
--------
>>> import cugraph.dask as dcg
>>> import dask_cudf
>>> # ... Init a DASK Cluster
>>> # see https://docs.rapids.ai/api/cugraph/stable/dask-cugraph.html
>>> # Download dataset from https://github.com/rapidsai/cugraph/datasets/..
>>> chunksize = dcg.get_chunksize(datasets_path / "karate.csv")
>>> ddf = dask_cudf.read_csv(datasets_path / "karate.csv",
... chunksize=chunksize, delimiter=" ",
... names=["src", "dst", "value"],
... dtype=["int32", "int32", "float32"])
>>> dg = cugraph.Graph(directed=True)
>>> dg.from_dask_cudf_edgelist(ddf, source='src', destination='dst')
>>> bc = dcg.betweenness_centrality(dg)
"""
if input_graph.store_transposed is True:
warning_msg = (
"Betweenness centrality expects the 'store_transposed' flag "
"to be set to 'False' for optimal performance during "
"the graph creation"
)
warnings.warn(warning_msg, UserWarning)
if weight is not None:
raise NotImplementedError(
"weighted implementation of betweenness "
"centrality not currently supported"
)
if not isinstance(k, (dask_cudf.DataFrame, dask_cudf.Series)):
if isinstance(k, (cudf.DataFrame, cudf.Series, list)):
if isinstance(k, list):
k_dtype = input_graph.nodes().dtype
k = cudf.Series(k, dtype=k_dtype)
if isinstance(k, (cudf.Series, cudf.DataFrame)):
splits = cp.array_split(cp.arange(len(k)), len(Comms.get_workers()))
k = {w: [k.iloc[splits[i]]] for i, w in enumerate(Comms.get_workers())}
else:
if k is not None:
k = get_distributed_data(k)
wait(k)
k = k.worker_to_parts
if input_graph.renumbered:
if isinstance(k, dask_cudf.DataFrame):
tmp_col_names = k.columns
elif isinstance(k, dask_cudf.Series):
tmp_col_names = None
if isinstance(k, (dask_cudf.DataFrame, dask_cudf.Series)):
k = input_graph.lookup_internal_vertex_id(k, tmp_col_names)
# FIXME: should we add this parameter as an option?
do_expensive_check = False
client = get_client()
ddf = _mg_call_plc_betweenness_centrality(
input_graph=input_graph,
client=client,
sID=Comms.get_session_id(),
k=k,
random_state=random_state,
normalized=normalized,
endpoints=endpoints,
do_expensive_check=do_expensive_check,
)
if input_graph.renumbered:
return input_graph.unrenumber(ddf, "vertex")
return ddf
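The Brandes (2001) dependency-accumulation scheme referenced in the docstring can be sketched for an unweighted, undirected graph in pure Python (single-threaded and CPU-only, unlike the distributed GPU implementation above; normalization follows the 2 / ((n - 1) * (n - 2)) convention described in the docstring):

```python
from collections import deque

def betweenness(adj, normalized=True):
    """Brandes' algorithm for unweighted, undirected graphs.

    adj maps each vertex to an iterable of neighbors. Returns a dict
    vertex -> betweenness score; undirected path counts are halved.
    """
    bc = {v: 0.0 for v in adj}
    for s in adj:
        # BFS phase: shortest-path counts (sigma) and predecessor lists
        stack, pred = [], {v: [] for v in adj}
        sigma = {v: 0 for v in adj}
        dist = {v: -1 for v in adj}
        sigma[s], dist[s] = 1, 0
        q = deque([s])
        while q:
            v = q.popleft()
            stack.append(v)
            for w in adj[v]:
                if dist[w] < 0:
                    dist[w] = dist[v] + 1
                    q.append(w)
                if dist[w] == dist[v] + 1:
                    sigma[w] += sigma[v]
                    pred[w].append(v)
        # Accumulation phase: back-propagate dependencies from the leaves
        delta = {v: 0.0 for v in adj}
        while stack:
            w = stack.pop()
            for v in pred[w]:
                delta[v] += sigma[v] / sigma[w] * (1 + delta[w])
            if w != s:
                bc[w] += delta[w]
    n = len(adj)
    for v in bc:
        bc[v] /= 2.0  # undirected: each shortest path is counted twice
        if normalized and n > 2:
            bc[v] *= 2.0 / ((n - 1) * (n - 2))
    return bc
```

On the path graph 0-1-2, vertex 1 lies on the only shortest path between 0 and 2, so its normalized score is 1.0 while the endpoints score 0.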
def edge_betweenness_centrality(
input_graph,
k: Union[
int, list, cudf.Series, cudf.DataFrame, dask_cudf.Series, dask_cudf.DataFrame
] = None,
normalized: bool = True,
weight: cudf.DataFrame = None,
random_state: int = None,
) -> dask_cudf.DataFrame:
"""
Compute the edge betweenness centrality for all edges of the graph G.
Betweenness centrality is a measure of the number of shortest paths
that pass over an edge. An edge with a high betweenness centrality
score has more paths passing over it and is therefore believed to be
more important.
To improve performance, rather than doing an all-pairs shortest path computation,
a sample of k starting vertices can be used.
CuGraph does not currently support the 'weight' parameter.
Parameters
----------
input_graph: cuGraph.Graph
The graph can be either directed (Graph(directed=True)) or undirected.
The current implementation uses a parallel variation of the Brandes
Algorithm (2001) to compute exact or approximate betweenness.
If weights are provided in the edgelist, they will not be used.
k : int, list or (dask)cudf object or None, optional (default=None)
If k is not None, use k node samples to estimate betweenness. Higher
values give better approximation. If k is either a list, a cudf DataFrame,
or a dask_cudf DataFrame, then its contents are assumed to be vertex
identifiers to be used for estimation. If k is None (the default), all the
vertices are used to estimate betweenness. Vertices obtained through
sampling or defined as a list will be used as sources for traversals inside
the algorithm.
normalized : bool, optional (default=True)
If True, normalize the resulting betweenness centrality values by
__2 / (n * (n - 1))__ for undirected Graphs, and
__1 / (n * (n - 1))__ for directed Graphs
where n is the number of nodes in G.
Normalization ensures that the values are in [0, 1]; it scales by the
highest possible value, which is reached when one edge is crossed by
every single shortest path.
weight : (dask)cudf.DataFrame, optional (default=None)
Specifies the weights to be used for each edge.
Should contain a mapping between
edges and weights.
(Not Supported)
random_state : int, optional (default=None)
If k is specified and k is an integer, use random_state to initialize
the random number generator. Using None defaults to a hash of process
id, time, and hostname.
If k is either None, a list, or a cudf object, the random_state
parameter is ignored.
Returns
-------
betweenness_centrality : dask_cudf.DataFrame
GPU distributed data frame containing the following dask_cudf.Series
columns, one row per edge:
ddf['src'] : dask_cudf.Series
Contains the vertex identifiers of the source of each edge
ddf['dst'] : dask_cudf.Series
Contains the vertex identifiers of the destination of each edge
ddf['betweenness_centrality'] : dask_cudf.Series
Contains the betweenness centrality of edges
ddf["edge_id"] : dask_cudf.Series
Contains the edge ids of edges if present.
Examples
--------
>>> import cugraph.dask as dcg
>>> import dask_cudf
>>> # ... Init a DASK Cluster
>>> # see https://docs.rapids.ai/api/cugraph/stable/dask-cugraph.html
>>> # Download dataset from https://github.com/rapidsai/cugraph/datasets/..
>>> chunksize = dcg.get_chunksize(datasets_path / "karate.csv")
>>> ddf = dask_cudf.read_csv(datasets_path / "karate.csv",
... chunksize=chunksize, delimiter=" ",
... names=["src", "dst", "value"],
... dtype=["int32", "int32", "float32"])
>>> dg = cugraph.Graph(directed=True)
>>> dg.from_dask_cudf_edgelist(ddf, source='src', destination='dst')
>>> ebc = dcg.edge_betweenness_centrality(dg)
"""
if input_graph.store_transposed is True:
warning_msg = (
"Betweenness centrality expects the 'store_transposed' flag "
"to be set to 'False' for optimal performance during "
"the graph creation"
)
warnings.warn(warning_msg, UserWarning)
if weight is not None:
raise NotImplementedError(
"weighted implementation of edge betweenness "
"centrality not currently supported"
)
if not isinstance(k, (dask_cudf.DataFrame, dask_cudf.Series)):
if isinstance(k, (cudf.DataFrame, cudf.Series, list)):
if isinstance(k, list):
k_dtype = input_graph.nodes().dtype
k = cudf.Series(k, dtype=k_dtype)
if isinstance(k, (cudf.Series, cudf.DataFrame)):
splits = cp.array_split(cp.arange(len(k)), len(Comms.get_workers()))
k = {w: [k.iloc[splits[i]]] for i, w in enumerate(Comms.get_workers())}
else:
if k is not None:
k = get_distributed_data(k)
wait(k)
k = k.worker_to_parts
if input_graph.renumbered:
if isinstance(k, dask_cudf.DataFrame):
tmp_col_names = k.columns
elif isinstance(k, dask_cudf.Series):
tmp_col_names = None
if isinstance(k, (dask_cudf.DataFrame, dask_cudf.Series)):
k = input_graph.lookup_internal_vertex_id(k, tmp_col_names)
# FIXME: should we add this parameter as an option?
do_expensive_check = False
client = get_client()
ddf = _mg_call_plc_betweenness_centrality(
input_graph=input_graph,
client=client,
sID=Comms.get_session_id(),
k=k,
random_state=random_state,
normalized=normalized,
do_expensive_check=do_expensive_check,
edge_bc=True,
)
if input_graph.renumbered:
return input_graph.unrenumber(ddf, "vertex")
if input_graph.is_directed() is False:
# swap the src and dst vertices for the lower triangle only. Because
# this is a symmetrized graph, this operation results in a df with
# multiple src/dst entries.
ddf["src"], ddf["dst"] = ddf[["src", "dst"]].min(axis=1), ddf[
["src", "dst"]
].max(axis=1)
# overwrite the df with the sum of the values for all alike src/dst
# vertex pairs, resulting in half the edges of the original df from the
# symmetrized graph.
ddf = ddf.groupby(by=["src", "dst"]).sum().reset_index()
return ddf
| 0 |
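The min/max canonicalization and groupby-sum at the end of edge_betweenness_centrality above can be sketched in pure Python (the toy edge list here is hypothetical, not the cugraph API):

```python
# Combine the two directed copies of each undirected edge, mirroring the
# canonicalization (src <- min, dst <- max) and groupby-sum done above.
rows = [
    (0, 1, 0.5), (1, 0, 0.5),    # same undirected edge, both directions
    (1, 2, 0.25), (2, 1, 0.25),
]
combined = {}
for src, dst, bc in rows:
    key = (min(src, dst), max(src, dst))  # lower-triangle canonical form
    combined[key] = combined.get(key, 0.0) + bc
# each canonical pair now carries the summed centrality of both directions
```

The result has half the rows of the symmetrized input, one per undirected edge.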
rapidsai_public_repos/cugraph/python/cugraph/cugraph/dask | rapidsai_public_repos/cugraph/python/cugraph/cugraph/dask/centrality/__init__.py | # Copyright (c) 2023, NVIDIA CORPORATION.
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from cugraph.centrality.betweenness_centrality import (
betweenness_centrality,
edge_betweenness_centrality,
)
| 0 |
rapidsai_public_repos/cugraph/python/cugraph/cugraph/dask | rapidsai_public_repos/cugraph/python/cugraph/cugraph/dask/centrality/katz_centrality.py | # Copyright (c) 2022-2023, NVIDIA CORPORATION.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
from dask.distributed import wait, default_client
from pylibcugraph import ResourceHandle, katz_centrality as pylibcugraph_katz
import cugraph.dask.comms.comms as Comms
import dask_cudf
import cudf
import warnings
def _call_plc_katz_centrality(
sID, mg_graph_x, betas, alpha, beta, epsilon, max_iterations, do_expensive_check
):
return pylibcugraph_katz(
resource_handle=ResourceHandle(Comms.get_handle(sID).getHandle()),
graph=mg_graph_x,
betas=betas,
alpha=alpha,
beta=beta,
epsilon=epsilon,
max_iterations=max_iterations,
do_expensive_check=do_expensive_check,
)
def convert_to_cudf(cp_arrays):
"""
create a cudf DataFrame from cupy arrays
"""
cupy_vertices, cupy_values = cp_arrays
df = cudf.DataFrame()
df["vertex"] = cupy_vertices
df["katz_centrality"] = cupy_values
return df
def katz_centrality(
input_graph,
alpha=None,
beta=1.0,
max_iter=100,
tol=1.0e-6,
nstart=None,
normalized=True,
):
"""
Compute the Katz centrality for the nodes of the graph G.
Parameters
----------
input_graph : cuGraph.Graph
cuGraph graph descriptor with connectivity information. The graph can
contain either directed or undirected edges.
alpha : float, optional (default=None)
Attenuation factor. If alpha is not specified then
it is internally calculated as 1/(degree_max) where degree_max is the
maximum out degree.
NOTE
The maximum acceptable value of alpha for convergence
alpha_max = 1/(lambda_max) where lambda_max is the largest
eigenvalue of the graph.
Since lambda_max is always less than or equal to degree_max for a
graph, alpha_max will always be greater than or equal to
(1/degree_max). Therefore, setting alpha to (1/degree_max) will
guarantee that it will never exceed alpha_max thus in turn
fulfilling the requirement for convergence.
beta : float, optional (default=1.0)
Weight scalar added to each vertex's new Katz Centrality score in every
iteration. If beta is not specified then it is set as 1.0.
max_iter : int, optional (default=100)
The maximum number of iterations before an answer is returned. This can
be used to limit the execution time and do an early exit before the
solver reaches the convergence tolerance.
If this value is lower or equal to 0 cuGraph will use the default
value, which is 100.
tol : float, optional (default=1.0e-6)
Set the tolerance of the approximation; this parameter should be a
small magnitude value.
The lower the tolerance the better the approximation. If this value is
0.0f, cuGraph will use the default value which is 1.0e-6.
Setting too small a tolerance can lead to non-convergence due to
numerical roundoff. Usually values between 1e-2 and 1e-6 are
acceptable.
nstart : dask_cudf.Dataframe, optional (default=None)
Distributed GPU Dataframe containing the initial guess for katz
centrality.
nstart['vertex'] : dask_cudf.Series
Contains the vertex identifiers
nstart['values'] : dask_cudf.Series
Contains the katz centrality values of vertices
normalized : not supported
If True normalize the resulting katz centrality values
Returns
-------
katz_centrality : dask_cudf.DataFrame
GPU distributed data frame containing two dask_cudf.Series of size V:
the vertex identifiers and the corresponding katz centrality values.
ddf['vertex'] : dask_cudf.Series
Contains the vertex identifiers
ddf['katz_centrality'] : dask_cudf.Series
Contains the katz centrality of vertices
Examples
--------
>>> import cugraph.dask as dcg
>>> import dask_cudf
>>> # ... Init a DASK Cluster
>>> # see https://docs.rapids.ai/api/cugraph/stable/dask-cugraph.html
>>> # Download dataset from https://github.com/rapidsai/cugraph/datasets/..
>>> chunksize = dcg.get_chunksize(datasets_path / "karate.csv")
>>> ddf = dask_cudf.read_csv(datasets_path / "karate.csv",
... chunksize=chunksize, delimiter=" ",
... names=["src", "dst", "value"],
... dtype=["int32", "int32", "float32"])
>>> dg = cugraph.Graph(directed=True)
>>> dg.from_dask_cudf_edgelist(ddf, source='src', destination='dst')
>>> kc = dcg.katz_centrality(dg)
"""
client = default_client()
if input_graph.store_transposed is False:
warning_msg = (
"Katz centrality expects the 'store_transposed' flag "
"to be set to 'True' for optimal performance during "
"the graph creation"
)
warnings.warn(warning_msg, UserWarning)
if alpha is None:
degree_max = input_graph.degree()["degree"].max().compute()
alpha = 1 / (degree_max)
if (alpha is not None) and (alpha <= 0.0):
raise ValueError(f"'alpha' must be a positive float or None, " f"got: {alpha}")
# FIXME: should we add this parameter as an option?
do_expensive_check = False
initial_hubs_guess_values = None
if nstart:
if input_graph.renumbered:
if len(input_graph.renumber_map.implementation.col_names) > 1:
cols = nstart.columns[:-1].to_list()
else:
cols = "vertex"
nstart = input_graph.add_internal_vertex_id(nstart, "vertex", cols)
initial_hubs_guess_values = nstart[nstart.columns[0]].compute()
else:
initial_hubs_guess_values = nstart["values"]
if isinstance(nstart, dask_cudf.DataFrame):
initial_hubs_guess_values = initial_hubs_guess_values.compute()
cupy_result = [
client.submit(
_call_plc_katz_centrality,
Comms.get_session_id(),
input_graph._plc_graph[w],
initial_hubs_guess_values,
alpha,
beta,
tol,
max_iter,
do_expensive_check,
workers=[w],
allow_other_workers=False,
)
for w in Comms.get_workers()
]
wait(cupy_result)
cudf_result = [
client.submit(
convert_to_cudf, cp_arrays, workers=client.who_has(cp_arrays)[cp_arrays.key]
)
for cp_arrays in cupy_result
]
wait(cudf_result)
ddf = dask_cudf.from_delayed(cudf_result).persist()
wait(ddf)
# Wait until the inactive futures are released
wait([(r.release(), c_r.release()) for r, c_r in zip(cupy_result, cudf_result)])
if input_graph.renumbered:
return input_graph.unrenumber(ddf, "vertex")
return ddf
| 0 |
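The fixed point that katz_centrality converges to, x = alpha * A^T x + beta, and the default choice alpha = 1 / degree_max can be illustrated with a pure-Python iteration on a hypothetical 3-vertex directed graph (a sketch of the math, not the pylibcugraph solver):

```python
# Fixed-point Katz iteration x <- alpha * A^T x + beta on a tiny
# hypothetical directed graph, using alpha = 1 / degree_max as above.
edges = [(0, 1), (0, 2), (1, 2)]
n = 3
out_degree = [0] * n
for s, _ in edges:
    out_degree[s] += 1
alpha = 1.0 / max(out_degree)         # guarantees convergence (see docstring)
beta = 1.0
x = [0.0] * n
for _ in range(100):
    x_new = [beta] * n
    for s, d in edges:
        x_new[d] += alpha * x[s]      # contribution along incoming edges
    if sum(abs(a - b) for a, b in zip(x_new, x)) < 1.0e-6:
        x = x_new
        break
    x = x_new
# x converges to [1.0, 1.5, 2.25] (before any normalization)
```

Vertices with more (and more central) in-neighbors accumulate higher scores, which is the behavior the GPU solver reproduces at scale.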
rapidsai_public_repos/cugraph/python/cugraph/cugraph/dask | rapidsai_public_repos/cugraph/python/cugraph/cugraph/dask/traversal/bfs.py | # Copyright (c) 2019-2023, NVIDIA CORPORATION.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
from pylibcugraph import ResourceHandle, bfs as pylibcugraph_bfs
from dask.distributed import wait, default_client
from cugraph.dask.common.input_utils import get_distributed_data
import cugraph.dask.comms.comms as Comms
import cudf
import dask_cudf
import warnings
def convert_to_cudf(cp_arrays):
"""
create a cudf DataFrame from cupy arrays
"""
cupy_distances, cupy_predecessors, cupy_vertices = cp_arrays
df = cudf.DataFrame()
df["vertex"] = cupy_vertices
df["distance"] = cupy_distances
df["predecessor"] = cupy_predecessors
return df
def _call_plc_bfs(
sID,
mg_graph_x,
st_x,
depth_limit=None,
direction_optimizing=False,
return_distances=True,
do_expensive_check=False,
):
return pylibcugraph_bfs(
ResourceHandle(Comms.get_handle(sID).getHandle()),
graph=mg_graph_x,
sources=st_x,
direction_optimizing=direction_optimizing,
depth_limit=depth_limit if depth_limit is not None else 0,
compute_predecessors=return_distances,
do_expensive_check=do_expensive_check,
)
def bfs(input_graph, start, depth_limit=None, return_distances=True, check_start=True):
"""
Find the distances and predecessors for a breadth-first traversal of a
graph.
The input graph must contain edge list as a dask-cudf dataframe with
one partition per GPU.
Parameters
----------
input_graph : cugraph.Graph
cuGraph graph instance, should contain the connectivity information
as dask cudf edge list dataframe (edge weights are not used for this
algorithm).
start : Integer or list or cudf object or dask_cudf object
The id(s) of the graph vertex from which the traversal begins
in each component of the graph. Only one vertex per connected
component of the graph is allowed.
depth_limit : Integer or None, optional (default=None)
Limit the depth of the search
return_distances : bool, optional (default=True)
Indicates if distances should be returned
check_start : bool, optional (default=True)
If True, performs more extensive tests on the start vertices
to ensure validity, at the expense of increased run time.
Returns
-------
df : dask_cudf.DataFrame
df['vertex'] gives the vertex id
df['distance'] gives the path distance from the
starting vertex (Only if return_distances is True)
df['predecessor'] gives the vertex it was
reached from in the traversal
Examples
--------
>>> import cugraph.dask as dcg
>>> import dask_cudf
>>> # ... Init a DASK Cluster
>>> # see https://docs.rapids.ai/api/cugraph/stable/dask-cugraph.html
>>> # Download dataset from https://github.com/rapidsai/cugraph/datasets/..
>>> chunksize = dcg.get_chunksize(datasets_path / "karate.csv")
>>> ddf = dask_cudf.read_csv(datasets_path / "karate.csv",
... chunksize=chunksize, delimiter=" ",
... names=["src", "dst", "value"],
... dtype=["int32", "int32", "float32"])
>>> dg = cugraph.Graph(directed=True)
>>> dg.from_dask_cudf_edgelist(ddf, source='src', destination='dst',
... edge_attr='value')
>>> df = dcg.bfs(dg, 0)
"""
client = default_client()
invalid_dtype = False
if not isinstance(start, (dask_cudf.DataFrame, dask_cudf.Series)):
if not isinstance(start, (cudf.DataFrame, cudf.Series)):
vertex_dtype = input_graph.nodes().dtype
start = cudf.Series(start, dtype=vertex_dtype)
# convert into a dask_cudf
start = dask_cudf.from_cudf(start, input_graph._npartitions)
if check_start:
if isinstance(start, dask_cudf.Series):
vertex_dtype = input_graph.nodes().dtype
if start.dtype != vertex_dtype:
invalid_dtype = True
else:
# Multicolumn vertices case
start_dtype = start.dtypes.reset_index(drop=True)
vertex_dtype = input_graph.nodes().dtypes.reset_index(drop=True)
if not start_dtype.equals(vertex_dtype):
invalid_dtype = True
if invalid_dtype:
warning_msg = (
"The 'start' values dtype must match " "the graph's vertices dtype."
)
warnings.warn(warning_msg, UserWarning)
if isinstance(start, dask_cudf.Series):
start = start.astype(vertex_dtype)
else:
start = start.astype(vertex_dtype[0])
is_valid_vertex = input_graph.has_node(start)
if not is_valid_vertex:
raise ValueError("At least one start vertex provided was invalid")
if input_graph.renumbered:
if isinstance(start, dask_cudf.DataFrame):
tmp_col_names = start.columns
elif isinstance(start, dask_cudf.Series):
tmp_col_names = None
start = input_graph.lookup_internal_vertex_id(start, tmp_col_names)
data_start = get_distributed_data(start)
do_expensive_check = False
# FIXME: Why is 'direction_optimizing' not part of the python cugraph API
# and why is it set to 'False' by default
direction_optimizing = False
cupy_result = [
client.submit(
_call_plc_bfs,
Comms.get_session_id(),
input_graph._plc_graph[w],
st[0],
depth_limit,
direction_optimizing,
return_distances,
do_expensive_check,
workers=[w],
allow_other_workers=False,
)
for w, st in data_start.worker_to_parts.items()
]
wait(cupy_result)
cudf_result = [
client.submit(convert_to_cudf, cp_arrays) for cp_arrays in cupy_result
]
wait(cudf_result)
ddf = dask_cudf.from_delayed(cudf_result).persist()
wait(ddf)
# Wait until the inactive futures are released
wait([(r.release(), c_r.release()) for r, c_r in zip(cupy_result, cudf_result)])
if input_graph.renumbered:
ddf = input_graph.unrenumber(ddf, "vertex")
ddf = input_graph.unrenumber(ddf, "predecessor")
ddf = ddf.fillna(-1)
return ddf
| 0 |
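As a CPU reference for what dcg.bfs returns per vertex — the hop distance from the start (infinity when unreachable) and the predecessor in the traversal (-1 for the start and unreachable vertices) — here is a minimal sketch; the helper name and toy graph are hypothetical:

```python
from collections import deque

def bfs_reference(edges, start, num_vertices):
    # Build a directed adjacency list from (src, dst) pairs.
    adj = [[] for _ in range(num_vertices)]
    for src, dst in edges:
        adj[src].append(dst)
    distance = [float("inf")] * num_vertices
    predecessor = [-1] * num_vertices
    distance[start] = 0
    q = deque([start])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if distance[v] == float("inf"):   # first visit
                distance[v] = distance[u] + 1
                predecessor[v] = u
                q.append(v)
    return distance, predecessor

dist, pred = bfs_reference([(0, 1), (1, 2), (0, 3)], start=0, num_vertices=5)
# vertex 4 is unreachable: its distance stays infinite, its predecessor -1
```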
rapidsai_public_repos/cugraph/python/cugraph/cugraph/dask | rapidsai_public_repos/cugraph/python/cugraph/cugraph/dask/traversal/__init__.py | # Copyright (c) 2021-2023, NVIDIA CORPORATION.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
| 0 |
rapidsai_public_repos/cugraph/python/cugraph/cugraph/dask | rapidsai_public_repos/cugraph/python/cugraph/cugraph/dask/traversal/sssp.py | # Copyright (c) 2019-2023, NVIDIA CORPORATION.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
from dask.distributed import wait, default_client
import cugraph.dask.comms.comms as Comms
import cupy
import cudf
import dask_cudf
from pylibcugraph import sssp as pylibcugraph_sssp, ResourceHandle
def _call_plc_sssp(
sID, mg_graph_x, source, cutoff, compute_predecessors, do_expensive_check
):
vertices, distances, predecessors = pylibcugraph_sssp(
resource_handle=ResourceHandle(Comms.get_handle(sID).getHandle()),
graph=mg_graph_x,
source=source,
cutoff=cutoff,
compute_predecessors=compute_predecessors,
do_expensive_check=do_expensive_check,
)
return cudf.DataFrame(
{
"distance": cudf.Series(distances),
"vertex": cudf.Series(vertices),
"predecessor": cudf.Series(predecessors),
}
)
def sssp(input_graph, source, cutoff=None, check_source=True):
"""
Compute the distance and predecessors for shortest paths from the specified
source to all the vertices in the input_graph. The distances column will
store the distance from the source to each vertex. The predecessors column
will store each vertex's predecessor in the shortest path. Vertices that
are unreachable will have a distance of infinity denoted by the maximum
value of the data type and the predecessor set as -1. The source vertex's
predecessor is also set to -1. The input graph must contain edge list as
dask-cudf dataframe with one partition per GPU.
Parameters
----------
input_graph : cugraph.Graph
cuGraph graph descriptor, should contain the connectivity information
as dask cudf edge list dataframe.
source : Integer
Specify source vertex
cutoff : double, optional (default = None)
Maximum edge weight sum considered by the algorithm
check_source : bool, optional (default=True)
If True, performs more extensive tests on the source vertex
to ensure validity, at the expense of increased run time.
Returns
-------
df : dask_cudf.DataFrame
df['vertex'] gives the vertex id
df['distance'] gives the path distance from the
starting vertex
df['predecessor'] gives the vertex id it was
reached from in the traversal
Examples
--------
>>> import cugraph.dask as dcg
>>> import dask_cudf
>>> # ... Init a DASK Cluster
>>> # see https://docs.rapids.ai/api/cugraph/stable/dask-cugraph.html
>>> # Download dataset from https://github.com/rapidsai/cugraph/datasets/..
>>> chunksize = dcg.get_chunksize(datasets_path / "karate.csv")
>>> ddf = dask_cudf.read_csv(datasets_path / "karate.csv",
... chunksize=chunksize, delimiter=" ",
... names=["src", "dst", "value"],
... dtype=["int32", "int32", "float32"])
>>> dg = cugraph.Graph(directed=True)
>>> dg.from_dask_cudf_edgelist(ddf, source='src', destination='dst',
... edge_attr='value')
>>> df = dcg.sssp(dg, 0)
"""
# FIXME: Implement a better way to check if the graph is weighted similar
# to 'simpleGraph'
if not input_graph.weighted:
err_msg = (
"'SSSP' requires the input graph to be weighted. "
"'BFS' should be used instead of 'SSSP' for unweighted graphs."
)
raise ValueError(err_msg)
client = default_client()
def check_valid_vertex(G, source):
is_valid_vertex = G.has_node(source)
if not is_valid_vertex:
raise ValueError("Invalid source vertex")
if check_source:
check_valid_vertex(input_graph, source)
if cutoff is None:
cutoff = cupy.inf
if input_graph.renumbered:
source = (
input_graph.lookup_internal_vertex_id(cudf.Series([source]))
.fillna(-1)
.compute()
)
source = source.iloc[0]
do_expensive_check = False
compute_predecessors = True
result = [
client.submit(
_call_plc_sssp,
Comms.get_session_id(),
input_graph._plc_graph[w],
source,
cutoff,
compute_predecessors,
do_expensive_check,
workers=[w],
allow_other_workers=False,
)
for w in Comms.get_workers()
]
wait(result)
ddf = dask_cudf.from_delayed(result).persist()
wait(ddf)
# Wait until the inactive futures are released
wait([r.release() for r in result])
if input_graph.renumbered:
ddf = input_graph.unrenumber(ddf, "vertex")
ddf = input_graph.unrenumber(ddf, "predecessor")
ddf["predecessor"] = ddf["predecessor"].fillna(-1)
return ddf
| 0 |
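A minimal CPU sketch of the quantities dcg.sssp returns — weighted distances (infinity for unreachable vertices), predecessors defaulting to -1, and the optional cutoff on the path-weight sum. The helper name and toy graph below are hypothetical:

```python
import heapq

def sssp_reference(edges, source, num_vertices, cutoff=float("inf")):
    # Build a directed weighted adjacency list from (src, dst, weight).
    adj = [[] for _ in range(num_vertices)]
    for src, dst, w in edges:
        adj[src].append((dst, w))
    distance = [float("inf")] * num_vertices
    predecessor = [-1] * num_vertices
    distance[source] = 0.0
    heap = [(0.0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > distance[u]:
            continue                   # stale heap entry
        for v, w in adj[u]:
            nd = d + w
            if nd < distance[v] and nd <= cutoff:
                distance[v] = nd
                predecessor[v] = u
                heapq.heappush(heap, (nd, v))
    return distance, predecessor

dist, pred = sssp_reference([(0, 1, 2.0), (1, 2, 2.0), (0, 2, 5.0)],
                            source=0, num_vertices=3)
# the two-hop path 0 -> 1 -> 2 (weight 4.0) beats the direct 5.0 edge
```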
rapidsai_public_repos/cugraph/python/cugraph/cugraph/dask | rapidsai_public_repos/cugraph/python/cugraph/cugraph/dask/link_prediction/overlap.py | # Copyright (c) 2022-2023, NVIDIA CORPORATION.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
from dask.distributed import wait, default_client
import cugraph.dask.comms.comms as Comms
import dask_cudf
import cudf
from cugraph.dask.common.input_utils import get_distributed_data
from cugraph.utilities import renumber_vertex_pair
from pylibcugraph import (
overlap_coefficients as pylibcugraph_overlap_coefficients,
)
from pylibcugraph import ResourceHandle
def convert_to_cudf(cp_arrays):
"""
Creates a cudf DataFrame from cupy arrays from pylibcugraph wrapper
"""
cupy_first, cupy_second, cupy_similarity = cp_arrays
df = cudf.DataFrame()
df["first"] = cupy_first
df["second"] = cupy_second
df["overlap_coeff"] = cupy_similarity
return df
def _call_plc_overlap(
sID, mg_graph_x, vertex_pair, use_weight, do_expensive_check, vertex_pair_col_name
):
first = vertex_pair[vertex_pair_col_name[0]]
second = vertex_pair[vertex_pair_col_name[1]]
return pylibcugraph_overlap_coefficients(
resource_handle=ResourceHandle(Comms.get_handle(sID).getHandle()),
graph=mg_graph_x,
first=first,
second=second,
use_weight=use_weight,
do_expensive_check=do_expensive_check,
)
def overlap(input_graph, vertex_pair=None, use_weight=False):
"""
Compute the Overlap Coefficient between each pair of vertices connected by
an edge, or between arbitrary pairs of vertices specified by the user.
Overlap Coefficient is defined between two sets as the ratio of the volume
of their intersection divided by the smaller of their two volumes. In the
context of graphs, the neighborhood of a vertex is seen as a set. The
Overlap Coefficient weight of each edge represents the strength of
connection between vertices based on the relative similarity of their
neighbors. If first is specified but second is not, or vice versa, an
exception will be thrown.
cugraph.overlap, in the absence of a specified vertex pair list, will
compute the two_hop_neighbors of the entire graph to construct a vertex pair
list and will return the Overlap coefficient for those vertex pairs. This is
not advisable as the vertex_pairs can grow exponentially with respect to the
size of the dataset.
Parameters
----------
input_graph : cugraph.Graph
cuGraph Graph instance, should contain the connectivity information
as an edge list (edge weights are not supported yet for this algorithm). The
graph should be undirected where an undirected edge is represented by a
directed edge in both directions. The adjacency list will be computed if
not already present.
This implementation only supports undirected, unweighted Graphs.
vertex_pair : cudf.DataFrame, optional (default=None)
A GPU dataframe consisting of two columns representing pairs of
vertices. If provided, the Overlap coefficient is computed for the
given vertex pairs. If the vertex_pair is not provided then the
current implementation computes the Overlap coefficient for all
adjacent vertices in the graph.
use_weight : bool, optional (default=False)
Flag to indicate whether to compute weighted overlap (if use_weight==True)
or un-weighted overlap (if use_weight==False).
'input_graph' must be weighted if 'use_weight=True'.
Returns
-------
result : dask_cudf.DataFrame
GPU distributed data frame containing 2 dask_cudf.Series
ddf['first']: dask_cudf.Series
The first vertex ID of each pair (will be identical to first if specified).
ddf['second']: dask_cudf.Series
The second vertex ID of each pair (will be identical to second if
specified).
ddf['overlap_coeff']: dask_cudf.Series
The computed overlap coefficient between the first and the second
vertex ID.
"""
if input_graph.is_directed():
raise ValueError("input graph must be undirected")
if vertex_pair is None:
# Call two_hop neighbor of the entire graph
vertex_pair = input_graph.get_two_hop_neighbors()
vertex_pair_col_name = vertex_pair.columns
if isinstance(vertex_pair, (dask_cudf.DataFrame, cudf.DataFrame)):
vertex_pair = renumber_vertex_pair(input_graph, vertex_pair)
elif vertex_pair is not None:
raise ValueError("vertex_pair must be a dask_cudf or cudf dataframe")
if not isinstance(vertex_pair, (dask_cudf.DataFrame)):
vertex_pair = dask_cudf.from_cudf(
vertex_pair, npartitions=len(Comms.get_workers())
)
vertex_pair = get_distributed_data(vertex_pair)
wait(vertex_pair)
vertex_pair = vertex_pair.worker_to_parts
# Initialize dask client
client = default_client()
do_expensive_check = False
if vertex_pair is not None:
result = [
client.submit(
_call_plc_overlap,
Comms.get_session_id(),
input_graph._plc_graph[w],
vertex_pair[w][0],
use_weight,
do_expensive_check,
vertex_pair_col_name,
workers=[w],
allow_other_workers=False,
)
for w in Comms.get_workers()
]
wait(result)
cudf_result = [client.submit(convert_to_cudf, cp_arrays) for cp_arrays in result]
wait(cudf_result)
ddf = dask_cudf.from_delayed(cudf_result).persist()
wait(ddf)
# Wait until the inactive futures are released
wait([(r.release(), c_r.release()) for r, c_r in zip(result, cudf_result)])
if input_graph.renumbered:
ddf = input_graph.unrenumber(ddf, "first")
ddf = input_graph.unrenumber(ddf, "second")
return ddf
| 0 |
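The coefficient this routine computes per vertex pair is |N(a) ∩ N(b)| / min(|N(a)|, |N(b)|); a pure-Python check on hypothetical neighbor sets:

```python
# Overlap coefficient on two hypothetical neighbor sets, mirroring the
# definition evaluated per vertex pair by the GPU routine above.
neighbors_a = {1, 2, 3, 4}
neighbors_b = {3, 4, 5}
overlap = len(neighbors_a & neighbors_b) / min(len(neighbors_a),
                                               len(neighbors_b))
# intersection size 2 divided by the smaller set's size 3
```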
rapidsai_public_repos/cugraph/python/cugraph/cugraph/dask | rapidsai_public_repos/cugraph/python/cugraph/cugraph/dask/link_prediction/jaccard.py | # Copyright (c) 2022-2023, NVIDIA CORPORATION.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
from dask.distributed import wait, default_client
import cugraph.dask.comms.comms as Comms
import dask_cudf
import cudf
from cugraph.dask.common.input_utils import get_distributed_data
from cugraph.utilities import renumber_vertex_pair
from pylibcugraph import (
jaccard_coefficients as pylibcugraph_jaccard_coefficients,
)
from pylibcugraph import ResourceHandle
def convert_to_cudf(cp_arrays):
"""
Creates a cudf DataFrame from cupy arrays from pylibcugraph wrapper
"""
cupy_first, cupy_second, cupy_similarity = cp_arrays
df = cudf.DataFrame()
df["first"] = cupy_first
df["second"] = cupy_second
df["jaccard_coeff"] = cupy_similarity
return df
def _call_plc_jaccard(
sID, mg_graph_x, vertex_pair, use_weight, do_expensive_check, vertex_pair_col_name
):
first = vertex_pair[vertex_pair_col_name[0]]
second = vertex_pair[vertex_pair_col_name[1]]
return pylibcugraph_jaccard_coefficients(
resource_handle=ResourceHandle(Comms.get_handle(sID).getHandle()),
graph=mg_graph_x,
first=first,
second=second,
use_weight=use_weight,
do_expensive_check=do_expensive_check,
)
def jaccard(input_graph, vertex_pair=None, use_weight=False):
"""
Compute the Jaccard similarity between each pair of vertices connected by
an edge, or between arbitrary pairs of vertices specified by the user.
Jaccard similarity is defined between two sets as the ratio of the volume
of their intersection divided by the volume of their union. In the context
of graphs, the neighborhood of a vertex is seen as a set. The Jaccard
similarity weight of each edge represents the strength of connection
between vertices based on the relative similarity of their neighbors. If
first is specified but second is not, or vice versa, an exception will be
thrown.
NOTE: If the vertex_pair parameter is not specified then the behavior
of cugraph.jaccard is different from the behavior of
networkx.jaccard_coefficient.
    cugraph.dask.jaccard, in the absence of a specified vertex pair list, will
    compute the two_hop_neighbors of the entire graph to construct a vertex pair
    list and will return the jaccard coefficient for those vertex pairs. This is
    not advisable, as the number of vertex pairs can grow exponentially with
    respect to the size of the dataset.
networkx.jaccard_coefficient, in the absence of a specified vertex
pair list, will return an upper triangular dense matrix, excluding
the diagonal as well as vertex pairs that are directly connected
by an edge in the graph, of jaccard coefficients. Technically, networkx
returns a lazy iterator across this upper triangular matrix where
the actual jaccard coefficient is computed when the iterator is
dereferenced. Computing a dense matrix of results is not feasible
if the number of vertices in the graph is large (100,000 vertices
would result in 4.9 billion values in that iterator).
    If your graph is small enough (or you have enough memory and patience)
    you can get the interesting (non-zero) values that are part of the networkx
    solution by passing the graph's two-hop neighbor pairs as the vertex_pair
    argument. But please remember that cugraph will fill the dataframe with
    the entire solution you request, so you'll need enough memory to store the
    2-hop neighborhood dataframe.
Parameters
----------
input_graph : cugraph.Graph
cuGraph Graph instance, should contain the connectivity information
as an edge list (edge weights are not supported yet for this algorithm). The
        graph should be undirected where an undirected edge is represented by a
        directed edge in both directions. The adjacency list will be computed if
not already present.
This implementation only supports undirected, unweighted Graph.
vertex_pair : cudf.DataFrame, optional (default=None)
A GPU dataframe consisting of two columns representing pairs of
vertices. If provided, the jaccard coefficient is computed for the
given vertex pairs. If the vertex_pair is not provided then the
current implementation computes the jaccard coefficient for all
adjacent vertices in the graph.
use_weight : bool, optional (default=False)
Flag to indicate whether to compute weighted jaccard (if use_weight==True)
or un-weighted jaccard (if use_weight==False).
'input_graph' must be weighted if 'use_weight=True'.
Returns
-------
result : dask_cudf.DataFrame
        GPU distributed data frame containing 3 dask_cudf.Series
ddf['first']: dask_cudf.Series
The first vertex ID of each pair (will be identical to first if specified).
ddf['second']: dask_cudf.Series
The second vertex ID of each pair (will be identical to second if
specified).
ddf['jaccard_coeff']: dask_cudf.Series
The computed jaccard coefficient between the first and the second
vertex ID.
"""
if input_graph.is_directed():
raise ValueError("input graph must be undirected")
if vertex_pair is None:
# Call two_hop neighbor of the entire graph
vertex_pair = input_graph.get_two_hop_neighbors()
vertex_pair_col_name = vertex_pair.columns
if isinstance(vertex_pair, (dask_cudf.DataFrame, cudf.DataFrame)):
vertex_pair = renumber_vertex_pair(input_graph, vertex_pair)
elif vertex_pair is not None:
raise ValueError("vertex_pair must be a dask_cudf or cudf dataframe")
if not isinstance(vertex_pair, (dask_cudf.DataFrame)):
vertex_pair = dask_cudf.from_cudf(
vertex_pair, npartitions=len(Comms.get_workers())
)
vertex_pair = get_distributed_data(vertex_pair)
wait(vertex_pair)
vertex_pair = vertex_pair.worker_to_parts
# Initialize dask client
client = default_client()
do_expensive_check = False
if vertex_pair is not None:
result = [
client.submit(
_call_plc_jaccard,
Comms.get_session_id(),
input_graph._plc_graph[w],
vertex_pair[w][0],
use_weight,
do_expensive_check,
vertex_pair_col_name,
workers=[w],
allow_other_workers=False,
)
for w in Comms.get_workers()
]
wait(result)
cudf_result = [client.submit(convert_to_cudf, cp_arrays) for cp_arrays in result]
wait(cudf_result)
ddf = dask_cudf.from_delayed(cudf_result).persist()
wait(ddf)
# Wait until the inactive futures are released
wait([(r.release(), c_r.release()) for r, c_r in zip(result, cudf_result)])
if input_graph.renumbered:
ddf = input_graph.unrenumber(ddf, "first")
ddf = input_graph.unrenumber(ddf, "second")
return ddf
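As a plain illustration of the set formula the docstring describes (intersection volume over union volume), here is a minimal pure-Python sketch — a hypothetical helper for exposition only, not part of the cugraph API:

```python
def jaccard_coefficient(neighbors_a, neighbors_b):
    """Jaccard similarity of two neighborhood sets: |A & B| / |A | B|."""
    a, b = set(neighbors_a), set(neighbors_b)
    union = a | b
    if not union:
        # Two empty neighborhoods share nothing; define the score as 0.0
        return 0.0
    return len(a & b) / len(union)

# Vertices sharing 2 of 4 distinct neighbors score 2/4 = 0.5
score = jaccard_coefficient([1, 2, 3], [2, 3, 4])
```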
# File: rapidsai_public_repos/cugraph/python/cugraph/cugraph/dask/link_prediction/__init__.py
# Copyright (c) 2022-2023, NVIDIA CORPORATION.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# File: rapidsai_public_repos/cugraph/python/cugraph/cugraph/dask/link_prediction/sorensen.py
# Copyright (c) 2022-2023, NVIDIA CORPORATION.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
from dask.distributed import wait, default_client
import cugraph.dask.comms.comms as Comms
import dask_cudf
import cudf
from cugraph.dask.common.input_utils import get_distributed_data
from cugraph.utilities import renumber_vertex_pair
from pylibcugraph import (
sorensen_coefficients as pylibcugraph_sorensen_coefficients,
)
from pylibcugraph import ResourceHandle
def convert_to_cudf(cp_arrays):
"""
    Creates a cudf DataFrame from the cupy arrays returned by the pylibcugraph wrapper.
"""
cupy_first, cupy_second, cupy_similarity = cp_arrays
df = cudf.DataFrame()
df["first"] = cupy_first
df["second"] = cupy_second
df["sorensen_coeff"] = cupy_similarity
return df
def _call_plc_sorensen(
sID, mg_graph_x, vertex_pair, use_weight, do_expensive_check, vertex_pair_col_name
):
first = vertex_pair[vertex_pair_col_name[0]]
second = vertex_pair[vertex_pair_col_name[1]]
return pylibcugraph_sorensen_coefficients(
resource_handle=ResourceHandle(Comms.get_handle(sID).getHandle()),
graph=mg_graph_x,
first=first,
second=second,
use_weight=use_weight,
do_expensive_check=do_expensive_check,
)
def sorensen(input_graph, vertex_pair=None, use_weight=False):
"""
Compute the Sorensen coefficient between each pair of vertices connected by
an edge, or between arbitrary pairs of vertices specified by the user.
    Sorensen coefficient is defined between two sets as twice the volume of
    their intersection divided by the sum of the volumes of the two sets.
If first is specified but second is not, or vice versa, an exception will
be thrown.
cugraph.dask.sorensen, in the absence of a specified vertex pair list, will
compute the two_hop_neighbors of the entire graph to construct a vertex pair
    list and will return the sorensen coefficient for those vertex pairs. This is
    not advisable, as the number of vertex pairs can grow exponentially with
    respect to the size of the dataset.
Parameters
----------
input_graph : cugraph.Graph
cuGraph Graph instance, should contain the connectivity information
as an edge list (edge weights are not supported yet for this algorithm). The
        graph should be undirected where an undirected edge is represented by a
        directed edge in both directions. The adjacency list will be computed if
not already present.
This implementation only supports undirected, unweighted Graph.
vertex_pair : cudf.DataFrame, optional (default=None)
A GPU dataframe consisting of two columns representing pairs of
vertices. If provided, the sorensen coefficient is computed for the
given vertex pairs. If the vertex_pair is not provided then the
current implementation computes the sorensen coefficient for all
adjacent vertices in the graph.
use_weight : bool, optional (default=False)
Flag to indicate whether to compute weighted sorensen (if use_weight==True)
or un-weighted sorensen (if use_weight==False).
'input_graph' must be weighted if 'use_weight=True'.
Returns
-------
result : dask_cudf.DataFrame
        GPU distributed data frame containing 3 dask_cudf.Series
ddf['first']: dask_cudf.Series
        The first vertex ID of each pair (will be identical to first if specified).
ddf['second']: dask_cudf.Series
        The second vertex ID of each pair (will be identical to second if
specified).
ddf['sorensen_coeff']: dask_cudf.Series
The computed sorensen coefficient between the first and the second
vertex ID.
"""
if input_graph.is_directed():
raise ValueError("input graph must be undirected")
if vertex_pair is None:
# Call two_hop neighbor of the entire graph
vertex_pair = input_graph.get_two_hop_neighbors()
vertex_pair_col_name = vertex_pair.columns
if isinstance(vertex_pair, (dask_cudf.DataFrame, cudf.DataFrame)):
vertex_pair = renumber_vertex_pair(input_graph, vertex_pair)
elif vertex_pair is not None:
raise ValueError("vertex_pair must be a dask_cudf or cudf dataframe")
if not isinstance(vertex_pair, (dask_cudf.DataFrame)):
vertex_pair = dask_cudf.from_cudf(
vertex_pair, npartitions=len(Comms.get_workers())
)
vertex_pair = get_distributed_data(vertex_pair)
wait(vertex_pair)
vertex_pair = vertex_pair.worker_to_parts
# Initialize dask client
client = default_client()
do_expensive_check = False
if vertex_pair is not None:
result = [
client.submit(
_call_plc_sorensen,
Comms.get_session_id(),
input_graph._plc_graph[w],
vertex_pair[w][0],
use_weight,
do_expensive_check,
vertex_pair_col_name,
workers=[w],
allow_other_workers=False,
)
for w in Comms.get_workers()
]
wait(result)
cudf_result = [client.submit(convert_to_cudf, cp_arrays) for cp_arrays in result]
wait(cudf_result)
ddf = dask_cudf.from_delayed(cudf_result).persist()
wait(ddf)
# Wait until the inactive futures are released
wait([(r.release(), c_r.release()) for r, c_r in zip(result, cudf_result)])
if input_graph.renumbered:
ddf = input_graph.unrenumber(ddf, "first")
ddf = input_graph.unrenumber(ddf, "second")
return ddf
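The Sorensen formula above (twice the intersection volume over the sum of the set volumes) can be sketched in plain Python; this is an illustrative, hypothetical helper, not cugraph code:

```python
def sorensen_coefficient(neighbors_a, neighbors_b):
    """Sorensen coefficient of two neighborhood sets: 2*|A & B| / (|A| + |B|)."""
    a, b = set(neighbors_a), set(neighbors_b)
    denom = len(a) + len(b)
    if denom == 0:
        # Two empty neighborhoods: define the score as 0.0
        return 0.0
    return 2 * len(a & b) / denom

# 2 shared neighbors, set sizes 3 and 3 -> 2*2 / 6 = 2/3
score = sorensen_coefficient([1, 2, 3], [2, 3, 4])
```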
# File: rapidsai_public_repos/cugraph/python/cugraph/cugraph/dask/link_analysis/hits.py
# Copyright (c) 2022-2023, NVIDIA CORPORATION.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
from dask.distributed import wait, default_client
import cugraph.dask.comms.comms as Comms
import dask_cudf
import cudf
import warnings
from pylibcugraph import ResourceHandle, hits as pylibcugraph_hits
def _call_plc_hits(
sID,
mg_graph_x,
tol,
max_iter,
initial_hubs_guess_vertices,
initial_hubs_guess_values,
normalized,
do_expensive_check,
):
return pylibcugraph_hits(
resource_handle=ResourceHandle(Comms.get_handle(sID).getHandle()),
graph=mg_graph_x,
tol=tol,
max_iter=max_iter,
initial_hubs_guess_vertices=initial_hubs_guess_vertices,
initial_hubs_guess_values=initial_hubs_guess_values,
normalized=normalized,
do_expensive_check=do_expensive_check,
)
def convert_to_cudf(cp_arrays):
"""
    Creates a cudf DataFrame from cupy arrays
"""
cupy_vertices, cupy_hubs, cupy_authorities = cp_arrays
df = cudf.DataFrame()
df["vertex"] = cupy_vertices
df["hubs"] = cupy_hubs
df["authorities"] = cupy_authorities
return df
def hits(input_graph, tol=1.0e-5, max_iter=100, nstart=None, normalized=True):
"""
Compute HITS hubs and authorities values for each vertex
    The HITS algorithm computes two numbers for a node. The authorities score
    estimates the node's value based on its incoming links; the hubs score
    estimates the node's value based on its outgoing links.
    Both the cuGraph and networkx implementations use a 1-norm.
Parameters
----------
input_graph : cugraph.Graph
cuGraph graph descriptor, should contain the connectivity information
as an edge list (edge weights are not used for this algorithm).
The adjacency list will be computed if not already present.
tol : float, optional (default=1.0e-5)
        Set the tolerance of the approximation; this parameter should be a
small magnitude value.
max_iter : int, optional (default=100)
The maximum number of iterations before an answer is returned.
nstart : cudf.Dataframe, optional (default=None)
The initial hubs guess vertices along with their initial hubs guess
value
nstart['vertex'] : cudf.Series
Initial hubs guess vertices
nstart['values'] : cudf.Series
Initial hubs guess values
normalized : bool, optional (default=True)
A flag to normalize the results
Returns
-------
HubsAndAuthorities : dask_cudf.DataFrame
GPU distributed data frame containing three dask_cudf.Series of
size V: the vertex identifiers and the corresponding hubs and
authorities values.
df['vertex'] : dask_cudf.Series
Contains the vertex identifiers
df['hubs'] : dask_cudf.Series
Contains the hubs score
df['authorities'] : dask_cudf.Series
Contains the authorities score
Examples
--------
>>> import cugraph.dask as dcg
>>> import dask_cudf
>>> # ... Init a DASK Cluster
>>> # see https://docs.rapids.ai/api/cugraph/stable/dask-cugraph.html
>>> # Download dataset from https://github.com/rapidsai/cugraph/datasets/..
>>> chunksize = dcg.get_chunksize(datasets_path / "karate.csv")
>>> ddf = dask_cudf.read_csv(datasets_path / "karate.csv",
... chunksize=chunksize, delimiter=" ",
... names=["src", "dst", "value"],
... dtype=["int32", "int32", "float32"])
>>> dg = cugraph.Graph(directed=True)
>>> dg.from_dask_cudf_edgelist(ddf, source='src', destination='dst',
... edge_attr='value')
    >>> hits = dcg.hits(dg, max_iter=50)
"""
client = default_client()
if input_graph.store_transposed is False:
warning_msg = (
"HITS expects the 'store_transposed' flag "
"to be set to 'True' for optimal performance during "
"the graph creation"
)
warnings.warn(warning_msg, UserWarning)
do_expensive_check = False
initial_hubs_guess_vertices = None
initial_hubs_guess_values = None
if nstart is not None:
initial_hubs_guess_vertices = nstart["vertex"]
initial_hubs_guess_values = nstart["values"]
cupy_result = [
client.submit(
_call_plc_hits,
Comms.get_session_id(),
input_graph._plc_graph[w],
tol,
max_iter,
initial_hubs_guess_vertices,
initial_hubs_guess_values,
normalized,
do_expensive_check,
workers=[w],
allow_other_workers=False,
)
for w in Comms.get_workers()
]
wait(cupy_result)
cudf_result = [
client.submit(
convert_to_cudf, cp_arrays, workers=client.who_has(cp_arrays)[cp_arrays.key]
)
for cp_arrays in cupy_result
]
wait(cudf_result)
ddf = dask_cudf.from_delayed(cudf_result).persist()
wait(ddf)
# Wait until the inactive futures are released
wait([(r.release(), c_r.release()) for r, c_r in zip(cupy_result, cudf_result)])
if input_graph.renumbered:
return input_graph.unrenumber(ddf, "vertex")
return ddf
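To make the hubs/authorities mutual recursion concrete, here is a minimal single-CPU power-iteration sketch using the same 1-norm normalization the docstring mentions. It is a simplified illustration of the algorithm, not the pylibcugraph implementation:

```python
def hits_scores(edges, num_vertices, max_iter=100, tol=1e-8):
    """Power iteration for HITS on an edge list of (src, dst) pairs.

    Returns (hubs, authorities), each normalized to sum to 1 (1-norm).
    """
    hubs = [1.0 / num_vertices] * num_vertices
    auths = [1.0 / num_vertices] * num_vertices
    for _ in range(max_iter):
        # Authority score: sum of hub scores of the vertices pointing at it
        new_auths = [0.0] * num_vertices
        for s, d in edges:
            new_auths[d] += hubs[s]
        norm = sum(new_auths) or 1.0
        new_auths = [x / norm for x in new_auths]
        # Hub score: sum of authority scores of the vertices it points at
        new_hubs = [0.0] * num_vertices
        for s, d in edges:
            new_hubs[s] += new_auths[d]
        norm = sum(new_hubs) or 1.0
        new_hubs = [x / norm for x in new_hubs]
        converged = max(abs(h - h0) for h, h0 in zip(new_hubs, hubs)) < tol
        hubs, auths = new_hubs, new_auths
        if converged:
            break
    return hubs, auths
```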
# File: rapidsai_public_repos/cugraph/python/cugraph/cugraph/dask/link_analysis/__init__.py
# Copyright (c) 2021-2023, NVIDIA CORPORATION.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# File: rapidsai_public_repos/cugraph/python/cugraph/cugraph/dask/link_analysis/pagerank.py
# Copyright (c) 2019-2023, NVIDIA CORPORATION.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
import warnings
import dask
from dask.distributed import wait, default_client
import dask_cudf
import cudf
import numpy as np
from pylibcugraph import (
pagerank as plc_pagerank,
personalized_pagerank as plc_p_pagerank,
exceptions as plc_exceptions,
ResourceHandle,
)
import cugraph.dask.comms.comms as Comms
from cugraph.dask.common.input_utils import get_distributed_data
from cugraph.exceptions import FailedToConvergeError
def convert_to_return_tuple(plc_pr_retval):
"""
Using the PLC pagerank return tuple, creates a cudf DataFrame from the cupy
arrays and extracts the (optional) bool.
"""
if len(plc_pr_retval) == 3:
cupy_vertices, cupy_pagerank, converged = plc_pr_retval
else:
cupy_vertices, cupy_pagerank = plc_pr_retval
converged = True
df = cudf.DataFrame()
df["vertex"] = cupy_vertices
df["pagerank"] = cupy_pagerank
return (df, converged)
# FIXME: Move this function to the utility module so that it can be
# shared by other algos
def ensure_valid_dtype(input_graph, input_df, input_df_name):
if input_graph.properties.weighted is False:
# If the graph is not weighted, an artificial weight column
# of type 'float32' is added and it must match the user
# personalization/nstart values.
edge_attr_dtype = np.float32
else:
edge_attr_dtype = input_graph.input_df["value"].dtype
if "values" in input_df.columns:
input_df_values_dtype = input_df["values"].dtype
if input_df_values_dtype != edge_attr_dtype:
warning_msg = (
f"PageRank requires '{input_df_name}' values "
"to match the graph's 'edge_attr' type. "
f"edge_attr type is: {edge_attr_dtype} and got "
f"'{input_df_name}' values of type: "
f"{input_df_values_dtype}."
)
warnings.warn(warning_msg, UserWarning)
input_df = input_df.astype({"values": edge_attr_dtype})
vertex_dtype = input_graph.edgelist.edgelist_df.dtypes[0]
input_df_vertex_dtype = input_df["vertex"].dtype
if input_df_vertex_dtype != vertex_dtype:
warning_msg = (
f"PageRank requires '{input_df_name}' vertex "
"to match the graph's 'vertex' type. "
f"input graph's vertex type is: {vertex_dtype} and got "
f"'{input_df_name}' vertex of type: "
f"{input_df_vertex_dtype}."
)
warnings.warn(warning_msg, UserWarning)
input_df = input_df.astype({"vertex": vertex_dtype})
return input_df
def renumber_vertices(input_graph, input_df):
input_df = input_graph.add_internal_vertex_id(
input_df, "vertex", "vertex"
).compute()
return input_df
def _call_plc_pagerank(
sID,
mg_graph_x,
pre_vtx_o_wgt_vertices,
pre_vtx_o_wgt_sums,
initial_guess_vertices,
initial_guess_values,
alpha,
epsilon,
max_iterations,
do_expensive_check,
fail_on_nonconvergence,
):
try:
return plc_pagerank(
resource_handle=ResourceHandle(Comms.get_handle(sID).getHandle()),
graph=mg_graph_x,
precomputed_vertex_out_weight_vertices=pre_vtx_o_wgt_vertices,
precomputed_vertex_out_weight_sums=pre_vtx_o_wgt_sums,
initial_guess_vertices=initial_guess_vertices,
initial_guess_values=initial_guess_values,
alpha=alpha,
epsilon=epsilon,
max_iterations=max_iterations,
do_expensive_check=do_expensive_check,
fail_on_nonconvergence=fail_on_nonconvergence,
)
# Re-raise this as a cugraph exception so users trying to catch this do not
# have to know to import another package.
except plc_exceptions.FailedToConvergeError as exc:
raise FailedToConvergeError from exc
def _call_plc_personalized_pagerank(
sID,
mg_graph_x,
pre_vtx_o_wgt_vertices,
pre_vtx_o_wgt_sums,
data_personalization,
initial_guess_vertices,
initial_guess_values,
alpha,
epsilon,
max_iterations,
do_expensive_check,
fail_on_nonconvergence,
):
personalization_vertices = data_personalization["vertex"]
personalization_values = data_personalization["values"]
try:
return plc_p_pagerank(
resource_handle=ResourceHandle(Comms.get_handle(sID).getHandle()),
graph=mg_graph_x,
precomputed_vertex_out_weight_vertices=pre_vtx_o_wgt_vertices,
precomputed_vertex_out_weight_sums=pre_vtx_o_wgt_sums,
personalization_vertices=personalization_vertices,
personalization_values=personalization_values,
initial_guess_vertices=initial_guess_vertices,
initial_guess_values=initial_guess_values,
alpha=alpha,
epsilon=epsilon,
max_iterations=max_iterations,
do_expensive_check=do_expensive_check,
fail_on_nonconvergence=fail_on_nonconvergence,
)
# Re-raise this as a cugraph exception so users trying to catch this do not
# have to know to import another package.
except plc_exceptions.FailedToConvergeError as exc:
raise FailedToConvergeError from exc
def pagerank(
input_graph,
alpha=0.85,
personalization=None,
precomputed_vertex_out_weight=None,
max_iter=100,
tol=1.0e-5,
nstart=None,
fail_on_nonconvergence=True,
):
"""
Find the PageRank values for each vertex in a graph using multiple GPUs.
cuGraph computes an approximation of the Pagerank using the power method.
The input graph must contain edge list as dask-cudf dataframe with
one partition per GPU.
All edges will have an edge_attr value of 1.0 if not provided.
Parameters
----------
input_graph : cugraph.Graph
cuGraph graph descriptor, should contain the connectivity information
        as a dask cudf edge list dataframe (edge weights are not used for this
algorithm).
alpha : float, optional (default=0.85)
The damping factor alpha represents the probability to follow an
outgoing edge, standard value is 0.85.
Thus, 1.0-alpha is the probability to “teleport” to a random vertex.
Alpha should be greater than 0.0 and strictly lower than 1.0.
personalization : cudf.Dataframe, optional (default=None)
GPU Dataframe containing the personalization information.
(a performance optimization)
personalization['vertex'] : cudf.Series
Subset of vertices of graph for personalization
personalization['values'] : cudf.Series
Personalization values for vertices
precomputed_vertex_out_weight : cudf.Dataframe, optional (default=None)
GPU Dataframe containing the precomputed vertex out weight
(a performance optimization)
information.
precomputed_vertex_out_weight['vertex'] : cudf.Series
Subset of vertices of graph for precomputed_vertex_out_weight
precomputed_vertex_out_weight['sums'] : cudf.Series
Corresponding precomputed sum of outgoing vertices weight
max_iter : int, optional (default=100)
The maximum number of iterations before an answer is returned. This can
be used to limit the execution time and do an early exit before the
solver reaches the convergence tolerance.
        If this value is lower than or equal to 0, cuGraph will use the default
        value, which is 100.
tol : float, optional (default=1e-05)
        Set the tolerance of the approximation; this parameter should be a small
magnitude value.
The lower the tolerance the better the approximation. If this value is
0.0f, cuGraph will use the default value which is 1.0E-5.
Setting too small a tolerance can lead to non-convergence due to
numerical roundoff. Usually values between 0.01 and 0.00001 are
acceptable.
nstart : cudf.Dataframe, optional (default=None)
GPU Dataframe containing the initial guess for pagerank.
(a performance optimization)
nstart['vertex'] : cudf.Series
Subset of vertices of graph for initial guess for pagerank values
nstart['values'] : cudf.Series
Pagerank values for vertices
fail_on_nonconvergence : bool (default=True)
If the solver does not reach convergence, raise an exception if
fail_on_nonconvergence is True. If fail_on_nonconvergence is False,
the return value is a tuple of (pagerank, converged) where pagerank is
a cudf.DataFrame as described below, and converged is a boolean
indicating if the solver converged (True) or not (False).
Returns
-------
The return value varies based on the value of the fail_on_nonconvergence
    parameter. If fail_on_nonconvergence is True:
PageRank : dask_cudf.DataFrame
GPU data frame containing two dask_cudf.Series of size V: the
vertex identifiers and the corresponding PageRank values.
NOTE: if the input cugraph.Graph was created using the renumber=False
option of any of the from_*_edgelist() methods, pagerank assumes that
the vertices in the edgelist are contiguous and start from 0.
If the actual set of vertices in the edgelist is not
contiguous (has gaps) or does not start from zero, pagerank will assume
the "missing" vertices are isolated vertices in the graph, and will
compute and return pagerank values for each. If this is not the desired
behavior, ensure the input cugraph.Graph is created from the
from_*_edgelist() functions with the renumber=True option (the default)
ddf['vertex'] : dask_cudf.Series
Contains the vertex identifiers
ddf['pagerank'] : dask_cudf.Series
Contains the PageRank score
If fail_on_nonconvergence is False:
(PageRank, converged) : tuple of (dask_cudf.DataFrame, bool)
PageRank is the GPU dataframe described above, converged is a bool
indicating if the solver converged (True) or not (False).
Examples
--------
>>> import cugraph.dask as dcg
>>> import dask_cudf
>>> # ... Init a DASK Cluster
>>> # see https://docs.rapids.ai/api/cugraph/stable/dask-cugraph.html
>>> # Download dataset from https://github.com/rapidsai/cugraph/datasets/..
>>> chunksize = dcg.get_chunksize(datasets_path / "karate.csv")
>>> ddf = dask_cudf.read_csv(datasets_path / "karate.csv",
... chunksize=chunksize, delimiter=" ",
... names=["src", "dst", "value"],
... dtype=["int32", "int32", "float32"])
>>> dg = cugraph.Graph(directed=True)
>>> dg.from_dask_cudf_edgelist(ddf, source='src', destination='dst')
>>> pr = dcg.pagerank(dg)
"""
# Initialize dask client
client = default_client()
if input_graph.store_transposed is False:
warning_msg = (
"Pagerank expects the 'store_transposed' flag "
"to be set to 'True' for optimal performance during "
"the graph creation"
)
warnings.warn(warning_msg, UserWarning)
initial_guess_vertices = None
initial_guess_values = None
precomputed_vertex_out_weight_vertices = None
precomputed_vertex_out_weight_sums = None
do_expensive_check = False
# FIXME: Distribute the 'precomputed_vertex_out_weight'
# across GPUs for performance optimization
if precomputed_vertex_out_weight is not None:
if input_graph.renumbered is True:
precomputed_vertex_out_weight = renumber_vertices(
input_graph, precomputed_vertex_out_weight
)
precomputed_vertex_out_weight = ensure_valid_dtype(
input_graph, precomputed_vertex_out_weight, "precomputed_vertex_out_weight"
)
precomputed_vertex_out_weight_vertices = precomputed_vertex_out_weight["vertex"]
precomputed_vertex_out_weight_sums = precomputed_vertex_out_weight["sums"]
# FIXME: Distribute the 'nstart' across GPUs for performance optimization
if nstart is not None:
if input_graph.renumbered is True:
nstart = renumber_vertices(input_graph, nstart)
nstart = ensure_valid_dtype(input_graph, nstart, "nstart")
initial_guess_vertices = nstart["vertex"]
initial_guess_values = nstart["values"]
if personalization is not None:
if input_graph.renumbered is True:
personalization = renumber_vertices(input_graph, personalization)
personalization = ensure_valid_dtype(
input_graph, personalization, "personalization"
)
personalization_ddf = dask_cudf.from_cudf(
personalization, npartitions=len(Comms.get_workers())
)
data_prsztn = get_distributed_data(personalization_ddf)
result = [
client.submit(
_call_plc_personalized_pagerank,
Comms.get_session_id(),
input_graph._plc_graph[w],
precomputed_vertex_out_weight_vertices,
precomputed_vertex_out_weight_sums,
data_personalization[0],
initial_guess_vertices,
initial_guess_values,
alpha,
tol,
max_iter,
do_expensive_check,
fail_on_nonconvergence,
workers=[w],
allow_other_workers=False,
)
for w, data_personalization in data_prsztn.worker_to_parts.items()
]
else:
result = [
client.submit(
_call_plc_pagerank,
Comms.get_session_id(),
input_graph._plc_graph[w],
precomputed_vertex_out_weight_vertices,
precomputed_vertex_out_weight_sums,
initial_guess_vertices,
initial_guess_values,
alpha,
tol,
max_iter,
do_expensive_check,
fail_on_nonconvergence,
workers=[w],
allow_other_workers=False,
)
for w in Comms.get_workers()
]
wait(result)
vertex_dtype = input_graph.edgelist.edgelist_df.dtypes[0]
# Have each worker convert tuple of arrays and bool from PLC to cudf
# DataFrames and bools. This will be a list of futures.
result_tuples = [
client.submit(convert_to_return_tuple, cp_arrays) for cp_arrays in result
]
# Convert the futures to dask delayed objects so the tuples can be
# split. nout=2 is passed since each tuple/iterable is a fixed length of 2.
result_tuples = [dask.delayed(r, nout=2) for r in result_tuples]
# Create the ddf and get the converged bool from the delayed objs. Use a
# meta DataFrame to pass the expected dtypes for the DataFrame to prevent
# another compute to determine them automatically.
meta = cudf.DataFrame(columns=["vertex", "pagerank"])
meta = meta.astype({"pagerank": "float64", "vertex": vertex_dtype})
ddf = dask_cudf.from_delayed([t[0] for t in result_tuples], meta=meta).persist()
converged = all(dask.compute(*[t[1] for t in result_tuples]))
wait(ddf)
# Wait until the inactive futures are released
wait([(r.release(), c_r.release()) for r, c_r in zip(result, result_tuples)])
if input_graph.renumbered:
ddf = input_graph.unrenumber(ddf, "vertex")
if fail_on_nonconvergence:
return ddf
else:
return (ddf, converged)
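For reference, the power method the docstring refers to can be sketched on a single CPU in a few lines. This toy version spreads dangling-vertex mass uniformly; it only illustrates the iteration and is not how the distributed pylibcugraph solver is implemented:

```python
def pagerank_scores(edges, num_vertices, alpha=0.85, max_iter=100, tol=1e-10):
    """Power iteration for PageRank on an edge list of (src, dst) pairs."""
    out_deg = [0] * num_vertices
    for s, _ in edges:
        out_deg[s] += 1
    pr = [1.0 / num_vertices] * num_vertices
    for _ in range(max_iter):
        # Teleport term: each vertex receives (1 - alpha) / N
        new = [(1.0 - alpha) / num_vertices] * num_vertices
        # Mass held by dangling vertices (no out-edges) is spread uniformly
        dangling = sum(pr[v] for v in range(num_vertices) if out_deg[v] == 0)
        for v in range(num_vertices):
            new[v] += alpha * dangling / num_vertices
        # Each vertex passes its rank equally along its out-edges
        for s, d in edges:
            new[d] += alpha * pr[s] / out_deg[s]
        delta = sum(abs(a - b) for a, b in zip(new, pr))
        pr = new
        if delta < tol:
            break
    return pr
```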
# File: rapidsai_public_repos/cugraph/python/cugraph/cugraph/dask/common/mg_utils.py
# Copyright (c) 2020-2023, NVIDIA CORPORATION.
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import os
import numba.cuda
# FIXME: this raft import breaks the library if ucx-py is
# not available. It is necessary only when doing MG work.
from cugraph.dask.common.read_utils import MissingUCXPy
try:
from raft_dask.common.utils import default_client
except ImportError as err:
# FIXME: Generalize since err.name is arr when
# libnuma.so.1 is not available
if err.name == "ucp" or err.name == "arr":
default_client = MissingUCXPy()
else:
raise
# FIXME: We currently look for the default client from dask. As such, if
# there is a dask client running without any GPU we will still try to run MG
# using this client. It also implies that more work will be required in order
# to run an MG Batch in combination with multi-GPU Graph
def get_client():
try:
client = default_client()
except ValueError:
client = None
return client
def prepare_worker_to_parts(data, client=None):
if client is None:
client = get_client()
for placeholder, worker in enumerate(client.has_what().keys()):
if worker not in data.worker_to_parts:
data.worker_to_parts[worker] = [placeholder]
return data
def is_single_gpu():
ngpus = len(numba.cuda.gpus)
if ngpus > 1:
return False
else:
return True
def get_visible_devices():
_visible_devices = os.environ.get("CUDA_VISIBLE_DEVICES")
if _visible_devices is None:
# FIXME: We assume that if the variable is unset there is only one GPU
visible_devices = ["0"]
else:
visible_devices = _visible_devices.strip().split(",")
return visible_devices
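The environment-variable parsing in get_visible_devices can be factored into a pure function that takes the mapping as an argument, which makes the fallback behavior easy to unit test. This is an illustrative, hypothetical refactor, not part of the module:

```python
import os

def parse_visible_devices(env=None):
    """Return the list of visible device IDs from CUDA_VISIBLE_DEVICES.

    Mirrors get_visible_devices above: when the variable is unset, assume
    a single GPU with ID "0".
    """
    mapping = os.environ if env is None else env
    value = mapping.get("CUDA_VISIBLE_DEVICES")
    if value is None:
        return ["0"]
    return value.strip().split(",")
```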
# File: rapidsai_public_repos/cugraph/python/cugraph/cugraph/dask/common/input_utils.py
# Copyright (c) 2020-2023, NVIDIA CORPORATION.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from collections.abc import Sequence
from collections import OrderedDict
from dask_cudf.core import DataFrame as dcDataFrame
from dask_cudf.core import Series as daskSeries
import cugraph.dask.comms.comms as Comms
# FIXME: this raft import breaks the library if ucx-py is
# not available. They are necessary only when doing MG work.
from cugraph.dask.common.read_utils import MissingUCXPy
try:
from raft_dask.common.utils import get_client
except ImportError as err:
# FIXME: Generalize since err.name is arr when
# libnuma.so.1 is not available
if err.name == "ucp" or err.name == "arr":
get_client = MissingUCXPy()
else:
raise
from cugraph.dask.common.part_utils import _extract_partitions
from dask.distributed import default_client
from toolz import first
from functools import reduce
class DistributedDataHandler:
"""
Class to centralize distributed data management. Functionalities include:
- Data colocation
- Worker information extraction
- GPU futures extraction,
Additional functionality can be added as needed. This class **does not**
contain the actual data, just the metadata necessary to handle it,
including common pieces of code that need to be performed to call
Dask functions.
The constructor is not meant to be used directly, but through the factory
method DistributedDataHandler.create
"""
def __init__(
self, gpu_futures=None, workers=None, datatype=None, multiple=False, client=None
):
self.client = get_client(client)
self.gpu_futures = gpu_futures
self.worker_to_parts = _workers_to_parts(gpu_futures)
self.workers = workers
self.datatype = datatype
self.multiple = multiple
self.worker_info = None
self.total_rows = None
self.max_vertex_id = None
self.ranks = None
self.parts_to_sizes = None
self.local_data = None
@classmethod
def get_client(cls, client=None):
return default_client() if client is None else client
""" Class methods for initalization """
@classmethod
def create(cls, data, client=None, batch_enabled=False):
"""
Creates a distributed data handler instance with the given
distributed data set(s).
Parameters
----------
data : dask.array, dask.dataframe, or unbounded Sequence of
dask.array or dask.dataframe.
        client : dask.distributed.Client
"""
client = cls.get_client(client)
multiple = isinstance(data, Sequence)
if isinstance(first(data) if multiple else data, (dcDataFrame, daskSeries)):
datatype = "cudf"
else:
raise TypeError("Graph data must be dask-cudf dataframe")
broadcast_worker = None
if batch_enabled:
worker_ranks = client.run(Comms.get_worker_id, Comms.get_session_id())
# The worker with 'rank = 0' must be the root of the broadcast.
broadcast_worker = list(worker_ranks.keys())[
list(worker_ranks.values()).index(0)
]
gpu_futures = client.sync(
_extract_partitions,
data,
client,
batch_enabled=batch_enabled,
broadcast_worker=broadcast_worker,
)
workers = tuple(OrderedDict.fromkeys(map(lambda x: x[0], gpu_futures)))
return DistributedDataHandler(
gpu_futures=gpu_futures,
workers=workers,
datatype=datatype,
multiple=multiple,
client=client,
)
""" Methods to calculate further attributes """
def calculate_worker_and_rank_info(self, comms):
self.worker_info = comms.worker_info(comms.worker_addresses)
self.ranks = dict()
for w, futures in self.worker_to_parts.items():
self.ranks[w] = self.worker_info[w]["rank"]
def calculate_parts_to_sizes(self, comms=None, ranks=None):
if self.worker_info is None and comms is not None:
self.calculate_worker_and_rank_info(comms)
self.total_rows = 0
self.parts_to_sizes = dict()
parts = [
(
wf[0],
self.client.submit(
_get_rows, wf[1], self.multiple, workers=[wf[0]], pure=False
),
)
for idx, wf in enumerate(self.worker_to_parts.items())
]
sizes = self.client.compute(parts, sync=True)
for w, sizes_parts in sizes:
sizes, total = sizes_parts
self.parts_to_sizes[self.worker_info[w]["rank"]] = sizes
self.total_rows += total
def calculate_local_data(self, comms, by):
if self.worker_info is None and comms is not None:
self.calculate_worker_and_rank_info(comms)
local_data = dict(
[
(
self.worker_info[wf[0]]["rank"],
self.client.submit(_get_local_data, wf[1], by, workers=[wf[0]]),
)
for idx, wf in enumerate(self.worker_to_parts.items())
]
)
_local_data_dict = self.client.compute(local_data, sync=True)
local_data_dict = {"edges": [], "offsets": [], "verts": []}
max_vid = 0
for rank in range(len(_local_data_dict)):
data = _local_data_dict[rank]
local_data_dict["edges"].append(data[0])
if rank == 0:
local_offset = 0
else:
prev_data = _local_data_dict[rank - 1]
local_offset = prev_data[1] + 1
local_data_dict["offsets"].append(local_offset)
local_data_dict["verts"].append(data[1] - local_offset + 1)
if data[2] > max_vid:
max_vid = data[2]
import numpy as np
local_data_dict["edges"] = np.array(local_data_dict["edges"], dtype=np.int32)
local_data_dict["offsets"] = np.array(
local_data_dict["offsets"], dtype=np.int32
)
local_data_dict["verts"] = np.array(local_data_dict["verts"], dtype=np.int32)
self.local_data = local_data_dict
self.max_vertex_id = max_vid
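The offset loop in `calculate_local_data` can be sketched in isolation. This is a hypothetical helper (`local_vertex_offsets` does not exist in cugraph) that captures only the offset arithmetic: rank 0 starts at 0, and each later rank starts one past the previous rank's maximum local vertex id:

```python
def local_vertex_offsets(per_rank_max_ids):
    # Sketch of the offset computation in calculate_local_data():
    # rank 0 starts at offset 0; rank r starts at (rank r-1 max id) + 1.
    offsets = [0]
    for prev_max in per_rank_max_ids[:-1]:
        offsets.append(prev_max + 1)
    return offsets
```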
def _get_local_data(df, by):
df = df[0]
num_local_edges = len(df)
local_by_max = df[by].iloc[-1]
local_max = df[["src", "dst"]].max().max()
return num_local_edges, local_by_max, local_max
""" Internal methods, API subject to change """
def _workers_to_parts(futures):
"""
    Builds an ordered dict mapping each worker to its list
    of parts.
:param futures: list of (worker, part) tuples
:return:
"""
w_to_p_map = OrderedDict.fromkeys(Comms.get_workers())
for w, p in futures:
if w_to_p_map[w] is None:
w_to_p_map[w] = []
w_to_p_map[w].append(p)
keys_to_delete = [w for (w, p) in w_to_p_map.items() if p is None]
for k in keys_to_delete:
del w_to_p_map[k]
return w_to_p_map
def _get_rows(objs, multiple):
def get_obj(x):
return x[0] if multiple else x
total = list(map(lambda x: get_obj(x).shape[0], objs))
return total, reduce(lambda a, b: a + b, total)
def get_mg_batch_data(dask_cudf_data, batch_enabled=False):
data = DistributedDataHandler.create(
data=dask_cudf_data, batch_enabled=batch_enabled
)
return data
def get_distributed_data(input_ddf):
ddf = input_ddf
comms = Comms.get_comms()
data = DistributedDataHandler.create(data=ddf)
if data.worker_info is None and comms is not None:
data.calculate_worker_and_rank_info(comms)
return data
def get_vertex_partition_offsets(input_graph):
import cudf
renumber_vertex_count = input_graph.renumber_map.implementation.ddf.map_partitions(
len
).compute()
renumber_vertex_cumsum = renumber_vertex_count.cumsum()
# Assume the input_graph edgelist was renumbered
src_col_name = input_graph.renumber_map.renumbered_src_col_name
vertex_dtype = input_graph.edgelist.edgelist_df[src_col_name].dtype
vertex_partition_offsets = cudf.Series([0], dtype=vertex_dtype)
vertex_partition_offsets = vertex_partition_offsets.append(
cudf.Series(renumber_vertex_cumsum, dtype=vertex_dtype)
)
return vertex_partition_offsets
# File: rapidsai_public_repos/cugraph/python/cugraph/cugraph/dask/common/part_utils.py
# Copyright (c) 2019-2023, NVIDIA CORPORATION.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import numpy as np
from dask.distributed import futures_of, default_client, wait
from toolz import first
import collections
import dask_cudf
from dask.array.core import Array as daskArray
from dask_cudf.core import DataFrame as daskDataFrame
from dask_cudf.core import Series as daskSeries
from functools import reduce
import cugraph.dask.comms.comms as Comms
from dask.delayed import delayed
import cudf
def workers_to_parts(futures):
"""
    Builds an ordered dict mapping each worker to its list
    of parts.
:param futures: list of (worker, part) tuples
:return:
"""
w_to_p_map = collections.OrderedDict()
for w, p in futures:
if w not in w_to_p_map:
w_to_p_map[w] = []
w_to_p_map[w].append(p)
return w_to_p_map
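The grouping performed by `workers_to_parts` can be written equivalently with `setdefault`. A minimal, self-contained sketch (the name `group_by_worker` is illustrative, not a cugraph API):

```python
from collections import OrderedDict

def group_by_worker(futures):
    # Same grouping as workers_to_parts(): collect parts per worker,
    # preserving the order in which workers are first seen.
    mapping = OrderedDict()
    for worker, part in futures:
        mapping.setdefault(worker, []).append(part)
    return mapping
```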
def _func_get_rows(df):
return df.shape[0]
def parts_to_ranks(client, worker_info, part_futures):
"""
Builds a list of (rank, size) tuples of partitions
:param worker_info: dict of {worker, {"rank": rank }}. Note: \
This usually comes from the underlying communicator
:param part_futures: list of (worker, future) tuples
:return: [(part, size)] in the same order of part_futures
"""
futures = [
(
worker_info[wf[0]]["rank"],
client.submit(_func_get_rows, wf[1], workers=[wf[0]], pure=False),
)
for idx, wf in enumerate(part_futures)
]
sizes = client.compute(list(map(lambda x: x[1], futures)), sync=True)
total = reduce(lambda a, b: a + b, sizes)
return [(futures[idx][0], size) for idx, size in enumerate(sizes)], total
def persist_distributed_data(dask_df, client):
client = default_client() if client is None else client
worker_addresses = Comms.get_workers()
_keys = dask_df.__dask_keys__()
worker_dict = {}
for i, key in enumerate(_keys):
worker_dict[key] = tuple([worker_addresses[i]])
persisted = client.persist(dask_df, workers=worker_dict)
parts = futures_of(persisted)
return parts
def _create_empty_dask_df_future(meta_df, client, worker):
df_future = client.scatter(meta_df.head(0), workers=[worker])
wait(df_future)
return [df_future]
def get_persisted_df_worker_map(dask_df, client):
ddf_keys = futures_of(dask_df)
output_map = {}
for w, w_keys in client.has_what().items():
output_map[w] = [ddf_k for ddf_k in ddf_keys if ddf_k.key in w_keys]
if len(output_map[w]) == 0:
output_map[w] = _create_empty_dask_df_future(dask_df._meta, client, w)
return output_map
def _chunk_lst(ls, num_parts):
return [ls[i::num_parts] for i in range(num_parts)]
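The slicing in `_chunk_lst` distributes elements round-robin: element `i` lands in part `i % num_parts`, so part sizes differ by at most one. A standalone sketch of the same striding (the name `chunk_round_robin` is illustrative):

```python
def chunk_round_robin(items, num_parts):
    # Same striding as _chunk_lst(): part k holds items[k], items[k+num_parts], ...
    return [items[i::num_parts] for i in range(num_parts)]
```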
def persist_dask_df_equal_parts_per_worker(
dask_df, client, return_type="dask_cudf.DataFrame"
):
"""
Persist dask_df with equal parts per worker
Args:
dask_df: dask_cudf.DataFrame
client: dask.distributed.Client
return_type: str, "dask_cudf.DataFrame" or "dict"
Returns:
persisted_keys: dict of {worker: [persisted_keys]}
"""
if return_type not in ["dask_cudf.DataFrame", "dict"]:
raise ValueError("return_type must be either 'dask_cudf.DataFrame' or 'dict'")
ddf_keys = dask_df.to_delayed()
workers = client.scheduler_info()["workers"].keys()
ddf_keys_ls = _chunk_lst(ddf_keys, len(workers))
persisted_keys_d = {}
for w, ddf_k in zip(workers, ddf_keys_ls):
persisted_keys_d[w] = client.compute(
ddf_k, workers=w, allow_other_workers=False, pure=False
)
persisted_keys_ls = [
item for sublist in persisted_keys_d.values() for item in sublist
]
wait(persisted_keys_ls)
if return_type == "dask_cudf.DataFrame":
dask_df = dask_cudf.from_delayed(
persisted_keys_ls, meta=dask_df._meta
).persist()
wait(dask_df)
return dask_df
return persisted_keys_d
def get_length_of_parts(persisted_keys_d, client):
"""
Get the length of each partition
Args:
persisted_keys_d: dict of {worker: [persisted_keys]}
client: dask.distributed.Client
Returns:
length_of_parts: dict of {worker: [length_of_parts]}
"""
length_of_parts = {}
for w, p_keys in persisted_keys_d.items():
length_of_parts[w] = [
client.submit(
len, p_key, pure=False, workers=[w], allow_other_workers=False
)
for p_key in p_keys
]
for w, len_futures in length_of_parts.items():
length_of_parts[w] = client.gather(len_futures)
return length_of_parts
async def _extract_partitions(
dask_obj, client=None, batch_enabled=False, broadcast_worker=None
):
client = default_client() if client is None else client
worker_list = Comms.get_workers()
# dask.dataframe or dask.array
if isinstance(dask_obj, (daskDataFrame, daskArray, daskSeries)):
if batch_enabled:
persisted = client.persist(dask_obj, workers=broadcast_worker)
else:
# repartition the 'dask_obj' to get as many partitions as there
# are workers
dask_obj = dask_obj.repartition(npartitions=len(worker_list))
# Have the first n workers persisting the n partitions
# Ideally, there would be as many partitions as there are workers
persisted = [
client.persist(dask_obj.get_partition(p), workers=w)
for p, w in enumerate(worker_list[: dask_obj.npartitions])
]
# Persist empty dataframe/series with the remaining workers if
# there are less partitions than workers
if dask_obj.npartitions < len(worker_list):
# The empty df should have the same column names and dtypes as
# dask_obj
if isinstance(dask_obj, dask_cudf.DataFrame):
empty_df = cudf.DataFrame(columns=list(dask_obj.columns))
empty_df = empty_df.astype(
dict(zip(dask_obj.columns, dask_obj.dtypes))
)
else:
empty_df = cudf.Series(dtype=dask_obj.dtype)
for p, w in enumerate(worker_list[dask_obj.npartitions :]):
empty_ddf = dask_cudf.from_cudf(empty_df, npartitions=1)
persisted.append(client.persist(empty_ddf, workers=w))
parts = futures_of(persisted)
# iterable of dask collections (need to colocate them)
elif isinstance(dask_obj, collections.abc.Sequence):
# NOTE: We colocate (X, y) here by zipping delayed
# n partitions of them as (X1, y1), (X2, y2)...
# and asking client to compute a single future for
# each tuple in the list.
dela = [np.asarray(d.to_delayed()) for d in dask_obj]
# TODO: ravel() is causing strange behavior w/ delayed Arrays which are
# not yet backed by futures. Need to investigate this behavior.
# ref: https://github.com/rapidsai/cuml/issues/2045
raveled = [d.flatten() for d in dela]
parts = client.compute([p for p in zip(*raveled)])
await wait(parts)
key_to_part = [(part.key, part) for part in parts]
who_has = await client.who_has(parts)
return [(first(who_has[key]), part) for key, part in key_to_part]
def create_dict(futures):
w_to_p_map = collections.OrderedDict()
for w, k, p in futures:
if w not in w_to_p_map:
w_to_p_map[w] = []
w_to_p_map[w].append([p, k])
return w_to_p_map
def set_global_index(df, cumsum):
df.index = df.index + cumsum
df.index = df.index.astype("int64")
return df
def get_cumsum(df, by):
return df[by].value_counts(sort=False).cumsum()
def repartition(ddf, cumsum):
# Calculate new optimal divisions and repartition the data
# for load balancing.
import math
npartitions = ddf.npartitions
count = math.ceil(len(ddf) / npartitions)
new_divisions = [0]
move_count = 0
    i = npartitions - 2  # keeps the final append below valid when the loop does not run (npartitions == 1)
for i in range(npartitions - 1):
search_val = count - move_count
index = cumsum[i].searchsorted(search_val)
if index == len(cumsum[i]):
index = -1
elif index > 0:
left = cumsum[i].iloc[index - 1]
right = cumsum[i].iloc[index]
index -= search_val - left < right - search_val
new_divisions.append(new_divisions[i] + cumsum[i].iloc[index] + move_count)
move_count = cumsum[i].iloc[-1] - cumsum[i].iloc[index]
new_divisions.append(new_divisions[i + 1] + cumsum[-1].iloc[-1] + move_count - 1)
return ddf.repartition(divisions=tuple(new_divisions))
def load_balance_func(ddf_, by, client=None):
# Load balances the sorted dask_cudf DataFrame.
# Input is a dask_cudf dataframe ddf_ which is sorted by
# the column name passed as the 'by' argument.
client = default_client() if client is None else client
parts = persist_distributed_data(ddf_, client)
wait(parts)
who_has = client.who_has(parts)
key_to_part = [(part.key, part) for part in parts]
    gpu_futures = [
        (first(who_has[key]), part.key[1], part) for key, part in key_to_part
    ]
    worker_to_data = create_dict(gpu_futures)
# Calculate cumulative sum in each dataframe partition
cumsum_parts = [
client.submit(get_cumsum, wf[1][0][0], by, workers=[wf[0]]).result()
for idx, wf in enumerate(worker_to_data.items())
]
num_rows = []
for cumsum in cumsum_parts:
num_rows.append(cumsum.iloc[-1])
# Calculate current partition divisions.
divisions = [sum(num_rows[0:x:1]) for x in range(0, len(num_rows) + 1)]
divisions[-1] = divisions[-1] - 1
divisions = tuple(divisions)
# Set global index from 0 to len(dask_cudf_dataframe) so that global
# indexing of divisions can be used for repartitioning.
futures = [
client.submit(
set_global_index, wf[1][0][0], divisions[wf[1][0][1]], workers=[wf[0]]
)
for idx, wf in enumerate(worker_to_data.items())
]
wait(futures)
ddf = dask_cudf.from_delayed(futures)
ddf.divisions = divisions
# Repartition the data
ddf = repartition(ddf, cumsum_parts)
return ddf
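The division computation inside `load_balance_func` is a prefix sum of per-partition row counts, with the last division decremented to be an inclusive index. A minimal sketch of just that arithmetic (`partition_divisions` is a hypothetical name, not part of cugraph):

```python
def partition_divisions(num_rows):
    # Prefix sums of per-partition row counts, as in load_balance_func();
    # the final division is decremented to make it an inclusive end index.
    divisions = [sum(num_rows[:x]) for x in range(len(num_rows) + 1)]
    divisions[-1] -= 1
    return tuple(divisions)
```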
def concat_dfs(df_list):
"""
Concat a list of cudf dataframes.
"""
return cudf.concat(df_list)
def get_delayed_dict(ddf):
"""
    Returns a dictionary with the dataframe tasks as keys and
the dataframe delayed objects as values.
"""
df_delayed = {}
for delayed_obj in ddf.to_delayed():
df_delayed[delayed_obj.key] = delayed_obj
return df_delayed
def concat_within_workers(client, ddf):
"""
Concats all partitions within workers without transfers.
"""
df_delayed = get_delayed_dict(ddf)
result = []
for worker, tasks in client.has_what().items():
worker_task_list = []
for task in list(tasks):
if task in df_delayed:
worker_task_list.append(df_delayed[task])
concat_tasks = delayed(concat_dfs)(worker_task_list)
result.append(client.persist(collections=concat_tasks, workers=worker))
return dask_cudf.from_delayed(result)
# File: rapidsai_public_repos/cugraph/python/cugraph/cugraph/dask/common/read_utils.py
# Copyright (c) 2019-2022, NVIDIA CORPORATION.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
def get_n_workers():
from dask.distributed import default_client
client = default_client()
return len(client.scheduler_info()["workers"])
def get_chunksize(input_path):
"""
Calculate the appropriate chunksize for dask_cudf.read_csv
to get a number of partitions equal to the number of GPUs.
Examples
--------
>>> import cugraph.dask as dcg
>>> chunksize = dcg.get_chunksize(datasets_path / 'netscience.csv')
"""
import os
from glob import glob
import math
input_files = sorted(glob(str(input_path)))
if len(input_files) == 1:
size = os.path.getsize(input_files[0])
chunksize = math.ceil(size / get_n_workers())
else:
size = [os.path.getsize(_file) for _file in input_files]
chunksize = max(size)
return chunksize
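The single-file branch of `get_chunksize` ceil-divides the file size by the worker count so `dask_cudf.read_csv` produces roughly one partition per worker. That arithmetic in isolation (the name `single_file_chunksize` is illustrative only):

```python
import math

def single_file_chunksize(file_size_bytes, n_workers):
    # Mirrors the single-file branch of get_chunksize(): ceil-divide so the
    # file splits into approximately one chunk per worker.
    return math.ceil(file_size_bytes / n_workers)
```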
class MissingUCXPy:
def __call__(self, *args, **kwargs):
raise ModuleNotFoundError(
"ucx-py could not be imported but is required for MG operations"
)
# File: rapidsai_public_repos/cugraph/python/cugraph/cugraph/dask/common/__init__.py
# Copyright (c) 2021-2023, NVIDIA CORPORATION.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# File: rapidsai_public_repos/cugraph/python/cugraph/cugraph/dask/structure/replication.pyx
# Copyright (c) 2020-2022, NVIDIA CORPORATION.
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# cython: profile=False
# distutils: language = c++
# cython: embedsignature = True
# cython: language_level = 3
from libc.stdint cimport uintptr_t
from cugraph.structure cimport utils as c_utils
from cugraph.structure.graph_primtypes cimport *
from libc.stdint cimport uintptr_t
import cudf
import dask.distributed as dd
from cugraph.dask.common.input_utils import get_mg_batch_data
import dask_cudf
import cugraph.dask.comms.comms as Comms
import cugraph.dask.common.mg_utils as mg_utils
import numpy as np
def replicate_cudf_dataframe(cudf_dataframe, client=None, comms=None):
if type(cudf_dataframe) is not cudf.DataFrame:
raise TypeError("Expected a cudf.Series to replicate")
client = mg_utils.get_client() if client is None else client
comms = Comms.get_comms() if comms is None else comms
dask_cudf_df = dask_cudf.from_cudf(cudf_dataframe, npartitions=1)
df_length = len(dask_cudf_df)
_df_data = get_mg_batch_data(dask_cudf_df, batch_enabled=True)
df_data = mg_utils.prepare_worker_to_parts(_df_data, client)
workers_to_futures = {worker: client.submit(_replicate_cudf_dataframe,
(data, cudf_dataframe.columns.values, cudf_dataframe.dtypes, df_length),
comms.sessionId,
workers=[worker]) for
(worker, data) in
df_data.worker_to_parts.items()}
dd.wait(workers_to_futures)
return workers_to_futures
def _replicate_cudf_dataframe(input_data, session_id):
cdef uintptr_t c_handle = <uintptr_t> NULL
cdef uintptr_t c_series = <uintptr_t> NULL
result = None
handle = Comms.get_handle(session_id)
c_handle = <uintptr_t>handle.getHandle()
_data, columns, dtypes, df_length = input_data
data = _data[0]
has_data = type(data) is cudf.DataFrame
series = None
df_data = {}
for idx, column in enumerate(columns):
if has_data:
series = data[column]
else:
dtype = dtypes[idx]
series = cudf.Series(np.zeros(df_length), dtype=dtype)
df_data[column] = series
c_series = series.__cuda_array_interface__['data'][0]
comms_bcast(c_handle, c_series, df_length, series.dtype)
if has_data:
result = data
else:
result = cudf.DataFrame(data=df_data)
return result
def replicate_cudf_series(cudf_series, client=None, comms=None):
if type(cudf_series) is not cudf.Series:
raise TypeError("Expected a cudf.Series to replicate")
client = mg_utils.get_client() if client is None else client
comms = Comms.get_comms() if comms is None else comms
dask_cudf_series = dask_cudf.from_cudf(cudf_series,
npartitions=1)
series_length = len(dask_cudf_series)
_series_data = get_mg_batch_data(dask_cudf_series, batch_enabled=True)
series_data = mg_utils.prepare_worker_to_parts(_series_data)
dtype = cudf_series.dtype
workers_to_futures = {worker:
client.submit(_replicate_cudf_series,
(data, series_length, dtype),
comms.sessionId,
workers=[worker]) for
(worker, data) in
series_data.worker_to_parts.items()}
dd.wait(workers_to_futures)
return workers_to_futures
def _replicate_cudf_series(input_data, session_id):
cdef uintptr_t c_handle = <uintptr_t> NULL
cdef uintptr_t c_result = <uintptr_t> NULL
result = None
handle = Comms.get_handle(session_id)
c_handle = <uintptr_t>handle.getHandle()
(_data, size, dtype) = input_data
data = _data[0]
has_data = type(data) is cudf.Series
if has_data:
result = data
else:
result = cudf.Series(np.zeros(size), dtype=dtype)
c_result = result.__cuda_array_interface__['data'][0]
comms_bcast(c_handle, c_result, size, dtype)
return result
cdef comms_bcast(uintptr_t handle,
uintptr_t value_ptr,
size_t count,
dtype):
if dtype == np.int32:
c_utils.comms_bcast((<handle_t*> handle)[0], <int*> value_ptr, count)
elif dtype == np.float32:
c_utils.comms_bcast((<handle_t*> handle)[0], <float*> value_ptr, count)
elif dtype == np.float64:
c_utils.comms_bcast((<handle_t*> handle)[0], <double*> value_ptr, count)
else:
        raise TypeError("Unsupported broadcast type")
# File: rapidsai_public_repos/cugraph/python/cugraph/cugraph/dask/structure/CMakeLists.txt
# =============================================================================
# Copyright (c) 2022, NVIDIA CORPORATION.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except
# in compliance with the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software distributed under the License
# is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express
# or implied. See the License for the specific language governing permissions and limitations under
# the License.
# =============================================================================
set(cython_sources replication.pyx)
set(linked_libraries cugraph::cugraph)
rapids_cython_create_modules(
CXX
SOURCE_FILES "${cython_sources}"
LINKED_LIBRARIES "${linked_libraries}" MODULE_PREFIX structure_
ASSOCIATED_TARGETS cugraph
)
# File: rapidsai_public_repos/cugraph/python/cugraph/cugraph/dask/structure/mg_property_graph.py
# Copyright (c) 2021-2023, NVIDIA CORPORATION.
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import cudf
import cupy
import cugraph
import dask_cudf
import cugraph.dask as dcg
from cugraph.utilities.utils import import_optional, create_list_series_from_2d_ar
from typing import Union
pd = import_optional("pandas")
class EXPERIMENTAL__MGPropertySelection:
"""
Instances of this class are returned from the PropertyGraph.select_*()
methods and can be used by the PropertyGraph.extract_subgraph() method to
extract a Graph containing vertices and edges with only the selected
properties.
"""
def __init__(self, vertex_selection_series=None, edge_selection_series=None):
self.vertex_selections = vertex_selection_series
self.edge_selections = edge_selection_series
def __add__(self, other):
"""
Add either the vertex_selections, edge_selections, or both to this
instance from "other" if either are not already set.
"""
vs = self.vertex_selections
if vs is None:
vs = other.vertex_selections
es = self.edge_selections
if es is None:
es = other.edge_selections
return EXPERIMENTAL__MGPropertySelection(vs, es)
# FIXME: remove leading __ when no longer experimental
class EXPERIMENTAL__MGPropertyGraph:
"""
Class which stores vertex and edge properties that can be used to construct
Graphs from individual property selections and used later to annotate graph
algorithm results with corresponding properties.
"""
# column name constants used in internal DataFrames
vertex_col_name = "_VERTEX_"
src_col_name = "_SRC_"
dst_col_name = "_DST_"
type_col_name = "_TYPE_"
edge_id_col_name = "_EDGE_ID_"
weight_col_name = "_WEIGHT_"
_default_type_name = ""
def __init__(self, num_workers=None):
# The dataframe containing the properties for each vertex.
# Each vertex occupies a row, and individual properties are maintained
# in individual columns. The table contains a column for each property
# of each vertex. If a vertex does not contain a property, it will have
# a NaN value in that property column. Each vertex will also have a
# "type_name" that can be assigned by the caller to describe the type
# of the vertex for a given application domain. If no type_name is
# provided, the default type_name is "".
# Example:
# vertex | type_name | propA | propB | propC
# ------------------------------------------
# 3 | "user" | 22 | NaN | 11
# 88 | "service" | NaN | 3.14 | 21
# 9 | "" | NaN | NaN | 2
self.__vertex_prop_dataframe = None
# The dataframe containing the properties for each edge.
# The description is identical to the vertex property dataframe, except
# edges are identified by ordered pairs of vertices (src and dst).
# Example:
# src | dst | type_name | propA | propB | propC
# ---------------------------------------------
# 3 | 88 | "started" | 22 | NaN | 11
# 88 | 9 | "called" | NaN | 3.14 | 21
# 9 | 88 | "" | NaN | NaN | 2
self.__edge_prop_dataframe = None
# The var:value dictionaries used during evaluation of filter/query
# expressions for vertices and edges. These dictionaries contain
# entries for each column name in their respective dataframes which
# are mapped to instances of PropertyColumn objects.
#
# When filter/query expressions are evaluated, PropertyColumn objects
# are used in place of DataFrame columns in order to support string
# comparisons when cuDF DataFrames are used. This approach also allows
# expressions to contain var names that can be used in expressions that
# are different than those in the actual internal tables, allowing for
# the tables to contain additional or different column names than what
# can be used in expressions.
#
# Example: "type_name == 'user' & propC > 10"
#
# The above would be evaluated and "type_name" and "propC" would be
# PropertyColumn instances which support specific operators used in
# queries.
self.__vertex_prop_eval_dict = {}
self.__edge_prop_eval_dict = {}
self.__dataframe_type = dask_cudf.DataFrame
self.__series_type = dask_cudf.Series
# The dtypes for each column in each DataFrame. This is required since
# merge operations can often change the dtypes to accommodate NaN
# values (eg. int64 to float64, since NaN is a float).
self.__vertex_prop_dtypes = {}
self.__edge_prop_dtypes = {}
# Lengths of the properties that are vectors
self.__vertex_vector_property_lengths = {}
self.__edge_vector_property_lengths = {}
# Add unique edge IDs to the __edge_prop_dataframe by simply
# incrementing this counter. Remains None if user provides edge IDs.
self.__last_edge_id = None
# Are edge IDs automatically generated sequentially by PG (True),
# provided by the user (False), or no edges added yet (None).
self.__is_edge_id_autogenerated = None
# Cached property values
self.__num_vertices = None
self.__vertex_type_value_counts = None
self.__edge_type_value_counts = None
# number of gpu's to use
if num_workers is None:
self.__num_workers = dcg.get_n_workers()
else:
self.__num_workers = num_workers
def _build_from_components(
self,
*,
vertex_prop_dataframe,
edge_prop_dataframe,
dataframe_type,
series_type,
vertex_prop_dtypes,
edge_prop_dtypes,
vertex_vector_property_lengths,
edge_vector_property_lengths,
last_edge_id,
is_edge_id_autogenerated,
# Computable
vertex_prop_eval_dict=None,
edge_prop_eval_dict=None,
# Cached properties
num_vertices=None,
vertex_type_value_counts=None,
edge_type_value_counts=None,
# MG-specific
num_workers=None,
):
"""Backdoor to populate a PropertyGraph from existing data.
Use only if you know what you're doing.
"""
self.__vertex_prop_dataframe = vertex_prop_dataframe
self.__edge_prop_dataframe = edge_prop_dataframe
if vertex_prop_eval_dict is None:
vertex_prop_eval_dict = {}
if vertex_prop_dataframe is not None:
self._update_eval_dict(
vertex_prop_eval_dict, vertex_prop_dataframe, self.vertex_col_name
)
self.__vertex_prop_eval_dict = vertex_prop_eval_dict
if edge_prop_eval_dict is None:
edge_prop_eval_dict = {}
if edge_prop_dataframe is not None:
self._update_eval_dict(
edge_prop_eval_dict, edge_prop_dataframe, self.edge_id_col_name
)
self.__edge_prop_eval_dict = edge_prop_eval_dict
self.__dataframe_type = dataframe_type
self.__series_type = series_type
self.__vertex_prop_dtypes = vertex_prop_dtypes
self.__edge_prop_dtypes = edge_prop_dtypes
self.__vertex_vector_property_lengths = vertex_vector_property_lengths
self.__edge_vector_property_lengths = edge_vector_property_lengths
self.__last_edge_id = last_edge_id
self.__is_edge_id_autogenerated = is_edge_id_autogenerated
self.__num_vertices = num_vertices
self.__vertex_type_value_counts = vertex_type_value_counts
self.__edge_type_value_counts = edge_type_value_counts
if num_workers is None:
self.__num_workers = dcg.get_n_workers()
else:
self.__num_workers = num_workers

    # PropertyGraph read-only attributes
@property
def edges(self):
if self.__edge_prop_dataframe is not None:
return self.__edge_prop_dataframe[
[self.src_col_name, self.dst_col_name]
].reset_index()
return None

    @property
def vertex_property_names(self):
if self.__vertex_prop_dataframe is not None:
props = list(self.__vertex_prop_dataframe.columns)
props.remove(self.type_col_name) # should "type" be removed?
return props
return []

    @property
def edge_property_names(self):
if self.__edge_prop_dataframe is not None:
props = list(self.__edge_prop_dataframe.columns)
props.remove(self.src_col_name)
props.remove(self.dst_col_name)
props.remove(self.type_col_name) # should "type" be removed?
if self.weight_col_name in props:
props.remove(self.weight_col_name)
return props
return []

    @property
def vertex_types(self):
"""The set of vertex type names"""
value_counts = self._vertex_type_value_counts
if value_counts is None:
names = set()
elif self.__series_type is dask_cudf.Series:
names = set(value_counts.index.to_arrow().to_pylist())
else:
names = set(value_counts.index)
default = self._default_type_name
if default not in names and self.get_num_vertices(default) > 0:
# include "" from vertices that only exist in edge data
names.add(default)
return names

    @property
def edge_types(self):
"""The set of edge type names"""
value_counts = self._edge_type_value_counts
if value_counts is None:
return set()
elif self.__series_type is dask_cudf.Series:
return set(value_counts.index.to_arrow().to_pylist())
else:
return set(value_counts.index)

    # PropertyGraph read-only attributes for debugging
@property
def _vertex_prop_dataframe(self):
return self.__vertex_prop_dataframe

    @property
def _edge_prop_dataframe(self):
return self.__edge_prop_dataframe

    @property
def _vertex_type_value_counts(self):
"""A Series of the counts of types in __vertex_prop_dataframe"""
if self.__vertex_prop_dataframe is None:
return
if self.__vertex_type_value_counts is None:
# Types should all be strings; what should we do if we see NaN?
self.__vertex_type_value_counts = (
self.__vertex_prop_dataframe[self.type_col_name]
.value_counts(sort=False, dropna=False)
.compute()
)
return self.__vertex_type_value_counts

    @property
def _edge_type_value_counts(self):
"""A Series of the counts of types in __edge_prop_dataframe"""
if self.__edge_prop_dataframe is None:
return
if self.__edge_type_value_counts is None:
# Types should all be strings; what should we do if we see NaN?
self.__edge_type_value_counts = (
self.__edge_prop_dataframe[self.type_col_name]
.value_counts(sort=False, dropna=False)
.compute()
)
return self.__edge_type_value_counts

    def get_num_vertices(self, type=None, *, include_edge_data=True):
"""Return the number of all vertices or vertices of a given type.
Parameters
----------
type : string, optional
If type is None (the default), return the total number of vertices,
otherwise return the number of vertices of the specified type.
include_edge_data : bool (default True)
If True, include vertices that were added in vertex and edge data.
If False, only include vertices that were added in vertex data.
Note that vertices that only exist in edge data are assumed to have
the default type.
See Also
--------
PropertyGraph.get_num_edges
"""
if type is None:
if not include_edge_data:
if self.__vertex_prop_dataframe is None:
return 0
return len(self.__vertex_prop_dataframe)
if self.__num_vertices is not None:
return self.__num_vertices
self.__num_vertices = 0
vert_sers = self.__get_all_vertices_series()
if vert_sers:
if self.__series_type is dask_cudf.Series:
vert_count = dask_cudf.concat(
vert_sers, ignore_index=True
).nunique()
self.__num_vertices = vert_count.compute()
return self.__num_vertices
value_counts = self._vertex_type_value_counts
if type == self._default_type_name and include_edge_data:
# The default type, "", can refer to both vertex and edge data
if self.__vertex_prop_dataframe is None:
return self.get_num_vertices()
return (
self.get_num_vertices()
- len(self.__vertex_prop_dataframe)
+ (value_counts[type] if type in value_counts else 0)
)
if self.__vertex_prop_dataframe is None:
return 0
return value_counts[type] if type in value_counts else 0

    def get_num_edges(self, type=None):
"""Return the number of all edges or edges of a given type.
Parameters
----------
type : string, optional
If type is None (the default), return the total number of edges,
otherwise return the number of edges of the specified type.
See Also
--------
PropertyGraph.get_num_vertices
"""
if type is None:
if self.__edge_prop_dataframe is not None:
return len(self.__edge_prop_dataframe)
else:
return 0
if self.__edge_prop_dataframe is None:
return 0
value_counts = self._edge_type_value_counts
return value_counts[type] if type in value_counts else 0

    def get_vertices(self, selection=None):
"""
Return a Series containing the unique vertex IDs contained in both
the vertex and edge property data.
"""
vert_sers = self.__get_all_vertices_series()
if vert_sers:
if self.__series_type is dask_cudf.Series:
return (
dask_cudf.concat(vert_sers, ignore_index=True)
.unique()
.sort_values()
)
else:
raise TypeError("dataframe must be a CUDF Dask dataframe.")
return self.__series_type()

    def vertices_ids(self):
"""
Alias for get_vertices()
"""
return self.get_vertices()

    def vertex_types_from_numerals(
self, nums: Union[cudf.Series, pd.Series]
) -> Union[cudf.Series, pd.Series]:
"""
Returns the string vertex type names given the numeric category labels.
Note: Does not accept or return dask_cudf Series.
Parameters
----------
nums: Union[cudf.Series, pandas.Series] (Required)
The list of numeric category labels to convert.
Returns
-------
Union[cudf.Series, pd.Series]
The string type names converted from the input numerals.
"""
return (
self.__vertex_prop_dataframe[self.type_col_name]
.dtype.categories.to_series()
.iloc[nums]
.reset_index(drop=True)
)

    def edge_types_from_numerals(
self, nums: Union[cudf.Series, pd.Series]
) -> Union[cudf.Series, pd.Series]:
"""
Returns the string edge type names given the numeric category labels.
Note: Does not accept or return dask_cudf Series.
Parameters
----------
nums: Union[cudf.Series, pandas.Series] (Required)
The list of numeric category labels to convert.
Returns
-------
Union[cudf.Series, pd.Series]
The string type names converted from the input numerals.
"""
return (
self.__edge_prop_dataframe[self.type_col_name]
.dtype.categories.to_series()
.iloc[nums]
.reset_index(drop=True)
)

    def add_vertex_data(
self,
dataframe,
vertex_col_name,
type_name=None,
property_columns=None,
vector_properties=None,
vector_property=None,
):
"""
Add a dataframe describing vertex properties to the PropertyGraph.
Parameters
----------
dataframe : DataFrame-compatible instance
A DataFrame instance with a compatible Pandas-like DataFrame
interface.
vertex_col_name : string
The column name that contains the values to be used as vertex IDs,
or the name of the index if the index is vertex IDs.
Specifying the index may be more efficient, and will be the most
efficient if the index of vertex IDs is already sorted.
type_name : string
The name to be assigned to the type of property being added. For
example, if dataframe contains data about users, type_name might be
"users". If not specified, the type of properties will be added as
the empty string, "".
property_columns : list of strings
List of column names in dataframe to be added as properties. All
other columns in dataframe will be ignored. If not specified, all
columns in dataframe are added.
vector_properties : dict of string to list of strings, optional
A dict of vector properties to create from columns in the dataframe.
Each vector property stores an array for each vertex.
The dict keys are the new vector property names, and the dict values
should be Python lists of column names from which to create the vector
property. Columns used to create vector properties won't be added to
the property graph by default, but may be included as properties by
including them in the property_columns argument.
Use ``MGPropertyGraph.vertex_vector_property_to_array`` to convert a
vertex vector property to an array.
vector_property : string, optional
If provided, all columns not included in other arguments will be used
to create a vector property with the given name. This is often used
for convenience instead of ``vector_properties`` when all input
properties should be converted to a vector property.
Returns
-------
None
Examples
--------
        >>> # Illustrative sketch with hypothetical data; assumes a Dask
        >>> # cluster and client have already been created.
        >>> import cudf
        >>> import dask_cudf
        >>> from cugraph.experimental import MGPropertyGraph
        >>> df = cudf.DataFrame(columns=["vert_id", "v_prop"],
        ...                     data=[(0, 10), (1, 20), (2, 30)])
        >>> mg_df = dask_cudf.from_cudf(df, npartitions=2)
        >>> pG = MGPropertyGraph()
        >>> pG.add_vertex_data(mg_df, type_name="vtype",
        ...                    vertex_col_name="vert_id")
"""
if type(dataframe) is not dask_cudf.DataFrame:
raise TypeError("dataframe must be a Dask dataframe.")
if vertex_col_name not in dataframe.columns:
if vertex_col_name != dataframe.index.name:
raise ValueError(
f"{vertex_col_name} is not a column in or the index name of "
f"dataframe: {dataframe.columns}"
)
index_is_set = True
else:
index_is_set = False
if type_name is not None and not isinstance(type_name, str):
raise TypeError(f"type_name must be a string, got: {type(type_name)}")
if type_name is None:
type_name = self._default_type_name
if property_columns:
if type(property_columns) is not list:
raise TypeError(
f"property_columns must be a list, got: {type(property_columns)}"
)
invalid_columns = set(property_columns).difference(dataframe.columns)
if invalid_columns:
raise ValueError(
"property_columns contains column(s) not found in dataframe: "
f"{list(invalid_columns)}"
)
existing_vectors = (
set(property_columns) & self.__vertex_vector_property_lengths.keys()
)
if existing_vectors:
raise ValueError(
"Non-vector property columns cannot be added to existing "
f"vector properties: {', '.join(sorted(existing_vectors))}"
)
TCN = self.type_col_name
if vector_properties is not None:
invalid_keys = {self.vertex_col_name, TCN}
if property_columns:
invalid_keys.update(property_columns)
self._check_vector_properties(
dataframe,
vector_properties,
self.__vertex_vector_property_lengths,
invalid_keys,
)
if vector_property is not None:
invalid_keys = {self.vertex_col_name, TCN, vertex_col_name}
if property_columns:
invalid_keys.update(property_columns)
if vector_properties:
invalid_keys.update(*vector_properties.values())
d = {
vector_property: [
col for col in dataframe.columns if col not in invalid_keys
]
}
invalid_keys.remove(vertex_col_name)
self._check_vector_properties(
dataframe,
d,
self.__vertex_vector_property_lengths,
invalid_keys,
)
# Update vector_properties, but don't mutate the original
if vector_properties is not None:
d.update(vector_properties)
vector_properties = d
# Clear the cached values related to the number of vertices since more
# could be added in this method.
self.__num_vertices = None
self.__vertex_type_value_counts = None # Could update instead
# Add `type_name` to the TYPE categorical dtype if necessary
is_first_data = self.__vertex_prop_dataframe is None
if is_first_data:
# Initialize the __vertex_prop_dataframe using the same
# type as the incoming dataframe.
temp_dataframe = cudf.DataFrame(columns=[self.vertex_col_name, TCN])
self.__vertex_prop_dataframe = dask_cudf.from_cudf(
temp_dataframe, npartitions=self.__num_workers
)
# Initialize the new columns to the same dtype as the appropriate
# column in the incoming dataframe, since the initial merge may not
# result in the same dtype. (see
# https://github.com/rapidsai/cudf/issues/9981)
if not index_is_set:
self.__update_dataframe_dtypes(
self.__vertex_prop_dataframe,
{self.vertex_col_name: dataframe[vertex_col_name].dtype},
)
self.__vertex_prop_dataframe = self.__vertex_prop_dataframe.set_index(
self.vertex_col_name
)
# Use categorical dtype for the type column
if self.__series_type is dask_cudf.Series:
cat_class = cudf.CategoricalDtype
else:
cat_class = pd.CategoricalDtype
cat_dtype = cat_class([type_name], ordered=False)
else:
cat_dtype = self.__update_categorical_dtype(
self.__vertex_prop_dataframe, TCN, type_name
)
# NOTE: This copies the incoming DataFrame in order to add the new
# columns. The copied DataFrame is then merged (another copy) and then
# deleted when out-of-scope.
# Ensure that both the predetermined vertex ID column name and vertex
# type column name are present for proper merging.
tmp_df = dataframe.copy()
if not index_is_set:
tmp_df[self.vertex_col_name] = tmp_df[vertex_col_name]
elif tmp_df.index.name != self.vertex_col_name:
tmp_df.index = tmp_df.index.rename(self.vertex_col_name)
# FIXME: handle case of a type_name column already being in tmp_df
# FIXME: We should do categorization first
# Related issue: https://github.com/rapidsai/cugraph/issues/2903
tmp_df[TCN] = type_name
tmp_df[TCN] = tmp_df[TCN].astype(cat_dtype)
if property_columns:
# all columns
column_names_to_drop = set(tmp_df.columns)
# remove the ones to keep
column_names_to_drop.difference_update(
property_columns + [self.vertex_col_name, TCN]
)
else:
column_names_to_drop = {vertex_col_name}
if vector_properties:
# Drop vector property source columns by default
more_to_drop = set().union(*vector_properties.values())
if property_columns is not None:
more_to_drop.difference_update(property_columns)
column_names_to_drop |= more_to_drop
column_names_to_drop -= vector_properties.keys()
tmp_df = self._create_vector_properties(tmp_df, vector_properties)
if index_is_set:
column_names_to_drop -= {self.vertex_col_name, vertex_col_name}
tmp_df = tmp_df.drop(labels=column_names_to_drop, axis=1)
# Save the original dtypes for each new column so they can be restored
# prior to constructing subgraphs (since column dtypes may get altered
# during merge to accommodate NaN values).
if is_first_data:
new_col_info = tmp_df.dtypes.items()
else:
new_col_info = self.__get_new_column_dtypes(
tmp_df, self.__vertex_prop_dataframe
)
self.__vertex_prop_dtypes.update(new_col_info)
tmp_df = tmp_df.persist()
if not index_is_set:
tmp_df = tmp_df.set_index(self.vertex_col_name).persist()
self.__update_dataframe_dtypes(tmp_df, self.__vertex_prop_dtypes)
if is_first_data:
self.__vertex_prop_dataframe = tmp_df
else:
# Join on vertex ids (the index)
            # TODO: can we automagically determine when to use concat?
df = self.__vertex_prop_dataframe.join(
tmp_df,
how="outer",
rsuffix="_NEW_",
# npartitions=self.__num_workers # TODO: see how this behaves
).persist()
cols = self.__vertex_prop_dataframe.columns.intersection(
tmp_df.columns
).to_list()
rename_cols = {f"{col}_NEW_": col for col in cols}
new_cols = list(rename_cols)
sub_df = df[new_cols].rename(columns=rename_cols)
# This only adds data--it doesn't replace existing data
df = df.drop(columns=new_cols).fillna(sub_df).persist()
if df.npartitions > 4 * self.__num_workers:
# TODO: better understand behavior of npartitions argument in join
df = df.repartition(npartitions=2 * self.__num_workers).persist()
self.__vertex_prop_dataframe = df
# Update the vertex eval dict with the latest column instances
self._update_eval_dict(
self.__vertex_prop_eval_dict,
self.__vertex_prop_dataframe,
self.vertex_col_name,
)

    def _update_eval_dict(self, eval_dict, df, index_name):
# Update the vertex eval dict with the latest column instances
latest = {n: df[n] for n in df.columns}
eval_dict.update(latest)
eval_dict[index_name] = df.index

    def get_vertex_data(self, vertex_ids=None, types=None, columns=None):
"""
Return a dataframe containing vertex properties for only the specified
vertex_ids, columns, and/or types, or all vertex IDs if not specified.
"""
if self.__vertex_prop_dataframe is not None:
df = self.__vertex_prop_dataframe
if vertex_ids is not None:
if isinstance(vertex_ids, int):
vertex_ids = [vertex_ids]
try:
df = df.loc[vertex_ids]
except TypeError:
raise TypeError(
"vertex_ids needs to be a list-like type "
f"compatible with DataFrame.loc[], got {type(vertex_ids)}"
)
if types is not None:
if isinstance(types, str):
df_mask = df[self.type_col_name] == types
else:
df_mask = df[self.type_col_name].isin(types)
df = df.loc[df_mask]
# The "internal" pG.vertex_col_name and pG.type_col_name columns
# are also included/added since they are assumed to be needed by
# the caller.
if columns is not None:
# FIXME: invalid columns will result in a KeyError, should a
# check be done here and a more PG-specific error raised?
df = df[[self.type_col_name] + columns]
df_out = df.reset_index()
# Preserve the dtype (vertex id type) to avoid cugraph algorithms
# throwing errors due to a dtype mismatch
index_dtype = self.__vertex_prop_dataframe.index.dtype
df_out.index = df_out.index.astype(index_dtype)
return df_out
return None

    def add_edge_data(
self,
dataframe,
vertex_col_names,
edge_id_col_name=None,
type_name=None,
property_columns=None,
vector_properties=None,
vector_property=None,
):
"""
Add a dataframe describing edge properties to the PropertyGraph.
Parameters
----------
dataframe : DataFrame-compatible instance
A DataFrame instance with a compatible Pandas-like DataFrame
interface.
vertex_col_names : list of strings
The column names that contain the values to be used as the source
and destination vertex IDs for the edges.
edge_id_col_name : string, optional
The column name that contains the values to be used as edge IDs,
or the name of the index if the index is edge IDs.
Specifying the index may be more efficient, and will be the most
efficient if the index of edge IDs is already sorted.
If unspecified, edge IDs will be automatically assigned.
Currently, all edge data must be added with the same method: either
with automatically generated IDs, or from user-provided edge IDs.
type_name : string
The name to be assigned to the type of property being added. For
example, if dataframe contains data about transactions, type_name
might be "transactions". If not specified, the type of properties
will be added as the empty string "".
property_columns : list of strings
List of column names in dataframe to be added as properties. All
other columns in dataframe will be ignored. If not specified, all
columns in dataframe are added.
vector_properties : dict of string to list of strings, optional
A dict of vector properties to create from columns in the dataframe.
Each vector property stores an array for each edge.
The dict keys are the new vector property names, and the dict values
should be Python lists of column names from which to create the vector
property. Columns used to create vector properties won't be added to
the property graph by default, but may be included as properties by
including them in the property_columns argument.
Use ``MGPropertyGraph.edge_vector_property_to_array`` to convert an
edge vector property to an array.
vector_property : string, optional
If provided, all columns not included in other arguments will be used
to create a vector property with the given name. This is often used
for convenience instead of ``vector_properties`` when all input
properties should be converted to a vector property.
Returns
-------
None
Examples
--------
        >>> # Illustrative sketch with hypothetical data; assumes a Dask
        >>> # cluster and client have already been created.
        >>> import cudf
        >>> import dask_cudf
        >>> from cugraph.experimental import MGPropertyGraph
        >>> df = cudf.DataFrame(columns=["src", "dst", "some_property"],
        ...                     data=[(99, 22, "a"), (98, 34, "b"),
        ...                           (97, 56, "c"), (96, 88, "d")])
        >>> mg_df = dask_cudf.from_cudf(df, npartitions=2)
        >>> pG = MGPropertyGraph()
        >>> pG.add_edge_data(mg_df, type_name="etype",
        ...                  vertex_col_names=("src", "dst"))
"""
if type(dataframe) is not dask_cudf.DataFrame:
raise TypeError("dataframe must be a Dask dataframe.")
if type(vertex_col_names) not in [list, tuple]:
raise TypeError(
"vertex_col_names must be a list or tuple, got: "
f"{type(vertex_col_names)}"
)
if edge_id_col_name is not None:
if not isinstance(edge_id_col_name, str):
raise TypeError(
"edge_id_col_name must be a string, got: "
f"{type(edge_id_col_name)}"
)
if edge_id_col_name not in dataframe.columns:
if edge_id_col_name != dataframe.index.name:
raise ValueError(
                        "edge_id_col_name is not a column in or the index "
                        f"name of dataframe, got {edge_id_col_name!r}"
)
index_is_set = True
else:
index_is_set = False
invalid_columns = set(vertex_col_names).difference(dataframe.columns)
if invalid_columns:
raise ValueError(
"vertex_col_names contains column(s) not found "
f"in dataframe: {list(invalid_columns)}"
)
if type_name is not None and not isinstance(type_name, str):
raise TypeError(f"type_name must be a string, got: {type(type_name)}")
if type_name is None:
type_name = self._default_type_name
if property_columns:
if type(property_columns) is not list:
raise TypeError(
f"property_columns must be a list, got: {type(property_columns)}"
)
invalid_columns = set(property_columns).difference(dataframe.columns)
if invalid_columns:
raise ValueError(
"property_columns contains column(s) not found in dataframe: "
f"{list(invalid_columns)}"
)
existing_vectors = (
                set(property_columns) & self.__edge_vector_property_lengths.keys()
)
if existing_vectors:
raise ValueError(
"Non-vector property columns cannot be added to existing "
f"vector properties: {', '.join(sorted(existing_vectors))}"
)
if self.__is_edge_id_autogenerated is False and edge_id_col_name is None:
raise NotImplementedError(
"Unable to automatically generate edge IDs. "
"`edge_id_col_name` must be specified if edge data has been "
"previously added with edge_id_col_name."
)
if self.__is_edge_id_autogenerated is True and edge_id_col_name is not None:
raise NotImplementedError(
"Invalid use of `edge_id_col_name`. Edge data has already "
"been added with automatically generated IDs, so now all "
"edge data must be added using automatically generated IDs."
)
TCN = self.type_col_name
if vector_properties is not None:
invalid_keys = {self.src_col_name, self.dst_col_name, TCN}
if property_columns:
invalid_keys.update(property_columns)
self._check_vector_properties(
dataframe,
vector_properties,
self.__edge_vector_property_lengths,
invalid_keys,
)
if vector_property is not None:
invalid_keys = {
self.src_col_name,
self.dst_col_name,
TCN,
vertex_col_names[0],
vertex_col_names[1],
}
if property_columns:
invalid_keys.update(property_columns)
if vector_properties:
invalid_keys.update(*vector_properties.values())
d = {
vector_property: [
col for col in dataframe.columns if col not in invalid_keys
]
}
invalid_keys.difference_update(vertex_col_names)
self._check_vector_properties(
dataframe,
d,
self.__edge_vector_property_lengths,
invalid_keys,
)
# Update vector_properties, but don't mutate the original
if vector_properties is not None:
d.update(vector_properties)
vector_properties = d
# Clear the cached value for num_vertices since more could be added in
        # this method. This method cannot affect __vertex_type_value_counts
self.__num_vertices = None
self.__edge_type_value_counts = None # Could update instead
# Add `type_name` to the categorical dtype if necessary
is_first_data = self.__edge_prop_dataframe is None
if is_first_data:
temp_dataframe = cudf.DataFrame(
columns=[self.src_col_name, self.dst_col_name, TCN]
)
self.__update_dataframe_dtypes(
temp_dataframe,
{
self.src_col_name: dataframe[vertex_col_names[0]].dtype,
self.dst_col_name: dataframe[vertex_col_names[1]].dtype,
},
)
temp_dataframe.index = temp_dataframe.index.rename(self.edge_id_col_name)
if edge_id_col_name is not None and not index_is_set:
temp_dataframe.index = temp_dataframe.index.astype(
dataframe[edge_id_col_name].dtype
)
# Use categorical dtype for the type column
if self.__series_type is dask_cudf.Series:
cat_class = cudf.CategoricalDtype
else:
cat_class = pd.CategoricalDtype
cat_dtype = cat_class([type_name], ordered=False)
self.__is_edge_id_autogenerated = edge_id_col_name is None
self.__edge_prop_dataframe = temp_dataframe
else:
cat_dtype = self.__update_categorical_dtype(
self.__edge_prop_dataframe, TCN, type_name
)
# NOTE: This copies the incoming DataFrame in order to add the new
# columns. The copied DataFrame is then merged (another copy) and then
# deleted when out-of-scope.
tmp_df = dataframe.copy()
tmp_df[self.src_col_name] = tmp_df[vertex_col_names[0]]
tmp_df[self.dst_col_name] = tmp_df[vertex_col_names[1]]
# FIXME: We should do categorization first
# Related issue: https://github.com/rapidsai/cugraph/issues/2903
tmp_df[TCN] = type_name
tmp_df[TCN] = tmp_df[TCN].astype(cat_dtype)
# Add unique edge IDs to the new rows. This is just a count for each
# row starting from the last edge ID value, with initial edge ID 0.
if edge_id_col_name is None:
# FIXME: can we assign index instead of column?
starting_eid = -1 if self.__last_edge_id is None else self.__last_edge_id
tmp_df[self.edge_id_col_name] = 1
tmp_df[self.edge_id_col_name] = (
tmp_df[self.edge_id_col_name].cumsum() + starting_eid
)
tmp_df = tmp_df.persist().set_index(self.edge_id_col_name).persist()
self.__last_edge_id = starting_eid + len(tmp_df)
else:
if not index_is_set:
tmp_df = tmp_df.rename(
columns={edge_id_col_name: self.edge_id_col_name}
).persist()
tmp_df = tmp_df.set_index(self.edge_id_col_name)
tmp_df.index = tmp_df.index.astype(dataframe[edge_id_col_name].dtype)
elif tmp_df.index.name != self.edge_id_col_name:
tmp_df.index = tmp_df.index.rename(self.edge_id_col_name)
tmp_df = tmp_df.persist()
if property_columns:
# all columns
column_names_to_drop = set(tmp_df.columns)
# remove the ones to keep
column_names_to_drop.difference_update(
property_columns + [self.src_col_name, self.dst_col_name, TCN]
)
else:
column_names_to_drop = {vertex_col_names[0], vertex_col_names[1]}
if vector_properties:
# Drop vector property source columns by default
more_to_drop = set().union(*vector_properties.values())
if property_columns is not None:
more_to_drop.difference_update(property_columns)
column_names_to_drop |= more_to_drop
column_names_to_drop -= vector_properties.keys()
tmp_df = self._create_vector_properties(tmp_df, vector_properties)
tmp_df = tmp_df.drop(labels=column_names_to_drop, axis=1)
# Save the original dtypes for each new column so they can be restored
# prior to constructing subgraphs (since column dtypes may get altered
# during merge to accommodate NaN values).
if is_first_data:
new_col_info = tmp_df.dtypes.items()
else:
new_col_info = self.__get_new_column_dtypes(
tmp_df, self.__edge_prop_dataframe
)
self.__edge_prop_dtypes.update(new_col_info)
self.__update_dataframe_dtypes(tmp_df, self.__edge_prop_dtypes)
if is_first_data:
self.__edge_prop_dataframe = tmp_df
else:
# Join on edge ids (the index)
            # TODO: can we automagically determine when to use concat?
df = self.__edge_prop_dataframe.join(
tmp_df,
how="outer",
rsuffix="_NEW_",
# npartitions=self.__num_workers # TODO: see how this behaves
).persist()
cols = self.__edge_prop_dataframe.columns.intersection(
tmp_df.columns
).to_list()
rename_cols = {f"{col}_NEW_": col for col in cols}
new_cols = list(rename_cols)
sub_df = df[new_cols].rename(columns=rename_cols)
# This only adds data--it doesn't replace existing data
df = df.drop(columns=new_cols).fillna(sub_df).persist()
if df.npartitions > 4 * self.__num_workers:
# TODO: better understand behavior of npartitions argument in join
df = df.repartition(npartitions=2 * self.__num_workers).persist()
self.__edge_prop_dataframe = df
# Update the edge eval dict with the latest column instances
self._update_eval_dict(
self.__edge_prop_eval_dict,
self.__edge_prop_dataframe,
self.edge_id_col_name,
)

    def get_edge_data(self, edge_ids=None, types=None, columns=None):
"""
Return a dataframe containing edge properties for only the specified
edge_ids, columns, and/or edge type, or all edge IDs if not specified.
"""
if self.__edge_prop_dataframe is not None:
df = self.__edge_prop_dataframe
if edge_ids is not None:
if isinstance(edge_ids, int):
edge_ids = [edge_ids]
try:
df = df.loc[edge_ids]
except TypeError:
raise TypeError(
"edge_ids needs to be a list-like type "
f"compatible with DataFrame.loc[], got {type(edge_ids)}"
)
if types is not None:
if isinstance(types, str):
df_mask = df[self.type_col_name] == types
else:
df_mask = df[self.type_col_name].isin(types)
df = df.loc[df_mask]
# The "internal" src, dst, edge_id, and type columns are also
# included/added since they are assumed to be needed by the caller.
if columns is None:
# remove the "internal" weight column if one was added
all_columns = list(self.__edge_prop_dataframe.columns)
if self.weight_col_name in all_columns:
all_columns.remove(self.weight_col_name)
df = df[all_columns]
else:
# FIXME: invalid columns will result in a KeyError, should a
# check be done here and a more PG-specific error raised?
df = df[
[self.src_col_name, self.dst_col_name, self.type_col_name] + columns
]
df_out = df.reset_index()
# Preserve the dtype (edge id type) to avoid cugraph algorithms
# throwing errors due to a dtype mismatch
index_dtype = self.__edge_prop_dataframe.index.dtype
df_out.index = df_out.index.astype(index_dtype)
return df_out
return None

    def fillna_vertices(self, val=0):
"""
Fills empty vertex property values with the given value, zero by default.
Fills in-place.
Parameters
----------
val : object, Series, or dict
The object that will replace "na". Default = 0. If a dict or
Series is passed, the index or keys are the columns to fill
and the values are the fill value for the corresponding column.
"""
self.__vertex_prop_dataframe = self.__vertex_prop_dataframe.fillna(
val
).persist()

    def fillna_edges(self, val=0):
"""
Fills empty edge property values with the given value, zero by default.
Fills in-place.
Parameters
----------
val : object, Series, or dict
The object that will replace "na". Default = 0. If a dict or
Series is passed, the index or keys are the columns to fill
and the values are the fill value for the corresponding column.
"""
self.__edge_prop_dataframe = self.__edge_prop_dataframe.fillna(val).persist()

    def select_vertices(self, expr, from_previous_selection=None):
raise NotImplementedError

    def select_edges(self, expr):
"""
Evaluate expr and return a PropertySelection object representing the
edges that match the expression.
Parameters
----------
expr : string
A python expression using property names and operators to select
specific edges.
Returns
-------
PropertySelection instance to be used for calls to extract_subgraph()
in order to construct a Graph containing only specific edges.
Examples
--------
        >>> # Continues the hypothetical add_edge_data() example; "_TYPE_"
        >>> # is the default type column name.
        >>> selection = pG.select_edges(
        ...     "(_TYPE_ == 'etype') & (some_property == 'd')")
"""
# FIXME: check types
globals = {}
locals = self.__edge_prop_eval_dict
selected_col = eval(expr, globals, locals)
return EXPERIMENTAL__MGPropertySelection(edge_selection_series=selected_col)

    def extract_subgraph(
self,
create_using=None,
selection=None,
edge_weight_property=None,
default_edge_weight=None,
check_multi_edges=True,
renumber_graph=True,
add_edge_data=True,
):
"""
Return a subgraph of the overall PropertyGraph containing vertices
and edges that match a selection.
Parameters
----------
create_using : type or instance of cugraph.Graph or PropertyGraph, optional
Creates a Graph to return using the type specified. If an instance
is specified, the type of the instance is used to construct the
return Graph, and all relevant attributes set on the instance are
copied to the return Graph (eg. directed). If not specified the
returned Graph will be a directed cugraph.MultiGraph instance.
selection : PropertySelection, optional
A PropertySelection returned from one or more calls to
select_vertices() and/or select_edges(), used for creating a Graph
with only the selected properties. If not specified the returned
Graph will have all properties. Note, this could result in a Graph
with multiple edges, which may not be supported based on the value
of create_using.
edge_weight_property : string, optional
The name of the property whose values will be used as weights on
the returned Graph. If not specified, the returned Graph will be
unweighted. Ignored for PropertyGraph return type.
default_edge_weight : float64, optional
Value that replaces empty weight property fields.
Ignored for PropertyGraph return type.
check_multi_edges : bool (default True)
When True and create_using argument is given and not a MultiGraph,
this will perform a check to verify that the edges in the edge
dataframe do not form a multigraph with duplicate edges.
Ignored for PropertyGraph return type.
renumber_graph : bool (default True)
If True, return a Graph that has been renumbered for use by graph
algorithms. If False, the returned graph will need to be manually
renumbered prior to calling graph algos.
Ignored for PropertyGraph return type.
add_edge_data : bool (default True)
If True, add meta data about the edges contained in the extracted
graph which are required for future calls to annotate_dataframe().
Ignored for PropertyGraph return type.
Returns
-------
A Graph instance of the same type as create_using containing only the
vertices and edges resulting from applying the selection to the set of
vertex and edge property data.
Examples
--------
>>>
"""
if selection is not None and not isinstance(
selection, EXPERIMENTAL__MGPropertySelection
):
raise TypeError(
"selection must be an instance of "
f"PropertySelection, got {type(selection)}"
)
# NOTE: the expressions passed in to extract specific edges and
# vertices assume the original dtypes in the user input have been
# preserved. However, merge operations on the DataFrames can change
# dtypes (eg. int64 to float64 in order to add NaN entries). This
# should not be a problem since the conversions do not change the
# values.
if selection is not None and selection.vertex_selections is not None:
selected_vertex_dataframe = self.__vertex_prop_dataframe[
selection.vertex_selections
]
else:
selected_vertex_dataframe = None
if selection is not None and selection.edge_selections is not None:
selected_edge_dataframe = self.__edge_prop_dataframe[
selection.edge_selections
]
else:
selected_edge_dataframe = self.__edge_prop_dataframe
# FIXME: check that self.__edge_prop_dataframe is set!
# If vertices were specified, select only the edges that contain the
# selected verts in both src and dst
if (
selected_vertex_dataframe is not None
and not selected_vertex_dataframe.empty
):
has_srcs = selected_edge_dataframe[self.src_col_name].isin(
selected_vertex_dataframe.index
)
has_dsts = selected_edge_dataframe[self.dst_col_name].isin(
selected_vertex_dataframe.index
)
edges = selected_edge_dataframe[has_srcs & has_dsts]
# Alternative to benchmark
# edges = selected_edge_dataframe.merge(
# selected_vertex_dataframe[[]],
# left_on=self.src_col_name,
# right_index=True,
# ).merge(
# selected_vertex_dataframe[[]],
# left_on=self.dst_col_name,
# right_index=True,
# )
else:
edges = selected_edge_dataframe
# Default create_using set here instead of function signature to
# prevent cugraph from running on import. This may help diagnose errors
create_kind = "cugraph"
if create_using is None:
create_using = cugraph.MultiGraph(directed=True)
elif isinstance(create_using, type(self)):
rv = type(create_using)()
create_kind = "propertygraph"
elif type(create_using) is type and issubclass(create_using, type(self)):
rv = create_using()
create_kind = "propertygraph"
if create_kind == "cugraph":
# The __*_prop_dataframes have likely been merged several times and
# possibly had their dtypes converted in order to accommodate NaN
# values. Restore the original dtypes in the resulting edges df prior
# to creating a Graph.
self.__update_dataframe_dtypes(edges, self.__edge_prop_dtypes)
return self.edge_props_to_graph(
edges,
create_using=create_using,
edge_weight_property=edge_weight_property,
default_edge_weight=default_edge_weight,
check_multi_edges=check_multi_edges,
renumber_graph=renumber_graph,
add_edge_data=add_edge_data,
)
# Return a subgraph as PropertyGraph
if (
selected_vertex_dataframe is None
and self.__vertex_prop_dataframe is not None
):
selected_vertex_dataframe = self.__vertex_prop_dataframe.copy()
num_vertices = self.__num_vertices
vertex_type_value_counts = self.__vertex_type_value_counts
else:
num_vertices = None
vertex_type_value_counts = None
if edges is not None and edges is self.__edge_prop_dataframe:
edges = edges.copy()
edge_type_value_counts = self.__edge_type_value_counts
else:
edge_type_value_counts = None
rv._build_from_components(
vertex_prop_dataframe=selected_vertex_dataframe,
edge_prop_dataframe=edges,
dataframe_type=self.__dataframe_type,
series_type=self.__series_type,
vertex_prop_dtypes=dict(self.__vertex_prop_dtypes),
edge_prop_dtypes=dict(self.__edge_prop_dtypes),
vertex_vector_property_lengths=dict(self.__vertex_vector_property_lengths),
edge_vector_property_lengths=dict(self.__edge_vector_property_lengths),
last_edge_id=self.__last_edge_id,
is_edge_id_autogenerated=self.__is_edge_id_autogenerated,
# Cached properties
num_vertices=num_vertices,
vertex_type_value_counts=vertex_type_value_counts,
edge_type_value_counts=edge_type_value_counts,
)
return rv
def annotate_dataframe(self, df, G, edge_vertex_col_names):
raise NotImplementedError()
def edge_props_to_graph(
self,
edge_prop_df,
create_using,
edge_weight_property=None,
default_edge_weight=None,
check_multi_edges=True,
renumber_graph=True,
add_edge_data=True,
):
"""
Create and return a Graph from the edges in edge_prop_df.
"""
# Don't mutate input data
edge_prop_df = edge_prop_df.copy()
# FIXME: check default_edge_weight is valid
if edge_weight_property:
if (
edge_weight_property not in edge_prop_df.columns
and edge_prop_df.index.name != edge_weight_property
):
raise ValueError(
"edge_weight_property "
f'"{edge_weight_property}" was not found in '
"edge_prop_df"
)
# Ensure a valid edge_weight_property can be used for applying
# weights to the subgraph, and if a default_edge_weight was
# specified, apply it to all NAs in the weight column.
# Also allow the type column to be specified as the edge weight
# property so that uniform_neighbor_sample can be called with
# the weights interpreted as types.
if edge_weight_property == self.type_col_name:
prop_col = edge_prop_df[self.type_col_name].cat.codes.astype("float32")
edge_prop_df["_temp_type_col"] = prop_col
edge_weight_property = "_temp_type_col"
elif edge_weight_property in edge_prop_df.columns:
prop_col = edge_prop_df[edge_weight_property]
else:
prop_col = edge_prop_df.index.to_series()
edge_prop_df[edge_weight_property] = prop_col
if prop_col.count().compute() != prop_col.size:
if default_edge_weight is None:
raise ValueError(
f'edge_weight_property "{edge_weight_property}" '
"contains NA values in the subgraph and "
"default_edge_weight is not set"
)
else:
# dask_cudf does not support inplace fillna; assign the filled
# column back to the dataframe instead
prop_col = prop_col.fillna(default_edge_weight)
edge_prop_df[edge_weight_property] = prop_col
edge_attr = edge_weight_property
# If a default_edge_weight was specified but an edge_weight_property
# was not, a new edge weight column must be added.
elif default_edge_weight:
edge_attr = self.weight_col_name
edge_prop_df[edge_attr] = default_edge_weight
else:
edge_attr = None
# Set up the new Graph to return
if isinstance(create_using, cugraph.Graph):
# FIXME: extract more attrs from the create_using instance
attrs = {"directed": create_using.is_directed()}
G = type(create_using)(**attrs)
elif type(create_using) is type and issubclass(create_using, cugraph.Graph):
G = create_using()
else:
raise TypeError(
"create_using must be a cugraph.Graph "
"(or subclass) type or instance, got: "
f"{type(create_using)}"
)
# Prevent duplicate edges (if not allowed) since applying them to
# non-MultiGraphs would result in ambiguous edge properties.
if (
check_multi_edges
and not G.is_multigraph()
and self.is_multigraph(edge_prop_df).compute()
):
if create_using:
if type(create_using) is type:
t = create_using.__name__
else:
t = type(create_using).__name__
msg = f"'{t}' graph type specified by create_using"
else:
msg = "default Graph graph type"
raise RuntimeError(
"query resulted in duplicate edges which "
f"cannot be represented with the {msg}"
)
col_names = [self.src_col_name, self.dst_col_name]
if edge_attr is not None:
col_names.append(edge_attr)
edge_prop_df = edge_prop_df.reset_index().drop(
[col for col in edge_prop_df if col not in col_names], axis=1
)
edge_prop_df = edge_prop_df.repartition(
npartitions=self.__num_workers * 4
).persist()
G.from_dask_cudf_edgelist(
edge_prop_df,
source=self.src_col_name,
destination=self.dst_col_name,
edge_attr=edge_attr,
renumber=renumber_graph,
)
if add_edge_data:
# Set the edge_data on the resulting Graph to a DataFrame
# containing the edges and the edge ID for each. Edge IDs are
# needed for future calls to annotate_dataframe() in order to
# associate edges with their properties, since the PG can contain
# multiple edges between vertices with different properties.
# FIXME: also add vertex_data
G.edge_data = self.__create_property_lookup_table(edge_prop_df)
del edge_prop_df
return G
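The weight-handling branches above (a named property column, else a constant default, else unweighted) can be sketched in plain Python. `resolve_edge_weights` and its list-of-dicts input are illustrative assumptions for this sketch, not part of the cugraph API:

```python
def resolve_edge_weights(rows, weight_property=None, default_weight=None):
    """Mimic the edge-weight resolution of edge_props_to_graph on a
    list of row dicts. Returns (edge_attr_name, rows) with missing
    weights filled, or (None, rows) for an unweighted result."""
    if weight_property is not None:
        filled = []
        for row in rows:
            row = dict(row)
            if row.get(weight_property) is None:
                if default_weight is None:
                    raise ValueError(
                        f"{weight_property!r} contains NA values and "
                        "default_weight is not set"
                    )
                row[weight_property] = default_weight
            filled.append(row)
        return weight_property, filled
    if default_weight is not None:
        # No property named: add a constant weight column.
        return "weight", [dict(row, weight=default_weight) for row in rows]
    return None, rows  # unweighted
```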
def renumber_vertices_by_type(self, prev_id_column=None):
"""Renumber vertex IDs to be contiguous by type.
Parameters
----------
prev_id_column : str, optional
Column name to save the vertex ID before renumbering.
Returns a DataFrame with the start and stop IDs for each vertex type.
Stop is *inclusive*.
"""
# Check if some vertex IDs exist only in edge data
TCN = self.type_col_name
default = self._default_type_name
if self.__edge_prop_dataframe is not None and self.get_num_vertices(
default, include_edge_data=True
) != self.get_num_vertices(default, include_edge_data=False):
raise NotImplementedError(
"Currently unable to renumber vertices when some vertex "
"IDs only exist in edge data"
)
if self.__vertex_prop_dataframe is None:
return None
if (
prev_id_column is not None
and prev_id_column in self.__vertex_prop_dataframe
):
raise ValueError(
f"Can't save previous IDs to existing column {prev_id_column!r}"
)
# Use categorical dtype for the type column
if self.__series_type is dask_cudf.Series:
cat_class = cudf.CategoricalDtype
else:
cat_class = pd.CategoricalDtype
is_cat = isinstance(self.__vertex_prop_dataframe.dtypes[TCN], cat_class)
if not is_cat:
cat_dtype = cat_class([TCN], ordered=False)
self.__vertex_prop_dataframe[TCN] = self.__vertex_prop_dataframe[
TCN
].astype(cat_dtype)
df = self.__vertex_prop_dataframe
index_dtype = df.index.dtype
# FIXME DASK_CUDF: https://github.com/rapidsai/cudf/issues/11795
cat_dtype = df.dtypes[TCN]
df[TCN] = df[TCN].astype(str)
# Include self.vertex_col_name when sorting by values to ensure we can
# evenly distribute the data across workers.
df = df.reset_index().persist()
if len(cat_dtype.categories) > 1 and len(self.vertex_types) > 1:
# `self.vertex_types` is currently not cheap, b/c it looks at edge df
df = df.sort_values(
by=[TCN, self.vertex_col_name], ignore_index=True
).persist()
if self.__edge_prop_dataframe is not None:
new_name = f"new_{self.vertex_col_name}"
df[new_name] = 1
df[new_name] = df[new_name].cumsum() - 1
mapper = df[[self.vertex_col_name, new_name]]
edge_index_dtype = self.__edge_prop_dataframe.index.dtype
self.__edge_prop_dataframe = (
self.__edge_prop_dataframe
# map src_col_name IDs
.merge(mapper, left_on=self.src_col_name, right_on=self.vertex_col_name)
.drop(columns=[self.src_col_name, self.vertex_col_name])
.rename(columns={new_name: self.src_col_name})
# map dst_col_name IDs
.merge(mapper, left_on=self.dst_col_name, right_on=self.vertex_col_name)
.drop(columns=[self.dst_col_name, self.vertex_col_name])
.rename(columns={new_name: self.dst_col_name})
)
self.__edge_prop_dataframe.index = self.__edge_prop_dataframe.index.astype(
edge_index_dtype
)
self.__edge_prop_dataframe.index = self.__edge_prop_dataframe.index.rename(
self.edge_id_col_name
)
if prev_id_column is None:
df[self.vertex_col_name] = df[new_name]
del df[new_name]
else:
df = df.rename(
columns={
new_name: self.vertex_col_name,
self.vertex_col_name: prev_id_column,
}
)
else:
if prev_id_column is not None:
df[prev_id_column] = df[self.vertex_col_name]
df[self.vertex_col_name] = 1
df[self.vertex_col_name] = df[self.vertex_col_name].cumsum() - 1
# FIXME DASK_CUDF: https://github.com/rapidsai/cudf/issues/11795
df[TCN] = df[TCN].astype(cat_dtype)
df[self.vertex_col_name] = df[self.vertex_col_name].astype(index_dtype)
self.__vertex_prop_dataframe = (
df.persist().set_index(self.vertex_col_name, sorted=True).persist()
)
# FIXME DASK_CUDF: https://github.com/rapidsai/cudf/issues/11795
df = self._vertex_type_value_counts
cat_dtype = df.index.dtype
df.index = df.index.astype(str)
# self._vertex_type_value_counts
rv = df.sort_index().cumsum().to_frame("stop")
# FIXME DASK_CUDF: https://github.com/rapidsai/cudf/issues/11795
df.index = df.index.astype(cat_dtype)
rv["start"] = rv["stop"].shift(1, fill_value=0)
rv["stop"] -= 1 # Make inclusive
return rv[["start", "stop"]]
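For intuition, the start/stop ranges returned here come from a cumulative sum over per-type counts (sorted by type, with stop made inclusive). A pure-Python sketch of just that arithmetic — not the dask_cudf implementation:

```python
def type_ranges(counts_by_type):
    """Given {type_name: count}, iterated in sorted order (matching the
    sort_index() above), return {type_name: (start, stop)} with stop
    inclusive, mirroring the cumsum/shift logic."""
    ranges = {}
    start = 0
    for name in sorted(counts_by_type):
        stop = start + counts_by_type[name]  # exclusive stop
        ranges[name] = (start, stop - 1)     # make inclusive
        start = stop
    return ranges
```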
def renumber_edges_by_type(self, prev_id_column=None):
"""Renumber edge IDs to be contiguous by type.
Parameters
----------
prev_id_column : str, optional
Column name to save the edge ID before renumbering.
Returns a DataFrame with the start and stop IDs for each edge type.
Stop is *inclusive*.
"""
# TODO: keep track if edges are already numbered correctly.
if self.__edge_prop_dataframe is None:
return None
if prev_id_column is not None and prev_id_column in self.__edge_prop_dataframe:
raise ValueError(
f"Can't save previous IDs to existing column {prev_id_column!r}"
)
df = self.__edge_prop_dataframe
index_dtype = df.index.dtype
# FIXME DASK_CUDF: https://github.com/rapidsai/cudf/issues/11795
cat_dtype = df.dtypes[self.type_col_name]
df[self.type_col_name] = df[self.type_col_name].astype(str)
# Include self.edge_id_col_name when sorting by values to ensure we can
# evenly distribute the data across workers.
df = df.reset_index().persist()
if len(cat_dtype.categories) > 1 and len(self.edge_types) > 1:
df = df.sort_values(
by=[self.type_col_name, self.edge_id_col_name], ignore_index=True
).persist()
if prev_id_column is not None:
df[prev_id_column] = df[self.edge_id_col_name]
# FIXME DASK_CUDF: https://github.com/rapidsai/cudf/issues/11795
df[self.type_col_name] = df[self.type_col_name].astype(cat_dtype)
df[self.edge_id_col_name] = 1
df[self.edge_id_col_name] = df[self.edge_id_col_name].cumsum() - 1
df[self.edge_id_col_name] = df[self.edge_id_col_name].astype(index_dtype)
self.__edge_prop_dataframe = (
df.persist().set_index(self.edge_id_col_name, sorted=True).persist()
)
# FIXME DASK_CUDF: https://github.com/rapidsai/cudf/issues/11795
df = self._edge_type_value_counts
if df.index.dtype == cat_dtype:
df.index = df.index.astype(str)
# self._edge_type_value_counts
rv = df.sort_index().cumsum().to_frame("stop")
# FIXME DASK_CUDF: https://github.com/rapidsai/cudf/issues/11795
df.index = df.index.astype(cat_dtype)
rv["start"] = rv["stop"].shift(1, fill_value=0)
rv["stop"] -= 1 # Make inclusive
return rv[["start", "stop"]]
def vertex_vector_property_to_array(
self, df, col_name, fillvalue=None, *, missing="ignore"
):
"""Convert a known vertex vector property in a DataFrame to an array.
Parameters
----------
df : dask_cudf.DataFrame
col_name : str
The column name in the DataFrame to convert to an array.
This vector property should have been created by MGPropertyGraph.
fillvalue : scalar or list, optional (default None)
Fill value for rows with missing vector data. If it is a list,
it must be the correct size of the vector property. If fillvalue is None,
then the behavior for missing data is controlled by the ``missing`` keyword.
Leave this as None for better performance if all rows should have data.
missing : {"ignore", "error"}
If "ignore", empty or null rows without vector data will be skipped
when creating the array, so output array shape will be
[# of non-empty rows] by [size of vector property].
When "error", RuntimeError will be raised if there are any empty rows.
Ignored if fillvalue is given.
Returns
-------
dask.array (of cupy.ndarray)
"""
if col_name not in self.__vertex_vector_property_lengths:
raise ValueError(f"{col_name!r} is not a known vertex vector property")
length = self.__vertex_vector_property_lengths[col_name]
return self._get_vector_property(df, col_name, length, fillvalue, missing)
def edge_vector_property_to_array(
self, df, col_name, fillvalue=None, *, missing="ignore"
):
"""Convert a known edge vector property in a DataFrame to an array.
Parameters
----------
df : dask_cudf.DataFrame
col_name : str
The column name in the DataFrame to convert to an array.
This vector property should have been created by MGPropertyGraph.
fillvalue : scalar or list, optional (default None)
Fill value for rows with missing vector data. If it is a list,
it must be the correct size of the vector property. If fillvalue is None,
then the behavior for missing data is controlled by the ``missing`` keyword.
Leave this as None for better performance if all rows should have data.
missing : {"ignore", "error"}
If "ignore", empty or null rows without vector data will be skipped
when creating the array, so output array shape will be
[# of non-empty rows] by [size of vector property].
When "error", RuntimeError will be raised if there are any empty rows.
Ignored if fillvalue is given.
Returns
-------
dask.array (of cupy.ndarray)
"""
if col_name not in self.__edge_vector_property_lengths:
raise ValueError(f"{col_name!r} is not a known edge vector property")
length = self.__edge_vector_property_lengths[col_name]
return self._get_vector_property(df, col_name, length, fillvalue, missing)
def _check_vector_properties(
self, df, vector_properties, vector_property_lengths, invalid_keys
):
"""Check if vector_properties is valid and update vector_property_lengths"""
df_cols = set(df.columns)
for key, columns in vector_properties.items():
if key in invalid_keys:
raise ValueError(
"Cannot assign new vector property to existing "
f"non-vector property: {key}"
)
if isinstance(columns, str):
# If df[columns] is a ListDtype column, should we allow it?
raise TypeError(
f"vector property columns for {key!r} should be a list; "
f"got a str ({columns!r})"
)
if not df_cols.issuperset(columns):
missing = ", ".join(set(columns) - df_cols)
raise ValueError(
f"Dataframe does not have columns for vector property {key!r}: "
f"{missing}"
)
if not columns:
raise ValueError(f"Empty vector property columns for {key!r}!")
if vector_property_lengths.get(key, len(columns)) != len(columns):
prev_length = vector_property_lengths[key]
new_length = len(columns)
raise ValueError(
f"Wrong size for vector property {key}; got {new_length}, but "
f"this vector property already exists with size {prev_length}"
)
for key, columns in vector_properties.items():
vector_property_lengths[key] = len(columns)
def _create_vector_properties(self, df, vector_properties):
return df.map_partitions(
self._create_vector_properties_partition, vector_properties
)
def _get_vector_property(self, df, col_name, length, fillvalue, missing):
if type(df) is not self.__dataframe_type:
raise TypeError(
f"Expected type {self.__dataframe_type}; got type {type(df)}"
)
if col_name not in df.columns:
raise ValueError(f"Column name {col_name} is not in the columns of df")
if missing not in {"error", "ignore"}:
raise ValueError(
f'missing keyword must be one of "error" or "ignore"; got {missing!r}'
)
if fillvalue is not None:
try:
fillvalue = list(fillvalue)
except Exception:
fillvalue = [fillvalue] * length
else:
if len(fillvalue) != length:
raise ValueError(
f"Wrong size of list as fill value; got {len(fillvalue)}, "
f"expected {length}"
)
if df.dtypes[col_name] != "list":
raise TypeError(
"Wrong dtype for vector property; expected 'list', "
f"got {df.dtypes[col_name]}"
)
s = df[col_name]
meta = self._vector_series_to_array_partition(
s._meta, length, fillvalue, "ignore"
)
return s.map_partitions(
self._vector_series_to_array_partition,
length,
fillvalue,
missing,
meta=meta,
)
def is_multi_gpu(self):
"""
Return True if this is a multi-gpu graph. Always returns True for
MGPropertyGraph.
"""
return True
@classmethod
def is_multigraph(cls, df):
"""
Return True if df has >1 of the same src, dst pair
"""
return cls._has_duplicates(df, [cls.src_col_name, cls.dst_col_name])
@classmethod
def has_duplicate_edges(cls, df, columns=None):
"""
Return True if df has rows with the same src, dst, type, and columns
"""
cols = [cls.src_col_name, cls.dst_col_name, cls.type_col_name]
if columns:
cols.extend(columns)
return cls._has_duplicates(df, cols)
@classmethod
def _has_duplicates(cls, df, cols):
# empty not supported by dask
if len(df.columns) == 0:
return False
unique_pair_len = df.drop_duplicates(
split_out=df.npartitions, ignore_index=True
).shape[0]
# if unique_pairs == len(df)
# then no duplicate edges
return unique_pair_len != df.shape[0]
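The `_has_duplicates` check above is simply "did deduplication shrink the row count". The same test over plain tuples (an illustrative sketch, not the dask code):

```python
def has_duplicates(rows, cols):
    """True if any two rows agree on all of `cols` (e.g. src/dst),
    i.e. deduplicating the key tuples shrinks the row count."""
    keys = [tuple(row[c] for c in cols) for row in rows]
    return len(set(keys)) != len(keys)
```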
def __create_property_lookup_table(self, edge_prop_df):
"""
Returns a DataFrame containing the src vertex, dst vertex, and edge_id
values from edge_prop_df.
"""
return edge_prop_df[[self.src_col_name, self.dst_col_name]].reset_index()
def __get_all_vertices_series(self):
"""
Return a list of all Series objects that contain vertices from all
tables.
"""
vpd = self.__vertex_prop_dataframe
epd = self.__edge_prop_dataframe
vert_sers = []
if vpd is not None:
vert_sers.append(vpd.index.to_series())
if epd is not None:
vert_sers.append(epd[self.src_col_name])
vert_sers.append(epd[self.dst_col_name])
# `dask_cudf.concat` doesn't work when the index dtypes are different
# See: https://github.com/rapidsai/cudf/issues/11741
if len(vert_sers) > 1 and not all(
cudf.api.types.is_dtype_equal(vert_sers[0].index.dtype, s.index.dtype)
for s in vert_sers
):
vert_sers = [s.reset_index(drop=True) for s in vert_sers]
return vert_sers
@staticmethod
def __get_new_column_dtypes(from_df, to_df):
"""
Returns a list containing tuples of (column name, dtype) for each
column in from_df that is not present in to_df.
"""
new_cols = set(from_df.columns) - set(to_df.columns)
return [(col, from_df.dtypes[col]) for col in new_cols]
@staticmethod
def __update_dataframe_dtypes(df, column_dtype_dict):
"""
Set the dtype for columns in df using the dtypes in column_dtype_dict.
This also handles converting standard integer dtypes to nullable
integer dtypes, needed to accommodate NA values in columns.
"""
for (col, dtype) in column_dtype_dict.items():
if col not in df.columns:
continue
# If the DataFrame is Pandas and the dtype is an integer type,
# ensure a nullable integer array is used by specifying the correct
# dtype. The alias for these dtypes is simply a capitalized string
# (eg. "Int64")
# https://pandas.pydata.org/pandas-docs/stable/user_guide/missing_data.html#integer-dtypes-and-missing-data
dtype_str = str(dtype)
if dtype_str in ["int32", "int64"]:
dtype_str = dtype_str.title()
if str(df.dtypes[col]) != dtype_str:
df[col] = df[col].astype(dtype_str)
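The nullable-integer step above maps NumPy dtype names to pandas' capitalized nullable aliases via `str.title()`. Shown here in isolation, without pandas:

```python
def nullable_alias(dtype_str):
    """Map a NumPy integer dtype name to its pandas nullable alias,
    leaving other dtype names unchanged (mirrors the check above)."""
    if dtype_str in ("int32", "int64"):
        return dtype_str.title()  # e.g. "int64" -> "Int64"
    return dtype_str
```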
def __update_categorical_dtype(self, df, column, val):
"""Add a new category to a categorical dtype column of a dataframe.
Returns the new categorical dtype.
"""
# Add `val` to the categorical dtype if necessary
if val not in df.dtypes[column].categories:
df[column] = df[column].cat.add_categories([val])
return df.dtypes[column]
@staticmethod
def _create_vector_properties_partition(df, vector_properties):
# Make each vector contiguous and 1-d
new_cols = {}
for key, columns in vector_properties.items():
values = df[columns].values
new_cols[key] = create_list_series_from_2d_ar(values, index=df.index)
return df.assign(**new_cols)
@staticmethod
def _vector_series_to_array_partition(s, length, fillvalue, missing):
# This returns a writable view (i.e., no copies!)
if len(s) == 0:
# TODO: fix bug in cudf; operating on dask_cudf dataframes nests list dtype
dtype = s.dtype
while dtype == "list":
dtype = dtype.element_type
return cupy.empty((0, length), dtype=dtype)
if fillvalue is not None:
s = s.copy() # copy b/c we mutate below
s[s.isnull()] = fillvalue
rv = s._data.columns[0].children[-1].values.reshape(-1, length)
if fillvalue is None and missing == "error" and rv.shape[0] != len(s):
raise RuntimeError(
f"Vector property {s.name!r} has empty rows! "
'Provide a fill value or use `missing="ignore"` to ignore empty rows.'
)
return rv
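`_vector_series_to_array_partition` reshapes the list column's flat child buffer into an (n_rows, length) view. With plain lists the equivalent operation looks like this (a sketch only — the real code uses cuDF column internals and CuPy):

```python
def reshape_flat(values, length):
    """Split a flat buffer into rows of `length` items, raising if the
    buffer does not divide evenly (the analogue of the empty-row
    detection above)."""
    if len(values) % length != 0:
        raise RuntimeError("vector property has empty or ragged rows")
    return [values[i : i + length] for i in range(0, len(values), length)]
```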
| 0 |
rapidsai_public_repos/cugraph/python/cugraph/cugraph/dask | rapidsai_public_repos/cugraph/python/cugraph/cugraph/dask/structure/__init__.py | # Copyright (c) 2021-2023, NVIDIA CORPORATION.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
| 0 |
rapidsai_public_repos/cugraph/python/cugraph/cugraph | rapidsai_public_repos/cugraph/python/cugraph/cugraph/community/triangle_count.py | # Copyright (c) 2019-2023, NVIDIA CORPORATION.
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from cugraph.utilities import ensure_cugraph_obj_for_nx
import cudf
from pylibcugraph import triangle_count as pylibcugraph_triangle_count
from pylibcugraph import ResourceHandle
import warnings
# FIXME: Move this function to the utility module so that it can be
# shared by other algos
def ensure_valid_dtype(input_graph, start_list):
vertex_dtype = input_graph.edgelist.edgelist_df.dtypes[0]
if isinstance(start_list, cudf.Series):
start_list_dtypes = start_list.dtype
else:
start_list_dtypes = start_list.dtypes[0]
if start_list_dtypes != vertex_dtype:
warning_msg = (
"Triangle_count requires 'start_list' to match the graph's 'vertex' type. "
f"input graph's vertex type is: {vertex_dtype} and got "
f"'start_list' of type: {start_list_dtypes}."
)
warnings.warn(warning_msg, UserWarning)
start_list = start_list.astype(vertex_dtype)
return start_list
def triangle_count(G, start_list=None):
"""
Compute the number of triangles (cycles of length three) in the
input graph.
Parameters
----------
G : cugraph.graph or networkx.Graph
cuGraph graph descriptor, should contain the connectivity information,
(edge weights are not used in this algorithm).
The current implementation only supports undirected graphs.
start_list : list or cudf.Series, optional
List of vertices for triangle counting. If None, the entire set of
vertices in the graph is processed.
Returns
-------
result : cudf.DataFrame
GPU data frame containing 2 cudf.Series
df['vertex']: cudf.Series
Contains the vertices processed
df['counts']: cudf.Series
Contains the triangle count for each vertex
Examples
--------
>>> gdf = cudf.read_csv(datasets_path / 'karate.csv',
... delimiter = ' ',
... dtype=['int32', 'int32', 'float32'],
... header=None)
>>> G = cugraph.Graph()
>>> G.from_cudf_edgelist(gdf, source='0', destination='1', edge_attr='2')
>>> count = cugraph.triangle_count(G)
"""
G, _ = ensure_cugraph_obj_for_nx(G)
if G.is_directed():
raise ValueError("input graph must be undirected")
if start_list is not None:
if isinstance(start_list, int):
start_list = [start_list]
if isinstance(start_list, list):
start_list = cudf.Series(start_list)
if not isinstance(start_list, cudf.Series):
raise TypeError(
"'start_list' must be either a list or a cudf.Series, "
f"got: {type(start_list)}"
)
start_list = ensure_valid_dtype(G, start_list)
if G.renumbered is True:
if isinstance(start_list, cudf.DataFrame):
start_list = G.lookup_internal_vertex_id(start_list, start_list.columns)
else:
start_list = G.lookup_internal_vertex_id(start_list)
do_expensive_check = False
vertex, counts = pylibcugraph_triangle_count(
resource_handle=ResourceHandle(),
graph=G._plc_graph,
start_list=start_list,
do_expensive_check=do_expensive_check,
)
df = cudf.DataFrame()
df["vertex"] = vertex
df["counts"] = counts
if G.renumbered:
df = G.unrenumber(df, "vertex")
return df
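For intuition about what `triangle_count` computes, here is a CPU-only sketch using set intersections; it is illustrative only and not the pylibcugraph implementation:

```python
def triangle_counts(edges):
    """Per-vertex triangle counts for an undirected edge list."""
    adj = {}
    for u, v in edges:
        if u != v:
            adj.setdefault(u, set()).add(v)
            adj.setdefault(v, set()).add(u)
    counts = {}
    for v, nbrs in adj.items():
        # Each triangle through v is a pair of adjacent neighbors;
        # every such pair is counted twice, hence the // 2.
        counts[v] = sum(len(adj[n] & nbrs) for n in nbrs) // 2
    return counts
```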
| 0 |
rapidsai_public_repos/cugraph/python/cugraph/cugraph | rapidsai_public_repos/cugraph/python/cugraph/cugraph/community/induced_subgraph.py | # Copyright (c) 2019-2023, NVIDIA CORPORATION.
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import warnings
from typing import Union, Tuple
import cudf
from pylibcugraph import ResourceHandle
from pylibcugraph import induced_subgraph as pylibcugraph_induced_subgraph
from cugraph.structure import Graph
from cugraph.utilities import (
ensure_cugraph_obj_for_nx,
cugraph_to_nx,
)
from cugraph.utilities.utils import import_optional
# FIXME: the networkx.Graph type used in type annotations is specified
# using a string literal to avoid depending on and importing networkx.
# Instead, networkx is imported optionally, which may cause a problem
# for a type checker if run in an environment where networkx is not installed.
networkx = import_optional("networkx")
# FIXME: Move this function to the utility module so that it can be
# shared by other algos
def ensure_valid_dtype(input_graph: Graph, input: cudf.Series, input_name: str):
vertex_dtype = input_graph.edgelist.edgelist_df.dtypes[0]
input_dtype = input.dtype
if input_dtype != vertex_dtype:
warning_msg = (
f"Subgraph requires '{input_name}' "
"to match the graph's 'vertex' type. "
f"input graph's vertex type is: {vertex_dtype} and got "
f"'{input_name}' of type: "
f"{input_dtype}."
)
warnings.warn(warning_msg, UserWarning)
input = input.astype(vertex_dtype)
return input
def induced_subgraph(
G: Union[Graph, "networkx.Graph"],
vertices: Union[cudf.Series, cudf.DataFrame],
offsets: Union[list, cudf.Series] = None,
) -> Tuple[Union[Graph, "networkx.Graph"], cudf.Series]:
"""
Compute a subgraph of the existing graph including only the specified
vertices. This algorithm works with both directed and undirected graphs
and does not actually traverse the edges, but instead simply pulls out any
edges that are incident on vertices that are both contained in the vertices
list.
If no subgraph can be extracted from the vertices provided, '(None, None)'
will be returned.
Parameters
----------
G : cugraph.Graph or networkx.Graph
The current implementation only supports weighted graphs.
vertices : cudf.Series or cudf.DataFrame
Specifies the vertices of the induced subgraph. For multi-column
vertices, vertices should be provided as a cudf.DataFrame
offsets : list or cudf.Series, optional
Specifies the subgraph offsets into the subgraph vertices.
If no offsets array is provided, a default array [0, len(vertices)]
will be used.
Returns
-------
Sg : cugraph.Graph or networkx.Graph
A graph object containing the subgraph induced by the given vertex set.
seeds_offsets: cudf.Series
A cudf Series containing the starting offset in the returned edge list
for each seed.
Examples
--------
>>> from cugraph.datasets import karate
>>> import numpy as np
>>> G = karate.get_graph(download=True)
>>> verts = np.zeros(3, dtype=np.int32)
>>> verts[0] = 0
>>> verts[1] = 1
>>> verts[2] = 2
>>> sverts = cudf.Series(verts)
>>> Sg, seeds_offsets = cugraph.induced_subgraph(G, sverts)
"""
G, isNx = ensure_cugraph_obj_for_nx(G)
directed = G.is_directed()
# FIXME: Hardcoded for now
offsets = None
if G.renumbered:
if isinstance(vertices, cudf.DataFrame):
vertices = G.lookup_internal_vertex_id(vertices, vertices.columns)
else:
vertices = G.lookup_internal_vertex_id(vertices)
vertices = ensure_valid_dtype(G, vertices, "subgraph_vertices")
if not isinstance(offsets, cudf.Series):
if isinstance(offsets, list):
offsets = cudf.Series(offsets)
elif offsets is None:
            # FIXME: Do the offsets always start from zero?
offsets = cudf.Series([0, len(vertices)])
result_graph = Graph(directed=directed)
do_expensive_check = False
source, destination, weight, offsets = pylibcugraph_induced_subgraph(
resource_handle=ResourceHandle(),
graph=G._plc_graph,
subgraph_vertices=vertices,
subgraph_offsets=offsets,
do_expensive_check=do_expensive_check,
)
df = cudf.DataFrame()
df["src"] = source
df["dst"] = destination
df["weight"] = weight
if len(df) == 0:
return None, None
seeds_offsets = cudf.Series(offsets)
if G.renumbered:
df, src_names = G.unrenumber(df, "src", get_column_names=True)
df, dst_names = G.unrenumber(df, "dst", get_column_names=True)
else:
        # FIXME: The original 'src' and 'dst' are not stored in 'simpleGraph'
src_names = "src"
dst_names = "dst"
if G.edgelist.weights:
result_graph.from_cudf_edgelist(
df, source=src_names, destination=dst_names, edge_attr="weight"
)
else:
result_graph.from_cudf_edgelist(df, source=src_names, destination=dst_names)
if isNx is True:
result_graph = cugraph_to_nx(result_graph)
return result_graph, seeds_offsets
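The selection rule described in the docstring above (keep every edge whose endpoints are both in the vertex list, with no traversal) can be sketched in plain Python. The `induced_edges` helper and its tiny edge list below are made up for illustration; they are not part of the cugraph API:

```python
def induced_edges(edges, vertices):
    """Return the edges whose endpoints are both in `vertices`.

    `edges` is an iterable of (src, dst) pairs. No traversal is
    performed -- each edge is tested independently, mirroring the
    semantics documented for the induced-subgraph extraction above.
    """
    vset = set(vertices)
    return [(s, d) for (s, d) in edges if s in vset and d in vset]


# Tiny made-up edge list: a triangle 0-1-2 plus a pendant edge (2, 3).
edges = [(0, 1), (1, 2), (2, 0), (2, 3)]
# Selecting {0, 1, 2} keeps the triangle and drops the pendant edge.
assert induced_edges(edges, [0, 1, 2]) == [(0, 1), (1, 2), (2, 0)]
```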
# File: python/cugraph/cugraph/community/ecg.py
# Copyright (c) 2019-2023, NVIDIA CORPORATION.
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from cugraph.utilities import (
ensure_cugraph_obj_for_nx,
df_score_to_dictionary,
)
import cudf
from pylibcugraph import ecg as pylibcugraph_ecg
from pylibcugraph import ResourceHandle
def ecg(input_graph, min_weight=0.05, ensemble_size=16, weight=None):
"""
Compute the Ensemble Clustering for Graphs (ECG) partition of the input
graph. ECG runs truncated Louvain on an ensemble of permutations of the
input graph, then uses the ensemble partitions to determine weights for
the input graph. The final result is found by running full Louvain on
the input graph using the determined weights.
See https://arxiv.org/abs/1809.05578 for further information.
Parameters
----------
input_graph : cugraph.Graph or NetworkX Graph
The graph descriptor should contain the connectivity information
and weights. The adjacency list will be computed if not already
present.
    min_weight : float, optional (default=0.05)
        The minimum value to assign as an edge weight in the ECG algorithm.
        It should be a value in the range [0, 1], usually left as the
        default value of 0.05.
ensemble_size : integer, optional (default=16)
The number of graph permutations to use for the ensemble.
The default value is 16, larger values may produce higher quality
partitions for some graphs.
    weight : str, optional (default=None)
        This parameter is here for NetworkX compatibility and
        indicates which NetworkX data column represents the edge weights.
Returns
-------
parts : cudf.DataFrame or python dictionary
GPU data frame of size V containing two columns, the vertex id and
the partition id it is assigned to.
df[vertex] : cudf.Series
Contains the vertex identifiers
df[partition] : cudf.Series
Contains the partition assigned to the vertices
Examples
--------
>>> from cugraph.datasets import karate
>>> G = karate.get_graph(download=True)
>>> parts = cugraph.ecg(G)
"""
input_graph, isNx = ensure_cugraph_obj_for_nx(input_graph)
vertex, partition = pylibcugraph_ecg(
resource_handle=ResourceHandle(),
graph=input_graph._plc_graph,
min_weight=min_weight,
ensemble_size=ensemble_size,
do_expensive_check=False,
)
df = cudf.DataFrame()
df["vertex"] = vertex
df["partition"] = partition
if input_graph.renumbered:
df = input_graph.unrenumber(df, "vertex")
if isNx is True:
df = df_score_to_dictionary(df, "partition")
return df
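The ensemble-weighting idea behind ECG (weight each edge by how often its endpoints land in the same community across the ensemble, floored at `min_weight`) can be illustrated in plain Python. The `ecg_edge_weights` helper and the partitions below are illustrative assumptions, not the pylibcugraph implementation:

```python
def ecg_edge_weights(edges, partitions, min_weight=0.05):
    """Weight each edge by the fraction of ensemble partitions in
    which its endpoints share a community, floored at `min_weight`.

    `partitions` is a list of dicts mapping vertex -> community id.
    """
    weights = {}
    for (s, d) in edges:
        agree = sum(1 for p in partitions if p[s] == p[d])
        weights[(s, d)] = max(min_weight, agree / len(partitions))
    return weights


# Two made-up ensemble partitions over vertices 0..2.
parts = [{0: 0, 1: 0, 2: 1}, {0: 0, 1: 1, 2: 1}]
w = ecg_edge_weights([(0, 1), (1, 2), (0, 2)], parts)
assert w[(0, 1)] == 0.5   # endpoints agree in 1 of 2 partitions
assert w[(0, 2)] == 0.05  # endpoints never agree -> floored at min_weight
```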
# File: python/cugraph/cugraph/community/subgraph_extraction.py
# Copyright (c) 2019-2023, NVIDIA CORPORATION.
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from typing import Union
import warnings
import cudf
import cugraph
from cugraph.structure import Graph
from cugraph.utilities.utils import import_optional
# FIXME: the networkx.Graph type used in the type annotation for subgraph() is
# specified using a string literal to avoid depending on and importing
# networkx. Instead, networkx is imported optionally, which may cause a problem
# for a type checker if run in an environment where networkx is not installed.
networkx = import_optional("networkx")
def subgraph(
G: Union[Graph, "networkx.Graph"],
vertices: Union[cudf.Series, cudf.DataFrame],
) -> Union[Graph, "networkx.Graph"]:
"""
    Compute a subgraph of the existing graph including only the specified
    vertices. This algorithm works with both directed and undirected graphs
    and does not actually traverse the edges; instead, it simply pulls out
    any edge whose endpoints are both contained in the given vertex list.
If no subgraph can be extracted from the vertices provided, a 'None' value
will be returned.
Parameters
----------
G : cugraph.Graph or networkx.Graph
The current implementation only supports weighted graphs.
vertices : cudf.Series or cudf.DataFrame
Specifies the vertices of the induced subgraph. For multi-column
vertices, vertices should be provided as a cudf.DataFrame
Returns
-------
Sg : cugraph.Graph or networkx.Graph
A graph object containing the subgraph induced by the given vertex set.
Examples
--------
    >>> import cudf
    >>> import numpy as np
    >>> from cugraph.datasets import karate
>>> G = karate.get_graph(download=True)
>>> verts = np.zeros(3, dtype=np.int32)
>>> verts[0] = 0
>>> verts[1] = 1
>>> verts[2] = 2
>>> sverts = cudf.Series(verts)
>>> Sg = cugraph.subgraph(G, sverts)
"""
warning_msg = (
"This call is deprecated. Please call 'cugraph.induced_subgraph()' instead."
)
warnings.warn(warning_msg, DeprecationWarning)
result_graph, _ = cugraph.induced_subgraph(G, vertices)
return result_graph
# File: python/cugraph/cugraph/community/spectral_clustering.py
# Copyright (c) 2019-2023, NVIDIA CORPORATION.
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from cugraph.utilities import (
ensure_cugraph_obj_for_nx,
df_score_to_dictionary,
)
from pylibcugraph import (
balanced_cut_clustering as pylibcugraph_balanced_cut_clustering,
spectral_modularity_maximization as pylibcugraph_spectral_modularity_maximization,
analyze_clustering_modularity as pylibcugraph_analyze_clustering_modularity,
analyze_clustering_edge_cut as pylibcugraph_analyze_clustering_edge_cut,
analyze_clustering_ratio_cut as pylibcugraph_analyze_clustering_ratio_cut,
)
from pylibcugraph import ResourceHandle
import cudf
import numpy as np
def spectralBalancedCutClustering(
G,
num_clusters,
num_eigen_vects=2,
evs_tolerance=0.00001,
evs_max_iter=100,
kmean_tolerance=0.00001,
kmean_max_iter=100,
):
"""
Compute a clustering/partitioning of the given graph using the spectral
balanced cut method.
Parameters
----------
G : cugraph.Graph or networkx.Graph
Graph descriptor
num_clusters : integer
Specifies the number of clusters to find, must be greater than 1
num_eigen_vects : integer, optional
Specifies the number of eigenvectors to use. Must be lower or equal to
num_clusters. Default is 2
evs_tolerance: float, optional
Specifies the tolerance to use in the eigensolver.
Default is 0.00001
evs_max_iter: integer, optional
Specifies the maximum number of iterations for the eigensolver.
Default is 100
kmean_tolerance: float, optional
Specifies the tolerance to use in the k-means solver.
Default is 0.00001
kmean_max_iter: integer, optional
Specifies the maximum number of iterations for the k-means solver.
Default is 100
Returns
-------
df : cudf.DataFrame
GPU data frame containing two cudf.Series of size V: the vertex
identifiers and the corresponding cluster assignments.
df['vertex'] : cudf.Series
contains the vertex identifiers
df['cluster'] : cudf.Series
contains the cluster assignments
Examples
--------
>>> from cugraph.datasets import karate
>>> G = karate.get_graph(download=True)
>>> df = cugraph.spectralBalancedCutClustering(G, 5)
"""
# Error checking in C++ code
G, isNx = ensure_cugraph_obj_for_nx(G)
# Check if vertex type is "int32"
if (
G.edgelist.edgelist_df.dtypes[0] != np.int32
or G.edgelist.edgelist_df.dtypes[1] != np.int32
):
raise ValueError(
"'spectralBalancedCutClustering' requires the input graph's vertex to be "
"of type 'int32'"
)
vertex, partition = pylibcugraph_balanced_cut_clustering(
ResourceHandle(),
G._plc_graph,
num_clusters,
num_eigen_vects,
evs_tolerance,
evs_max_iter,
kmean_tolerance,
kmean_max_iter,
do_expensive_check=False,
)
df = cudf.DataFrame()
df["vertex"] = vertex
df["cluster"] = partition
if G.renumbered:
df = G.unrenumber(df, "vertex")
if isNx is True:
df = df_score_to_dictionary(df, "cluster")
return df
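The spectral idea behind this function can be illustrated with NumPy on a toy graph: build the graph Laplacian and split vertices by the sign of the Fiedler vector (the eigenvector of the second-smallest eigenvalue). This is only a conceptual sketch of a 2-way balanced cut; `fiedler_bipartition` is a made-up helper, and the actual implementation uses an iterative eigensolver plus k-means rather than a dense `eigh`:

```python
import numpy as np


def fiedler_bipartition(edges, n):
    """Two-way split of vertices 0..n-1 by the sign of the Fiedler
    vector (eigenvector of the second-smallest Laplacian eigenvalue).
    """
    a = np.zeros((n, n))
    for s, d in edges:
        a[s, d] = a[d, s] = 1.0
    lap = np.diag(a.sum(axis=1)) - a
    _, vecs = np.linalg.eigh(lap)  # eigenvalues in ascending order
    fiedler = vecs[:, 1]
    return (fiedler >= 0).astype(int)


# Two triangles joined by a single bridge edge (2, 3): the spectral
# bipartition recovers the two triangles.
edges = [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]
labels = fiedler_bipartition(edges, 6)
assert labels[0] == labels[1] == labels[2]
assert labels[3] == labels[4] == labels[5]
assert labels[0] != labels[3]
```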
def spectralModularityMaximizationClustering(
G,
num_clusters,
num_eigen_vects=2,
evs_tolerance=0.00001,
evs_max_iter=100,
kmean_tolerance=0.00001,
kmean_max_iter=100,
):
"""
Compute a clustering/partitioning of the given graph using the spectral
modularity maximization method.
Parameters
----------
G : cugraph.Graph or networkx.Graph
cuGraph graph descriptor. This graph should have edge weights.
num_clusters : integer
Specifies the number of clusters to find
num_eigen_vects : integer, optional
Specifies the number of eigenvectors to use. Must be lower or equal to
num_clusters. Default is 2
evs_tolerance: float, optional
Specifies the tolerance to use in the eigensolver.
Default is 0.00001
evs_max_iter: integer, optional
Specifies the maximum number of iterations for the eigensolver.
Default is 100
kmean_tolerance: float, optional
Specifies the tolerance to use in the k-means solver.
Default is 0.00001
kmean_max_iter: integer, optional
Specifies the maximum number of iterations for the k-means solver.
Default is 100
Returns
-------
df : cudf.DataFrame
GPU data frame containing two cudf.Series of size V: the vertex
identifiers and the corresponding cluster assignments.
df['vertex'] : cudf.Series
contains the vertex identifiers
df['cluster'] : cudf.Series
contains the cluster assignments
Examples
--------
>>> from cugraph.datasets import karate
>>> G = karate.get_graph(download=True)
>>> df = cugraph.spectralModularityMaximizationClustering(G, 5)
"""
G, isNx = ensure_cugraph_obj_for_nx(G)
if (
G.edgelist.edgelist_df.dtypes[0] != np.int32
or G.edgelist.edgelist_df.dtypes[1] != np.int32
):
raise ValueError(
"'spectralModularityMaximizationClustering' requires the input graph's "
"vertex to be of type 'int32'"
)
vertex, partition = pylibcugraph_spectral_modularity_maximization(
ResourceHandle(),
G._plc_graph,
num_clusters,
num_eigen_vects,
evs_tolerance,
evs_max_iter,
kmean_tolerance,
kmean_max_iter,
do_expensive_check=False,
)
df = cudf.DataFrame()
df["vertex"] = vertex
df["cluster"] = partition
if G.renumbered:
df = G.unrenumber(df, "vertex")
if isNx is True:
df = df_score_to_dictionary(df, "cluster")
return df
def analyzeClustering_modularity(
G, n_clusters, clustering, vertex_col_name="vertex", cluster_col_name="cluster"
):
"""
Compute the modularity score for a given partitioning/clustering.
    The assumption is that "clustering" is the result of a call
    to a clustering algorithm and contains columns named
    "vertex" and "cluster".
Parameters
----------
G : cugraph.Graph or networkx.Graph
graph descriptor. This graph should have edge weights.
n_clusters : integer
Specifies the number of clusters in the given clustering
clustering : cudf.DataFrame
The cluster assignment to analyze.
vertex_col_name : str or list of str, optional (default='vertex')
The names of the column in the clustering dataframe identifying
the external vertex id
cluster_col_name : str, optional (default='cluster')
The name of the column in the clustering dataframe identifying
the cluster id
Returns
-------
score : float
The computed modularity score
Examples
--------
>>> from cugraph.datasets import karate
>>> G = karate.get_graph(download=True)
>>> df = cugraph.spectralBalancedCutClustering(G, 5)
>>> score = cugraph.analyzeClustering_modularity(G, 5, df)
"""
if type(vertex_col_name) is list:
if not all(isinstance(name, str) for name in vertex_col_name):
            raise Exception("vertex_col_name must be a list of strings")
elif type(vertex_col_name) is not str:
raise Exception("vertex_col_name must be a string")
if type(cluster_col_name) is not str:
raise Exception("cluster_col_name must be a string")
G, isNx = ensure_cugraph_obj_for_nx(G)
if (
G.edgelist.edgelist_df.dtypes[0] != np.int32
or G.edgelist.edgelist_df.dtypes[1] != np.int32
):
raise ValueError(
"'analyzeClustering_modularity' requires the input graph's "
"vertex to be of type 'int32'"
)
if G.renumbered:
clustering = G.add_internal_vertex_id(
clustering, "vertex", vertex_col_name, drop=True
)
if clustering.dtypes[0] != np.int32 or clustering.dtypes[1] != np.int32:
raise ValueError(
"'analyzeClustering_modularity' requires both the clustering 'vertex' "
"and 'cluster' to be of type 'int32'"
)
score = pylibcugraph_analyze_clustering_modularity(
ResourceHandle(),
G._plc_graph,
n_clusters,
clustering["vertex"],
clustering[cluster_col_name],
)
return score
def analyzeClustering_edge_cut(
G, n_clusters, clustering, vertex_col_name="vertex", cluster_col_name="cluster"
):
"""
Compute the edge cut score for a partitioning/clustering
    The assumption is that "clustering" is the result of a call
    to a clustering algorithm and contains columns named
    "vertex" and "cluster".
Parameters
----------
G : cugraph.Graph
cuGraph graph descriptor
n_clusters : integer
Specifies the number of clusters in the given clustering
clustering : cudf.DataFrame
The cluster assignment to analyze.
vertex_col_name : str, optional (default='vertex')
The name of the column in the clustering dataframe identifying
the external vertex id
cluster_col_name : str, optional (default='cluster')
The name of the column in the clustering dataframe identifying
the cluster id
Returns
-------
score : float
The computed edge cut score
Examples
--------
>>> from cugraph.datasets import karate
>>> G = karate.get_graph(download=True)
>>> df = cugraph.spectralBalancedCutClustering(G, 5)
>>> score = cugraph.analyzeClustering_edge_cut(G, 5, df)
"""
if type(vertex_col_name) is list:
if not all(isinstance(name, str) for name in vertex_col_name):
            raise Exception("vertex_col_name must be a list of strings")
elif type(vertex_col_name) is not str:
raise Exception("vertex_col_name must be a string")
if type(cluster_col_name) is not str:
raise Exception("cluster_col_name must be a string")
G, isNx = ensure_cugraph_obj_for_nx(G)
if (
G.edgelist.edgelist_df.dtypes[0] != np.int32
or G.edgelist.edgelist_df.dtypes[1] != np.int32
):
raise ValueError(
"'analyzeClustering_edge_cut' requires the input graph's vertex to be "
"of type 'int32'"
)
if G.renumbered:
clustering = G.add_internal_vertex_id(
clustering, "vertex", vertex_col_name, drop=True
)
if clustering.dtypes[0] != np.int32 or clustering.dtypes[1] != np.int32:
raise ValueError(
"'analyzeClustering_edge_cut' requires both the clustering 'vertex' "
"and 'cluster' to be of type 'int32'"
)
score = pylibcugraph_analyze_clustering_edge_cut(
ResourceHandle(),
G._plc_graph,
n_clusters,
clustering["vertex"],
clustering[cluster_col_name],
)
return score
def analyzeClustering_ratio_cut(
G, n_clusters, clustering, vertex_col_name="vertex", cluster_col_name="cluster"
):
"""
Compute the ratio cut score for a partitioning/clustering
Parameters
----------
G : cugraph.Graph
cuGraph graph descriptor. This graph should have edge weights.
n_clusters : integer
Specifies the number of clusters in the given clustering
clustering : cudf.DataFrame
The cluster assignment to analyze.
vertex_col_name : str, optional (default='vertex')
The name of the column in the clustering dataframe identifying
the external vertex id
cluster_col_name : str, optional (default='cluster')
The name of the column in the clustering dataframe identifying
the cluster id
Returns
-------
score : float
The computed ratio cut score
Examples
--------
>>> from cugraph.datasets import karate
>>> G = karate.get_graph(download=True)
>>> df = cugraph.spectralBalancedCutClustering(G, 5)
>>> score = cugraph.analyzeClustering_ratio_cut(G, 5, df, 'vertex',
... 'cluster')
"""
if type(vertex_col_name) is list:
if not all(isinstance(name, str) for name in vertex_col_name):
            raise Exception("vertex_col_name must be a list of strings")
elif type(vertex_col_name) is not str:
raise Exception("vertex_col_name must be a string")
if type(cluster_col_name) is not str:
raise Exception("cluster_col_name must be a string")
if G.renumbered:
clustering = G.add_internal_vertex_id(
clustering, "vertex", vertex_col_name, drop=True
)
if clustering.dtypes[0] != np.int32 or clustering.dtypes[1] != np.int32:
raise ValueError(
"'analyzeClustering_ratio_cut' requires both the clustering 'vertex' "
"and 'cluster' to be of type 'int32'"
)
score = pylibcugraph_analyze_clustering_ratio_cut(
ResourceHandle(),
G._plc_graph,
n_clusters,
clustering["vertex"],
clustering[cluster_col_name],
)
return score
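For reference, the scores these analyze* helpers report can be computed by hand on a tiny undirected edge list. The sketch below follows the textbook definitions (edge cut = number of edges crossing clusters; ratio cut = sum over clusters of crossing edges divided by cluster size); it is an illustration, not the pylibcugraph implementation:

```python
def edge_cut(edges, assign):
    """Number of edges whose endpoints lie in different clusters."""
    return sum(1 for (s, d) in edges if assign[s] != assign[d])


def ratio_cut(edges, assign):
    """Sum over clusters of (edges leaving the cluster) / (cluster size)."""
    clusters = set(assign.values())
    total = 0.0
    for c in clusters:
        members = [v for v, cid in assign.items() if cid == c]
        leaving = sum(
            1 for (s, d) in edges if (assign[s] == c) != (assign[d] == c)
        )
        total += leaving / len(members)
    return total


# Two triangles joined by a single bridge edge (2, 3).
edges = [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]
assign = {0: 0, 1: 0, 2: 0, 3: 1, 4: 1, 5: 1}
assert edge_cut(edges, assign) == 1          # only the bridge is cut
assert abs(ratio_cut(edges, assign) - (1 / 3 + 1 / 3)) < 1e-12
```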
# File: python/cugraph/cugraph/community/louvain.py
# Copyright (c) 2019-2023, NVIDIA CORPORATION.
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from typing import Union, Tuple
from cugraph.structure import Graph
from cugraph.utilities import (
is_nx_graph_type,
ensure_cugraph_obj_for_nx,
df_score_to_dictionary,
)
import cudf
import warnings
from pylibcugraph import louvain as pylibcugraph_louvain
from pylibcugraph import ResourceHandle
from cugraph.utilities.utils import import_optional
# FIXME: the networkx.Graph type used in type annotations is specified
# using a string literal to avoid depending on and importing networkx.
# Instead, networkx is imported optionally, which may cause a problem
# for a type checker if run in an environment where networkx is not installed.
networkx = import_optional("networkx")
VERTEX_COL_NAME = "vertex"
CLUSTER_ID_COL_NAME = "partition"
# FIXME: max_level should default to 100 once max_iter is removed
def louvain(
G: Union[Graph, "networkx.Graph"],
max_level: Union[int, None] = None,
max_iter: Union[int, None] = None,
resolution: float = 1.0,
threshold: float = 1e-7,
) -> Tuple[Union[cudf.DataFrame, dict], float]:
"""
Compute the modularity optimizing partition of the input graph using the
Louvain method
It uses the Louvain method described in:
VD Blondel, J-L Guillaume, R Lambiotte and E Lefebvre: Fast unfolding of
community hierarchies in large networks, J Stat Mech P10008 (2008),
http://arxiv.org/abs/0803.0476
Parameters
----------
G : cugraph.Graph or NetworkX Graph
The graph descriptor should contain the connectivity information
and weights. The adjacency list will be computed if not already
present.
The current implementation only supports undirected graphs.
max_level : integer, optional (default=100)
This controls the maximum number of levels of the Louvain
algorithm. When specified the algorithm will terminate after no more
than the specified number of levels. No error occurs when the
algorithm terminates early in this manner.
If max_level > 500, it will be set to 500 and a warning is emitted
in order to prevent excessive runtime.
max_iter : integer, optional (default=None)
This parameter is deprecated in favor of max_level. Previously
it was used to control the maximum number of levels of the Louvain
algorithm.
    resolution: float, optional (default=1.0)
        Called gamma in the modularity formula, this changes the size
        of the communities. Higher resolutions lead to more, smaller
        communities; lower resolutions lead to fewer, larger communities.
        Defaults to 1.
threshold: float
Modularity gain threshold for each level. If the gain of
modularity between 2 levels of the algorithm is less than the
given threshold then the algorithm stops and returns the
resulting communities.
Defaults to 1e-7.
Returns
-------
result: cudf.DataFrame or dict
If input graph G is of type cugraph.Graph, a GPU dataframe
with two columns.
result[VERTEX_COL_NAME] : cudf.Series
Contains the vertex identifiers
result[CLUSTER_ID_COL_NAME] : cudf.Series
Contains the partition assigned to the vertices
If input graph G is of type networkx.Graph, a dict
Dictionary of vertices and their partition ids.
modularity_score : float
A floating point number containing the global modularity score
of the partitioning.
Examples
--------
>>> from cugraph.datasets import karate
>>> G = karate.get_graph(download=True)
>>> parts = cugraph.louvain(G)
"""
    # FIXME: Once the graph construction calls support isolated vertices
    # through the C API (the C++ interface already supports this), there will
    # be no need to compute isolated vertices here.
isolated_vertices = list()
if is_nx_graph_type(type(G)):
isolated_vertices = [v for v in range(G.number_of_nodes()) if G.degree[v] == 0]
else:
# FIXME: Gather isolated vertices of G
pass
G, isNx = ensure_cugraph_obj_for_nx(G)
if G.is_directed():
raise ValueError("input graph must be undirected")
# FIXME: This max_iter logic and the max_level defaulting can be deleted
# in favor of defaulting max_level in call once max_iter is deleted
if max_iter:
if max_level:
raise ValueError(
"max_iter is deprecated. Cannot specify both max_iter and max_level"
)
warning_msg = (
"max_iter has been renamed max_level. Use of max_iter is "
"deprecated and will no longer be supported in the next releases."
)
warnings.warn(warning_msg, FutureWarning)
max_level = max_iter
if max_level is None:
max_level = 100
if max_level > 500:
w_msg = "max_level is set too high, clamping it down to 500."
warnings.warn(w_msg)
max_level = 500
vertex, partition, modularity_score = pylibcugraph_louvain(
resource_handle=ResourceHandle(),
graph=G._plc_graph,
max_level=max_level,
threshold=threshold,
resolution=resolution,
do_expensive_check=False,
)
result = cudf.DataFrame()
result[VERTEX_COL_NAME] = vertex
result[CLUSTER_ID_COL_NAME] = partition
if len(isolated_vertices) > 0:
unique_cids = result[CLUSTER_ID_COL_NAME].unique()
max_cluster_id = -1 if len(result) == 0 else unique_cids.max()
isolated_vtx_and_cids = cudf.DataFrame()
isolated_vtx_and_cids[VERTEX_COL_NAME] = isolated_vertices
isolated_vtx_and_cids[CLUSTER_ID_COL_NAME] = [
(max_cluster_id + i + 1) for i in range(len(isolated_vertices))
]
result = cudf.concat(
[result, isolated_vtx_and_cids], ignore_index=True, sort=False
)
if G.renumbered and len(G.input_df) > 0:
result = G.unrenumber(result, VERTEX_COL_NAME)
if isNx is True:
result = df_score_to_dictionary(result, CLUSTER_ID_COL_NAME)
return result, modularity_score
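The modularity score returned alongside the partition can be checked from first principles with the standard Newman formula, Q = sum over clusters c of (e_c / m - (d_c / 2m)^2). The plain-Python sketch below (made-up graph, not the cugraph implementation) illustrates it:

```python
def modularity(edges, assign):
    """Newman modularity of a partition of a simple undirected,
    unweighted graph: Q = sum_c (in-cluster edges / m
    - (cluster degree sum / 2m)^2).
    """
    m = len(edges)
    degree = {}
    for s, d in edges:
        degree[s] = degree.get(s, 0) + 1
        degree[d] = degree.get(d, 0) + 1
    q = 0.0
    for c in set(assign.values()):
        internal = sum(
            1 for (s, d) in edges if assign[s] == c and assign[d] == c
        )
        deg_sum = sum(deg for v, deg in degree.items() if assign[v] == c)
        q += internal / m - (deg_sum / (2 * m)) ** 2
    return q


# Two triangles joined by a bridge; the natural two-community split.
edges = [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]
assign = {0: 0, 1: 0, 2: 0, 3: 1, 4: 1, 5: 1}
q = modularity(edges, assign)
assert 0.35 < q < 0.36  # 2 * (3/7 - (7/14)^2) ~= 0.357
```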
# File: python/cugraph/cugraph/community/ktruss_subgraph.py
# Copyright (c) 2019-2023, NVIDIA CORPORATION.
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from cugraph.structure.graph_classes import Graph
from typing import Union
from cugraph.utilities import (
ensure_cugraph_obj_for_nx,
cugraph_to_nx,
)
from pylibcugraph import k_truss_subgraph as pylibcugraph_k_truss_subgraph
from pylibcugraph import ResourceHandle
import warnings
from numba import cuda
import cudf
from cugraph.utilities.utils import import_optional
# FIXME: the networkx.Graph type used in the type annotation for
# ktruss_subgraph() is specified using a string literal to avoid depending on
# and importing networkx. Instead, networkx is imported optionally, which may
# cause a problem for a type checker if run in an environment where networkx is
# not installed.
networkx = import_optional("networkx")
# FIXME: special case for ktruss on CUDA 11.4: an 11.4 bug causes ktruss to
# crash in that environment. Allow ktruss to import on non-11.4 systems, but
# raise an exception if ktruss is directly imported on 11.4.
def _ensure_compatible_cuda_version():
try:
cuda_version = cuda.runtime.get_version()
except cuda.cudadrv.runtime.CudaRuntimeAPIError:
cuda_version = "n/a"
unsupported_cuda_version = (11, 4)
if cuda_version == unsupported_cuda_version:
ver_string = ".".join([str(n) for n in unsupported_cuda_version])
raise NotImplementedError(
"k_truss is not currently supported in CUDA" f" {ver_string} environments."
)
def k_truss(
G: Union[Graph, "networkx.Graph"], k: int
) -> Union[Graph, "networkx.Graph"]:
"""
Returns the K-Truss subgraph of a graph for a specific k.
NOTE: this function is currently not available on CUDA 11.4 systems.
    The k-truss of a graph is a subgraph where each edge is part of at least
    (k−2) triangles. K-trusses are used for finding tightly-knit groups of
    vertices in a graph. A k-truss is a relaxation of a k-clique in the graph
    and was defined in [1]. Finding cliques is computationally demanding and
    finding the maximal k-clique is known to be NP-Hard.
Parameters
----------
G : cuGraph.Graph or networkx.Graph
        cuGraph graph descriptor with connectivity information. k-Trusses are
        defined only for undirected graphs, as they are defined in terms of
        undirected triangles in a graph.
k : int
The desired k to be used for extracting the k-truss subgraph.
Returns
-------
G_truss : cuGraph.Graph or networkx.Graph
A cugraph graph descriptor with the k-truss subgraph for the given k.
The networkx graph will NOT have all attributes copied over
Examples
--------
>>> from cugraph.datasets import karate
>>> G = karate.get_graph(download=True)
>>> k_subgraph = cugraph.k_truss(G, 3)
"""
_ensure_compatible_cuda_version()
G, isNx = ensure_cugraph_obj_for_nx(G)
if isNx is True:
k_sub = ktruss_subgraph(G, k)
S = cugraph_to_nx(k_sub)
return S
else:
return ktruss_subgraph(G, k)
# FIXME: merge this function with k_truss
def ktruss_subgraph(
G: Union[Graph, "networkx.Graph"],
k: int,
use_weights=True, # deprecated
) -> Graph:
"""
Returns the K-Truss subgraph of a graph for a specific k.
NOTE: this function is currently not available on CUDA 11.4 systems.
    The k-truss of a graph is a subgraph where each edge is part of at least
    (k−2) triangles. K-trusses are used for finding tightly-knit groups of
    vertices in a graph. A k-truss is a relaxation of a k-clique in the graph
    and was defined in [1]. Finding cliques is computationally demanding and
    finding the maximal k-clique is known to be NP-Hard.
    In contrast, finding a k-truss is computationally tractable as its
    key building block, namely triangle counting, can be executed
    in polynomial time. Typically, it takes many iterations of triangle
    counting to find the k-truss of a graph. Yet these iterations operate
    on a weakly monotonically shrinking graph.
Therefore, finding the k-truss of a graph can be done in a fairly
reasonable amount of time. The solution in cuGraph is based on a
GPU algorithm first shown in [2] and uses the triangle counting algorithm
from [3].
References
----------
[1] Cohen, J.,
"Trusses: Cohesive subgraphs for social network analysis"
National security agency technical report, 2008
[2] O. Green, J. Fox, E. Kim, F. Busato, et al.
“Quickly Finding a Truss in a Haystack”
IEEE High Performance Extreme Computing Conference (HPEC), 2017
https://doi.org/10.1109/HPEC.2017.8091038
[3] O. Green, P. Yalamanchili, L.M. Munguia,
“Fast Triangle Counting on GPU”
Irregular Applications: Architectures and Algorithms (IA3), 2014
Parameters
----------
G : cuGraph.Graph
        cuGraph graph descriptor with connectivity information. k-Trusses are
        defined only for undirected graphs, as they are defined in terms of
        undirected triangles in a graph.
The current implementation only supports undirected graphs.
k : int
The desired k to be used for extracting the k-truss subgraph.
use_weights : bool, optional (default=True)
Whether the output should contain the edge weights if G has them.
Deprecated: If 'weights' were passed at the graph creation, they will
be used.
Returns
-------
G_truss : cuGraph.Graph
A cugraph graph descriptor with the k-truss subgraph for the given k.
Examples
--------
>>> from cugraph.datasets import karate
>>> G = karate.get_graph(download=True)
>>> k_subgraph = cugraph.ktruss_subgraph(G, 3)
"""
_ensure_compatible_cuda_version()
KTrussSubgraph = Graph()
if G.is_directed():
raise ValueError("input graph must be undirected")
if use_weights:
        warning_msg = (
            "The use_weights flag is deprecated "
            "and will be removed in the next release. If weights "
            "were passed at graph creation, they will be used."
        )
warnings.warn(warning_msg, FutureWarning)
sources, destinations, edge_weights, _ = pylibcugraph_k_truss_subgraph(
resource_handle=ResourceHandle(),
graph=G._plc_graph,
k=k,
do_expensive_check=True,
)
subgraph_df = cudf.DataFrame()
subgraph_df["src"] = sources
subgraph_df["dst"] = destinations
if edge_weights is not None:
subgraph_df["weight"] = edge_weights
if G.renumbered:
subgraph_df = G.unrenumber(subgraph_df, "src")
subgraph_df = G.unrenumber(subgraph_df, "dst")
if G.edgelist.weights:
KTrussSubgraph.from_cudf_edgelist(
subgraph_df, source="src", destination="dst", edge_attr="weight"
)
else:
KTrussSubgraph.from_cudf_edgelist(subgraph_df, source="src", destination="dst")
return KTrussSubgraph
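The definition used above (each k-truss edge participates in at least k − 2 triangles) can be sketched as the classic peeling loop in plain Python on a simple undirected graph. `k_truss_edges` is an illustrative helper, not the GPU algorithm from [2]:

```python
def k_truss_edges(edges, k):
    """Edges of the k-truss of a simple undirected graph: iteratively
    remove every edge in fewer than (k - 2) triangles until stable.
    """
    es = {frozenset(e) for e in edges}
    changed = True
    while changed:
        changed = False
        # Rebuild adjacency over the surviving edges.
        adj = {}
        for e in es:
            a, b = tuple(e)
            adj.setdefault(a, set()).add(b)
            adj.setdefault(b, set()).add(a)
        for e in list(es):
            a, b = tuple(e)
            support = len(adj[a] & adj[b])  # triangles containing (a, b)
            if support < k - 2:
                es.discard(e)
                changed = True
    return es


# A triangle 0-1-2 with a pendant edge (2, 3): the pendant edge has no
# triangle support, so the 3-truss is exactly the triangle.
truss = k_truss_edges([(0, 1), (1, 2), (0, 2), (2, 3)], k=3)
assert truss == {frozenset({0, 1}), frozenset({1, 2}), frozenset({0, 2})}
```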
# File: python/cugraph/cugraph/community/__init__.py
# Copyright (c) 2019-2023, NVIDIA CORPORATION.
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from cugraph.community.louvain import louvain
from cugraph.community.leiden import leiden
from cugraph.community.ecg import ecg
from cugraph.community.spectral_clustering import (
spectralBalancedCutClustering,
spectralModularityMaximizationClustering,
analyzeClustering_modularity,
analyzeClustering_edge_cut,
analyzeClustering_ratio_cut,
)
from cugraph.community.subgraph_extraction import subgraph
from cugraph.community.induced_subgraph import induced_subgraph
from cugraph.community.triangle_count import triangle_count
from cugraph.community.ktruss_subgraph import ktruss_subgraph
from cugraph.community.ktruss_subgraph import k_truss
from cugraph.community.egonet import ego_graph
from cugraph.community.egonet import batched_ego_graphs
| 0 |
rapidsai_public_repos/cugraph/python/cugraph/cugraph | rapidsai_public_repos/cugraph/python/cugraph/cugraph/community/leiden.py | # Copyright (c) 2019-2023, NVIDIA CORPORATION.
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from pylibcugraph import leiden as pylibcugraph_leiden
from pylibcugraph import ResourceHandle
from cugraph.structure import Graph
import cudf
from typing import Union, Tuple
from cugraph.utilities import (
ensure_cugraph_obj_for_nx,
df_score_to_dictionary,
)
from cugraph.utilities.utils import import_optional
# FIXME: the networkx.Graph type used in the type annotation for
# leiden() is specified using a string literal to avoid depending on
# and importing networkx. Instead, networkx is imported optionally, which may
# cause a problem for a type checker if run in an environment where networkx is
# not installed.
networkx = import_optional("networkx")
def leiden(
G: Union[Graph, "networkx.Graph"],
max_iter: int = 100,
resolution: float = 1.0,
random_state: int = None,
theta: float = 1.0,
) -> Tuple[cudf.DataFrame, float]:
"""
Compute the modularity optimizing partition of the input graph using the
Leiden algorithm
It uses the Leiden method described in:
Traag, V. A., Waltman, L., & van Eck, N. J. (2019). From Louvain to Leiden:
guaranteeing well-connected communities. Scientific reports, 9(1), 5233.
doi: 10.1038/s41598-019-41695-z
Parameters
----------
G : cugraph.Graph
cuGraph graph descriptor of type Graph
The current implementation only supports undirected weighted graphs.
The adjacency list will be computed if not already present.
max_iter : integer, optional (default=100)
This controls the maximum number of levels/iterations of the Leiden
algorithm. When specified, the algorithm will terminate after no more
than the specified number of iterations. No error occurs when the
algorithm terminates early in this manner.
resolution: float, optional (default=1.0)
Called gamma in the modularity formula, this changes the size
of the communities. Higher resolutions lead to more, smaller
communities; lower resolutions lead to fewer, larger communities.
Defaults to 1.
random_state: int, optional(default=None)
Random state to use when generating samples. Optional argument,
defaults to a hash of process id, time, and hostname.
theta: float, optional (default=1.0)
Called theta in the Leiden algorithm, this is used to scale the
modularity gain during the Leiden refinement phase when computing
the probability of joining a random Leiden community.
Returns
-------
parts : cudf.DataFrame
GPU data frame of size V containing two columns: the vertex id and the
partition id it is assigned to.
df['vertex'] : cudf.Series
Contains the vertex identifiers
df['partition'] : cudf.Series
Contains the partition assigned to the vertices
modularity_score : float
a floating point number containing the global modularity score of the
partitioning.
Examples
--------
>>> from cugraph.datasets import karate
>>> G = karate.get_graph(download=True)
>>> parts, modularity_score = cugraph.leiden(G)
"""
G, isNx = ensure_cugraph_obj_for_nx(G)
if G.is_directed():
raise ValueError("input graph must be undirected")
vertex, partition, modularity_score = pylibcugraph_leiden(
resource_handle=ResourceHandle(),
random_state=random_state,
graph=G._plc_graph,
max_level=max_iter,
resolution=resolution,
theta=theta,
do_expensive_check=False,
)
df = cudf.DataFrame()
df["vertex"] = vertex
df["partition"] = partition
if G.renumbered:
parts = G.unrenumber(df, "vertex")
else:
parts = df
if isNx is True:
# use the unrenumbered frame so the dictionary is keyed by external ids
parts = df_score_to_dictionary(parts, "partition")
return parts, modularity_score
| 0 |
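`leiden()` returns the partition together with its global modularity score Q = (1/2m) * sum_ij [A_ij - k_i*k_j/(2m)] * delta(c_i, c_j). For intuition, Q can be computed directly from that definition; the sketch below is pure Python on a toy edge list, not the cuGraph kernel.

```python
from collections import defaultdict

def modularity(edges, partition):
    """Modularity Q of an undirected, unweighted graph for a given
    vertex -> community mapping (pure-Python sketch)."""
    m = len(edges)
    deg = defaultdict(int)
    for u, v in edges:
        deg[u] += 1
        deg[v] += 1
    # observed fraction of edges that fall inside communities
    internal = sum(1 for u, v in edges if partition[u] == partition[v]) / m
    # expected fraction under the configuration (null) model
    comm_deg = defaultdict(int)
    for vtx, d in deg.items():
        comm_deg[partition[vtx]] += d
    expected = sum((d / (2 * m)) ** 2 for d in comm_deg.values())
    return internal - expected

# Two triangles joined by one bridge edge; the natural split scores high.
edges = [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]
part = {0: 0, 1: 0, 2: 0, 3: 1, 4: 1, 5: 1}
print(round(modularity(edges, part), 4))  # 0.3571
```

Putting every vertex in one community always yields Q = 0, which is a handy sanity check for implementations.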
rapidsai_public_repos/cugraph/python/cugraph/cugraph | rapidsai_public_repos/cugraph/python/cugraph/cugraph/community/egonet.py | # Copyright (c) 2021-2023, NVIDIA CORPORATION.
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from cugraph.utilities import (
ensure_cugraph_obj,
is_nx_graph_type,
)
from cugraph.utilities import cugraph_to_nx
import cudf
from pylibcugraph import ego_graph as pylibcugraph_ego_graph
from pylibcugraph import ResourceHandle
import warnings
def _convert_graph_to_output_type(G, input_type):
"""
Given a cugraph.Graph, convert it to a new type appropriate for the
graph algos in this module, based on input_type.
"""
if is_nx_graph_type(input_type):
return cugraph_to_nx(G)
else:
return G
def _convert_df_series_to_output_type(df, offsets, input_type):
"""
Given a cudf.DataFrame df, convert it to a new type appropriate for the
graph algos in this module, based on input_type.
"""
if is_nx_graph_type(input_type):
return df.to_pandas(), offsets.values_host.tolist()
else:
return df, offsets
def ego_graph(G, n, radius=1, center=True, undirected=None, distance=None):
"""
Compute the induced subgraph of neighbors centered at node n,
within a given radius.
Parameters
----------
G : cugraph.Graph, networkx.Graph, CuPy or SciPy sparse matrix
Graph or matrix object, which should contain the connectivity
information. Edge weights, if present, should be single or double
precision floating point values.
n : integer or list, cudf.Series, cudf.DataFrame
A single node as an integer, or a cudf.DataFrame if vertices are
represented with multiple columns. If a cudf.DataFrame is provided,
only the first row is taken as the node input.
radius: integer, optional (default=1)
Include all neighbors of distance<=radius from n.
center: bool, optional
Defaults to True. False is not supported
undirected: bool, optional
This parameter is here for NetworkX compatibility and is ignored
distance: key, optional (default=None)
This parameter is here for NetworkX compatibility and is ignored
Returns
-------
G_ego : cuGraph.Graph or networkx.Graph
A graph descriptor with the induced subgraph of neighbors centered
at node n, within the given radius.
The networkx graph will not have all attributes copied over
Examples
--------
>>> from cugraph.datasets import karate
>>> G = karate.get_graph(download=True)
>>> ego_graph = cugraph.ego_graph(G, 1, radius=2)
"""
(G, input_type) = ensure_cugraph_obj(G, nx_weight_attr="weight")
result_graph = type(G)(directed=G.is_directed())
if undirected is not None:
warning_msg = (
"The parameter 'undirected' is deprecated and "
"will be removed in the next release"
)
warnings.warn(warning_msg, PendingDeprecationWarning)
if isinstance(n, (int, list)):
n = cudf.Series(n)
if isinstance(n, cudf.Series):
if G.renumbered is True:
n = G.lookup_internal_vertex_id(n)
elif isinstance(n, cudf.DataFrame):
if G.renumbered is True:
n = G.lookup_internal_vertex_id(n, n.columns)
else:
raise TypeError(
f"'n' must be either an integer or a list or a cudf.Series"
f" or a cudf.DataFrame, got: {type(n)}"
)
# Match the seed to the vertex dtype
n_type = G.edgelist.edgelist_df["src"].dtype
n = n.astype(n_type)
do_expensive_check = False
source, destination, weight, _ = pylibcugraph_ego_graph(
resource_handle=ResourceHandle(),
graph=G._plc_graph,
source_vertices=n,
radius=radius,
do_expensive_check=do_expensive_check,
)
df = cudf.DataFrame()
df["src"] = source
df["dst"] = destination
if weight is not None:
df["weight"] = weight
if G.renumbered:
df, src_names = G.unrenumber(df, "src", get_column_names=True)
df, dst_names = G.unrenumber(df, "dst", get_column_names=True)
else:
# FIXME: The original 'src' and 'dst' are not stored in 'simpleGraph'
src_names = "src"
dst_names = "dst"
if G.edgelist.weights:
result_graph.from_cudf_edgelist(
df, source=src_names, destination=dst_names, edge_attr="weight"
)
else:
result_graph.from_cudf_edgelist(df, source=src_names, destination=dst_names)
return _convert_graph_to_output_type(result_graph, input_type)
def batched_ego_graphs(G, seeds, radius=1, center=True, undirected=None, distance=None):
"""
Compute the induced subgraph of neighbors for each node in seeds
within a given radius.
Parameters
----------
G : cugraph.Graph, networkx.Graph, CuPy or SciPy sparse matrix
Graph or matrix object, which should contain the connectivity
information. Edge weights, if present, should be single or double
precision floating point values.
seeds : cudf.Series or list or cudf.DataFrame
Specifies the seeds of the induced egonet subgraphs.
radius: integer, optional (default=1)
Include all neighbors of distance<=radius from n.
center: bool, optional
Defaults to True. False is not supported
undirected: bool, optional
Defaults to False. True is not supported
distance: key, optional (default=None)
Distances are counted in hops from n. Other cases are not supported.
Returns
-------
ego_edge_lists : cudf.DataFrame or pandas.DataFrame
GPU data frame containing all induced sources identifiers,
destination identifiers, edge weights
seeds_offsets: cudf.Series
Series containing the starting offset in the returned edge list
for each seed.
Examples
--------
>>> from cugraph.datasets import karate
>>> G = karate.get_graph(download=True)
>>> b_ego_graph, offsets = cugraph.batched_ego_graphs(G, seeds=[1,5],
... radius=2)
"""
(G, input_type) = ensure_cugraph_obj(G, nx_weight_attr="weight")
if seeds is not None:
if isinstance(seeds, int):
seeds = [seeds]
if isinstance(seeds, list):
seeds = cudf.Series(seeds)
if G.renumbered is True:
if isinstance(seeds, cudf.DataFrame):
seeds = G.lookup_internal_vertex_id(seeds, seeds.columns)
else:
seeds = G.lookup_internal_vertex_id(seeds)
# Match the seed to the vertex dtype
seeds_type = G.edgelist.edgelist_df["src"].dtype
seeds = seeds.astype(seeds_type)
do_expensive_check = False
source, destination, weight, offset = pylibcugraph_ego_graph(
resource_handle=ResourceHandle(),
graph=G._plc_graph,
source_vertices=seeds,
radius=radius,
do_expensive_check=do_expensive_check,
)
offsets = cudf.Series(offset)
df = cudf.DataFrame()
df["src"] = source
df["dst"] = destination
df["weight"] = weight
if G.renumbered:
df = G.unrenumber(df, "src", preserve_order=True)
df = G.unrenumber(df, "dst", preserve_order=True)
return _convert_df_series_to_output_type(df, offsets, input_type)
| 0 |
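`ego_graph()` above extracts the induced neighborhood on the GPU. The same idea in pure Python is a BFS to depth `radius` followed by taking the induced edge set (an illustrative sketch, not the pylibcugraph implementation):

```python
from collections import deque

def ego_nodes(adj, seed, radius=1):
    """Vertices within `radius` hops of `seed` (BFS over an adjacency dict)."""
    seen = {seed: 0}
    q = deque([seed])
    while q:
        u = q.popleft()
        if seen[u] == radius:
            continue  # do not expand past the radius
        for v in adj.get(u, ()):
            if v not in seen:
                seen[v] = seen[u] + 1
                q.append(v)
    return set(seen)

def ego_edges(edges, seed, radius=1):
    """Edges of the subgraph induced by the ego neighborhood."""
    adj = {}
    for u, v in edges:
        adj.setdefault(u, set()).add(v)
        adj.setdefault(v, set()).add(u)
    keep = ego_nodes(adj, seed, radius)
    return [(u, v) for u, v in edges if u in keep and v in keep]

path = [(0, 1), (1, 2), (2, 3), (3, 4)]
print(ego_edges(path, 1, radius=1))  # [(0, 1), (1, 2)]
```

On the path graph, growing the radius to 2 pulls in vertex 3 and edge (2, 3) as well.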
rapidsai_public_repos/cugraph/python/cugraph/cugraph | rapidsai_public_repos/cugraph/python/cugraph/cugraph/internals/CMakeLists.txt | # =============================================================================
# Copyright (c) 2022, NVIDIA CORPORATION.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except
# in compliance with the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software distributed under the License
# is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express
# or implied. See the License for the specific language governing permissions and limitations under
# the License.
# =============================================================================
set(cython_sources internals.pyx)
set(linked_libraries cugraph::cugraph)
rapids_cython_create_modules(
CXX
SOURCE_FILES "${cython_sources}"
LINKED_LIBRARIES "${linked_libraries}" MODULE_PREFIX internals_
ASSOCIATED_TARGETS cugraph
)
target_include_directories(internals_internals PRIVATE "${CMAKE_CURRENT_LIST_DIR}")
| 0 |
rapidsai_public_repos/cugraph/python/cugraph/cugraph | rapidsai_public_repos/cugraph/python/cugraph/cugraph/internals/internals.pyx | # Copyright (c) 2020-2021, NVIDIA CORPORATION.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# cython: profile=False
# distutils: language = c++
# cython: embedsignature = True
# cython: language_level = 3
from libc.stdint cimport uintptr_t
from numba.cuda.api import from_cuda_array_interface
import numpy as np
cdef extern from "Python.h":
cdef cppclass PyObject
cdef extern from "callbacks_implems.hpp" namespace "cugraph::internals":
cdef cppclass Callback:
pass
cdef cppclass DefaultGraphBasedDimRedCallback(Callback):
void setup(int n, int d) except +
void on_preprocess_end(void *positions) except +
void on_epoch_end(void *positions) except +
void on_train_end(void *positions) except +
PyObject* pyCallbackClass
cdef class PyCallback:
def get_numba_matrix(self, positions, shape, typestr):
sizeofType = 4 if typestr == "float32" else 8
desc = {
'shape': shape,
'strides': (sizeofType, shape[0]*sizeofType),
'typestr': typestr,
'data': [positions],
'order': 'C',
'version': 1
}
return from_cuda_array_interface(desc)
cdef class GraphBasedDimRedCallback(PyCallback):
"""
Usage
-----
class CustomCallback(GraphBasedDimRedCallback):
def on_preprocess_end(self, positions):
print(positions.copy_to_host())
def on_epoch_end(self, positions):
print(positions.copy_to_host())
def on_train_end(self, positions):
print(positions.copy_to_host())
"""
cdef DefaultGraphBasedDimRedCallback native_callback
def __init__(self):
self.native_callback.pyCallbackClass = <PyObject *><void*>self
def get_native_callback(self):
return <uintptr_t>&(self.native_callback)
| 0 |
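`PyCallback.get_numba_matrix` above builds a CUDA array interface descriptor around a raw device pointer, with column-major strides derived from the element size. That stride arithmetic can be checked in isolation; in this sketch the pointer value is a placeholder integer, not a real device allocation.

```python
def make_desc(ptr, n_rows, n_cols, typestr):
    """Build a __cuda_array_interface__-style descriptor, mirroring
    PyCallback.get_numba_matrix: element size is 4 bytes for float32
    and 8 bytes for float64 (pure-Python sketch)."""
    itemsize = 4 if typestr == "float32" else 8
    return {
        "shape": (n_rows, n_cols),
        # column-major layout: consecutive rows are `itemsize` bytes apart,
        # consecutive columns are n_rows * itemsize bytes apart
        "strides": (itemsize, n_rows * itemsize),
        "typestr": typestr,
        "data": [ptr],
        "version": 1,
    }

# e.g. a 34 x 2 float32 positions matrix (one row per karate vertex)
desc = make_desc(0xDEADBEEF, 34, 2, "float32")
print(desc["strides"])  # (4, 136)
```

When explicit strides are supplied like this, they describe the memory layout directly, so consumers such as numba use them rather than any default row-major assumption.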
rapidsai_public_repos/cugraph/python/cugraph/cugraph | rapidsai_public_repos/cugraph/python/cugraph/cugraph/internals/__init__.py | # Copyright (c) 2020-2021, NVIDIA CORPORATION.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from cugraph.internals.internals import GraphBasedDimRedCallback
| 0 |
rapidsai_public_repos/cugraph/python/cugraph/cugraph | rapidsai_public_repos/cugraph/python/cugraph/cugraph/internals/callbacks_implems.hpp | /*
* Copyright (c) 2020-2023, NVIDIA CORPORATION.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
#pragma once
#include <Python.h>
#include <cugraph/legacy/internals.hpp>
#include <iostream>
namespace cugraph {
namespace internals {
class DefaultGraphBasedDimRedCallback : public GraphBasedDimRedCallback {
public:
PyObject* get_numba_matrix(void* positions)
{
PyObject* pycl = (PyObject*)this->pyCallbackClass;
if (isFloat) {
return PyObject_CallMethod(
pycl, "get_numba_matrix", "(l(ll)s)", positions, n, n_components, "float32");
} else {
return PyObject_CallMethod(
pycl, "get_numba_matrix", "(l(ll)s)", positions, n, n_components, "float64");
}
}
void on_preprocess_end(void* positions) override
{
PyObject* numba_matrix = get_numba_matrix(positions);
PyObject* res =
PyObject_CallMethod(this->pyCallbackClass, "on_preprocess_end", "(O)", numba_matrix);
Py_DECREF(numba_matrix);
Py_DECREF(res);
}
void on_epoch_end(void* positions) override
{
PyObject* numba_matrix = get_numba_matrix(positions);
PyObject* res = PyObject_CallMethod(this->pyCallbackClass, "on_epoch_end", "(O)", numba_matrix);
Py_DECREF(numba_matrix);
Py_DECREF(res);
}
void on_train_end(void* positions) override
{
PyObject* numba_matrix = get_numba_matrix(positions);
PyObject* res = PyObject_CallMethod(this->pyCallbackClass, "on_train_end", "(O)", numba_matrix);
Py_DECREF(numba_matrix);
Py_DECREF(res);
}
public:
PyObject* pyCallbackClass;
};
} // namespace internals
} // namespace cugraph
| 0 |
rapidsai_public_repos/cugraph/python/cugraph/cugraph | rapidsai_public_repos/cugraph/python/cugraph/cugraph/experimental/__init__.py | # Copyright (c) 2022-2023, NVIDIA CORPORATION.
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from cugraph.utilities.api_tools import experimental_warning_wrapper
from cugraph.utilities.api_tools import deprecated_warning_wrapper
from cugraph.utilities.api_tools import promoted_experimental_warning_wrapper
from cugraph.structure.property_graph import EXPERIMENTAL__PropertyGraph
PropertyGraph = experimental_warning_wrapper(EXPERIMENTAL__PropertyGraph)
from cugraph.structure.property_graph import EXPERIMENTAL__PropertySelection
PropertySelection = experimental_warning_wrapper(EXPERIMENTAL__PropertySelection)
from cugraph.dask.structure.mg_property_graph import EXPERIMENTAL__MGPropertyGraph
MGPropertyGraph = experimental_warning_wrapper(EXPERIMENTAL__MGPropertyGraph)
from cugraph.dask.structure.mg_property_graph import EXPERIMENTAL__MGPropertySelection
MGPropertySelection = experimental_warning_wrapper(EXPERIMENTAL__MGPropertySelection)
# FIXME: Remove experimental.triangle_count next release
from cugraph.community.triangle_count import triangle_count
triangle_count = promoted_experimental_warning_wrapper(triangle_count)
from cugraph.experimental.components.scc import EXPERIMENTAL__strong_connected_component
strong_connected_component = experimental_warning_wrapper(
EXPERIMENTAL__strong_connected_component
)
from cugraph.experimental.structure.bicliques import EXPERIMENTAL__find_bicliques
find_bicliques = deprecated_warning_wrapper(
experimental_warning_wrapper(EXPERIMENTAL__find_bicliques)
)
from cugraph.gnn.data_loading import EXPERIMENTAL__BulkSampler
BulkSampler = experimental_warning_wrapper(EXPERIMENTAL__BulkSampler)
from cugraph.link_prediction.jaccard import jaccard, jaccard_coefficient
jaccard = promoted_experimental_warning_wrapper(jaccard)
jaccard_coefficient = promoted_experimental_warning_wrapper(jaccard_coefficient)
from cugraph.link_prediction.sorensen import sorensen, sorensen_coefficient
sorensen = promoted_experimental_warning_wrapper(sorensen)
sorensen_coefficient = promoted_experimental_warning_wrapper(sorensen_coefficient)
from cugraph.link_prediction.overlap import overlap, overlap_coefficient
overlap = promoted_experimental_warning_wrapper(overlap)
overlap_coefficient = promoted_experimental_warning_wrapper(overlap_coefficient)
| 0 |
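The `*_warning_wrapper` helpers imported above attach warnings to objects as they move in and out of the experimental namespace. A minimal version of the pattern (a hypothetical sketch, not the actual `cugraph.utilities.api_tools` implementation) looks like this:

```python
import functools
import warnings

def experimental_warning_wrapper(obj, name=None):
    """Wrap a callable so each call emits a FutureWarning noting that
    the API is experimental (simplified sketch)."""
    public_name = name or obj.__name__.replace("EXPERIMENTAL__", "")

    @functools.wraps(obj)
    def wrapper(*args, **kwargs):
        warnings.warn(
            f"{public_name} is experimental and may change in a future release",
            FutureWarning,
        )
        return obj(*args, **kwargs)

    wrapper.__name__ = public_name  # hide the EXPERIMENTAL__ prefix
    return wrapper

def EXPERIMENTAL__find_bicliques():
    return "bicliques"

find_bicliques = experimental_warning_wrapper(EXPERIMENTAL__find_bicliques)

with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    result = find_bicliques()
print(result, caught[0].category.__name__)
```

The wrapped object keeps its behavior; only the public name and the warning side effect change, which is why promotion to stable is a one-line import swap.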
rapidsai_public_repos/cugraph/python/cugraph/cugraph/experimental | rapidsai_public_repos/cugraph/python/cugraph/cugraph/experimental/gnn/__init__.py | # Copyright (c) 2023, NVIDIA CORPORATION.
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from cugraph.gnn.data_loading import EXPERIMENTAL__BulkSampler
from cugraph.utilities.api_tools import experimental_warning_wrapper
BulkSampler = experimental_warning_wrapper(EXPERIMENTAL__BulkSampler)
| 0 |
rapidsai_public_repos/cugraph/python/cugraph/cugraph/experimental | rapidsai_public_repos/cugraph/python/cugraph/cugraph/experimental/components/scc.py | # Copyright (c) 2019-2022, NVIDIA CORPORATION.
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import cudf
import cugraph
import numpy as np
# TRIM Process:
# - remove single-vertex components
# - select vertex with highest out degree
# - forwards BFS
# - backward BFS
# - compute intersection = components
# - remove component
# - repeat
def EXPERIMENTAL__strong_connected_component(source, destination):
"""
Generate the strongly connected components
using the FW-BW-TRIM approach (but skipping the trimming)
Parameters
----------
source : cudf.Series
A cudf series that contains the source side of an edge list
destination : cudf.Series
A cudf series that contains the destination side of an edge list
Returns
-------
cdf : cudf.DataFrame - a dataframe for components
cdf['vertex'] - the vertex ID
cdf['id'] - the component ID
sdf : cudf.DataFrame - a dataframe with single-vertex components
sdf['vertex'] - the vertex ID
count : int - the number of components found
Examples
--------
>>> # M = read_mtx_file(graph_file)
>>> # sources = cudf.Series(M.row)
>>> # destinations = cudf.Series(M.col)
>>> # components, single_components, count =
>>> # cugraph.strong_connected_component(source, destination)
"""
# FIXME: Uncomment out the above example
max_value = np.iinfo(np.int32).max # NOQA
# create the FW and BW graphs - this version does not modify the graphs
G_fw = cugraph.Graph()
G_bw = cugraph.Graph()
G_fw.add_edge_list(source, destination)
G_bw.add_edge_list(destination, source)
# get a list of vertices and sort the list on out_degree
d = G_fw.degrees()
d = d.sort_values(by="out_degree", ascending=False)
num_verts = len(d)
# create space for the answers
components = [None] * num_verts
single_components = [None] * num_verts
# Counts - aka array indices
count = 0
single_count = 0
# remove vertices that cannot be in a component
bad = d.query("in_degree == 0 or out_degree == 0")
if len(bad):
bad = bad.drop(columns=["in_degree", "out_degree"])
single_components[single_count] = bad
single_count = single_count + 1
d = _filter_list(d, bad)
# ----- Start processing -----
while len(d) > 0:
v = d["vertex"][0]
# compute the forward BFS
bfs_fw = cugraph.bfs(G_fw, v)
bfs_fw = bfs_fw.query("distance != @max_value")
# Now backwards
bfs_bw = cugraph.bfs(G_bw, v)
bfs_bw = bfs_bw.query("distance != @max_value")
# intersection
common = bfs_fw.merge(bfs_bw, on="vertex", how="inner")
if len(common) > 1:
common["id"] = v
components[count] = common
d = _filter_list(d, common)
count = count + 1
else:
# v is an isolated vertex
vdf = cudf.DataFrame({"vertex": [v]})
single_components[single_count] = vdf
single_count = single_count + 1
d = d.iloc[1:]
# end of loop until vertex queue is empty
comp = _compress_array(components, count)
sing = _compress_array(single_components, single_count)
return comp, sing, count
# ---------
def _filter_list(vert_list, drop_list):
t = cudf.DataFrame()
t["vertex"] = drop_list["vertex"]
t["d"] = 0
df = vert_list.merge(t, on="vertex", how="left")
df["d"] = df["d"].fillna(1)
df = df.query("d == 1")
df.drop(columns="d", inplace=True)
return df
def _compress_array(a, length):
tmp = cudf.DataFrame()
if length > 0:
tmp_a = [None] * length
for i in range(length):
tmp_a[i] = a[i]
tmp = cudf.concat(tmp_a)
return tmp
| 0 |
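The FW-BW step at the heart of the routine above — forward BFS from a pivot, backward BFS, intersect the two reachable sets — can be sketched on a plain edge list (a pure-Python illustration, not the cuDF version):

```python
from collections import deque

def _reachable(adj, src):
    """All vertices reachable from src by BFS over an adjacency dict."""
    seen = {src}
    q = deque([src])
    while q:
        u = q.popleft()
        for v in adj.get(u, ()):
            if v not in seen:
                seen.add(v)
                q.append(v)
    return seen

def fw_bw_scc(edges):
    """All strongly connected components by repeated FW-BW pivoting:
    SCC(pivot) = forward-reachable(pivot) & backward-reachable(pivot)."""
    fwd, bwd, verts = {}, {}, set()
    for u, v in edges:
        fwd.setdefault(u, set()).add(v)
        bwd.setdefault(v, set()).add(u)
        verts.update((u, v))
    comps = []
    remaining = set(verts)
    while remaining:
        pivot = min(remaining)  # deterministic pivot choice for the demo
        scc = _reachable(fwd, pivot) & _reachable(bwd, pivot) & remaining
        comps.append(sorted(scc))
        remaining -= scc
    return sorted(comps)

# A 3-cycle, a 2-cycle, and one cross edge between them.
edges = [(0, 1), (1, 2), (2, 0), (3, 4), (4, 3), (2, 3)]
print(fw_bw_scc(edges))  # [[0, 1, 2], [3, 4]]
```

The intersection is exact even without trimming; the trim step in the full algorithm only removes trivial components early to cut down the number of BFS passes.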
rapidsai_public_repos/cugraph/python/cugraph/cugraph/experimental | rapidsai_public_repos/cugraph/python/cugraph/cugraph/experimental/components/__init__.py | # Copyright (c) 2022-2023, NVIDIA CORPORATION.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
| 0 |
rapidsai_public_repos/cugraph/python/cugraph/cugraph/experimental | rapidsai_public_repos/cugraph/python/cugraph/cugraph/experimental/dask/__init__.py | # Copyright (c) 2022, NVIDIA CORPORATION.
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
| 0 |
rapidsai_public_repos/cugraph/python/cugraph/cugraph/experimental | rapidsai_public_repos/cugraph/python/cugraph/cugraph/experimental/compat/__init__.py | # Copyright (c) 2022-2023, NVIDIA CORPORATION.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
| 0 |
rapidsai_public_repos/cugraph/python/cugraph/cugraph/experimental/compat | rapidsai_public_repos/cugraph/python/cugraph/cugraph/experimental/compat/nx/Graph.py | # Copyright (c) 2022, NVIDIA CORPORATION.
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import networkx as nx
class Graph(nx.Graph):
"""
Class which extends NetworkX Graph class. It provides original
NetworkX functionality and will be overridden as this compatibility
layer moves functionality to GPUs in future releases.
"""
pass
| 0 |
rapidsai_public_repos/cugraph/python/cugraph/cugraph/experimental/compat | rapidsai_public_repos/cugraph/python/cugraph/cugraph/experimental/compat/nx/DiGraph.py | # Copyright (c) 2022, NVIDIA CORPORATION.
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import networkx as nx
class DiGraph(nx.DiGraph):
"""
Class which extends NetworkX DiGraph class. It provides original
NetworkX functionality and will be overridden as this compatibility
layer moves functionality to GPUs in future releases.
"""
pass
| 0 |
rapidsai_public_repos/cugraph/python/cugraph/cugraph/experimental/compat | rapidsai_public_repos/cugraph/python/cugraph/cugraph/experimental/compat/nx/__init__.py | # Copyright (c) 2022, NVIDIA CORPORATION.
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import importlib
from types import ModuleType
import sys
# FIXME: only perform the NetworkX imports below if NetworkX is installed. If
# it's determined that NetworkX is required to use nx compat, then the contents
# of this entire namespace may have to be optional, or packaged separately with
# a hard dependency on NetworkX.
# Start by populating this namespace with the same contents as
# networkx/__init__.py
from networkx import *
# Override the individual NetworkX objects loaded above with the cugraph.nx
# compat equivalents. This means if an equivalent compat obj is not available,
# the standard NetworkX obj will be used.
#
# Each cugraph obj should have the same module path as the
# NetworkX obj it is overriding, and the submodules along the hierarchy should
# each import the same sub objects/modules as NetworkX does. For example,
# in NetworkX, "pagerank" is a function in
# "networkx/algorithms/link_analysis/pagerank_alg.py", and is
# directly imported in the namespaces "networkx.algorithms.link_analysis",
# "networkx.algorithms", and "networkx". Therefore, the cugraph
# compat pagerank should be defined in a module of the same name and
# also be present in the same namespaces.
# Refer to the networkx __init__.py files when adding new overriding
# modules to ensure the same paths are used and the same namespaces are populated.
from cugraph.experimental.compat.nx import algorithms
from cugraph.experimental.compat.nx.algorithms import *
from cugraph.experimental.compat.nx.algorithms import link_analysis
from cugraph.experimental.compat.nx.algorithms.link_analysis import *
# Recursively import all of the NetworkX modules into equivalent submodules
# under this package. The above "from networkx import *" handles names in this
# namespace, but it will not create the equivalent networkx submodule
# hierarchy. For example, a user could expect to "import cugraph.nx.drawing",
# which should simply redirect to "networkx.drawing".
#
# This can be accomplished by updating sys.modules with the import path and
# module object of each NetworkX submodule in the NetworkX package hierarchy,
# but only for module paths that have not been added yet (otherwise this would
# overwrite the overrides above).
_visited = set()
def _import_submodules_recursively(obj, mod_path):
# Since modules can freely import any other modules, immediately mark this
# obj as visited so submodules that import it are not re-examined
# infinitely.
_visited.add(obj)
for name in dir(obj):
sub_obj = getattr(obj, name)
if type(sub_obj) is ModuleType:
sub_mod_path = f"{mod_path}.{name}"
# Do not overwrite modules that are already present, such as those
# intended to override which were imported separately above.
if sub_mod_path not in sys.modules:
sys.modules[sub_mod_path] = sub_obj
if sub_obj not in _visited:
_import_submodules_recursively(sub_obj, sub_mod_path)
_import_submodules_recursively(importlib.import_module("networkx"), __name__)
del _visited
del _import_submodules_recursively
# At this point, individual types that cugraph.nx are overriding
# could be used to override the corresponding types *inside* the
# networkx modules imported above. For example, the networkx graph generators
# will still return networkx.Graph objects instead of cugraph.nx.Graph
# objects (unless the user knows to pass a "create_using" arg, if available).
# For specific overrides, assignments could be made in the imported
# networkx modules so cugraph.nx types are used by default.
# NOTE: this has the side-effect of causing all networkx
# imports in this python process/interpreter to use the override (i.e. the
# user won't be able to use the original networkx types, even from a
# networkx import).
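The recursive `sys.modules` registration above can be sketched independently of NetworkX. This hypothetical, self-contained example mirrors a stdlib package (`logging`) under a new top-level name (`mylogging` is an invented alias, not part of cugraph), using the same "never overwrite an existing entry" rule that preserves the compat overrides:

```python
import importlib
import sys
from types import ModuleType

# Ensure the submodule we want to mirror is loaded, so it appears as an
# attribute of its parent package.
importlib.import_module("logging.handlers")

def alias_package(real_name: str, alias: str) -> None:
    """Register 'real_name' and its loaded submodules in sys.modules under
    'alias'. setdefault() skips names already present, which is how the
    compat overrides avoid being clobbered."""
    pkg = importlib.import_module(real_name)
    sys.modules.setdefault(alias, pkg)
    for name in dir(pkg):
        sub = getattr(pkg, name)
        if isinstance(sub, ModuleType):
            sys.modules.setdefault(f"{alias}.{name}", sub)

alias_package("logging", "mylogging")

import mylogging.handlers  # resolved entirely through the sys.modules entries

print(mylogging.handlers is sys.modules["logging.handlers"])  # prints True
```

A real implementation, like the one above, also recurses into sub-packages; this sketch only handles one level.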
| 0 |
rapidsai_public_repos/cugraph/python/cugraph/cugraph/experimental/compat/nx | rapidsai_public_repos/cugraph/python/cugraph/cugraph/experimental/compat/nx/algorithms/__init__.py | # Copyright (c) 2022, NVIDIA CORPORATION.
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from networkx.algorithms import *
from cugraph.experimental.compat.nx.algorithms.link_analysis import *
from cugraph.experimental.compat.nx.algorithms import link_analysis
| 0 |
rapidsai_public_repos/cugraph/python/cugraph/cugraph/experimental/compat/nx/algorithms | rapidsai_public_repos/cugraph/python/cugraph/cugraph/experimental/compat/nx/algorithms/link_analysis/__init__.py | # Copyright (c) 2022, NVIDIA CORPORATION.
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from networkx.algorithms.link_analysis import *
from cugraph.experimental.compat.nx.algorithms.link_analysis.pagerank_alg import *
| 0 |
rapidsai_public_repos/cugraph/python/cugraph/cugraph/experimental/compat/nx/algorithms | rapidsai_public_repos/cugraph/python/cugraph/cugraph/experimental/compat/nx/algorithms/link_analysis/pagerank_alg.py | # Copyright (c) 2022, NVIDIA CORPORATION.
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import cugraph
import cugraph.utilities
import cudf
import numpy as np
def create_cudf_from_dict(dict_in):
"""
    Converts a Python dictionary to the cudf.DataFrame needed by this
    cugraph pagerank call.
Parameters
----------
    dict_in : dict
        Dictionary of node ids (keys) to values.
Returns
-------
    A cudf.DataFrame with 'vertex' (id) and 'values' columns.
"""
    if not isinstance(dict_in, dict):
        raise TypeError(f"dict_in must be a dict, got: {type(dict_in)}")
# FIXME: Looking to replacing fromiter with rename and
# compare performance
k = np.fromiter(dict_in.keys(), dtype="int32")
v = np.fromiter(dict_in.values(), dtype="float32")
df = cudf.DataFrame({"vertex": k, "values": v})
return df
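The dict-to-columnar conversion above can be sketched without cudf or numpy. This stdlib-only analogue (`columns_from_dict` is a hypothetical helper, not part of cugraph) produces the same index-aligned 'vertex'/'values' column pair, with `array` typecodes standing in for int32/float32:

```python
from array import array

def columns_from_dict(dict_in):
    """Split {vertex: value} into two typed, index-aligned columns,
    mirroring cudf.DataFrame({'vertex': k, 'values': v}) above."""
    if not isinstance(dict_in, dict):
        raise TypeError(f"dict_in must be a dict, got: {type(dict_in)}")
    vertices = array("i", dict_in.keys())  # int32-like column
    values = array("f", dict_in.values())  # float32-like column
    return vertices, values

v, w = columns_from_dict({0: 0.5, 3: 0.25, 7: 0.25})
print(list(v), list(w))  # → [0, 3, 7] [0.5, 0.25, 0.25]
```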
def pagerank(
G,
alpha=0.85,
personalization=None,
max_iter=100,
tol=1.0e-6,
nstart=None,
weight="weight",
dangling=None,
):
"""
    Calls the cugraph pagerank algorithm, taking in a NetworkX graph object.
In future releases it will maintain compatibility but will migrate more
of the workflow to the GPU.
Parameters
----------
G : networkx.Graph
alpha : float, optional (default=0.85)
The damping factor alpha represents the probability to follow an
outgoing edge, standard value is 0.85.
Thus, 1.0-alpha is the probability to “teleport” to a random vertex.
Alpha should be greater than 0.0 and strictly lower than 1.0.
personalization : dictionary, optional (default=None)
        The NetworkX-style dictionary is converted to a cudf DataFrame
        containing the personalization information.
max_iter : int, optional (default=100)
The maximum number of iterations before an answer is returned. This can
be used to limit the execution time and do an early exit before the
solver reaches the convergence tolerance.
If this value is lower or equal to 0 cuGraph will use the default
value, which is 100.
    tol : float, optional (default=1.0e-6)
        Set the tolerance of the approximation; this parameter should be a
        small magnitude value.
The lower the tolerance the better the approximation. If this value is
0.0f, cuGraph will use the default value which is 1.0E-5.
Setting too small a tolerance can lead to non-convergence due to
numerical roundoff. Usually values between 0.01 and 0.00001 are
acceptable.
nstart : dictionary, optional (default=None)
        Dictionary containing the initial guess vertex and value for pagerank.
        It will be converted to a cudf DataFrame before calling the cugraph
        algorithm.
nstart['vertex'] : cudf.Series
Subset of vertices of graph for initial guess for pagerank values
nstart['values'] : cudf.Series
Pagerank values for vertices
    weight : str, optional (default='weight')
This parameter is here for NetworkX compatibility and not
yet supported in this algorithm
dangling : dict, optional (default=None)
This parameter is here for NetworkX compatibility and ignored
Returns
-------
PageRank : dictionary
A dictionary of nodes with the PageRank as value
"""
local_pers = None
local_nstart = None
if personalization is not None:
local_pers = create_cudf_from_dict(personalization)
if nstart is not None:
local_nstart = create_cudf_from_dict(nstart)
return cugraph.pagerank(
G,
alpha=alpha,
personalization=local_pers,
max_iter=max_iter,
tol=tol,
nstart=local_nstart,
weight=weight,
dangling=dangling,
)
| 0 |
rapidsai_public_repos/cugraph/python/cugraph/cugraph/experimental | rapidsai_public_repos/cugraph/python/cugraph/cugraph/experimental/structure/bicliques.py | # Copyright (c) 2019-2022, NVIDIA CORPORATION.
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# Import needed libraries
import cudf
import numpy as np
from collections import OrderedDict
def EXPERIMENTAL__find_bicliques(
df, k, offset=0, max_iter=-1, support=1.0, min_features=1, min_machines=10
):
"""
Find the top k maximal bicliques
Parameters
----------
df : cudf:DataFrame
A dataframe containing the bipartite graph edge list
Columns must be called 'src', 'dst', and 'flag'
k : int
The max number of bicliques to return
        -1 means all.
    offset : int
        Amount subtracted from 'dst' values before processing; it is added
        back before returning.
Returns
-------
B : cudf.DataFrame
A dataframe containing the list of machine and features. This is not
        the full edge list, to save space. Since it is a biclique, it is easy
        to recreate the edges.
B['id'] - a cluster ID (this is a one up number - up to k)
B['vert'] - the vertex ID
B['type'] - 0 == machine, 1 == feature
S : cudf.DataFrame
A dataframe of statistics on the returned info.
This dataframe is (relatively small) of size k.
S['id'] - the cluster ID
S['total'] - total vertex count
S['machines'] - number of machine nodes
S['features'] - number of feature vertices
        S['bad_ratio'] - the ratio of bad machines to total machines
"""
    # PART_SIZE must be a multiple of 10
    PART_SIZE = 1000
x = [col for col in df.columns]
    if "src" not in x:
        raise KeyError("src column not found")
    if "dst" not in x:
        raise KeyError("dst column not found")
    if "flag" not in x:
        raise KeyError("flag column not found")
    if support > 1.0 or support < 0.1:
        raise ValueError("support must be between 0.1 and 1.0")
# this removes a prep step that offset the values for CUDA process
if offset > 0:
df["dst"] = df["dst"] - offset
# break the data into chunks to improve join/search performance
src_by_dst, num_parts = _partition_data_by_feature(df, PART_SIZE)
# Get a list of all the dst (features) sorted by degree
f_list = _count_features(df, True)
# create a dataframe for the answers
bicliques = cudf.DataFrame()
stats = cudf.DataFrame()
# create a dataframe to help prevent duplication of work
machine_old = cudf.DataFrame()
# create a dataframe for stats
stats = cudf.DataFrame()
answer_id = 0
iter_max = len(f_list)
if max_iter != -1:
iter_max = max_iter
# Loop over all the features (dst) or until K is reached
for i in range(iter_max):
# pop the next feature to process
feature = f_list["dst"][i]
degree = f_list["count"][i]
# compute the index to this item (which dataframe chunk is in)
idx = int(feature / PART_SIZE)
# get all machines that have this feature
machines = get_src_from_dst(src_by_dst[idx], feature)
# if this set of machines is the same as the last, skip this feature
if not is_same_as_last(machine_old, machines):
# now from those machines, hop out to the list of all the features
feature_list = get_all_feature(src_by_dst, machines, num_parts)
            # summarize occurrences
ic = _count_features(feature_list, True)
goal = int(degree * support) # NOQA
# only get dst nodes with the same degree
c = ic.query("count >= @goal")
# need more than X feature to make a biclique
if len(c) > min_features:
if len(machines) >= min_machines:
bicliques, stats = update_results(
machines, c, answer_id, bicliques, stats
)
answer_id = answer_id + 1
# end - if same
machine_old = machines
if k > -1:
if answer_id == k:
break
# end for loop
# All done, reset data
if offset > 0:
df["dst"] = df["dst"] + offset
return bicliques, stats
def _partition_data_by_feature(_df, PART_SIZE):
# compute the number of sets
m = int((_df["dst"].max() / PART_SIZE) + 1)
_ui = [None] * (m + 1)
# Partition the data into a number of smaller DataFrame
s = 0
e = s + PART_SIZE
for i in range(m):
_ui[i] = _df.query("dst >= @s and dst < @e")
s = e
e = e + PART_SIZE
return _ui, m
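A minimal, cudf-free sketch of the same range partitioning (hypothetical edge data; chunk index = dst // PART_SIZE, matching the `idx` lookup in the main loop above):

```python
PART_SIZE = 1000

def partition_by_feature(edges, part_size=PART_SIZE):
    """Bucket (src, dst) edges so chunk i holds dsts in
    [i * part_size, (i + 1) * part_size)."""
    m = max(dst for _, dst in edges) // part_size + 1
    parts = [[] for _ in range(m)]
    for src, dst in edges:
        parts[dst // part_size].append((src, dst))
    return parts, m

edges = [(1, 5), (2, 1500), (3, 999), (4, 2500)]
parts, m = partition_by_feature(edges)
print(m, parts[1])  # → 3 [(2, 1500)]
```

The point of the chunking, as in the cudf version, is that a later lookup for feature `f` only has to scan `parts[f // part_size]` rather than the whole edge list.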
def _count_features(_gdf, sort=True):
aggs = OrderedDict()
aggs["dst"] = "count"
c = _gdf.groupby(["dst"], as_index=False).agg(aggs)
c = c.rename(columns={"count_dst": "count"}, copy=False)
if sort:
c = c.sort_values(by="count", ascending=False)
return c
# get all src vertices for a given dst
def get_src_from_dst(_gdf, id):
_src_list = _gdf.query("dst == @id")
    _src_list.drop(columns="dst", inplace=True)
return _src_list
def is_same_as_last(_old, _new):
status = False
if len(_old) == len(_new):
m = _old.merge(_new, on="src", how="left")
if m["src"].null_count == 0:
status = True
return status
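The merge-based comparison above asks whether the new machine set equals the previous one (equal lengths, and every old 'src' found in the new frame). A plain-Python sketch of the same check, assuming no duplicate vertices, with hypothetical lists standing in for the 'src' columns:

```python
def same_machine_set(old_srcs, new_srcs):
    """True when both machine lists contain exactly the same vertices,
    mirroring the length check plus left-merge null-count test above."""
    return len(old_srcs) == len(new_srcs) and set(old_srcs) == set(new_srcs)

print(same_machine_set([1, 2, 3], [3, 2, 1]))  # → True
print(same_machine_set([1, 2, 3], [1, 2, 4]))  # → False
```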
# get all the items used by the specified users
def get_all_feature(_gdf, src_list_df, N):
c = [None] * N
for i in range(N):
c[i] = src_list_df.merge(_gdf[i], on="src", how="inner")
return cudf.concat(c)
def update_results(m, f, key, b, s):
"""
Input
* m = machines
* f = features
* key = cluster ID
* b = biclique answer
* s = stats answer
Returns
-------
B : cudf.DataFrame
A dataframe containing the list of machine and features. This is not
        the full edge list, to save space. Since it is a biclique, it is easy
        to recreate the edges.
B['id'] - a cluster ID (this is a one up number - up to k)
B['vert'] - the vertex ID
B['type'] - 0 == machine, 1 == feature
S : cudf.DataFrame
A Pandas dataframe of statistics on the returned info.
This dataframe is (relatively small) of size k.
S['id'] - the cluster ID
S['total'] - total vertex count
S['machines'] - number of machine nodes
S['features'] - number of feature vertices
        S['bad_ratio'] - the ratio of bad machines to total machines
"""
B = cudf.DataFrame()
S = cudf.DataFrame()
m_df = cudf.DataFrame()
m_df["vert"] = m["src"]
m_df["id"] = int(key)
m_df["type"] = int(0)
f_df = cudf.DataFrame()
f_df["vert"] = f["dst"].astype(np.int32)
f_df["id"] = int(key)
f_df["type"] = int(1)
if len(b) == 0:
B = cudf.concat([m_df, f_df])
else:
B = cudf.concat([b, m_df, f_df])
# now update the stats
num_m = len(m_df)
num_f = len(f_df)
total = num_m + num_f
num_bad = len(m.query("flag == 1"))
    ratio = num_bad / num_m  # bad machines / total machines, per the docstring
# now stats
s_tmp = cudf.DataFrame()
s_tmp["id"] = key
s_tmp["total"] = total
s_tmp["machines"] = num_m
s_tmp["features"] = num_f
s_tmp["bad_ratio"] = ratio
if len(s) == 0:
S = s_tmp
else:
S = cudf.concat([s, s_tmp])
del m_df
del f_df
return B, S
| 0 |
rapidsai_public_repos/cugraph/python/cugraph/cugraph/experimental | rapidsai_public_repos/cugraph/python/cugraph/cugraph/experimental/structure/__init__.py | # Copyright (c) 2022-2023, NVIDIA CORPORATION.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
| 0 |
rapidsai_public_repos/cugraph/python/cugraph/cugraph | rapidsai_public_repos/cugraph/python/cugraph/cugraph/sampling/CMakeLists.txt | # =============================================================================
# Copyright (c) 2022, NVIDIA CORPORATION.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except
# in compliance with the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software distributed under the License
# is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express
# or implied. See the License for the specific language governing permissions and limitations under
# the License.
# =============================================================================
set(cython_sources random_walks_wrapper.pyx)
set(linked_libraries cugraph::cugraph)
rapids_cython_create_modules(
CXX
SOURCE_FILES "${cython_sources}"
LINKED_LIBRARIES "${linked_libraries}" MODULE_PREFIX sampling_
ASSOCIATED_TARGETS cugraph
)
| 0 |
rapidsai_public_repos/cugraph/python/cugraph/cugraph | rapidsai_public_repos/cugraph/python/cugraph/cugraph/sampling/random_walks.py | # Copyright (c) 2022-2023, NVIDIA CORPORATION.
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import cudf
import cupy as cp
from pylibcugraph import ResourceHandle
from pylibcugraph import (
uniform_random_walks as pylibcugraph_uniform_random_walks,
)
from cugraph.utilities import ensure_cugraph_obj_for_nx
from cugraph.structure import Graph
import warnings
from cugraph.utilities.utils import import_optional
from typing import Union, Tuple
# FIXME: the networkx.Graph type used in type annotations is specified
# using a string literal to avoid depending on and importing networkx.
# Instead, networkx is imported optionally, which may cause a problem
# for a type checker if run in an environment where networkx is not installed.
networkx = import_optional("networkx")
def uniform_random_walks(
G: Graph,
start_vertices: Union[int, list, cudf.Series, cudf.DataFrame] = None,
max_depth: int = None,
) -> Tuple[cp.ndarray, cp.ndarray, int]:
return pylibcugraph_uniform_random_walks(
resource_handle=ResourceHandle(),
input_graph=G._plc_graph,
start_vertices=start_vertices,
max_length=max_depth,
)
def random_walks(
G: Union[Graph, "networkx.Graph"],
random_walks_type: str = "uniform",
start_vertices: Union[int, list, cudf.Series, cudf.DataFrame] = None,
max_depth: int = None,
use_padding: bool = False,
legacy_result_type: bool = True,
) -> Tuple[cudf.Series, cudf.Series, Union[None, int, cudf.Series]]:
"""
    Computes random walks for each node in 'start_vertices' and returns
either a padded or a coalesced result. For the padded case, vertices
with no outgoing edges will be padded with -1.
When 'use_padding' is 'False', 'random_walks' returns a coalesced
result which is a compressed version of the padded one. In the padded
form, sources with no out_going edges are padded with -1s in the
'vertex_paths' array and their corresponding edges('edge_weight_paths')
with 0.0s (when 'legacy_result_type' is 'True'). If 'legacy_result_type'
is 'False', 'random_walks' returns padded results (vertex_paths,
edge_weight_paths) but instead of 'sizes = None', returns the 'max_path_lengths'.
    When 'legacy_result_type' is 'False', the argument 'use_padding' is ignored.
    Parameters
----------
G : cuGraph.Graph or networkx.Graph
The graph can be either directed or undirected.
random_walks_type : str, optional (default='uniform')
Type of random walks: 'uniform', 'biased', 'node2vec'.
        Only 'uniform' random walks are currently supported
start_vertices : int or list or cudf.Series or cudf.DataFrame
A single node or a list or a cudf.Series of nodes from which to run
the random walks. In case of multi-column vertices it should be
a cudf.DataFrame
max_depth : int
The maximum depth of the random walks
When 'legacy_result_type' is set to False, 'max_depth' is relative to
        the number of edges; otherwise, it is relative to the number of vertices.
use_padding : bool, optional (default=False)
If True, padded paths are returned else coalesced paths are returned.
legacy_result_type : bool, optional (default=True)
If True, will return a tuple of vertex_paths, edge_weight_paths and
        sizes. If False, will return a tuple of vertex_paths, edge_weight_paths
        and
max_path_length
Returns
-------
vertex_paths : cudf.Series or cudf.DataFrame
Series containing the vertices of edges/paths in the random walk.
edge_weight_paths: cudf.Series
Series containing the edge weights of edges represented by the
returned vertex_paths
and
sizes: None or cudf.Series
The path sizes in case of 'coalesced' paths or None if 'padded'.
or
max_path_length : int
The maximum path length if 'legacy_result_type' is 'False'
Examples
--------
>>> from cugraph.datasets import karate
>>> M = karate.get_edgelist(download=True)
>>> G = karate.get_graph()
>>> start_vertices = G.nodes()[:4]
>>> _, _, _ = cugraph.random_walks(G, "uniform", start_vertices, 3)
"""
if legacy_result_type:
warning_msg = (
"Coalesced path results, returned when setting legacy_result_type=True, "
            "are deprecated and will no longer be supported in the next releases. "
            "Only padded paths will be returned instead."
)
warnings.warn(warning_msg, PendingDeprecationWarning)
if max_depth is None:
raise TypeError("must specify a 'max_depth'")
# FIXME: supporting Nx types should mean having a return type that better
# matches Nx expectations (eg. data on the CPU, possibly using a different
# data struct like a dictionary, etc.). The 2nd value is ignored here,
# which is typically named isNx and used to convert the return type.
# Consider a different return type if Nx types are passed in.
G, _ = ensure_cugraph_obj_for_nx(G)
if isinstance(start_vertices, int):
start_vertices = [start_vertices]
if isinstance(start_vertices, list):
# Ensure the 'start_vertices' have the same dtype as the edge list.
# Failing to do that may produce erroneous results.
vertex_dtype = G.edgelist.edgelist_df.dtypes[0]
start_vertices = cudf.Series(start_vertices, dtype=vertex_dtype)
if G.renumbered is True:
if isinstance(start_vertices, cudf.DataFrame):
start_vertices = G.lookup_internal_vertex_id(
start_vertices, start_vertices.columns
)
else:
start_vertices = G.lookup_internal_vertex_id(start_vertices)
if random_walks_type == "uniform":
vertex_paths, edge_wgt_paths, max_path_length = uniform_random_walks(
G, start_vertices, max_depth
)
else:
raise ValueError("Only 'uniform' random walks is currently supported")
vertex_paths = cudf.Series(vertex_paths)
if G.renumbered:
df_ = cudf.DataFrame()
df_["vertex_paths"] = vertex_paths
df_ = G.unrenumber(df_, "vertex_paths", preserve_order=True)
vertex_paths = cudf.Series(df_["vertex_paths"]).fillna(-1)
edge_wgt_paths = cudf.Series(edge_wgt_paths)
# The PLC uniform random walks returns an extra vertex along with an extra
# edge per path. In fact, the max depth is relative to the number of vertices
# for the legacy implementation and edges for the PLC implementation
if legacy_result_type:
warning_msg = (
"The 'max_depth' is relative to the number of vertices and will be "
            "deprecated in the next release. For the non-legacy result type, it is "
            "relative to the number of edges, which will become the only supported "
            "behavior."
)
warnings.warn(warning_msg, PendingDeprecationWarning)
        # Drop the last vertex and edge weight from each vertex and edge
        # weight path.
vertex_paths = vertex_paths.drop(
index=vertex_paths[max_depth :: max_depth + 1].index
).reset_index(drop=True)
edge_wgt_paths = edge_wgt_paths.drop(
index=edge_wgt_paths[max_depth - 1 :: max_depth].index
).reset_index(drop=True)
if use_padding:
sizes = None
# FIXME: Is it necessary to slice it with 'edge_wgt_paths_sz'?
return vertex_paths, edge_wgt_paths, sizes
# If 'use_padding' is False, compute the sizes of the unpadded results
sizes = (
vertex_paths.apply(lambda x: 1 if x != -1 else 0)
.groupby(vertex_paths.index // max_depth, sort=True)
.sum()
.reset_index(drop=True)
)
# Drop the -1 values which are representative of no outgoing edges
vertex_paths = vertex_paths.pipe(lambda x: x[x != -1]).reset_index(drop=True)
# Drop the 0.0 values which are representative of no edges.
edge_wgt_paths = edge_wgt_paths.pipe(lambda x: x[x != 0.0]).reset_index(
drop=True
)
return vertex_paths, edge_wgt_paths, sizes
else:
return (
vertex_paths,
edge_wgt_paths,
max_path_length,
)
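The padded-to-coalesced compression performed above (compute per-path sizes, then drop the -1 vertex padding and 0.0 weight padding) can be sketched with plain Python lists. The data below is hypothetical padded output for two walks with max_depth=3, so each path has 3 vertex slots and 2 edge-weight slots:

```python
def coalesce(vertex_paths, edge_wgt_paths, max_depth):
    """Compute per-path sizes, then drop -1 vertices and 0.0 weights,
    mirroring the cudf pipeline above."""
    sizes = [
        sum(1 for v in vertex_paths[i : i + max_depth] if v != -1)
        for i in range(0, len(vertex_paths), max_depth)
    ]
    vertices = [v for v in vertex_paths if v != -1]
    weights = [w for w in edge_wgt_paths if w != 0.0]
    return vertices, weights, sizes

# Walk 1 visits 3 vertices; walk 2 stops after 2 (one -1 pad, one 0.0 pad).
v, w, s = coalesce([0, 1, 2, 5, 6, -1], [1.0, 2.0, 3.0, 0.0], 3)
print(v, w, s)  # → [0, 1, 2, 5, 6] [1.0, 2.0, 3.0] [3, 2]
```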
def rw_path(
num_paths: int, sizes: cudf.Series
) -> Tuple[cudf.Series, cudf.Series, cudf.Series]:
"""
Retrieve more information on the obtained paths in case use_padding
is False.
    Parameters
----------
num_paths: int
Number of paths in the random walk output.
sizes: cudf.Series
Path size returned in random walk output.
Returns
-------
path_data : cudf.DataFrame
        Dataframe containing vertex path offsets, edge weight offsets and
edge weight sizes for each path.
"""
vertex_offsets = cudf.Series(0, dtype=sizes.dtype)
vertex_offsets = cudf.concat(
[vertex_offsets, sizes.cumsum()[:-1]], ignore_index=True
)
weight_sizes = sizes - 1
weight_offsets = cudf.Series(0, dtype=sizes.dtype)
num_edges = vertex_offsets.diff()[1:] - 1
weight_offsets = cudf.concat(
[weight_offsets, num_edges.cumsum()], ignore_index=True
)
    # FIXME: cuDF bug: concatenating two int32 Series yields an int64 Series,
    # so the result must be cast back to the original dtype.
weight_offsets = weight_offsets.astype(sizes.dtype)
path_data = cudf.DataFrame()
path_data["vertex_offsets"] = vertex_offsets
path_data["weight_sizes"] = weight_sizes
path_data["weight_offsets"] = weight_offsets
return path_data[:num_paths]
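The offset arithmetic in rw_path reduces to exclusive prefix sums over the path sizes; this stdlib sketch (hypothetical sizes, with itertools.accumulate playing the role of cumsum) reproduces the three columns:

```python
from itertools import accumulate

def rw_path_offsets(sizes):
    """Exclusive prefix sums of per-path vertex counts and edge counts,
    mirroring the vertex_offsets / weight_sizes / weight_offsets columns."""
    vertex_offsets = [0] + list(accumulate(sizes))[:-1]
    weight_sizes = [s - 1 for s in sizes]  # a path of n vertices has n-1 edges
    weight_offsets = [0] + list(accumulate(weight_sizes))[:-1]
    return vertex_offsets, weight_sizes, weight_offsets

print(rw_path_offsets([3, 2, 4]))  # → ([0, 3, 5], [2, 1, 3], [0, 2, 3])
```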
| 0 |
rapidsai_public_repos/cugraph/python/cugraph/cugraph | rapidsai_public_repos/cugraph/python/cugraph/cugraph/sampling/sampling_utilities.py | # Copyright (c) 2023, NVIDIA CORPORATION.
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import cupy
import cudf
import warnings
def sampling_results_from_cupy_array_dict(
cupy_array_dict,
weight_t,
num_hops,
with_edge_properties=False,
return_offsets=False,
renumber=False,
use_legacy_names=True,
include_hop_column=True,
):
"""
    Creates cudf DataFrames from the dict of cupy arrays returned by the
    pylibcugraph wrapper.
"""
results_df = cudf.DataFrame()
if use_legacy_names:
major_col_name = "sources"
minor_col_name = "destinations"
warning_msg = (
"The legacy column names (sources, destinations)"
" will no longer be supported for uniform_neighbor_sample"
" in release 23.12. The use_legacy_names=False option will"
" become the only option, and (majors, minors) will be the"
" only supported column names."
)
warnings.warn(warning_msg, FutureWarning)
else:
major_col_name = "majors"
minor_col_name = "minors"
if with_edge_properties:
majors = cupy_array_dict["majors"]
if majors is not None:
results_df["majors"] = majors
results_df_cols = [
"minors",
"weight",
"edge_id",
"edge_type",
]
for col in results_df_cols:
array = cupy_array_dict[col]
# The length of each of these arrays should be the same
results_df[col] = array
results_df.rename(
columns={"majors": major_col_name, "minors": minor_col_name}, inplace=True
)
label_hop_offsets = cupy_array_dict["label_hop_offsets"]
batch_ids = cupy_array_dict["batch_id"]
if renumber:
renumber_df = cudf.DataFrame(
{
"map": cupy_array_dict["renumber_map"],
}
)
if not return_offsets:
if len(batch_ids) > 0:
batch_ids_r = cudf.Series(batch_ids).repeat(
cupy.diff(cupy_array_dict["renumber_map_offsets"])
)
batch_ids_r.reset_index(drop=True, inplace=True)
renumber_df["batch_id"] = batch_ids_r
else:
renumber_df["batch_id"] = None
if return_offsets:
batches_series = cudf.Series(
batch_ids,
name="batch_id",
)
if include_hop_column:
# TODO remove this logic in release 23.12
offsets_df = cudf.Series(
label_hop_offsets[cupy.arange(len(batch_ids) + 1) * num_hops],
name="offsets",
).to_frame()
else:
offsets_df = cudf.Series(
label_hop_offsets,
name="offsets",
).to_frame()
if len(batches_series) > len(offsets_df):
# this is extremely rare so the inefficiency is ok
offsets_df = offsets_df.join(batches_series, how="outer").sort_index()
else:
offsets_df["batch_id"] = batches_series
if renumber:
renumber_offset_series = cudf.Series(
cupy_array_dict["renumber_map_offsets"], name="renumber_map_offsets"
)
if len(renumber_offset_series) > len(offsets_df):
# this is extremely rare so the inefficiency is ok
offsets_df = offsets_df.join(
renumber_offset_series, how="outer"
).sort_index()
else:
offsets_df["renumber_map_offsets"] = renumber_offset_series
else:
if len(batch_ids) > 0:
batch_ids_r = cudf.Series(cupy.repeat(batch_ids, num_hops))
batch_ids_r = cudf.Series(batch_ids_r).repeat(
cupy.diff(label_hop_offsets)
)
batch_ids_r.reset_index(drop=True, inplace=True)
results_df["batch_id"] = batch_ids_r
else:
results_df["batch_id"] = None
# TODO remove this logic in release 23.12, hops will always returned as offsets
if include_hop_column:
if len(batch_ids) > 0:
hop_ids_r = cudf.Series(cupy.arange(num_hops))
hop_ids_r = cudf.concat([hop_ids_r] * len(batch_ids), ignore_index=True)
# generate the hop column
hop_ids_r = (
cudf.Series(hop_ids_r, name="hop_id")
.repeat(cupy.diff(label_hop_offsets))
.reset_index(drop=True)
)
else:
hop_ids_r = cudf.Series(name="hop_id", dtype="int32")
results_df = results_df.join(hop_ids_r, how="outer").sort_index()
if major_col_name not in results_df:
if use_legacy_names:
raise ValueError("Can't use legacy names with major offsets")
major_offsets_series = cudf.Series(
cupy_array_dict["major_offsets"], name="major_offsets"
)
if len(major_offsets_series) > len(results_df):
# this is extremely rare so the inefficiency is ok
results_df = results_df.join(
major_offsets_series, how="outer"
).sort_index()
else:
results_df["major_offsets"] = major_offsets_series
else:
# TODO this is deprecated, remove it in 23.12
results_df[major_col_name] = cupy_array_dict["sources"]
results_df[minor_col_name] = cupy_array_dict["destinations"]
indices = cupy_array_dict["indices"]
if indices is None:
results_df["indices"] = None
else:
results_df["indices"] = indices
if weight_t == "int32":
results_df["indices"] = indices.astype("int32")
elif weight_t == "int64":
results_df["indices"] = indices.astype("int64")
else:
results_df["indices"] = indices
if return_offsets:
if renumber:
return results_df, offsets_df, renumber_df
else:
return results_df, offsets_df
if renumber:
return results_df, renumber_df
return (results_df,)
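The offsets-to-rows expansion used throughout this helper (repeat each batch id by the difference of consecutive offsets) can be sketched with stdlib code; the offsets below are hypothetical:

```python
def expand_by_offsets(batch_ids, offsets):
    """Repeat batch_ids[i] (offsets[i+1] - offsets[i]) times: the stdlib
    analogue of cudf.Series(batch_ids).repeat(cupy.diff(offsets))."""
    out = []
    for bid, start, end in zip(batch_ids, offsets, offsets[1:]):
        out.extend([bid] * (end - start))
    return out

print(expand_by_offsets([0, 1], [0, 3, 5]))  # → [0, 0, 0, 1, 1]
```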
| 0 |
rapidsai_public_repos/cugraph/python/cugraph/cugraph | rapidsai_public_repos/cugraph/python/cugraph/cugraph/sampling/uniform_neighbor_sample.py | # Copyright (c) 2022-2023, NVIDIA CORPORATION.
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from __future__ import annotations
from pylibcugraph import ResourceHandle
from pylibcugraph import uniform_neighbor_sample as pylibcugraph_uniform_neighbor_sample
from cugraph.sampling.sampling_utilities import sampling_results_from_cupy_array_dict
import numpy
import cudf
import cupy as cp
import warnings
from typing import Union, Tuple, Sequence, List
from typing import TYPE_CHECKING
if TYPE_CHECKING:
from cugraph import Graph
start_col_name = "_START_"
batch_col_name = "_BATCH_"
# FIXME: Move this function to the utility module so that it can be
# shared by other algos
def ensure_valid_dtype(input_graph, start_list):
vertex_dtype = input_graph.edgelist.edgelist_df.dtypes[0]
if isinstance(start_list, cudf.Series):
start_list_dtypes = start_list.dtype
else:
start_list_dtypes = start_list.dtypes[0]
if start_list_dtypes != vertex_dtype:
warning_msg = (
            "Uniform neighbor sample requires 'start_list' to match the graph's "
            f"'vertex' type. The input graph's vertex type is {vertex_dtype} but "
            f"'start_list' has type {start_list_dtypes}; 'start_list' will be "
            "cast to the graph's vertex type."
)
warnings.warn(warning_msg, UserWarning)
start_list = start_list.astype(vertex_dtype)
return start_list
def uniform_neighbor_sample(
G: Graph,
start_list: Sequence,
fanout_vals: List[int],
*,
with_replacement: bool = True,
with_edge_properties: bool = False, # deprecated
with_batch_ids: bool = False,
random_state: int = None,
return_offsets: bool = False,
return_hops: bool = True,
include_hop_column: bool = True, # deprecated
prior_sources_behavior: str = None,
deduplicate_sources: bool = False,
renumber: bool = False,
use_legacy_names: bool = True, # deprecated
compress_per_hop: bool = False,
compression: str = "COO",
) -> Union[cudf.DataFrame, Tuple[cudf.DataFrame, cudf.DataFrame]]:
"""
    Performs uniform neighborhood sampling, which samples nodes from a graph
    based on the current node's neighbors, with a corresponding fanout value
    at each hop.
Parameters
----------
G : cugraph.Graph
        cuGraph graph, which contains connectivity information as a cudf
        edge list dataframe
start_list : list or cudf.Series (int32)
a list of starting vertices for sampling
fanout_vals : list (int32)
List of branching out (fan-out) degrees per starting vertex for each
hop level.
with_replacement: bool, optional (default=True)
Flag to specify if the random sampling is done with replacement
with_edge_properties: bool, optional (default=False)
Deprecated.
Flag to specify whether to return edge properties (weight, edge id,
edge type, batch id, hop id) with the sampled edges.
with_batch_ids: bool, optional (default=False)
Flag to specify whether batch ids are present in the start_list
Assumes they are the last column in the start_list dataframe
random_state: int, optional
Random seed to use when making sampling calls.
return_offsets: bool, optional (default=False)
Whether to return the sampling results with batch ids
included as one dataframe, or to instead return two
dataframes, one with sampling results and one with
batch ids and their start offsets.
return_hops: bool, optional (default=True)
Whether to return the sampling results with hop ids
corresponding to the hop where the edge appeared.
Defaults to True.
include_hop_column: bool, optional (default=True)
Deprecated. Defaults to True.
If True, will include the hop column even if
return_offsets is True. This option will
be removed in release 23.12.
prior_sources_behavior: str, optional (default=None)
Options are "carryover", and "exclude".
Default will leave the source list as-is.
Carryover will carry over sources from previous hops to the
current hop.
Exclude will exclude sources from previous hops from reappearing
as sources in future hops.
deduplicate_sources: bool, optional (default=False)
Whether to first deduplicate the list of possible sources
from the previous destinations before performing next
hop.
renumber: bool, optional (default=False)
Whether to renumber on a per-batch basis. If True,
will return the renumber map and renumber map offsets
as an additional dataframe.
use_legacy_names: bool, optional (default=True)
Whether to use the legacy column names (sources, destinations).
If True, will use "sources" and "destinations" as the column names.
If False, will use "majors" and "minors" as the column names.
Deprecated. Will be removed in release 23.12 in favor of always
using the new names "majors" and "minors".
compress_per_hop: bool, optional (default=False)
Whether to compress globally (default), or to produce a separate
compressed edgelist per hop.
compression: str, optional (default=COO)
Sets the compression type for the output minibatches.
Valid options are COO (default), CSR, CSC, DCSR, and DCSC.
Returns
-------
result : cudf.DataFrame or Tuple[cudf.DataFrame, cudf.DataFrame]
GPU data frame containing multiple cudf.Series
If with_edge_properties=False:
df['sources']: cudf.Series
Contains the source vertices from the sampling result
df['destinations']: cudf.Series
Contains the destination vertices from the sampling result
df['indices']: cudf.Series
Contains the indices (edge weights) from the sampling result
for path reconstruction
If with_edge_properties=True:
If return_offsets=False:
df['sources']: cudf.Series
Contains the source vertices from the sampling result
df['destinations']: cudf.Series
Contains the destination vertices from the sampling result
df['edge_weight']: cudf.Series
Contains the edge weights from the sampling result
df['edge_id']: cudf.Series
Contains the edge ids from the sampling result
df['edge_type']: cudf.Series
Contains the edge types from the sampling result
df['batch_id']: cudf.Series
Contains the batch ids from the sampling result
df['hop_id']: cudf.Series
Contains the hop ids from the sampling result
If renumber=True:
(adds the following dataframe)
renumber_df['map']: cudf.Series
Contains the renumber maps for each batch
renumber_df['offsets']: cudf.Series
Contains the batch offsets for the renumber maps
If return_offsets=True:
df['sources']: cudf.Series
Contains the source vertices from the sampling result
df['destinations']: cudf.Series
Contains the destination vertices from the sampling result
df['edge_weight']: cudf.Series
Contains the edge weights from the sampling result
df['edge_id']: cudf.Series
Contains the edge ids from the sampling result
df['edge_type']: cudf.Series
Contains the edge types from the sampling result
df['hop_id']: cudf.Series
Contains the hop ids from the sampling result
offsets_df['batch_id']: cudf.Series
Contains the batch ids from the sampling result
offsets_df['offsets']: cudf.Series
Contains the offsets of each batch in the sampling result
If renumber=True:
(adds the following dataframe)
renumber_df['map']: cudf.Series
Contains the renumber maps for each batch
renumber_df['offsets']: cudf.Series
Contains the batch offsets for the renumber maps
"""
if use_legacy_names:
major_col_name = "sources"
minor_col_name = "destinations"
warning_msg = (
"The legacy column names (sources, destinations)"
" will no longer be supported for uniform_neighbor_sample"
" in release 23.12. The use_legacy_names=False option will"
" become the only option, and (majors, minors) will be the"
" only supported column names."
)
warnings.warn(warning_msg, FutureWarning)
else:
major_col_name = "majors"
minor_col_name = "minors"
if compression not in ["COO", "CSR", "CSC", "DCSR", "DCSC"]:
raise ValueError("compression must be one of COO, CSR, CSC, DCSR, or DCSC")
if (
(compression != "COO")
and (not compress_per_hop)
and prior_sources_behavior != "exclude"
):
raise ValueError(
"hop-agnostic compression is only supported with"
" the exclude prior sources behavior due to limitations "
"of the libcugraph C++ API"
)
if compress_per_hop and prior_sources_behavior != "carryover":
raise ValueError(
"Compressing the edgelist per hop is only supported "
"with the carryover prior sources behavior due to limitations"
" of the libcugraph C++ API"
)
if include_hop_column:
warning_msg = (
"The include_hop_column flag is deprecated and will be"
" removed in the next release in favor of always "
"excluding the hop column when return_offsets is True"
)
warnings.warn(warning_msg, FutureWarning)
if compression != "COO":
raise ValueError(
"Including the hop id column is only supported with COO compression."
)
if with_edge_properties:
warning_msg = (
"The with_edge_properties flag is deprecated"
" and will be removed in the next release in favor"
" of returning all properties in the graph"
)
warnings.warn(warning_msg, FutureWarning)
if isinstance(start_list, int):
start_list = [start_list]
if isinstance(start_list, list):
start_list = cudf.Series(
start_list, dtype=G.edgelist.edgelist_df[G.srcCol].dtype
)
if with_edge_properties and not with_batch_ids:
if isinstance(start_list, cudf.Series):
start_list = start_list.reset_index(drop=True).to_frame()
start_list[batch_col_name] = cudf.Series(
cp.zeros(len(start_list), dtype="int32")
)
# fanout_vals must be passed to pylibcugraph as a host array
if isinstance(fanout_vals, numpy.ndarray):
fanout_vals = fanout_vals.astype("int32")
elif isinstance(fanout_vals, list):
fanout_vals = numpy.asarray(fanout_vals, dtype="int32")
elif isinstance(fanout_vals, cp.ndarray):
fanout_vals = fanout_vals.get().astype("int32")
elif isinstance(fanout_vals, cudf.Series):
fanout_vals = fanout_vals.values_host.astype("int32")
else:
        raise TypeError(
            "fanout_vals must be a list, numpy or cupy ndarray, or cudf Series, "
            f"got: {type(fanout_vals)}"
        )
if "weights" in G.edgelist.edgelist_df:
weight_t = G.edgelist.edgelist_df["weights"].dtype
else:
weight_t = "float32"
start_list = ensure_valid_dtype(G, start_list)
if isinstance(start_list, cudf.Series):
start_list = start_list.rename(start_col_name)
start_list = start_list.to_frame()
if G.renumbered:
start_list = G.lookup_internal_vertex_id(start_list, start_col_name)
else:
columns = start_list.columns
if with_batch_ids:
if G.renumbered:
start_list = G.lookup_internal_vertex_id(start_list, columns[:-1])
start_list = start_list.rename(
columns={columns[0]: start_col_name, columns[-1]: batch_col_name}
)
else:
if G.renumbered:
start_list = G.lookup_internal_vertex_id(start_list, columns)
start_list = start_list.rename(columns={columns[0]: start_col_name})
sampling_result_array_dict = pylibcugraph_uniform_neighbor_sample(
resource_handle=ResourceHandle(),
input_graph=G._plc_graph,
start_list=start_list[start_col_name],
batch_id_list=start_list[batch_col_name]
if batch_col_name in start_list
else None,
h_fan_out=fanout_vals,
with_replacement=with_replacement,
do_expensive_check=False,
with_edge_properties=with_edge_properties,
random_state=random_state,
prior_sources_behavior=prior_sources_behavior,
deduplicate_sources=deduplicate_sources,
return_hops=return_hops,
renumber=renumber,
compression=compression,
compress_per_hop=compress_per_hop,
return_dict=True,
)
dfs = sampling_results_from_cupy_array_dict(
sampling_result_array_dict,
weight_t,
len(fanout_vals),
with_edge_properties=with_edge_properties,
return_offsets=return_offsets,
renumber=renumber,
use_legacy_names=use_legacy_names,
include_hop_column=include_hop_column,
)
if G.renumbered and not renumber:
dfs[0] = G.unrenumber(dfs[0], major_col_name, preserve_order=True)
dfs[0] = G.unrenumber(dfs[0], minor_col_name, preserve_order=True)
if len(dfs) > 1:
return dfs
return dfs[0]
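When `return_offsets=True`, rows of the results dataframe are grouped by batch via the offsets dataframe described in the docstring above. A NumPy sketch, with hypothetical values, of how a row index maps back to its batch id:

```python
import numpy as np

# Hypothetical offsets_df contents: rows [offsets[i], offsets[i + 1]) of the
# results dataframe belong to batch_ids[i].
batch_ids = np.asarray([0, 1], dtype="int32")
offsets = np.asarray([0, 4, 7])

def batch_of_row(row):
    # searchsorted finds which offsets interval contains `row`
    return int(batch_ids[np.searchsorted(offsets, row, side="right") - 1])

# Rows 0-3 belong to batch 0, rows 4-6 to batch 1.
assert batch_of_row(0) == 0
assert batch_of_row(5) == 1
```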
rapidsai_public_repos/cugraph/python/cugraph/cugraph/sampling/__init__.py
# Copyright (c) 2021-2022, NVIDIA CORPORATION.
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from cugraph.sampling.random_walks import random_walks, rw_path
from cugraph.sampling.node2vec import node2vec
from cugraph.sampling.uniform_neighbor_sample import uniform_neighbor_sample