repo_id stringlengths 21 96 | file_path stringlengths 31 155 | content stringlengths 1 92.9M | __index_level_0__ int64 0 0 |
|---|---|---|---|
rapidsai_public_repos/cuspatial/docs/cuproj/source | rapidsai_public_repos/cuspatial/docs/cuproj/source/user_guide/index.md | # User Guide
```{toctree}
:maxdepth: 2
cuproj_api_examples
```
| 0 |
rapidsai_public_repos/cuspatial/docs/cuproj/source | rapidsai_public_repos/cuspatial/docs/cuproj/source/api_docs/index.rst | =============
API Reference
=============
This page provides a list of all modules, methods, and classes that are publicly accessible through the
``cuproj.*`` namespace.
.. toctree::
:maxdepth: 2
transformer
| 0 |
rapidsai_public_repos/cuspatial/docs/cuproj/source | rapidsai_public_repos/cuspatial/docs/cuproj/source/api_docs/transformer.rst | Transformer
+++++++++++
The ``cuproj.transformer`` module contains the ``Transformer`` class, which can perform 2D transformations
between coordinate reference systems (CRS).
.. currentmodule:: cuproj
.. autoclass:: cuproj.transformer.Transformer
:members:
:show-inheritance:
:inherited-members:
| 0 |
rapidsai_public_repos/cuspatial/docs/cuproj/source | rapidsai_public_repos/cuspatial/docs/cuproj/source/developer_guide/index.md | # Developer Guide
cuProj has two main components: the cuProj Python package and the `libcuproj` header-only C++
library, referred to as `cuProj` and `libcuproj` respectively in this documentation. This page
discusses the design of `cuProj`. For information on `libcuproj`, see the
[C++ API reference](https://docs.rapids.ai/api/libcuproj/stable/).
```{toctree}
:maxdepth: 2
```
| 0 |
rapidsai_public_repos/cuspatial/docs | rapidsai_public_repos/cuspatial/docs/source/index.md | # Welcome to cuSpatial's documentation!
cuSpatial is a general, vector-based,
GPU-accelerated GIS library that provides functionality for spatial computation,
indexing, joins, and trajectory computations.
Example functions include:
- Spatial indexing and joins supported by GPU accelerated point-in-polygon
- Trajectory identification and reconstruction
- Haversine distance and grid projection
cuSpatial integrates neatly with [GeoPandas](https://geopandas.org/en/stable/)
and [cuDF](https://docs.rapids.ai/api/cudf/stable/).
This enables you to accelerate performance-critical sections of your `GeoPandas` workflow using `cuSpatial` and `cuDF`.
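For intuition, the haversine great-circle distance that cuSpatial accelerates can be sketched in a few lines of pure Python (an illustrative sketch only; cuSpatial's GPU implementation operates on entire columns of coordinates at once):

```python
import math

def haversine_km(lon1, lat1, lon2, lat2, radius_km=6371.0):
    # Convert degrees to radians, then apply the haversine formula.
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(phi1) * math.cos(phi2) * math.sin(dlmb / 2) ** 2)
    return 2 * radius_km * math.asin(math.sqrt(a))

# One degree of latitude is roughly 111 km.
print(round(haversine_km(0.0, 0.0, 0.0, 1.0)))  # → 111
```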
```{toctree}
:maxdepth: 2
:caption: Contents
user_guide/index
api_docs/index
developer_guide/index
```
# Indices and tables
- {ref}`genindex`
- {ref}`search`
| 0 |
rapidsai_public_repos/cuspatial/docs | rapidsai_public_repos/cuspatial/docs/source/conf.py | # Copyright (c) 2018-2023, NVIDIA CORPORATION.
# -- General configuration ------------------------------------------------
# If your documentation needs a minimal Sphinx version, state it here.
#
# needs_sphinx = '1.0'
# Add any Sphinx extension module names here, as strings. They can be
# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom
# ones.
extensions = [
"sphinx.ext.autosectionlabel",
"sphinx.ext.intersphinx",
"sphinx.ext.autodoc",
"sphinx.ext.autosummary",
"numpydoc",
"IPython.sphinxext.ipython_console_highlighting",
"IPython.sphinxext.ipython_directive",
"nbsphinx",
"myst_parser"
]
nb_execution_mode = "force"
nb_execution_timeout = 300
copybutton_prompt_text = ">>> "
autosummary_generate = True
ipython_mplbackend = "str"
myst_heading_anchors = 3
# Add any paths that contain templates here, relative to this directory.
templates_path = ["_templates"]
# The suffix(es) of source filenames.
# You can specify multiple suffix as a list of string:
#
# source_suffix = ['.rst', '.md']
source_suffix = {".rst": "restructuredtext", ".md": "markdown"}
# The master toctree document.
master_doc = "index"
# General information about the project.
project = "cuspatial"
copyright = "2019-2023, NVIDIA"
author = "NVIDIA"
# The version info for the project you're documenting, acts as replacement for
# |version| and |release|, also used in various other places throughout the
# built documents.
#
# The short X.Y version.
version = '23.12'
# The full version, including alpha/beta/rc tags.
release = '23.12.00'
# The language for content autogenerated by Sphinx. Refer to documentation
# for a list of supported languages.
#
# This is also used if you do content translation via gettext catalogs.
# Usually you set "language" from the command line for these cases.
language = "en"
# List of patterns, relative to source directory, that match files and
# directories to ignore when looking for source files.
# These patterns also affect html_static_path and html_extra_path
exclude_patterns = []
# The name of the Pygments (syntax highlighting) style to use.
pygments_style = "sphinx"
# If true, `todo` and `todoList` produce output, else they produce nothing.
todo_include_todos = False
html_theme_options = {
"external_links": [],
# https://github.com/pydata/pydata-sphinx-theme/issues/1220
"icon_links": [],
"github_url": "https://github.com/rapidsai/cuspatial",
"twitter_url": "https://twitter.com/rapidsai",
"show_toc_level": 1,
"navbar_align": "right",
}
# -- Options for HTML output ----------------------------------------------
# The theme to use for HTML and HTML Help pages. See the documentation for
# a list of builtin themes.
#
html_theme = "pydata_sphinx_theme"
# Theme options are theme-specific and customize the look and feel of a theme
# further. For a list of options available for each theme, see the
# documentation.
#
# html_theme_options = {}
# Add any paths that contain custom static files (such as style sheets) here,
# relative to this directory. They are copied after the builtin static files,
# so a file named "default.css" will overwrite the builtin "default.css".
html_static_path = ["_static"]
# -- Options for HTMLHelp output ------------------------------------------
# Output file base name for HTML help builder.
htmlhelp_basename = "cuspatialdoc"
# -- Options for LaTeX output ---------------------------------------------
latex_elements = {
# The paper size ('letterpaper' or 'a4paper').
#
# 'papersize': 'letterpaper',
# The font size ('10pt', '11pt' or '12pt').
#
# 'pointsize': '10pt',
# Additional stuff for the LaTeX preamble.
#
# 'preamble': '',
# Latex figure (float) alignment
#
# 'figure_align': 'htbp',
}
# Grouping the document tree into LaTeX files. List of tuples
# (source start file, target name, title,
# author, documentclass [howto, manual, or own class]).
latex_documents = [
(
master_doc,
"cuspatial.tex",
"cuspatial Documentation",
"NVIDIA Corporation",
"manual",
)
]
# -- Options for manual page output ---------------------------------------
# One entry per manual page. List of tuples
# (source start file, name, description, authors, manual section).
man_pages = [(master_doc, "cuspatial", "cuspatial Documentation", [author], 1)]
# -- Options for Texinfo output -------------------------------------------
# Grouping the document tree into Texinfo files. List of tuples
# (source start file, target name, title, author,
# dir menu entry, description, category)
texinfo_documents = [
(
master_doc,
"cuspatial",
"cuspatial Documentation",
author,
"cuspatial",
"One line description of project.",
"Miscellaneous",
)
]
# Example configuration for intersphinx: refer to the Python standard library.
intersphinx_mapping = {
"python": ("https://docs.python.org/3", None),
"pandas": ("https://pandas.pydata.org/pandas-docs/stable/", None),
"geopandas": ("https://geopandas.readthedocs.io/en/latest/", None),
"cudf": ("https://docs.rapids.ai/api/cudf/stable/", None),
}
# Config numpydoc
numpydoc_show_inherited_class_members = False
numpydoc_class_members_toctree = False
nbsphinx_allow_errors = True
def setup(app):
app.add_css_file("https://docs.rapids.ai/assets/css/custom.css")
app.add_js_file("https://docs.rapids.ai/assets/js/custom.js", loading_method="defer")
| 0 |
rapidsai_public_repos/cuspatial/docs/source | rapidsai_public_repos/cuspatial/docs/source/user_guide/cuspatial_api_examples.ipynb | # !conda create -n rapids-23.12 -c rapidsai -c conda-forge -c nvidia \
#     cuspatial=23.12 python=3.9 cudatoolkit=11.5

# Imports used throughout this notebook.
import cuspatial
import cudf
import cupy
import geopandas
import pandas as pd
import numpy as np
from shapely.geometry import *
from shapely import wkt

# For deterministic results
np.random.seed(0)
cupy.random.seed(0)

host_dataframe = geopandas.read_file(geopandas.datasets.get_path(
"naturalearth_lowres"
))
gpu_dataframe = cuspatial.from_geopandas(host_dataframe)
print(gpu_dataframe.head())

host_dataframe = geopandas.read_file(geopandas.datasets.get_path("naturalearth_lowres"))
gpu_dataframe = cuspatial.from_geopandas(host_dataframe)
continents_dataframe = gpu_dataframe.sort_values("name")
print(continents_dataframe)

host_dataframe = geopandas.read_file(geopandas.datasets.get_path("naturalearth_lowres"))
gpu_dataframe = cuspatial.from_geopandas(host_dataframe)
sorted_dataframe = gpu_dataframe.sort_values("name")
host_dataframe = sorted_dataframe.to_geopandas()
host_dataframe['geometry'].iloc[0]

# 1M random trajectory samples
ids = cupy.random.randint(1, 400, 1000000)
timestamps = cupy.random.random(1000000)*1000000
xy = cupy.random.random(2000000)
trajs = cuspatial.GeoSeries.from_points_xy(xy)
sorted_trajectories, trajectory_offsets = \
cuspatial.core.trajectory.derive_trajectories(ids, trajs, timestamps)
# sorted_trajectories is a DataFrame containing all trajectory samples
# sorted first by `object_id` and then by `timestamp`.
print(sorted_trajectories.head())
# trajectory_offsets is a Series containing the start position of each
# trajectory in sorted_trajectories.
print(trajectory_offsets)

trajs = cuspatial.GeoSeries.from_points_xy(
sorted_trajectories[["x", "y"]].interleave_columns()
)
d_and_s = cuspatial.core.trajectory.trajectory_distances_and_speeds(
len(cudf.Series(ids).unique()),
sorted_trajectories['object_id'],
trajs,
sorted_trajectories['timestamp']
)
print(d_and_s.head())

bounding_boxes = cuspatial.core.trajectory.trajectory_bounding_boxes(
len(cudf.Series(ids, dtype="int32").unique()),
sorted_trajectories['object_id'],
trajs
)
print(bounding_boxes.head())

host_dataframe = geopandas.read_file(geopandas.datasets.get_path("naturalearth_lowres"))
single_polygons = cuspatial.from_geopandas(
host_dataframe['geometry'][host_dataframe['geometry'].type == "Polygon"]
)
bounding_box_polygons = cuspatial.core.spatial.bounding.polygon_bounding_boxes(
single_polygons
)
print(bounding_box_polygons.head())

lines = cuspatial.GeoSeries.from_linestrings_xy(
trajs.points.xy, trajectory_offsets, cupy.arange(len(trajectory_offsets))
)
trajectory_bounding_boxes = cuspatial.core.spatial.bounding.linestring_bounding_boxes(
lines, 0.0001
)
print(trajectory_bounding_boxes.head())

host_dataframe = geopandas.read_file(geopandas.datasets.get_path("naturalearth_lowres"))
gpu_dataframe = cuspatial.from_geopandas(host_dataframe)
afghanistan = gpu_dataframe['geometry'][gpu_dataframe['name'] == 'Afghanistan']
points = cuspatial.GeoSeries.from_points_xy(afghanistan.polygons.xy)
projected = cuspatial.sinusoidal_projection(
afghanistan.polygons.x.mean(),
afghanistan.polygons.y.mean(),
points
)
print(projected.head())

coordinates = sorted_trajectories[['x', 'y']].interleave_columns()
spaces = cuspatial.GeoSeries.from_multipoints_xy(
coordinates, trajectory_offsets
)
hausdorff_distances = cuspatial.core.spatial.distance.directed_hausdorff_distance(
spaces
)
print(hausdorff_distances.head())

host_dataframe = geopandas.read_file(geopandas.datasets.get_path("naturalearth_lowres"))
gpu_dataframe = cuspatial.from_geopandas(host_dataframe)
polygons_first = gpu_dataframe['geometry'][0:10]
polygons_second = gpu_dataframe['geometry'][10:20]
points_first = polygons_first.polygons.xy[0:1000]
points_second = polygons_second.polygons.xy[0:1000]
first = cuspatial.GeoSeries.from_points_xy(points_first)
second = cuspatial.GeoSeries.from_points_xy(points_second)
# The number of coordinates in two sets of polygons vary, so
# we'll just compare the first set of 1000 values here.
distances_in_meters = cuspatial.haversine_distance(
first, second
)
cudf.Series(distances_in_meters).head()

# Generate data to be used to create a cuDF dataframe.
# The data to be processed by Haversine MUST be a Float.
a = {"latitude":[17.1167, 17.1333, 25.333, 25.255, 24.433, 24.262, 35.317, 34.21, 34.566, 31.5, 36.7167, 30.5667, 28.05, 22.8, 35.7297, 36.97, 36.78, 36.8, 36.8, 36.72],
"longitude": [-61.7833, -61.7833, 55.517, 55.364, 54.651, 55.609, 69.017, 62.228, 69.212, 65.85, 3.25, 2.8667, 9.6331, 5.4331, 0.65, 7.79, 3.07, 3.03, 3.04, 4.05]}
df = cudf.DataFrame(data=a)
# Create cuSpatial GeoSeries from cuDF Dataframe
cuGeoSeries = cuspatial.GeoSeries.from_points_xy(df[['longitude', 'latitude']].interleave_columns())
# Create Comparator cuSpatial GeoSeries from a comparator point
df['atlanta_lat'] = 33.7490
df['atlanta_lng'] = -84.3880
atlGeoSeries = cuspatial.GeoSeries.from_points_xy(df[['atlanta_lng', 'atlanta_lat']].interleave_columns())
# Calculate Haversine Distance of cuDF dataframe to comparator point
df['atlanta_dist'] = cuspatial.haversine_distance(cuGeoSeries, atlGeoSeries)
print(df.head())

host_dataframe = geopandas.read_file(geopandas.datasets.get_path("naturalearth_lowres"))
gpu_boundaries = cuspatial.from_geopandas(host_dataframe.geometry.boundary)
zeros = cuspatial.pairwise_linestring_distance(
gpu_boundaries[0:50],
gpu_boundaries[0:50]
)
print(zeros.head())
lines1 = gpu_boundaries[0:50]
lines2 = gpu_boundaries[50:100]
distances = cuspatial.pairwise_linestring_distance(
lines1, lines2
)
print(distances.head())

# Convert input dataframe to Pseudo-Mercator projection.
host_dataframe = geopandas.read_file(geopandas.datasets.get_path("naturalearth_lowres")).to_crs(3857)
polygons = host_dataframe[host_dataframe['geometry'].type == "Polygon"]
gpu_polygons = cuspatial.from_geopandas(polygons)
# Extract mean_x and mean_y from each country
mean_x = [gpu_polygons['geometry'].iloc[[ix]].polygons.x.mean() for ix in range(len(gpu_polygons))]
mean_y = [gpu_polygons['geometry'].iloc[[ix]].polygons.y.mean() for ix in range(len(gpu_polygons))]
# Convert mean_x/mean_y values into Points for use in API.
points = cuspatial.GeoSeries([Point(point) for point in zip(mean_x, mean_y)])
# Convert Polygons into Linestrings for use in API.
linestring_df = cuspatial.from_geopandas(geopandas.geoseries.GeoSeries(
[MultiLineString(mapping(polygons['geometry'].iloc[ix])["coordinates"]) for ix in range(len(polygons))]
))
gpu_polygons['border_distance'] = cuspatial.pairwise_point_linestring_distance(
points, linestring_df
)
print(gpu_polygons.head())

cities = geopandas.read_file(geopandas.datasets.get_path("naturalearth_cities")).to_crs(3857)
countries = geopandas.read_file(geopandas.datasets.get_path("naturalearth_lowres")).to_crs(3857)
gpu_cities = cuspatial.from_geopandas(cities)
gpu_countries = cuspatial.from_geopandas(countries)
dist = cuspatial.pairwise_point_polygon_distance(
gpu_cities.geometry[:len(gpu_countries)], gpu_countries.geometry
)
gpu_countries["distance_from"] = cities.name
gpu_countries["distance"] = dist
gpu_countries.head()

# All driveways within 2km range of Central Park, NYC
# The dataset is downloaded and processed as follows:
# import osmnx as ox
# graph = ox.graph_from_point((40.769361, -73.977655), dist=2000, network_type="drive")
# nodes, streets = ox.graph_to_gdfs(graph)
# streets = streets.to_crs(3857)
# streets = streets.reset_index(drop=True)
# streets.index.name = "index"
# streets[["name", "geometry"]].to_csv("streets_3857.csv")
# The data is under notebooks/streets_3857.csv
streets = pd.read_csv("./streets_3857.csv", index_col="index")
streets.geometry = streets.geometry.apply(wkt.loads)
streets = geopandas.GeoDataFrame(streets)
streets.head()

# The polygon of the Empire State Building
# The dataset is downloaded and processed as follows:
# esb = ox.geometries.geometries_from_place('Empire State Building, New York', tags={"building": True})
# esb = esb.to_crs(3857)
# esb = esb.geometry.reset_index(drop=True)
# esb.index.name = "index"
# esb.to_csv("esb_3857.csv")
# The data is under notebooks/esb_3857.csv
esb = pd.read_csv("./esb_3857.csv", index_col="index")
esb.geometry = esb.geometry.apply(wkt.loads)
esb = geopandas.GeoDataFrame(esb)
esb = pd.concat([esb.iloc[0:1]] * len(streets))
esb.head()

# Straight line distance from the driveways to the Empire State Building
gpu_streets = cuspatial.from_geopandas(streets.geometry)
gpu_esb = cuspatial.from_geopandas(esb.geometry)
dist = cuspatial.pairwise_linestring_polygon_distance(gpu_streets, gpu_esb).rename("dist")
pd.concat([streets["name"].reset_index(drop=True), dist.to_pandas()], axis=1)

countries = geopandas.read_file(geopandas.datasets.get_path("naturalearth_lowres")).to_crs(3857)
gpu_countries = cuspatial.from_geopandas(countries)
african_countries = gpu_countries[gpu_countries.continent == "Africa"].sort_values("pop_est", ascending=False)
asian_countries = gpu_countries[gpu_countries.continent == "Asia"].sort_values("pop_est", ascending=False)

# Straight line distance between the top 10 most populated countries in Asia and Africa
population_top10_africa = african_countries[:10].reset_index(drop=True)
population_top10_asia = asian_countries[:10].reset_index(drop=True)
dist = cuspatial.pairwise_polygon_distance(
population_top10_africa.geometry, population_top10_asia.geometry)
cudf.concat([
population_top10_africa["name"].rename("Africa"),
population_top10_asia["name"].rename("Asia"),
dist.rename("dist")], axis=1
)

host_dataframe = geopandas.read_file(geopandas.datasets.get_path("naturalearth_lowres"))
gpu_dataframe = cuspatial.from_geopandas(host_dataframe)
geometry = gpu_dataframe['geometry']
points = cuspatial.GeoSeries.from_points_xy(geometry.polygons.xy)
mean_x, std_x = (geometry.polygons.x.mean(), geometry.polygons.x.std())
mean_y, std_y = (geometry.polygons.y.mean(), geometry.polygons.y.std())
avg_points = cuspatial.points_in_spatial_window(
points,
mean_x - std_x,
mean_x + std_x,
mean_y - std_y,
mean_y + std_y
)
print(avg_points.head())

from cuspatial.core.binops.intersection import pairwise_linestring_intersection
host_dataframe = geopandas.read_file(geopandas.datasets.get_path("naturalearth_lowres"))
usa_boundary = cuspatial.from_geopandas(host_dataframe[host_dataframe.name == "United States of America"].geometry.boundary)
canada_boundary = cuspatial.from_geopandas(host_dataframe[host_dataframe.name == "Canada"].geometry.boundary)
list_offsets, geometries, look_back_ids = pairwise_linestring_intersection(usa_boundary, canada_boundary)

# The first integer series shows that the result contains 1 row (since we only have 1 pair of linestrings as input).
# This row contains 144 geometries.
list_offsets

# The second element is a GeoSeries that contains the intersecting geometries, with 144 rows, including points and linestrings.
geometries

# The third element is a dataframe with four IDs per result row, mapping each
# result back to its inputs: the lhs and rhs linestring ids and segment ids.
look_back_ids

host_dataframe = geopandas.read_file(geopandas.datasets.get_path("naturalearth_lowres"))
single_polygons = host_dataframe[host_dataframe['geometry'].type == "Polygon"]
gpu_dataframe = cuspatial.from_geopandas(single_polygons)
x_points = (cupy.random.random(10000000) - 0.5) * 360
y_points = (cupy.random.random(10000000) - 0.5) * 180
xy = cudf.DataFrame({"x": x_points, "y": y_points}).interleave_columns()
points = cuspatial.GeoSeries.from_points_xy(xy)
short_dataframe = gpu_dataframe.iloc[0:31]
geometry = short_dataframe['geometry']
points_in_polygon = cuspatial.point_in_polygon(
points, geometry
)
sum_of_points_in_polygons_0_to_31 = points_in_polygon.sum()
sum_of_points_in_polygons_0_to_31.head()

x_points = (cupy.random.random(10000000) - 0.5) * 360
y_points = (cupy.random.random(10000000) - 0.5) * 180
xy = cudf.DataFrame({"x": x_points, "y": y_points}).interleave_columns()
points = cuspatial.GeoSeries.from_points_xy(xy)
scale = 5
max_depth = 7
max_size = 125
point_indices, quadtree = cuspatial.quadtree_on_points(points,
x_points.min(),
x_points.max(),
y_points.min(),
y_points.max(),
scale,
max_depth,
max_size)
print(point_indices.head())
print(quadtree.head())

polygons = gpu_dataframe['geometry']
poly_bboxes = cuspatial.polygon_bounding_boxes(
polygons
)
intersections = cuspatial.join_quadtree_and_bounding_boxes(
quadtree,
poly_bboxes,
polygons.polygons.x.min(),
polygons.polygons.x.max(),
polygons.polygons.y.min(),
polygons.polygons.y.max(),
scale,
max_depth
)
polygons_and_points = cuspatial.quadtree_point_in_polygon(
intersections,
quadtree,
point_indices,
points,
polygons
)
print(polygons_and_points.head())

host_countries = geopandas.read_file(geopandas.datasets.get_path("naturalearth_lowres"))
host_cities = geopandas.read_file(geopandas.datasets.get_path("naturalearth_cities"))
gpu_countries = cuspatial.from_geopandas(host_countries[host_countries['geometry'].type == "Polygon"])
gpu_cities = cuspatial.from_geopandas(host_cities[host_cities['geometry'].type == 'Point'])

polygons = gpu_countries['geometry'].polygons
boundaries = cuspatial.GeoSeries.from_linestrings_xy(
cudf.DataFrame({"x": polygons.x, "y": polygons.y}).interleave_columns(),
polygons.ring_offset,
cupy.arange(len(polygons.ring_offset))
)
point_indices, quadtree = cuspatial.quadtree_on_points(gpu_cities['geometry'],
polygons.x.min(),
polygons.x.max(),
polygons.y.min(),
polygons.y.max(),
scale,
max_depth,
max_size)
poly_bboxes = cuspatial.linestring_bounding_boxes(
boundaries,
2.0
)
intersections = cuspatial.join_quadtree_and_bounding_boxes(
quadtree,
poly_bboxes,
polygons.x.min(),
polygons.x.max(),
polygons.y.min(),
polygons.y.max(),
scale,
max_depth
)
result = cuspatial.quadtree_point_to_nearest_linestring(
intersections,
quadtree,
point_indices,
gpu_cities['geometry'],
boundaries
)
print(result.head()) | 0 |
rapidsai_public_repos/cuspatial/docs/source | rapidsai_public_repos/cuspatial/docs/source/user_guide/index.md | # User Guide
```{toctree}
:maxdepth: 2
cuspatial_api_examples
```
| 0 |
rapidsai_public_repos/cuspatial/docs/source | rapidsai_public_repos/cuspatial/docs/source/_static/EMPTY | 0 | |
rapidsai_public_repos/cuspatial/docs/source | rapidsai_public_repos/cuspatial/docs/source/api_docs/io.rst | IO
--
Any host-side GeoPandas DataFrame can be copied into GPU memory for use with cuSpatial algorithms.
.. currentmodule:: cuspatial
.. autofunction:: cuspatial.from_geopandas
| 0 |
rapidsai_public_repos/cuspatial/docs/source | rapidsai_public_repos/cuspatial/docs/source/api_docs/trajectory.rst | Trajectory
----------
Functions for identifying and grouping trajectories from point data.
.. currentmodule:: cuspatial
.. autofunction:: cuspatial.derive_trajectories
.. autofunction:: cuspatial.trajectory_distances_and_speeds
.. autofunction:: cuspatial.trajectory_bounding_boxes
| 0 |
rapidsai_public_repos/cuspatial/docs/source | rapidsai_public_repos/cuspatial/docs/source/api_docs/spatial.rst | Spatial
-------
Functions that operate on spatial data.
.. currentmodule:: cuspatial
Spatial Indexing Functions
++++++++++++++++++++++++++
.. autofunction:: cuspatial.quadtree_on_points
Spatial Join Functions
++++++++++++++++++++++
.. autofunction:: cuspatial.point_in_polygon
.. autofunction:: cuspatial.quadtree_point_in_polygon
.. autofunction:: cuspatial.quadtree_point_to_nearest_linestring
.. autofunction:: cuspatial.join_quadtree_and_bounding_boxes
Measurement Functions
+++++++++++++++++++++
.. autofunction:: cuspatial.directed_hausdorff_distance
.. autofunction:: cuspatial.haversine_distance
.. autofunction:: cuspatial.pairwise_point_distance
.. autofunction:: cuspatial.pairwise_linestring_distance
.. autofunction:: cuspatial.pairwise_point_linestring_distance
Nearest Points Function
+++++++++++++++++++++++
.. autofunction:: cuspatial.pairwise_point_linestring_nearest_points
Bounding Boxes
++++++++++++++
.. autofunction:: cuspatial.polygon_bounding_boxes
.. autofunction:: cuspatial.linestring_bounding_boxes
Projection Functions
++++++++++++++++++++
.. autofunction:: cuspatial.sinusoidal_projection
Spatial Filtering Functions
+++++++++++++++++++++++++++
.. autofunction:: cuspatial.points_in_spatial_window
| 0 |
rapidsai_public_repos/cuspatial/docs/source | rapidsai_public_repos/cuspatial/docs/source/api_docs/geopandas_compatibility.rst | GeoPandas Compatibility
-----------------------
cuSpatial supports any geometry format supported by `GeoPandas`. Load geometry information from a `GeoPandas.GeoSeries` or `GeoPandas.GeoDataFrame`.
>>> host_dataframe = geopandas.read_file(geopandas.datasets.get_path("naturalearth_lowres"))
>>> cugpdf = cuspatial.from_geopandas(host_dataframe)
or
>>> cugpdf = cuspatial.GeoDataFrame(gpdf)
.. currentmodule:: cuspatial
.. autoclass:: cuspatial.GeoDataFrame
:members:
.. autoclass:: cuspatial.GeoSeries
:members:
| 0 |
rapidsai_public_repos/cuspatial/docs/source | rapidsai_public_repos/cuspatial/docs/source/api_docs/index.rst | =============
API Reference
=============
This page provides a list of all modules, methods, and classes that are publicly accessible through the
``cuspatial.*`` namespace.
.. toctree::
:maxdepth: 2
spatial
trajectory
geopandas_compatibility
io
| 0 |
rapidsai_public_repos/cuspatial/docs/source | rapidsai_public_repos/cuspatial/docs/source/developer_guide/benchmarking.md | # Benchmarking cuSpatial
The goal of the benchmarks in this repository is to measure the performance of various cuSpatial APIs.
Benchmarks in cuSpatial are written using the
[`pytest-benchmark`](https://pytest-benchmark.readthedocs.io/en/latest/index.html) plugin to the
[`pytest`](https://docs.pytest.org/en/latest/) Python testing framework.
Using `pytest-benchmark` provides a seamless experience for developers familiar with `pytest`.
We include benchmarks of both public APIs and internal functions.
The former give us a macro view of our performance, especially vis-à-vis geopandas.
The latter help us quantify and minimize the overhead of our Python bindings.
```{note}
Our current benchmarks focus entirely on measuring run time.
However, minimizing memory footprint can be just as important for some cases.
In the future, we may update our benchmarks to also include memory usage measurements.
```
## Benchmark organization
At the top level benchmarks are divided into `internal` and `API` directories.
API benchmarks are for public features that we expect users to consume.
Internal benchmarks capture the performance of cuSpatial internals that have no stability guarantees.
Within each directory, benchmarks are organized based on the type of function.
Functions in cuSpatial generally fall into two groups:
1. Methods of classes like `GeoDataFrame` or `GeoSeries`.
2. Free functions operating on the above classes like `cuspatial.from_geopandas`.
The former should be organized into files named `bench_class.py`.
For example, benchmarks of `GeoDataFrame.sjoin` belong in `API/bench_geodataframe.py`.
Benchmarks should be written at the highest level of generality possible with respect to the class hierarchy.
For instance, all classes support the `take` method, so those benchmarks belong in `API/bench_frame_or_index.py`.
```{note}
`pytest` does not support having two benchmark files with the same name, even if they are in separate directories.
Therefore, benchmarks of internal methods of _public_ classes go in files suffixed with `_internal`.
Benchmarks of `GeoDataFrame.polygons.xy`, for instance, belong in `internal/bench_geodataframe_internal.py`.
```
Free functions have more flexibility.
Broadly speaking, they should be grouped into benchmark files containing similar functionality.
For example, I/O benchmarks can all live in `io/bench_io.py`.
For now those groupings are left to the discretion of developers.
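A minimal sketch of what a `bench_`-prefixed benchmark looks like, using a plain Python payload as a stand-in for a real cuSpatial object (the file and function names here are illustrative, not part of the actual suite):

```python
# Hypothetical contents of API/bench_example.py.
# pytest-benchmark injects the `benchmark` fixture; calling it with a
# function runs that function repeatedly and records timing statistics.

def make_payload():
    # Stand-in for constructing a GeoSeries; a real benchmark would
    # build one with cuspatial.from_geopandas or a shared fixture.
    return list(range(100_000))

def bench_sorted(benchmark):
    payload = make_payload()
    result = benchmark(sorted, payload, reverse=True)
    # Sanity-check the result so a broken benchmark fails loudly.
    assert result[0] == 99_999
```

Running `pytest` in the benchmark directory collects and times every `bench_`-prefixed function like this one.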
## Running benchmarks
By default, pytest discovers test files and functions prefixed with `test_`.
For benchmarks, we configure `pytest` to instead search using the `bench_` prefix.
After installing `pytest-benchmark`, running benchmarks is as simple as just running `pytest`.
When benchmarks are run, the default behavior is to output the results in a table to the terminal.
A common requirement is to then compare the performance of benchmarks before and after a change.
We can generate these comparisons by saving the output using the `--benchmark-autosave` option to pytest.
When using this option, after the benchmarks are run the output will contain a line:
```
Saved benchmark data in: /path/to/XXXX_*.json
```
The `XXXX` is a four-digit number identifying the benchmark.
If preferred, a user may also use the `--benchmark-save=NAME` option,
which allows more control over the resulting filename.
Given two benchmark runs `XXXX` and `YYYY`, benchmarks can then be compared using
```
pytest-benchmark compare XXXX YYYY
```
Note that the comparison uses the `pytest-benchmark` command rather than the `pytest` command.
`pytest-benchmark` has a number of additional options that can be used to customize the output.
The next line contains one useful example, but developers should experiment to find a useful output:
```
pytest-benchmark compare XXXX YYYY --sort="name" --columns=Mean --name=short --group-by=param
```
For more details, see the [`pytest-benchmark` documentation](https://pytest-benchmark.readthedocs.io/en/latest/comparing.html).
## Benchmark contents
### Writing benchmarks
Just as benchmarks should be written in terms of the highest level classes in the hierarchy,
they should also assume as little as possible about the nature of the data.
## Comparing to geopandas
As the cuSpatial API matures, we will compare its performance with matching GeoPandas functions.
## Testing benchmarks
Benchmarks need to be kept up to date with API changes in cuspatial.
The current set of benchmarks runs on a small set of test data.
Our CI testing takes advantage of this to ensure that benchmarks remain valid code.
## Profiling
Although not strictly part of our benchmarking suite, profiling is a common need so we provide some guidelines here.
Here are two easy ways (there may be others) to profile benchmarks:
1. The [`pytest-profiling`](https://github.com/man-group/pytest-plugins/tree/master/pytest-profiling) plugin.
2. The [`py-spy`](https://github.com/benfred/py-spy) package.
Using the former is as simple as adding the `--profile` (or `--profile-svg`) arguments to the `pytest` invocation.
The latter requires instead invoking pytest from py-spy, like so:
```
py-spy record -- pytest bench_foo.py
```
Each tool has different strengths and provides somewhat different information.
Developers should try both and see what works for a particular workflow.
Developers are also encouraged to share useful alternatives that they discover.
| 0 |
rapidsai_public_repos/cuspatial/docs/source | rapidsai_public_repos/cuspatial/docs/source/developer_guide/index.md | # Developer Guide
cuSpatial has two main components: the cuSpatial Python package and the `libcuspatial` C++ library,
referred to as `cuspatial` and `libcuspatial` respectively in this documentation. This page
discusses the design of `cuspatial`. For information on `libcuspatial`, see the [libcuspatial
developer guide](https://docs.rapids.ai/api/libcuspatial/stable/DEVELOPER_GUIDE.html)
and [C++ API reference](https://docs.rapids.ai/api/libcuspatial/stable/).
```{toctree}
:maxdepth: 2
development_environment
build
contributing_guide
library_design
benchmarking
```
| 0 |
rapidsai_public_repos/cuspatial/docs/source | rapidsai_public_repos/cuspatial/docs/source/developer_guide/build.md | # Build and Install cuSpatial From Source
## Pre-requisites
- gcc >= 7.5
- cmake >= 3.26.4
- miniconda
## Fetch cuSpatial repository
```shell
export CUSPATIAL_HOME=$(pwd)/cuspatial && \
git clone https://github.com/rapidsai/cuspatial.git $CUSPATIAL_HOME
```
## Install dependencies
1. `export CUSPATIAL_HOME=$(pwd)/cuspatial`
2. Clone the cuSpatial repo
3. Create the conda environment:
```shell
conda env create -n cuspatial --file conda/environments/all_cuda-118_arch-x86_64.yaml
```
## Build cuSpatial
### From the cuSpatial Dev Container:
Execute `build-cuspatial-cpp` to build `libcuspatial`. The following options may be added:
- `-DBUILD_TESTS=ON`: build `libcuspatial` unit tests.
- `-DBUILD_BENCHMARKS=ON`: build `libcuspatial` benchmarks.
- `-DCMAKE_BUILD_TYPE=Debug`: Create a Debug build of `libcuspatial` (default is Release).
In addition, execute `build-cuspatial-python` to build the `cuspatial` Cython components.
### From Bare Metal:
Compile libcuspatial (C++), cuspatial (cython) and C++ tests:
```shell
cd $CUSPATIAL_HOME && \
chmod +x ./build.sh && \
./build.sh libcuspatial cuspatial tests
```
The following options are also commonly used:
- `benchmarks`: build libcuspatial benchmarks
- `clean`: remove all existing build artifacts and configuration
Execute `./build.sh -h` for a full list of available options.
## Validate Installation with C++ and Python Tests
```{note}
To manage differences between branches and build types, the build directories are located at
`$CUSPATIAL_HOME/cpp/build/[release|debug]` depending on build type, and `$CUSPATIAL_HOME/cpp/build/latest`
is a symbolic link to the most recent build directory. On bare metal builds, remove the extra `latest` level in
the path below.
```
- C++ tests are located within the `$CUSPATIAL_HOME/cpp/build/latest/gtests` directory.
- Python tests are located within the `$CUSPATIAL_HOME/python/cuspatial/cuspatial/tests` directory.
Execute C++ tests:
```shell
ninja -C $CUSPATIAL_HOME/cpp/build/latest test
```
Execute Python tests:
```shell
pytest $CUSPATIAL_HOME/python/cuspatial/cuspatial/tests/
```
| 0 |
rapidsai_public_repos/cuspatial/docs/source | rapidsai_public_repos/cuspatial/docs/source/developer_guide/library_design.md | # cuSpatial Library Design
## Overview
At a high level, `cuspatial` has three parts:
- A GPU backed `GeoDataFrame` data structure
- A set of computation APIs
- A Cython API layer
## Core Data Structures
```{note}
Note: the core data structure of cuSpatial shares the same name as that of `geopandas`, so we refer
to geopandas' dataframe object as `geopandas.GeoDataFrame` and to cuspatial's dataframe object as
`GeoDataFrame`.
```
### Introduction to GeoArrow Format
Under the hood, cuspatial can perform parallel computation on geometry
data thanks to its
[structure of arrays](https://en.wikipedia.org/wiki/Parallel_array) (SoA)
format. Specifically, cuspatial adopts GeoArrow format, which is an extension
to Apache Arrow format that uses Arrow's
[`Variable-size List Layout`](https://arrow.apache.org/docs/format/Columnar.html#variable-size-list-layout)
to support geometry arrays.
By definition, each increase in geometry complexity (dimension, or multi-geometry) requires an extra
level of indirection. In cuSpatial, we use the following names for the levels of indirection, from
highest level to lowest: `geometries`, `parts`, `rings` and `coordinates`. The
first three are integral offset arrays and the last is a floating-point
interleaved xy-coordinate array.
GeoArrow also allows a mixture
of geometry types to be present in the same column by adopting the
[Dense Union Array Layout](https://arrow.apache.org/docs/format/Columnar.html#dense-union).
Read the [geoarrow format specification](https://github.com/geopandas/geo-arrow-spec/blob/main/format.md)
for more detail.
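
As a concrete illustration of these indirection levels, the following minimal Python sketch (invented example data, no cuSpatial dependency) decodes a multilinestring array stored as two integral offset arrays over an interleaved xy buffer:

```python
# Invented example data (not cuSpatial's API): two multilinestrings encoded
# GeoArrow-style. The first has two linestrings, the second has one.
geometry_offsets = [0, 2, 3]   # geometry i spans parts [g[i], g[i+1])
part_offsets = [0, 2, 4, 7]    # part p spans vertices [po[p], po[p+1])
xy = [0.0, 0.0, 1.0, 1.0,      # part 0
      2.0, 2.0, 3.0, 3.0,      # part 1
      4.0, 4.0, 5.0, 5.0, 6.0, 6.0]  # part 2

def parts_of(geom_idx):
    """Decode geometry geom_idx into a list of linestrings of (x, y) tuples."""
    start, end = geometry_offsets[geom_idx], geometry_offsets[geom_idx + 1]
    return [
        [(xy[2 * i], xy[2 * i + 1])
         for i in range(part_offsets[p], part_offsets[p + 1])]
        for p in range(start, end)
    ]
```

Here `parts_of(0)` yields two linestrings of two vertices each, and `parts_of(1)` yields a single three-vertex linestring.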
### GeoColumn
cuSpatial implements a specialization of Arrow dense union via `GeoColumn` and
`GeoMeta`. A `GeoColumn` is a composition of child columns and a
`GeoMeta` object. The `GeoMeta` owns two arrays that are similar to the
types buffer and offsets buffer from Arrow dense union.
```{note}
Currently, `GeoColumn` implements four concrete array types: `points`,
`multipoints`, `multilinestrings` and `multipolygons`. Linestrings and
multilinestrings are stored uniformly as multilinestrings in the
`multilinestrings` array. Polygons and multipolygons are
stored uniformly as multipolygons in the `multipolygons` array.
Points and multipoints are stored separately in different arrays, because
storing points in a multipoints array requires 50% more storage overhead.
While this may also be true for linestrings and polygons, many uses of
cuSpatial involve more complex linestrings and polygons, where the
storage overhead of multigeometry indirection is lower compared to points.
```
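
The types/offsets bookkeeping described above can be sketched in plain Python. This is a hedged illustration of the dense-union idea only; the names and child arrays are invented, not cuSpatial's actual internals:

```python
# All names and data here are invented to illustrate dense-union dispatch.
POINT, MULTILINE = 0, 1

children = {
    POINT: [(0.0, 0.0), (9.0, 9.0)],  # child array holding point rows
    MULTILINE: ["multiline-0"],       # stand-in child array for multilines
}

types = [POINT, MULTILINE, POINT]     # "types buffer": one type code per row
offsets = [0, 0, 1]                   # "offsets buffer": index into that child

def row(i):
    """Resolve logical row i of the union column to its child element."""
    return children[types[i]][offsets[i]]
```

Logical row order is preserved even though rows of different types live in different child arrays, which is exactly what the two meta buffers buy.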
`GeoSeries` and `GeoDataFrame` inherit from `cudf.Series` and
`cudf.DataFrame` respectively. `Series` and `DataFrame` are both generic
`Frame` objects which represent a collection of generic columns. cuSpatial
extends these cuDF objects by allowing `GeoColumn`s to be present in the
frame.
`GeoSeries` and `GeoDataFrame` are convertible to and from `geopandas`.
Interoperability between cuspatial, `geopandas` and other data formats is
maintained in the `cuspatial.io` package.
### UnionArray Compliance
As previously mentioned, cuspatial's `GeoColumn` is a specialization of
Arrow's dense `UnionArray`. Ideally, this fundamental data type would be
implemented in cuDF so that `GeoColumn` could simply inherit its
functionality. However, dense `UnionArray` is distinct from the existing data types
in libcudf and would require substantial effort to implement. In the interim,
cuSpatial provides a `GeoColumn` that complies with the dense `UnionArray`
specification. This may be upstreamed to libcudf as it matures.
## Geospatial computation APIs
In addition to data structures, cuSpatial provides a set of computation APIs.
The computation APIs are organized into several modules. All spatial
computation modules are further grouped into a `spatial` subpackage.
Module names should correspond to a specific computation category,
such as `distance` or `join`. cuSpatial avoids using general category names,
such as `generic`.
### Legacy and Modern APIs
For historical reasons, older cuSpatial APIs accept raw geometry coordinate
arrays and offsets as input. Newer Python
APIs should accept a `GeoSeries` or `GeoDataFrame` as input. Developers
may extract geometry offsets and coordinates via cuSpatial's geometry
accessors such as `GeoSeries.points`, `GeoSeries.multipoints`,
`GeoSeries.lines`, and `GeoSeries.polygons`. Developers can then pass the geometry
offsets and coordinate arrays to the Cython APIs.
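
The accessor-then-flat-arrays flow can be sketched as follows. `Lines` and `segment_counts` are invented stand-ins for illustration, not real cuSpatial classes or functions:

```python
# `Lines` stands in for the flat data behind a GeoSeries.lines accessor;
# `segment_counts` stands in for a low-level routine taking extracted arrays.
class Lines:
    def __init__(self, part_offset, xy):
        self.part_offset = part_offset  # vertex offsets per linestring
        self.xy = xy                    # interleaved x/y coordinates

def segment_counts(lines):
    """A 'low-level' routine that needs only the extracted offset array."""
    po = lines.part_offset
    return [po[i + 1] - po[i] - 1 for i in range(len(po) - 1)]

# Two linestrings: 2 vertices (1 segment) and 3 vertices (2 segments).
lines = Lines(part_offset=[0, 2, 5], xy=[0, 0, 1, 1, 2, 2, 3, 3, 4, 4])
```

The modern API accepts the high-level object; the accessor boundary is where the flat arrays are pulled out and handed down.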
## Cython Layer
The lowest layer of cuspatial is its interaction with `libcuspatial` via Cython.
The Cython layer is composed of two components: C++ bindings and
Cython wrappers. The first component consists of
[`.pxd` files](https://cython.readthedocs.io/en/latest/src/tutorial/pxd_files.html),
which are Cython declaration files that expose the contents of C++ header
files to other Cython files. The second component consists of Cython
wrappers for this functionality. These wrappers are necessary to expose
this functionality to pure Python code.
To interact with the column-based APIs in `libcuspatial`, developers should
have basic familiarity with `libcudf` objects. `libcudf` is built around two
principal objects whose names are largely self-explanatory: `column` and
`table`. `libcudf` also defines corresponding non-owning "view" types
`column_view` and `table_view`. Both `libcudf` and `libcuspatial` APIs
typically accept views and return owning types. When a `cuspatial` object
owns one or more C++ owning objects, the lifetime of these objects is
automatically managed by Python's reference counting mechanism.
Similar to cuDF, Cython wrappers must convert `Column` objects into
`column_view` objects, call the `libcuspatial` API, and reconstruct a cuDF
object from the C++ result. By the time code reaches this stage, the
objects are assumed to be fully legal inputs to the `libcuspatial` API.
Therefore the wrapper should not contain additional components besides
the above.
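
The view-in, owning-result-out wrapper shape can be mocked in pure Python. All classes and functions below are invented for illustration; real wrappers deal with `column_view`/`column` objects via Cython:

```python
# A pure-Python mock of the wrapper shape: build a non-owning "view" of an
# owning object, call an API on the view, and rewrap the owning result.
class Column:                          # owning object, like a cudf Column
    def __init__(self, data):
        self.data = list(data)
    def view(self):
        return ColumnView(self)

class ColumnView:                      # non-owning view, like column_view
    def __init__(self, owner):
        self._owner = owner
    @property
    def data(self):
        return self._owner.data

def doubled(view):
    # Stand-in for a libcuspatial API: accepts a view, returns an owning result.
    return Column(x * 2 for x in view.data)

def wrapper(col):
    """The wrapper shape: convert to a view, call the API, return the result."""
    return doubled(col.view())
```

The wrapper does no validation of its own, mirroring the rule above that inputs are assumed fully legal by this stage.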
| 0 |
rapidsai_public_repos/cuspatial/docs/source | rapidsai_public_repos/cuspatial/docs/source/developer_guide/development_environment.md | # Creating a Development Environment
cuSpatial recommends using [Dev Containers](https://containers.dev/) to set up the development environment.
To set up Dev Containers for cuSpatial, please refer to the [documentation](https://github.com/rapidsai/cuspatial/tree/main/.devcontainer).
## From Bare Metal
RAPIDS keeps a single source of truth for library dependencies in `dependencies.yaml`. This file divides
the dependencies into several dimensions: building, testing, documentation, notebooks, etc. As a developer,
you generally want to generate an environment recipe that includes everything that the library *may* use.
To do so, install the rapids-dependency-file-generator via pip:
```shell
pip install rapids-dependency-file-generator
```
And run under the repo root:
```shell
rapids-dependency-file-generator --clean
```
The environment recipe is generated within the `conda/environments` directory. To continue the next step of building,
see the [build page](https://docs.rapids.ai/api/cuspatial/stable/developer_guide/build.html).
For more information about how RAPIDS manages dependencies, see [README of rapids-dependency-file-generator repo](https://github.com/rapidsai/dependency-file-generator).
| 0 |
rapidsai_public_repos/cuspatial/docs/source | rapidsai_public_repos/cuspatial/docs/source/developer_guide/contributing_guide.md | # How to Contribute to cuSpatial
`cuSpatial` is a part of the RAPIDS community. When contributing to cuSpatial, developers should
follow the RAPIDS contribution guidelines. The RAPIDS documentation
[contributing section](https://docs.rapids.ai/contributing) walks through the process of identifying
an issue, submitting and merging a PR.
## Directory structure and file naming
The `cuspatial` package comprises several subpackages.
- `core` contains the main components of cuspatial
- `io` contains I/O functions for reading and writing external data objects
- `tests` contains unit tests for cuspatial
- `utils` contains utility functions
- `_lib` contains Cython APIs that wrap the C++ `libcuspatial` backend.
[`library_design`](library_design.md) further discusses high-level library design of `cuspatial`.
### Cython code
The `_lib` folder contains all cython code. Each feature in `libcuspatial` exposed to
`cuspatial` should have two Cython files:
1. A `pxd` file declaring C++ APIs so that they may be used in Cython, and
2. A `pyx` file containing Cython functions that wrap those C++ APIs so that they can be called from Python.
`pyx` files are organized under the root of `_lib`. `pxd` files are under `_lib/cpp`.
`pxd` files should mirror the file hierarchy of `cpp/include` in `libcuspatial`.
For more information see [the Cython layer design documentation](./library_design.md#cython-layer).
## Code style
cuSpatial employs a number of linters to ensure consistent style across the code base, and manages
them using [`pre-commit`](https://pre-commit.com/). Developers are strongly encouraged to set up
`pre-commit` prior to any development. The `.pre-commit-config.yaml` file at the root of the repo is
the primary source of truth for linting.
To install pre-commit, install via conda/pip:
```bash
# conda
conda install -c conda-forge pre-commit
```
```bash
# pip
pip install pre-commit
```
Then run pre-commit hooks before committing code:
```bash
pre-commit run
```
Optionally, you may set up the pre-commit hooks to run automatically when you make a git commit. This can be done by running the following command in cuspatial repository:
```bash
pre-commit install
```
Now code linters and formatters will be run each time you commit changes.
You can skip these checks with `git commit --no-verify` or with the short version `git commit -n`.
### Linter Details
Specifically, cuSpatial uses the following tools:
- [`flake8`](https://github.com/pycqa/flake8) checks for general code formatting compliance.
- [`black`](https://github.com/psf/black) is an automatic code formatter.
- [`isort`](https://pycqa.github.io/isort/) ensures imports are sorted consistently.
Linter config data is stored in a number of files. cuSpatial generally uses `pyproject.toml` over
`setup.cfg` and avoids project-specific files (e.g. `setup.cfg` > `python/cudf/setup.cfg`). However,
differences between tools and the different packages in the repo result in the following caveats:
- `flake8` has no plans to support `pyproject.toml`, so it must live in `setup.cfg`.
- `isort` must be configured per project to set which project is the "first party" project.
Additionally, cuSpatial's use of `versioneer` means that each project must have a `setup.cfg`.
As a result, cuSpatial currently maintains both root and project-level `pyproject.toml` and
`setup.cfg` files.
## Writing tests
Every new feature contributed to cuspatial should include unit tests. The unit test file should be
added to the `tests` folder. In general, the `tests` folder mirrors the folder hierarchy of the
`cuspatial` package. At the lowest level, each module expands into a folder that contains specific
test files for features in the module.
cuSpatial uses [`pytest`](https://docs.pytest.org/) as the unit testing framework. `conftest.py`
contains useful fixtures that can be shared across different test functions. Reusing these fixtures
reduces redundancy in test code.
cuSpatial compute APIs should strive to reach result parity with their host (CPU) equivalents. For
`GeoSeries` and `GeoDataFrame` features, unit tests should compare results with
corresponding `geopandas` functions.
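
A parity test in this spirit can be sketched as follows. `gpu_distance` is a stand-in for a cuspatial API, not a real import; the point is the test shape, not the computation:

```python
import math

def cpu_distance(p, q):                # host reference, e.g. via geopandas
    return math.hypot(p[0] - q[0], p[1] - q[1])

def gpu_distance(p, q):                # stand-in for the cuspatial result
    return math.sqrt((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2)

def test_distance_parity():
    # Compare device results against the host reference pointwise.
    pairs = [((0.0, 0.0), (3.0, 4.0)), ((1.0, 1.0), (1.0, 2.5))]
    for p, q in pairs:
        assert math.isclose(gpu_distance(p, q), cpu_distance(p, q))
```

Floating-point comparisons should use tolerant checks (`math.isclose`, `assert_allclose`) rather than exact equality.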
| 0 |
rapidsai_public_repos/cuspatial | rapidsai_public_repos/cuspatial/ci/test_wheel_cuproj.sh | #!/bin/bash
# Copyright (c) 2023, NVIDIA CORPORATION.
set -eou pipefail
mkdir -p ./dist
RAPIDS_PY_CUDA_SUFFIX="$(rapids-wheel-ctk-name-gen ${RAPIDS_CUDA_VERSION})"
RAPIDS_PY_WHEEL_NAME="cuproj_${RAPIDS_PY_CUDA_SUFFIX}" rapids-download-wheels-from-s3 ./dist
# Install additional dependencies
apt update
DEBIAN_FRONTEND=noninteractive apt install -y --no-install-recommends libgdal-dev
python -m pip install --no-binary fiona 'fiona>=1.8.19,<1.9'
# Download the cuspatial built in the previous step
RAPIDS_PY_WHEEL_NAME="cuspatial_${RAPIDS_PY_CUDA_SUFFIX}" rapids-download-wheels-from-s3 ./local-cuspatial-dep
python -m pip install --no-deps ./local-cuspatial-dep/cuspatial*.whl
# echo to expand wildcard before adding `[extra]` requires for pip
python -m pip install $(echo ./dist/cuproj*.whl)[test]
if [[ "$(arch)" == "aarch64" && ${RAPIDS_BUILD_TYPE} == "pull-request" ]]; then
python ./ci/wheel_smoke_test_cuproj.py
else
python -m pytest -n 8 ./python/cuproj/cuproj/tests
fi
| 0 |
rapidsai_public_repos/cuspatial | rapidsai_public_repos/cuspatial/ci/wheel_smoke_test_cuproj.py | # Copyright (c) 2023, NVIDIA CORPORATION.
from cuproj import Transformer as cuTransformer
from cupy.testing import assert_allclose
if __name__ == '__main__':
# Sydney opera house latitude and longitude
lat = -33.8587
lon = 151.2140
# Transform to UTM using cuproj
cu_transformer = cuTransformer.from_crs("epsg:4326", "EPSG:32756")
cuproj_x, cuproj_y = cu_transformer.transform(lat, lon)
assert_allclose(cuproj_x, 334783.9544807102)
assert_allclose(cuproj_y, 6252075.961741454)
| 0 |
rapidsai_public_repos/cuspatial | rapidsai_public_repos/cuspatial/ci/test_python.sh | #!/bin/bash
# Copyright (c) 2022-2023, NVIDIA CORPORATION.
set -euo pipefail
. /opt/conda/etc/profile.d/conda.sh
rapids-logger "Generate Python testing dependencies"
rapids-dependency-file-generator \
--output conda \
--file_key test_python \
--matrix "cuda=${RAPIDS_CUDA_VERSION%.*};arch=$(arch);py=${RAPIDS_PY_VERSION}" | tee env.yaml
rapids-mamba-retry env create --force -f env.yaml -n test
# Temporarily allow unbound variables for conda activation.
set +u
conda activate test
set -u
rapids-logger "Downloading artifacts from previous jobs"
CPP_CHANNEL=$(rapids-download-conda-from-s3 cpp)
PYTHON_CHANNEL=$(rapids-download-conda-from-s3 python)
RAPIDS_TESTS_DIR=${RAPIDS_TESTS_DIR:-"${PWD}/test-results"}
RAPIDS_COVERAGE_DIR=${RAPIDS_COVERAGE_DIR:-"${PWD}/coverage-results"}
mkdir -p "${RAPIDS_TESTS_DIR}" "${RAPIDS_COVERAGE_DIR}"
# CUSPATIAL_HOME is used to find test files
export CUSPATIAL_HOME="${PWD}"
rapids-print-env
rapids-mamba-retry install \
--channel "${CPP_CHANNEL}" \
--channel "${PYTHON_CHANNEL}" \
libcuspatial cuspatial cuproj
rapids-logger "Check GPU usage"
nvidia-smi
EXITCODE=0
trap "EXITCODE=1" ERR
set +e
rapids-logger "pytest cuspatial"
pushd python/cuspatial/cuspatial
# It is essential to cd into python/cuspatial/cuspatial as `pytest-xdist` + `coverage` seem to work only at this directory level.
pytest \
--cache-clear \
--junitxml="${RAPIDS_TESTS_DIR}/junit-cuspatial.xml" \
--numprocesses=8 \
--dist=loadscope \
--cov-config=../.coveragerc \
--cov=cuspatial \
--cov-report=xml:"${RAPIDS_COVERAGE_DIR}/cuspatial-coverage.xml" \
--cov-report=term \
tests
popd
rapids-logger "pytest cuproj"
pushd python/cuproj/cuproj
# It is essential to cd into python/cuproj/cuproj as `pytest-xdist` + `coverage` seem to work only at this directory level.
pytest \
--cache-clear \
--junitxml="${RAPIDS_TESTS_DIR}/junit-cuproj.xml" \
--numprocesses=8 \
--dist=loadscope \
--cov-config=../.coveragerc \
--cov=cuproj \
--cov-report=xml:"${RAPIDS_COVERAGE_DIR}/cuproj-coverage.xml" \
--cov-report=term \
tests
popd
rapids-logger "Test script exiting with value: $EXITCODE"
exit ${EXITCODE}
| 0 |
rapidsai_public_repos/cuspatial | rapidsai_public_repos/cuspatial/ci/test_cpp.sh | #!/bin/bash
# Copyright (c) 2022-2023, NVIDIA CORPORATION.
set -euo pipefail
. /opt/conda/etc/profile.d/conda.sh
rapids-logger "Generate C++ testing dependencies"
rapids-dependency-file-generator \
--output conda \
--file_key test_cpp \
--matrix "cuda=${RAPIDS_CUDA_VERSION%.*};arch=$(arch)" | tee env.yaml
rapids-mamba-retry env create --force -f env.yaml -n test
# Temporarily allow unbound variables for conda activation.
set +u
conda activate test
set -u
CPP_CHANNEL=$(rapids-download-conda-from-s3 cpp)
RAPIDS_TESTS_DIR=${RAPIDS_TESTS_DIR:-"${PWD}/test-results"}/
mkdir -p "${RAPIDS_TESTS_DIR}"
# CUSPATIAL_HOME is used to find test files
export CUSPATIAL_HOME="${PWD}"
rapids-print-env
rapids-mamba-retry install \
--channel "${CPP_CHANNEL}" \
libcuspatial libcuspatial-tests
rapids-logger "Check GPU usage"
nvidia-smi
EXITCODE=0
trap "EXITCODE=1" ERR
set +e
# Run libcuspatial gtests from libcuspatial-tests package
rapids-logger "Run gtests"
for gt in "$CONDA_PREFIX"/bin/gtests/libcuspatial/* ; do
test_name=$(basename ${gt})
echo "Running gtest $test_name"
${gt} --gtest_output=xml:${RAPIDS_TESTS_DIR}
done
rapids-logger "Test script exiting with value: $EXITCODE"
exit ${EXITCODE}
| 0 |
rapidsai_public_repos/cuspatial | rapidsai_public_repos/cuspatial/ci/build_python.sh | #!/bin/bash
# Copyright (c) 2022, NVIDIA CORPORATION.
set -euo pipefail
source rapids-env-update
export CMAKE_GENERATOR=Ninja
rapids-print-env
package_dir="python"
version=$(rapids-generate-version)
commit=$(git rev-parse HEAD)
echo "${version}" > VERSION
for package_name in cuspatial cuproj; do
sed -i "/^__git_commit__/ s/= .*/= \"${commit}\"/g" "${package_dir}/${package_name}/${package_name}/_version.py"
done
CPP_CHANNEL=$(rapids-download-conda-from-s3 cpp)
rapids-logger "Begin py build cuSpatial"
# TODO: Remove `--no-test` flag once importing on a CPU
# node works correctly
RAPIDS_PACKAGE_VERSION=${version} rapids-conda-retry mambabuild \
--no-test \
--channel "${CPP_CHANNEL}" \
conda/recipes/cuspatial
rapids-logger "Begin py build cuProj"
# TODO: Remove `--no-test` flag once importing on a CPU
# node works correctly
RAPIDS_PACKAGE_VERSION=${version} rapids-conda-retry mambabuild \
--no-test \
--channel "${CPP_CHANNEL}" \
conda/recipes/cuproj
rapids-upload-conda-to-s3 python
| 0 |
rapidsai_public_repos/cuspatial | rapidsai_public_repos/cuspatial/ci/wheel_smoke_test_cuspatial.py | # Copyright (c) 2023, NVIDIA CORPORATION.
import numpy as np
import cudf
import cuspatial
import pyarrow as pa
from shapely.geometry import Point
if __name__ == '__main__':
order, quadtree = cuspatial.quadtree_on_points(
cuspatial.GeoSeries([Point(0.5, 0.5), Point(1.5, 1.5)]),
*(0, 2, 0, 2), # bbox
1, # scale
1, # max_depth
1, # min_size
)
cudf.testing.assert_frame_equal(
quadtree,
cudf.DataFrame(
{
"key": cudf.Series(pa.array([0, 3], type=pa.uint32())),
"level": cudf.Series(pa.array([0, 0], type=pa.uint8())),
"is_internal_node": cudf.Series(pa.array([False, False], type=pa.bool_())),
"length": cudf.Series(pa.array([1, 1], type=pa.uint32())),
"offset": cudf.Series(pa.array([0, 1], type=pa.uint32())),
}
),
)
| 0 |
rapidsai_public_repos/cuspatial | rapidsai_public_repos/cuspatial/ci/build_wheel.sh | #!/bin/bash
# Copyright (c) 2023, NVIDIA CORPORATION.
set -euo pipefail
package_name=$1
package_dir=$2
source rapids-configure-sccache
source rapids-date-string
version=$(rapids-generate-version)
commit=$(git rev-parse HEAD)
RAPIDS_PY_CUDA_SUFFIX="$(rapids-wheel-ctk-name-gen ${RAPIDS_CUDA_VERSION})"
# This is the version of the suffix with a preceding hyphen. It's used
# everywhere except in the final wheel name.
PACKAGE_CUDA_SUFFIX="-${RAPIDS_PY_CUDA_SUFFIX}"
# Patch project metadata files to include the CUDA version suffix and version override.
pyproject_file="${package_dir}/pyproject.toml"
sed -i "s/name = \"${package_name}\"/name = \"${package_name}${PACKAGE_CUDA_SUFFIX}\"/g" ${pyproject_file}
echo "${version}" > VERSION
sed -i "/^__git_commit__/ s/= .*/= \"${commit}\"/g" "${package_dir}/${package_name}/_version.py"
# For nightlies we want to ensure that we're pulling in alphas as well. The
# easiest way to do so is to augment the spec with a constraint containing a
# min alpha version that doesn't affect the version bounds but does allow usage
# of alpha versions for that dependency without --pre
alpha_spec=''
if ! rapids-is-release-build; then
alpha_spec=',>=0.0.0a0'
fi
# Add CUDA version suffix to dependencies
sed -r -i "s/rmm(.*)\"/rmm${PACKAGE_CUDA_SUFFIX}\1${alpha_spec}\"/g" ${pyproject_file}
if [[ ${package_name} == "cuspatial" ]]; then
sed -r -i "s/cudf==(.*)\"/cudf${PACKAGE_CUDA_SUFFIX}==\1${alpha_spec}\"/g" ${pyproject_file}
fi
if [[ ${package_name} == "cuproj" ]]; then
sed -r -i "s/cuspatial==(.*)\"/cuspatial${PACKAGE_CUDA_SUFFIX}==\1${alpha_spec}\"/g" ${pyproject_file}
fi
if [[ $PACKAGE_CUDA_SUFFIX == "-cu12" ]]; then
sed -i "s/cupy-cuda11x/cupy-cuda12x/g" ${pyproject_file}
fi
cd "${package_dir}"
python -m pip wheel . -w dist -vvv --no-deps --disable-pip-version-check
mkdir -p final_dist
python -m auditwheel repair -w final_dist dist/*
RAPIDS_PY_WHEEL_NAME="${package_name}_${RAPIDS_PY_CUDA_SUFFIX}" rapids-upload-wheels-to-s3 final_dist
| 0 |
rapidsai_public_repos/cuspatial | rapidsai_public_repos/cuspatial/ci/build_wheel_cuproj.sh | #!/bin/bash
# Copyright (c) 2023, NVIDIA CORPORATION.
set -euo pipefail
export SKBUILD_CONFIGURE_OPTIONS="-DCUPROJ_BUILD_WHEELS=ON"
ci/build_wheel.sh cuproj python/cuproj
| 0 |
rapidsai_public_repos/cuspatial | rapidsai_public_repos/cuspatial/ci/check_style.sh | #!/bin/bash
# Copyright (c) 2020-2022, NVIDIA CORPORATION.
set -euo pipefail
rapids-logger "Create checks conda environment"
. /opt/conda/etc/profile.d/conda.sh
rapids-dependency-file-generator \
--output conda \
--file_key checks \
--matrix "cuda=${RAPIDS_CUDA_VERSION%.*};arch=$(arch);py=${RAPIDS_PY_VERSION}" | tee env.yaml
rapids-mamba-retry env create --force -f env.yaml -n checks
conda activate checks
FORMAT_FILE_URL=https://raw.githubusercontent.com/rapidsai/rapids-cmake/branch-23.02/cmake-format-rapids-cmake.json
export RAPIDS_CMAKE_FORMAT_FILE=/tmp/rapids_cmake_ci/cmake-formats-rapids-cmake.json
mkdir -p $(dirname ${RAPIDS_CMAKE_FORMAT_FILE})
wget -O ${RAPIDS_CMAKE_FORMAT_FILE} ${FORMAT_FILE_URL}
# Run pre-commit checks
pre-commit run --hook-stage manual --all-files --show-diff-on-failure
| 0 |
rapidsai_public_repos/cuspatial | rapidsai_public_repos/cuspatial/ci/build_cpp.sh | #!/bin/bash
# Copyright (c) 2022, NVIDIA CORPORATION.
set -euo pipefail
source rapids-env-update
export CMAKE_GENERATOR=Ninja
rapids-print-env
version=$(rapids-generate-version)
rapids-logger "Begin cpp build"
RAPIDS_PACKAGE_VERSION=${version} rapids-conda-retry mambabuild \
conda/recipes/libcuspatial
rapids-upload-conda-to-s3 cpp
| 0 |
rapidsai_public_repos/cuspatial | rapidsai_public_repos/cuspatial/ci/test_notebooks.sh | #!/bin/bash
# Copyright (c) 2020-2023, NVIDIA CORPORATION.
set -euo pipefail
. /opt/conda/etc/profile.d/conda.sh
rapids-logger "Generate notebook testing dependencies"
rapids-dependency-file-generator \
--output conda \
--file_key test_notebooks \
--matrix "cuda=${RAPIDS_CUDA_VERSION%.*};arch=$(arch);py=${RAPIDS_PY_VERSION}" | tee env.yaml
rapids-mamba-retry env create --force -f env.yaml -n test
# Temporarily allow unbound variables for conda activation.
set +u
conda activate test
set -u
rapids-print-env
rapids-logger "Downloading artifacts from previous jobs"
CPP_CHANNEL=$(rapids-download-conda-from-s3 cpp)
PYTHON_CHANNEL=$(rapids-download-conda-from-s3 python)
rapids-mamba-retry install \
--channel "${CPP_CHANNEL}" \
--channel "${PYTHON_CHANNEL}" \
cuspatial libcuspatial cuproj
NBTEST="$(realpath "$(dirname "$0")/utils/nbtest.sh")"
pushd notebooks
# Add notebooks that should be skipped here
# (space-separated list of filenames without paths)
SKIPNBS="binary_predicates.ipynb cuproj_benchmark.ipynb"
EXITCODE=0
trap "EXITCODE=1" ERR
set +e
for nb in $(find . -name "*.ipynb"); do
nbBasename=$(basename ${nb})
if (echo " ${SKIPNBS} " | grep -q " ${nbBasename} "); then
echo "--------------------------------------------------------------------------------"
echo "SKIPPING: ${nb} (listed in skip list)"
echo "--------------------------------------------------------------------------------"
else
nvidia-smi
${NBTEST} ${nbBasename}
fi
done
rapids-logger "Notebook test script exiting with value: $EXITCODE"
exit ${EXITCODE}
| 0 |
rapidsai_public_repos/cuspatial | rapidsai_public_repos/cuspatial/ci/build_docs.sh | #!/bin/bash
# Copyright (c) 2023, NVIDIA CORPORATION.
set -euo pipefail
rapids-logger "Create test conda environment"
. /opt/conda/etc/profile.d/conda.sh
rapids-dependency-file-generator \
--output conda \
--file_key docs \
--matrix "cuda=${RAPIDS_CUDA_VERSION%.*};arch=$(arch);py=${RAPIDS_PY_VERSION}" | tee env.yaml
rapids-mamba-retry env create --force -f env.yaml -n docs
conda activate docs
rapids-print-env
rapids-logger "Downloading artifacts from previous jobs"
CPP_CHANNEL=$(rapids-download-conda-from-s3 cpp)
PYTHON_CHANNEL=$(rapids-download-conda-from-s3 python)
rapids-mamba-retry install \
--channel "${CPP_CHANNEL}" \
--channel "${PYTHON_CHANNEL}" \
libcuspatial \
cuspatial \
cuproj
export RAPIDS_VERSION_NUMBER="23.12"
export RAPIDS_DOCS_DIR="$(mktemp -d)"
rapids-logger "Build cuSpatial CPP docs"
pushd cpp/doxygen
doxygen Doxyfile
mkdir -p "${RAPIDS_DOCS_DIR}/libcuspatial/html"
mv html/* "${RAPIDS_DOCS_DIR}/libcuspatial/html"
popd
rapids-logger "Build cuProj CPP docs"
pushd cpp/cuproj/doxygen
doxygen Doxyfile
mkdir -p "${RAPIDS_DOCS_DIR}/libcuproj/html"
mv html/* "${RAPIDS_DOCS_DIR}/libcuproj/html"
popd
rapids-logger "Build cuSpatial Python docs"
pushd docs
sphinx-build -b dirhtml source _html -W
sphinx-build -b text source _text -W
mkdir -p "${RAPIDS_DOCS_DIR}/cuspatial/"{html,txt}
mv _html/* "${RAPIDS_DOCS_DIR}/cuspatial/html"
mv _text/* "${RAPIDS_DOCS_DIR}/cuspatial/txt"
popd
rapids-logger "Build cuProj Python docs"
pushd docs/cuproj
sphinx-build -b dirhtml source _html -W
sphinx-build -b text source _text -W
mkdir -p "${RAPIDS_DOCS_DIR}/cuproj/"{html,txt}
mv _html/* "${RAPIDS_DOCS_DIR}/cuproj/html"
mv _text/* "${RAPIDS_DOCS_DIR}/cuproj/txt"
popd
rapids-upload-docs
| 0 |
rapidsai_public_repos/cuspatial | rapidsai_public_repos/cuspatial/ci/build_wheel_cuspatial.sh | #!/bin/bash
# Copyright (c) 2023, NVIDIA CORPORATION.
set -euo pipefail
export SKBUILD_CONFIGURE_OPTIONS="-DCUSPATIAL_BUILD_WHEELS=ON"
ci/build_wheel.sh cuspatial python/cuspatial
| 0 |
rapidsai_public_repos/cuspatial | rapidsai_public_repos/cuspatial/ci/test_wheel_cuspatial.sh | #!/bin/bash
# Copyright (c) 2023, NVIDIA CORPORATION.
set -eou pipefail
mkdir -p ./dist
RAPIDS_PY_CUDA_SUFFIX="$(rapids-wheel-ctk-name-gen ${RAPIDS_CUDA_VERSION})"
RAPIDS_PY_WHEEL_NAME="cuspatial_${RAPIDS_PY_CUDA_SUFFIX}" rapids-download-wheels-from-s3 ./dist
# Install additional dependencies
apt update
DEBIAN_FRONTEND=noninteractive apt install -y --no-install-recommends libgdal-dev
python -m pip install --no-binary fiona 'fiona>=1.8.19,<1.9'
# echo to expand wildcard before adding `[extra]` requires for pip
python -m pip install $(echo ./dist/cuspatial*.whl)[test]
if [[ "$(arch)" == "aarch64" && ${RAPIDS_BUILD_TYPE} == "pull-request" ]]; then
python ./ci/wheel_smoke_test_cuspatial.py
else
python -m pytest -n 8 ./python/cuspatial/cuspatial/tests
fi
| 0 |
rapidsai_public_repos/cuspatial/ci | rapidsai_public_repos/cuspatial/ci/release/update-version.sh | #!/bin/bash
#############################
# cuSpatial Version Updater #
#############################
## Usage
# bash update-version.sh <new_version>
# Format is YY.MM.PP - no leading 'v' or trailing 'a'
NEXT_FULL_TAG=$1
# Get current version
CURRENT_TAG=$(git tag --merged HEAD | grep -xE '^v.*' | sort --version-sort | tail -n 1 | tr -d 'v')
CURRENT_MAJOR=$(echo $CURRENT_TAG | awk '{split($0, a, "."); print a[1]}')
CURRENT_MINOR=$(echo $CURRENT_TAG | awk '{split($0, a, "."); print a[2]}')
CURRENT_PATCH=$(echo $CURRENT_TAG | awk '{split($0, a, "."); print a[3]}')
CURRENT_SHORT_TAG=${CURRENT_MAJOR}.${CURRENT_MINOR}
#Get <major>.<minor> for next version
NEXT_MAJOR=$(echo $NEXT_FULL_TAG | awk '{split($0, a, "."); print a[1]}')
NEXT_MINOR=$(echo $NEXT_FULL_TAG | awk '{split($0, a, "."); print a[2]}')
NEXT_SHORT_TAG=${NEXT_MAJOR}.${NEXT_MINOR}
echo "Preparing release $CURRENT_TAG => $NEXT_FULL_TAG"
# Inplace sed replace; workaround for Linux and Mac
function sed_runner() {
sed -i.bak ''"$1"'' $2 && rm -f ${2}.bak
}
# python/cpp update
sed_runner 's/'"CUSPATIAL VERSION .* LANGUAGES"'/'"CUSPATIAL VERSION ${NEXT_FULL_TAG} LANGUAGES"'/g' cpp/CMakeLists.txt
sed_runner 's/'"CUPROJ VERSION .* LANGUAGES"'/'"CUPROJ VERSION ${NEXT_FULL_TAG} LANGUAGES"'/g' cpp/cuproj/CMakeLists.txt
sed_runner 's/'"cuspatial_version .*)"'/'"cuspatial_version ${NEXT_FULL_TAG})"'/g' python/cuspatial/CMakeLists.txt
sed_runner 's/'"cuproj_version .*)"'/'"cuproj_version ${NEXT_FULL_TAG})"'/g' python/cuproj/CMakeLists.txt
sed_runner 's/'"cuproj_version .*)"'/'"cuproj_version ${NEXT_FULL_TAG})"'/g' python/cuproj/cuproj/cuprojshim/CMakeLists.txt
# RTD update
sed_runner 's/version = .*/version = '"'${NEXT_SHORT_TAG}'"'/g' docs/source/conf.py
sed_runner 's/release = .*/release = '"'${NEXT_FULL_TAG}'"'/g' docs/source/conf.py
sed_runner 's/version = .*/version = '"'${NEXT_SHORT_TAG}'"'/g' docs/cuproj/source/conf.py
sed_runner 's/release = .*/release = '"'${NEXT_FULL_TAG}'"'/g' docs/cuproj/source/conf.py
# Centralized version file update
echo "${NEXT_FULL_TAG}" > VERSION
# rapids-cmake version
sed_runner 's/'"branch-.*\/RAPIDS.cmake"'/'"branch-${NEXT_SHORT_TAG}\/RAPIDS.cmake"'/g' fetch_rapids.cmake
# Doxyfile update - cuspatial
sed_runner "/PROJECT_NUMBER[ ]*=/ s|=.*|= ${NEXT_FULL_TAG}|g" cpp/doxygen/Doxyfile
sed_runner "/TAGFILES/ s|[0-9]\+.[0-9]\+|${NEXT_SHORT_TAG}|g" cpp/doxygen/Doxyfile
#Doxyfile update - cuproj
sed_runner "/PROJECT_NUMBER[ ]*=/ s|=.*|= ${NEXT_FULL_TAG}|g" cpp/cuproj/doxygen/Doxyfile
sed_runner "/TAGFILES/ s|[0-9]\+.[0-9]\+|${NEXT_SHORT_TAG}|g" cpp/cuproj/doxygen/Doxyfile
# CI files
for FILE in .github/workflows/*.yaml; do
sed_runner "/shared-workflows/ s/@.*/@branch-${NEXT_SHORT_TAG}/g" "${FILE}"
done
sed_runner "s/RAPIDS_VERSION_NUMBER=\".*/RAPIDS_VERSION_NUMBER=\"${NEXT_SHORT_TAG}\"/g" ci/build_docs.sh
# Need to distutils-normalize the original version
NEXT_SHORT_TAG_PEP440=$(python -c "from setuptools.extern import packaging; print(packaging.version.Version('${NEXT_SHORT_TAG}'))")
DEPENDENCIES=(
cudf
cuml
libcudf
librmm
rmm
cuspatial
cuproj
)
for DEP in "${DEPENDENCIES[@]}"; do
for FILE in dependencies.yaml conda/environments/*.yaml; do
sed_runner "/-.* ${DEP}==/ s/==.*/==${NEXT_SHORT_TAG_PEP440}.*/g" ${FILE}
done
sed_runner "s/${DEP}==.*\",/${DEP}==${NEXT_SHORT_TAG_PEP440}.*\",/g" python/cuspatial/pyproject.toml
sed_runner "s/${DEP}==.*\",/${DEP}==${NEXT_SHORT_TAG_PEP440}.*\",/g" python/cuproj/pyproject.toml
done
# Dependency versions in dependencies.yaml
sed_runner "/-cu[0-9]\{2\}==/ s/==.*/==${NEXT_SHORT_TAG_PEP440}.*/g" dependencies.yaml
# Version in cuspatial_api_examples.ipynb
sed_runner "s/rapids-[0-9]*\.[0-9]*/rapids-${NEXT_SHORT_TAG}/g" docs/source/user_guide/cuspatial_api_examples.ipynb
sed_runner "s/cuproj=[0-9]*\.[0-9]*/cuproj=${NEXT_SHORT_TAG}/g" docs/source/user_guide/cuspatial_api_examples.ipynb
sed_runner "s/cuspatial=[0-9]*\.[0-9]*/cuspatial=${NEXT_SHORT_TAG}/g" docs/source/user_guide/cuspatial_api_examples.ipynb
# Version in cuproj_api_examples.ipynb
sed_runner "s/rapids-[0-9]*\.[0-9]*/rapids-${NEXT_SHORT_TAG}/g" docs/cuproj/source/user_guide/cuproj_api_examples.ipynb
sed_runner "s/cuproj=[0-9]*\.[0-9]*/cuproj=${NEXT_SHORT_TAG}/g" docs/cuproj/source/user_guide/cuproj_api_examples.ipynb
sed_runner "s/cuspatial=[0-9]*\.[0-9]*/cuspatial=${NEXT_SHORT_TAG}/g" docs/cuproj/source/user_guide/cuproj_api_examples.ipynb
# Versions in README.md
sed_runner "s/cuspatial:[0-9]\+\.[0-9]\+/cuspatial:${NEXT_SHORT_TAG}/g" README.md
sed_runner "s/cuspatial=[0-9]\+\.[0-9]\+/cuspatial=${NEXT_SHORT_TAG}/g" README.md
sed_runner "s/notebooks:[0-9]\+\.[0-9]\+/notebooks:${NEXT_SHORT_TAG}/g" README.md
# .devcontainer files
find .devcontainer/ -type f -name devcontainer.json -print0 | while IFS= read -r -d '' filename; do
sed_runner "s@rapidsai/devcontainers:[0-9.]*@rapidsai/devcontainers:${NEXT_SHORT_TAG}@g" "${filename}"
sed_runner "s@rapidsai/devcontainers/features/rapids-build-utils:[0-9.]*@rapidsai/devcontainers/features/rapids-build-utils:${NEXT_SHORT_TAG_PEP440}@" "${filename}"
done
| 0 |
rapidsai_public_repos/cuspatial/ci | rapidsai_public_repos/cuspatial/ci/utils/nbtest.sh | #!/bin/bash
# Copyright (c) 2023, NVIDIA CORPORATION.
MAGIC_OVERRIDE_CODE="
def my_run_line_magic(*args, **kwargs):
g=globals()
l={}
for a in args:
try:
exec(str(a),g,l)
except Exception as e:
print('WARNING: %s\n While executing this magic function code:\n%s\n continuing...\n' % (e, a))
else:
g.update(l)
def my_run_cell_magic(*args, **kwargs):
my_run_line_magic(*args, **kwargs)
get_ipython().run_line_magic=my_run_line_magic
get_ipython().run_cell_magic=my_run_cell_magic
"
NO_COLORS=--colors=NoColor
NBTMPDIR="$WORKSPACE/tmp"
mkdir -p ${NBTMPDIR}
EXITCODE=0
trap "EXITCODE=1" ERR
for nb in $*; do
NBFILENAME=$1
NBNAME=${NBFILENAME%.*}
NBNAME=${NBNAME##*/}
NBTESTSCRIPT=${NBTMPDIR}/${NBNAME}-test.py
shift
echo --------------------------------------------------------------------------------
echo STARTING: ${NBNAME}
echo --------------------------------------------------------------------------------
jupyter nbconvert --to script ${NBFILENAME} --output ${NBTMPDIR}/${NBNAME}-test
echo "${MAGIC_OVERRIDE_CODE}" > ${NBTMPDIR}/tmpfile
cat ${NBTESTSCRIPT} >> ${NBTMPDIR}/tmpfile
mv ${NBTMPDIR}/tmpfile ${NBTESTSCRIPT}
echo "Running \"ipython ${NO_COLORS} ${NBTESTSCRIPT}\" on $(date)"
echo
time bash -c "ipython ${NO_COLORS} ${NBTESTSCRIPT}; EC=\$?; echo -------------------------------------------------------------------------------- ; echo DONE: ${NBNAME}; exit \$EC"
done
exit ${EXITCODE}
| 0 |
rapidsai_public_repos/cuspatial | rapidsai_public_repos/cuspatial/data/its.cat | its_4326_roi.shp | 0 |
rapidsai_public_repos/cuspatial | rapidsai_public_repos/cuspatial/data/README.md | # Data pre-processing for C++/Python test code
## Data Sources
The raw JSON data, derived from a traffic surveillance camera dataset and named
schema_HWY_20_AND_LOCUST-filtered.json, can be
[downloaded here](https://drive.google.com/file/d/1GKTB5SV2RK7lEOIWz8tWab5MGWDtMWWW/view?usp=sharing).
Regions of Interest (ROIs) covered by cameras in ESRI shapefile format (named
its_4326_roi.*) can be
[downloaded here](https://nvidia-my.sharepoint.com/:u:/p/jiantingz/ESvNHXtWgSxDtf2xXTcVN1IByp5HKoUWLhuPTr_bS2ecSw?e=gf4VUu).
The camera parameter file (for 27 ROIs) can be
[downloaded here](https://nvidia-my.sharepoint.com/:x:/p/jiantingz/EZPkLpJPrUtOmwmBPSlNNxwBgeh8UAYlEyrRuT5QLkvj7Q?e=thLUQS).
For application background, [see here](https://www.nvidia.com/en-us/deep-learning-ai/industries/ai-cities/).
## Instructions
Download these three data files to {cuspatial_home}/data, then compile and run
the two data preprocessing C++ programs in this folder to prepare the data files for the
C++/Python test code. In addition to its_4326_roi.* and its_camera_2.csv,
four derived SoA data files are needed for the tests: vehicle identification
(`.objectid`), timestamp (`.time`), lon/lat location
(`.location`) and polygon (`.ply`). The instructions to compile and run
`json2soa.cpp` and `poly2soa.cpp` are provided at the beginning of the two
programs.
### json2soa
To compile, download cJSON.c and cJSON.h from the
[cJson website](https://github.com/DaveGamble/cJSON) and put them in the
current directory.
```
g++ json2soa.cpp cJSON.c -o json2soa -O3
```
To run:
```
./json2soa schema_HWY_20_AND_LOCUST-filtered.json locust -1
```
The three parameters for the program are: input json file name
(schema_HWY_20_AND_LOCUST-filtered.json, must follow the specific schema),
the output root file name and the number of records to be processed. A total of
five files with `.time`, `.objectid`, `.bbox`, `.location`, `.coordinate`
extensions will be generated and three will be used: `.time`, `.objectid` and
`.location`. The last parameter is for the desired number of locations to be
processed; -1 indicates all records but the value can be a smaller number for
easy inspection.
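Each `.location` record is written as three consecutive little-endian `double` values — lat, lon, alt — matching the `fwrite` calls in `json2soa.cpp`. As a minimal sketch of that round trip (the file name and coordinate values below are illustrative, not real program output):

```python
import struct

# Write two fake records in the same layout json2soa.cpp emits:
# three little-endian float64 values (lat, lon, alt) per record.
records = [(37.0, -122.0, 10.0), (37.1, -122.1, 12.5)]
with open("sample.location", "wb") as f:
    for lat, lon, alt in records:
        f.write(struct.pack("<3d", lat, lon, alt))

# Read the records back, 24 bytes (3 doubles) at a time.
locations = []
with open("sample.location", "rb") as f:
    while chunk := f.read(24):
        locations.append(struct.unpack("<3d", chunk))
```

The `.objectid` file can be read the same way with the `<i` (int32) format.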
### poly2soa
To compile, install a recent version of [GDAL](https://gdal.org/download.html)
under `/usr/local`.
```
g++ -I /usr/local/include -L /usr/local/lib poly2soa.cpp -lgdal -o poly2soa
```
To run:
```
./poly2soa its.cat itsroi.ply
```
The first parameter is the catalog file of all Shapefiles from which to extract
polygons. Currently, the provided `its.cat` has only one line which is the path
(relative or full) of the provided ROI polygon file (its_4326_roi.shp). If you
have multiple ROI shapefiles, you can list them in `its.cat` file, one `.shp`
file name per line.
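The output `.ply` file follows the SoA layout emitted at the end of `poly2soa.cpp`: four `int32` counts (groups, features, rings, vertices), then the group/feature/ring length arrays as `int32`, then the x and y vertex arrays as `double`. A hedged Python sketch that builds and parses a tiny single-ring sample (the file name and coordinates are illustrative):

```python
import struct

# Write a minimal .ply-style file: one group, one feature, one ring, four vertices.
gc, fc, rc, vc = 1, 1, 1, 4
xs = [0.0, 1.0, 1.0, 0.0]
ys = [0.0, 0.0, 1.0, 1.0]
with open("sample.ply", "wb") as f:
    f.write(struct.pack("<4i", gc, fc, rc, vc))  # header counts
    f.write(struct.pack("<i", 1))                # g_len: one feature in the group
    f.write(struct.pack("<i", 1))                # f_len: one ring in the feature
    f.write(struct.pack("<i", vc))               # r_len: four vertices in the ring
    f.write(struct.pack("<4d", *xs))             # all x coordinates
    f.write(struct.pack("<4d", *ys))             # all y coordinates

# Parse it back in the same order the writer emitted it.
with open("sample.ply", "rb") as f:
    gc2, fc2, rc2, vc2 = struct.unpack("<4i", f.read(16))
    g_len = struct.unpack(f"<{gc2}i", f.read(4 * gc2))
    f_len = struct.unpack(f"<{fc2}i", f.read(4 * fc2))
    r_len = struct.unpack(f"<{rc2}i", f.read(4 * rc2))
    x = struct.unpack(f"<{vc2}d", f.read(8 * vc2))
    y = struct.unpack(f"<{vc2}d", f.read(8 * vc2))
```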
## Additional Notes
The design supports multiple polygons from multiple shapefiles, and the polygons
in each file are treated as a group. However, the group information is not
exposed in the current implementation; this can be changed in the future if
needed.
| 0 |
rapidsai_public_repos/cuspatial | rapidsai_public_repos/cuspatial/data/poly2soa.cpp | // g++ -I /usr/local/include -L /usr/local/lib poly2soa.cpp -lgdal -o poly2soa
// ./poly2soa its.cat itsroi.ply
#include <sys/time.h>
#include <time.h>
#include <algorithm>
#include <cassert>
#include <fstream>
#include <iostream>
#include <iterator>
#include <map>
#include <set>
#include <string>
#include <vector>
#include "cpl_conv.h"
#include "cpl_string.h"
#include "gdal.h"
#include "gdal_alg.h"
#include "gdal_priv.h"
#include "ogr_api.h"
#include "ogr_geometry.h"
#include "ogr_srs_api.h"
#include "ogrsf_frmts.h"
using namespace std;
void GDALCollectRingsFromGeometry(OGRGeometry* poShape,
std::vector<double>& aPointX,
std::vector<double>& aPointY,
std::vector<int>& aPartSize)
{
if (poShape == NULL) return;
OGRwkbGeometryType eFlatType = wkbFlatten(poShape->getGeometryType());
int i;
if (eFlatType == wkbPoint) {
OGRPoint* poPoint = (OGRPoint*)poShape;
int nNewCount = aPointX.size() + 1;
aPointX.reserve(nNewCount);
aPointY.reserve(nNewCount);
aPointX.push_back(poPoint->getX());
aPointY.push_back(poPoint->getY());
aPartSize.push_back(1);
} else if (eFlatType == wkbLineString) {
OGRLineString* poLine = (OGRLineString*)poShape;
int nCount = poLine->getNumPoints();
int nNewCount = aPointX.size() + nCount;
aPointX.reserve(nNewCount);
aPointY.reserve(nNewCount);
for (i = nCount - 1; i >= 0; i--) {
aPointX.push_back(poLine->getX(i));
aPointY.push_back(poLine->getY(i));
}
aPartSize.push_back(nCount);
} else if (EQUAL(poShape->getGeometryName(), "LINEARRING")) {
OGRLinearRing* poRing = (OGRLinearRing*)poShape;
int nCount = poRing->getNumPoints();
int nNewCount = aPointX.size() + nCount;
aPointX.reserve(nNewCount);
aPointY.reserve(nNewCount);
for (i = nCount - 1; i >= 0; i--) {
aPointX.push_back(poRing->getX(i));
aPointY.push_back(poRing->getY(i));
}
aPartSize.push_back(nCount);
} else if (eFlatType == wkbPolygon) {
OGRPolygon* poPolygon = (OGRPolygon*)poShape;
GDALCollectRingsFromGeometry(poPolygon->getExteriorRing(), aPointX, aPointY, aPartSize);
for (i = 0; i < poPolygon->getNumInteriorRings(); i++)
GDALCollectRingsFromGeometry(poPolygon->getInteriorRing(i), aPointX, aPointY, aPartSize);
}
else if (eFlatType == wkbMultiPoint || eFlatType == wkbMultiLineString ||
eFlatType == wkbMultiPolygon || eFlatType == wkbGeometryCollection) {
OGRGeometryCollection* poGC = (OGRGeometryCollection*)poShape;
for (i = 0; i < poGC->getNumGeometries(); i++)
GDALCollectRingsFromGeometry(poGC->getGeometryRef(i), aPointX, aPointY, aPartSize);
} else {
CPLDebug("GDAL", "Rasterizer ignoring non-polygonal geometry.");
}
}
int addData(const OGRLayerH layer,
vector<int>& g_len_v,
vector<int>& f_len_v,
vector<int>& r_len_v,
vector<double>& xx_v,
vector<double>& yy_v)
{
int num_feature = 0;
OGR_L_ResetReading(layer);
OGRFeatureH hFeat;
int this_rings = 0, this_points = 0;
while ((hFeat = OGR_L_GetNextFeature(layer)) != NULL) {
OGRGeometry* poShape = (OGRGeometry*)OGR_F_GetGeometryRef(hFeat);
if (poShape == NULL) {
cout << "error:............shape is NULL" << endl;
num_feature++;
continue;
}
OGRwkbGeometryType eFlatType = wkbFlatten(poShape->getGeometryType());
if (eFlatType == wkbPolygon) {
OGRPolygon* poPolygon = (OGRPolygon*)poShape;
this_rings += (poPolygon->getNumInteriorRings() + 1);
} else {
}
std::vector<double> aPointX;
std::vector<double> aPointY;
std::vector<int> aPartSize;
GDALCollectRingsFromGeometry(poShape, aPointX, aPointY, aPartSize);
if (aPartSize.size() == 0) {
printf("warning: aPartSize.size()==0\n");
// num_feature++;
}
xx_v.insert(xx_v.end(), aPointX.begin(), aPointX.end());
yy_v.insert(yy_v.end(), aPointY.begin(), aPointY.end());
r_len_v.insert(r_len_v.end(), aPartSize.begin(), aPartSize.end());
f_len_v.push_back(aPartSize.size());
OGR_F_Destroy(hFeat);
num_feature++;
}
g_len_v.push_back(num_feature);
return num_feature;
}
void process_coll(char* catfn,
vector<int>& g_len_v,
vector<int>& f_len_v,
vector<int>& r_len_v,
vector<double>& xx_v,
vector<double>& yy_v)
{
FILE* fp;
if ((fp = fopen(catfn, "r")) == NULL) {
printf("can not open catalog file\n");
exit(-1);
}
int this_seq = 0;
// while(!feof(fp))
for (int i = 0; i < 1; i++) {
char fn[100];
fscanf(fp, "%s", fn);
GDALDatasetH hDS = GDALOpenEx(fn, GDAL_OF_VECTOR, NULL, NULL, NULL);
if (hDS == NULL) {
printf("hDS is NULL, skipping 1......\n");
// skiplist.push_back(fn);
continue;
}
OGRLayerH hLayer = GDALDatasetGetLayer(hDS, 0);
if (hLayer == NULL) {
printf("Unable to find layer 0, skipping 2......\n");
// skiplist.push_back(fn);
continue;
}
printf("%d %s \n", this_seq, fn);
int num0 = addData(hLayer, g_len_v, f_len_v, r_len_v, xx_v, yy_v);
if (num0 == 0) {
printf("zero features, skipping 3......\n");
// skiplist.push_back(fn);
}
this_seq++;
}
}
int main(int argc, char** argv)
{
if (argc != 3) {
printf("EXE cat_fn out_fn \n");
exit(-1);
}
vector<int> g_len_v, f_len_v, r_len_v;
vector<double> xx_v, yy_v;
GDALAllRegister();
char* inc = argv[1];
timeval start, end;
gettimeofday(&start, NULL);
printf("catalog=%s output=%s\n", argv[1], argv[2]);
process_coll(inc, g_len_v, f_len_v, r_len_v, xx_v, yy_v);
printf("skip list.............\n");
gettimeofday(&end, NULL);
long diff = end.tv_sec * 1000000 + end.tv_usec - start.tv_sec * 1000000 - start.tv_usec;
printf("CPU Processing time.......%10.2f\n", diff / (float)1000);
printf("%lu %lu %lu %lu\n", g_len_v.size(), f_len_v.size(), r_len_v.size(), xx_v.size());
int gc = g_len_v.size();
int fc = 0, rc = 0, vc = 0;
printf("#of groups(datasets)=%d\n", gc);
for (int g = 0; g < gc; g++) {
printf("#of features: (%d,%d,%d)\n", g, fc, g_len_v[g]);
for (int f = fc; f < fc + g_len_v[g]; f++) {
printf("#of rings= (%d,%d,%d)\n", f, fc, f_len_v[f]);
for (int r = rc; r < rc + f_len_v[f]; r++) {
printf("#of vertices (%d,%d,%d)\n", r, rc, r_len_v[r]);
printf("...v:");
for (int v = vc; v < vc + r_len_v[r]; v++)
printf("%5d", v);
printf("\n");
vc += r_len_v[r];
}
rc += f_len_v[f];
}
fc += g_len_v[g];
}
printf("%d %d %d %d\n", gc, fc, rc, vc);
FILE* fp = fopen(argv[2], "wb");
assert(fp != NULL);
fwrite(&gc, sizeof(int), 1, fp);
fwrite(&fc, sizeof(int), 1, fp);
fwrite(&rc, sizeof(int), 1, fp);
fwrite(&vc, sizeof(int), 1, fp);
int* g_p = &(g_len_v[0]);
int* f_p = &(f_len_v[0]);
int* r_p = &(r_len_v[0]);
double* x_p = &(xx_v[0]);
double* y_p = &(yy_v[0]);
fwrite(g_p, sizeof(int), gc, fp);
fwrite(f_p, sizeof(int), fc, fp);
fwrite(r_p, sizeof(int), rc, fp);
fwrite(x_p, sizeof(double), vc, fp);
fwrite(y_p, sizeof(double), vc, fp);
fclose(fp);
}
| 0 |
rapidsai_public_repos/cuspatial | rapidsai_public_repos/cuspatial/data/json2soa.cpp | // g++ json2soa.cpp cJSON.c -o json2soa -O3
//./json2soa schema_HWY_20_AND_LOCUST-filtered.json locust -1
#include <assert.h>
#include <ctype.h>
#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <sys/time.h>
#include <time.h>
#include <unistd.h>
#include <iostream>
#include <map>
#include <sstream>
#include <string>
#include <vector>
#include "cJSON.h"
using namespace std;
#define MAXLINE 4096
#define NUM_FIELDS 5
typedef unsigned int uint;
typedef unsigned short ushort;
typedef struct Time {
uint y : 6;
uint m : 4;
uint d : 5;
uint hh : 5;
uint mm : 6;
uint ss : 6;
uint wd : 3;
uint yd : 9;
uint mili : 10;
uint pid : 10;
} Time;
ostream& operator<<(ostream& os, const Time& t)
{
os << t.y << "," << t.m << "," << t.d << "," << t.hh << "," << t.mm << "," << t.ss << ","
<< t.mili;
return os;
}
bool operator<(const Time& t1, const Time& t2)
{
if (t1.y < t2.y)
return true;
else if ((t1.y == t2.y) && (t1.m < t2.m))
return true;
else if ((t1.y == t2.y) && (t1.m == t2.m) && (t1.d < t2.d))
return true;
else if ((t1.y == t2.y) && (t1.m == t2.m) && (t1.d == t2.d) && (t1.hh < t2.hh))
return true;
else if ((t1.y == t2.y) && (t1.m == t2.m) && (t1.d == t2.d) && (t1.hh == t2.hh) &&
(t1.mm < t2.mm))
return true;
else if ((t1.y == t2.y) && (t1.m == t2.m) && (t1.d == t2.d) && (t1.hh == t2.hh) &&
(t1.mm == t2.mm) && (t1.ss < t2.ss))
return true;
else if ((t1.y == t2.y) && (t1.m == t2.m) && (t1.d == t2.d) && (t1.hh == t2.hh) &&
(t1.mm == t2.mm) && (t1.ss == t2.ss) && (t1.mili < t2.mili))
return true;
return false;
}
bool operator==(const Time& t1, const Time& t2)
{
return ((t1.y == t2.y) && (t1.m == t2.m) && (t1.d == t2.d) && (t1.hh == t2.hh) &&
(t1.mm == t2.mm) && (t1.ss == t2.ss) && (t1.mili == t2.mili));
}
template <class T>
void append_map(map<T, int>& m, const T& key)
{
typename map<T, int>::iterator it = m.find(key);
if (it == m.end())
m[key] = 1;
else
it->second++;
}
template <class T>
int output_map(const map<T, int>& m)
{
int cnt = 0;
typename map<T, int>::const_iterator it = m.begin();
for (; it != m.end(); ++it) {
std::cout << "(" << it->first << ")==>" << it->second << "\n";
cnt += it->second;
}
return cnt;
}
int main(int argc, char* argv[])
{
printf("sizeof(Time)=%ld\n", sizeof(Time));
// std::map<Time,int> t_map;
// std::map<int,int> p_map;
std::map<int, int> oid_map;
char line[MAXLINE];
struct timeval t0, t1;
gettimeofday(&t0, NULL);
if (argc != 4) {
printf("USAGE: %s in_fn out_root run_num(-1 for all)\n", argv[0]);
exit(1);
}
const char* in_name = argv[1];
const char* out_root = argv[2];
enum FIELDS { time_id = 0, objid_id, bbox_id, location_id, coordinate_id };
const char* out_ext[NUM_FIELDS] = {".time", ".objectid", ".bbox", ".location", ".coordinate"};
FILE* in_fp = fopen(in_name, "r");
if (in_fp == NULL) {
printf("can not open data file %s for input\n", in_name);
return -1;
}
FILE* out_fp[NUM_FIELDS];
for (int i = 0; i < NUM_FIELDS; i++) {
char out_name[100];
strcpy(out_name, out_root);
strcat(out_name, out_ext[i]);
out_fp[i] = fopen(out_name, "wb");
if (out_fp[i] == NULL) {
printf("can not open data file %s for output\n", out_name);
return -1;
}
}
int run_num = atoi(argv[3]);
printf("using run_num %d\n", run_num);
size_t pos = 0;
while (!feof(in_fp)) {
// printf("processing #%d\n",pos);
ssize_t n;
char* lp = fgets(line, MAXLINE, in_fp);
// printf("%s\n",line);
cJSON* root = cJSON_Parse(line);
cJSON* timestamp = cJSON_GetObjectItem(root, "@timestamp");
char* t_str = timestamp->valuestring;
struct tm it;
strptime(t_str, "%Y-%m-%dT%H:%M:%S", &it);
char* p = strstr(t_str, ".");
p++;
char st[4];
strncpy(st, p, 3);
st[3] = '\0'; // null-terminate the millisecond substring before atoi
int in_mili = atoi(st);
// printf("s=%s t=%s:%3d %d\n",t_str,asctime(&it),in_mili,it.tm_year);
Time ot;
ot.y = it.tm_year - 100; // shifting starting year from 1900 to 2000, max 64 years allowed
ot.m = it.tm_mon;
ot.d = it.tm_mday;
ot.hh = it.tm_hour;
ot.mm = it.tm_min;
ot.ss = it.tm_sec;
ot.wd = it.tm_wday;
ot.yd = it.tm_yday;
ot.mili = in_mili;
// append_map(t_map,ot);
cJSON* place = cJSON_GetObjectItem(root, "place");
string place_str = cJSON_GetObjectItem(place, "id")->valuestring;
// cout<<place_str<<" ";
int place_id = stoi(place_str);
assert(place_id < 1024);
ot.pid = place_id;
fwrite(&ot, sizeof(Time), 1, out_fp[time_id]);
// append_map(p_map,place_id);
cJSON* object = cJSON_GetObjectItem(root, "object");
string objid_str = cJSON_GetObjectItem(object, "id")->valuestring;
// cout<<objid_str<<" ";
int obj_id = stoi(objid_str);
fwrite(&obj_id, sizeof(int), 1, out_fp[objid_id]);
append_map(oid_map, obj_id);
cJSON* bbox = cJSON_GetObjectItem(object, "bbox");
cJSON* location = cJSON_GetObjectItem(object, "location");
cJSON* coordinate = cJSON_GetObjectItem(object, "coordinate");
double topleftx = cJSON_GetObjectItem(bbox, "topleftx")->valuedouble;
double toplefty = cJSON_GetObjectItem(bbox, "toplefty")->valuedouble;
double bottomrightx = cJSON_GetObjectItem(bbox, "bottomrightx")->valuedouble;
double bottomrighty = cJSON_GetObjectItem(bbox, "bottomrighty")->valuedouble;
// printf("%15.10f %15.10f %15.10f %15.10f\n",topleftx, toplefty, bottomrightx,bottomrighty);
fwrite(&topleftx, sizeof(double), 1, out_fp[bbox_id]);
fwrite(&toplefty, sizeof(double), 1, out_fp[bbox_id]);
fwrite(&bottomrightx, sizeof(double), 1, out_fp[bbox_id]);
fwrite(&bottomrighty, sizeof(double), 1, out_fp[bbox_id]);
double lat = cJSON_GetObjectItem(location, "lat")->valuedouble;
double lon = cJSON_GetObjectItem(location, "lon")->valuedouble;
double alt = cJSON_GetObjectItem(location, "alt")->valuedouble;
// printf("%15.10f %15.10f %15.10f\n",lat, lon, alt);
fwrite(&lat, sizeof(double), 1, out_fp[location_id]);
fwrite(&lon, sizeof(double), 1, out_fp[location_id]);
fwrite(&alt, sizeof(double), 1, out_fp[location_id]);
double x = cJSON_GetObjectItem(coordinate, "x")->valuedouble;
double y = cJSON_GetObjectItem(coordinate, "y")->valuedouble;
double z = cJSON_GetObjectItem(coordinate, "z")->valuedouble;
// printf("%15.10f %15.10f %15.10f\n",x, y, z);
fwrite(&x, sizeof(double), 1, out_fp[coordinate_id]);
fwrite(&y, sizeof(double), 1, out_fp[coordinate_id]);
fwrite(&z, sizeof(double), 1, out_fp[coordinate_id]);
cJSON_Delete(root);
pos++;
if (pos == run_num) break;
}
fclose(in_fp);
for (int i = 0; i < NUM_FIELDS; i++)
fclose(out_fp[i]);
// printf("output time map.................%d\n",output_map(t_map));
// printf("output place id map.................%d\n",output_map(p_map));
printf("output object id map.................%d\n", output_map(oid_map));
printf("num_rec=%lu\n", pos);
return 0;
} | 0 |
rapidsai_public_repos | rapidsai_public_repos/cugraph/.pre-commit-config.yaml | ## https://pre-commit.com/
#
# Before first use: `pre-commit install`
# To run: `pre-commit run --all-files`
exclude: '^thirdparty'
repos:
- repo: https://github.com/pre-commit/pre-commit-hooks
rev: v4.4.0
hooks:
- id: check-added-large-files
- id: debug-statements
- id: mixed-line-ending
- repo: https://github.com/psf/black
rev: 22.10.0
hooks:
- id: black
language_version: python3
args: [--target-version=py38]
files: ^(python/.*|benchmarks/.*)$
- repo: https://github.com/PyCQA/flake8
rev: 6.0.0
hooks:
- id: flake8
args: ["--config=.flake8"]
files: python/.*$
types: [file]
types_or: [python] # TODO: Enable [python, cython]
additional_dependencies: ["flake8-force"]
- repo: https://github.com/asottile/yesqa
rev: v1.3.0
hooks:
- id: yesqa
additional_dependencies:
- flake8==6.0.0
- repo: https://github.com/pre-commit/mirrors-clang-format
rev: v16.0.6
hooks:
- id: clang-format
exclude: |
(?x)^(
cpp/libcugraph_etl|
cpp/tests/c_api
)
types_or: [c, c++, cuda]
args: ["-fallback-style=none", "-style=file", "-i"]
- repo: local
hooks:
- id: copyright-check
name: copyright-check
entry: python ./ci/checks/copyright.py --git-modified-only --update-current-year
language: python
pass_filenames: false
additional_dependencies: [gitpython]
- repo: https://github.com/rapidsai/dependency-file-generator
rev: v1.5.1
hooks:
- id: rapids-dependency-file-generator
args: ["--clean"]
| 0 |
rapidsai_public_repos | rapidsai_public_repos/cugraph/.flake8 | # Copyright (c) 2022, NVIDIA CORPORATION.
[flake8]
filename = *.py, *.pyx, *.pxd, *.pxi
exclude = __init__.py, *.egg, build, docs, .git
force-check = True
max-line-length = 88
ignore =
# line break before binary operator
W503,
# whitespace before :
E203
per-file-ignores =
# Rules ignored only in Cython:
# E211: whitespace before '(' (used in multi-line imports)
# E225: Missing whitespace around operators (breaks cython casting syntax like <int>)
# E226: Missing whitespace around arithmetic operators (breaks cython pointer syntax like int*)
# E227: Missing whitespace around bitwise or shift operator (Can also break casting syntax)
# E275: Missing whitespace after keyword (Doesn't work with Cython except?)
# E402: invalid syntax (works for Python, not Cython)
# E999: invalid syntax (works for Python, not Cython)
# W504: line break after binary operator (breaks lines that end with a pointer)
*.pyx: E211, E225, E226, E227, E275, E402, E999, W504
*.pxd: E211, E225, E226, E227, E275, E402, E999, W504
*.pxi: E211, E225, E226, E227, E275, E402, E999, W504
| 0 |
rapidsai_public_repos | rapidsai_public_repos/cugraph/fetch_rapids.cmake | # =============================================================================
# Copyright (c) 2022-2023, NVIDIA CORPORATION.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except
# in compliance with the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software distributed under the License
# is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express
# or implied. See the License for the specific language governing permissions and limitations under
# the License.
# =============================================================================
if(NOT EXISTS ${CMAKE_CURRENT_BINARY_DIR}/CUGRAPH_RAPIDS.cmake)
file(DOWNLOAD https://raw.githubusercontent.com/rapidsai/rapids-cmake/branch-23.12/RAPIDS.cmake
${CMAKE_CURRENT_BINARY_DIR}/CUGRAPH_RAPIDS.cmake
)
endif()
include(${CMAKE_CURRENT_BINARY_DIR}/CUGRAPH_RAPIDS.cmake)
| 0 |
rapidsai_public_repos | rapidsai_public_repos/cugraph/README.md | <h1 align="center"; style="font-style: italic";>
<br>
<img src="img/cugraph_logo_2.png" alt="cuGraph" width="500">
</h1>
<div align="center">
<a href="https://github.com/rapidsai/cugraph/blob/main/LICENSE">
<img src="https://img.shields.io/badge/License-Apache%202.0-blue.svg" alt="License"></a>
<img alt="GitHub tag (latest by date)" src="https://img.shields.io/github/v/tag/rapidsai/cugraph">
<a href="https://github.com/rapidsai/cugraph/stargazers">
<img src="https://img.shields.io/github/stars/rapidsai/cugraph"></a>
<img alt="Conda" src="https://img.shields.io/conda/dn/rapidsai/cugraph">
<img alt="GitHub last commit" src="https://img.shields.io/github/last-commit/rapidsai/cugraph">
<img alt="Conda" src="https://img.shields.io/conda/pn/rapidsai/cugraph" />
<a href="https://rapids.ai/"><img src="img/rapids_logo.png" alt="RAPIDS" width="125"></a>
</div>
<br>
[RAPIDS](https://rapids.ai) cuGraph is a monorepo that represents a collection of packages focused on GPU-accelerated graph analytics, including support for property graphs, remote (graph as a service) operations, and graph neural networks (GNNs). cuGraph supports the creation and manipulation of graphs followed by the execution of scalable fast graph algorithms.
<div align="center">
[Getting cuGraph](./docs/cugraph/source/installation/getting_cugraph.md) *
[Graph Algorithms](./docs/cugraph/source/graph_support/algorithms.md) *
[Graph Service](./readme_pages/cugraph_service.md) *
[Property Graph](./readme_pages/property_graph.md) *
[GNN Support](./readme_pages/gnn_support.md)
</div>
-----
## News
___NEW!___ _[nx-cugraph](./python/nx-cugraph/README.md)_, a NetworkX backend that provides GPU acceleration to NetworkX with zero code change.
```
> pip install nx-cugraph-cu11 --extra-index-url https://pypi.nvidia.com
> export NETWORKX_AUTOMATIC_BACKENDS=cugraph
```
That's it. NetworkX now leverages cuGraph for accelerated graph algorithms.
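As a minimal sketch, the snippet below is plain NetworkX code; with `NETWORKX_AUTOMATIC_BACKENDS=cugraph` exported and `nx-cugraph` installed, these same unmodified calls are dispatched to the GPU backend (without them, the code simply runs on the CPU reference implementation):

```python
import networkx as nx

# Ordinary NetworkX code -- no cuGraph-specific API calls.
G = nx.karate_club_graph()
scores = nx.pagerank(G)  # dispatched to nx-cugraph when the backend is enabled

# Use the result like any other NetworkX output.
top_node = max(scores, key=scores.get)
```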
-----
## Table of contents
- Installation
- [Getting cuGraph Packages](./docs/cugraph/source/installation/getting_cugraph.md)
- [Building from Source](./docs/cugraph/source/installation/source_build.md)
- [Contributing to cuGraph](./readme_pages/CONTRIBUTING.md)
- General
- [Latest News](./readme_pages/news.md)
- [Current list of algorithms](./docs/cugraph/source/graph_support/algorithms.md)
- [Blogs and Presentation](./docs/cugraph/source/tutorials/cugraph_blogs.rst)
- [Performance](./readme_pages/performance/performance.md)
- Packages
- [cuGraph Python](./readme_pages/cugraph_python.md)
- [Property Graph](./readme_pages/property_graph.md)
- [External Data Types](./readme_pages/data_types.md)
- [pylibcugraph](./readme_pages/pylibcugraph.md)
- [libcugraph (C/C++/CUDA)](./readme_pages/libcugraph.md)
- [nx-cugraph](./python/nx-cugraph/README.md)
- [cugraph-service](./readme_pages/cugraph_service.md)
- [cugraph-dgl](./readme_pages/cugraph_dgl.md)
- [cugraph-ops](./readme_pages/cugraph_ops.md)
- API Docs
- Python
- [Python Nightly](https://docs.rapids.ai/api/cugraph/nightly/)
- [Python Stable](https://docs.rapids.ai/api/cugraph/stable/)
- C++
- [C++ Nightly](https://docs.rapids.ai/api/libcugraph/nightly/)
- [C++ Stable](https://docs.rapids.ai/api/libcugraph/stable/)
- References
- [RAPIDS](https://rapids.ai/)
- [ARROW](https://arrow.apache.org/)
- [DASK](https://www.dask.org/)
<br><br>
-----
<img src="img/Stack2.png" alt="Stack" width="800">
[RAPIDS](https://rapids.ai) cuGraph is a collection of GPU-accelerated graph algorithms and services. At the Python layer, cuGraph operates on [GPU DataFrames](https://github.com/rapidsai/cudf), thereby allowing for seamless passing of data between ETL tasks in [cuDF](https://github.com/rapidsai/cudf) and machine learning tasks in [cuML](https://github.com/rapidsai/cuml). Data scientists familiar with Python will quickly pick up how cuGraph integrates with the Pandas-like API of cuDF. Likewise, users familiar with NetworkX will quickly recognize the NetworkX-like API provided in cuGraph, with the goal of allowing existing code to be ported into RAPIDS with minimal effort. To simplify integration, cuGraph also supports data found in [Pandas DataFrame](https://pandas.pydata.org/), [NetworkX Graph Objects](https://networkx.org/) and several other formats.
While the high-level cugraph python API provides an easy-to-use and familiar interface for data scientists that's consistent with other RAPIDS libraries in their workflow, some use cases require access to lower-level graph theory concepts. For these users, we provide an additional Python API called pylibcugraph, intended for applications that require a tighter integration with cuGraph at the Python layer with fewer dependencies. Users familiar with C/C++/CUDA and graph structures can access libcugraph and libcugraph_c for low level integration outside of python.
**NOTE:** For the latest stable [README.md](https://github.com/rapidsai/cugraph/blob/main/README.md) ensure you are on the latest branch.
As an example, the following Python snippet loads graph data and computes PageRank:
```python
import cudf
import cugraph
# read data into a cuDF DataFrame using read_csv
gdf = cudf.read_csv("graph_data.csv", names=["src", "dst"], dtype=["int32", "int32"])
# We now have data as edge pairs
# create a Graph using the source (src) and destination (dst) vertex pairs
G = cugraph.Graph()
G.from_cudf_edgelist(gdf, source='src', destination='dst')
# Let's now get the PageRank score of each vertex by calling cugraph.pagerank
df_page = cugraph.pagerank(G)
# Let's look at the top 10 PageRank Score
df_page.sort_values('pagerank', ascending=False).head(10)
```
</br>
[Why cuGraph does not support Method Cascading](https://docs.rapids.ai/api/cugraph/nightly/basics/cugraph_cascading.html)
------
# Projects that use cuGraph
(alphabetical order)
* ArangoDB - a free and open-source native multi-model database system - https://www.arangodb.com/
* CuPy - "NumPy/SciPy-compatible Array Library for GPU-accelerated Computing with Python" - https://cupy.dev/
* Memgraph - In-memory Graph database - https://memgraph.com/
* NetworkX (via [nx-cugraph](./python/nx-cugraph/README.md) backend) - an extremely popular, free and open-source package for the creation, manipulation, and study of the structure, dynamics, and functions of complex networks - https://networkx.org/
* PyGraphistry - free and open-source GPU graph ETL, AI, and visualization, including native RAPIDS & cuGraph support - http://github.com/graphistry/pygraphistry
* ScanPy - a scalable toolkit for analyzing single-cell gene expression data - https://scanpy.readthedocs.io/en/stable/
(please post an issue if you have a project to add to this list)
------
<br>
## <div align="center"><img src="img/rapids_logo.png" width="265px"/></div> Open GPU Data Science <a name="rapids"></a>
The RAPIDS suite of open source software libraries aims to enable execution of end-to-end data science and analytics pipelines entirely on GPUs. It relies on NVIDIA® CUDA® primitives for low-level compute optimization while exposing that GPU parallelism and high-bandwidth memory speed through user-friendly Python interfaces.
<p align="center"><img src="img/rapids_arrow.png" width="50%"/></p>
For more project details, see [rapids.ai](https://rapids.ai/).
<br><br>
### Apache Arrow on GPU <a name="arrow"></a>
The GPU version of [Apache Arrow](https://arrow.apache.org/) is a common API that enables efficient interchange of tabular data between processes running on the GPU. End-to-end computation on the GPU avoids unnecessary copying and converting of data off the GPU, reducing compute time and cost for high-performance analytics common in artificial intelligence workloads. As the name implies, cuDF uses the Apache Arrow columnar data format on the GPU. Currently, a subset of the features in Apache Arrow are supported.
| 0 |
rapidsai_public_repos | rapidsai_public_repos/cugraph/CHANGELOG.md | # cuGraph 23.10.00 (11 Oct 2023)
## 🚨 Breaking Changes
- Rename `cugraph-nx` to `nx-cugraph` ([#3840](https://github.com/rapidsai/cugraph/pull/3840)) [@eriknw](https://github.com/eriknw)
- Remove legacy betweenness centrality ([#3829](https://github.com/rapidsai/cugraph/pull/3829)) [@jnke2016](https://github.com/jnke2016)
- Remove Deprecated Sampling Options ([#3816](https://github.com/rapidsai/cugraph/pull/3816)) [@alexbarghi-nv](https://github.com/alexbarghi-nv)
- cuGraph-PyG Loader Improvements ([#3795](https://github.com/rapidsai/cugraph/pull/3795)) [@alexbarghi-nv](https://github.com/alexbarghi-nv)
- Expose threshold in louvain ([#3792](https://github.com/rapidsai/cugraph/pull/3792)) [@ChuckHastings](https://github.com/ChuckHastings)
- Fix ValueError Caused By Batches With No Samples ([#3789](https://github.com/rapidsai/cugraph/pull/3789)) [@alexbarghi-nv](https://github.com/alexbarghi-nv)
- Update to Cython 3.0.0 ([#3716](https://github.com/rapidsai/cugraph/pull/3716)) [@vyasr](https://github.com/vyasr)
## 🐛 Bug Fixes
- Add wget to test_notebook dependencies ([#3918](https://github.com/rapidsai/cugraph/pull/3918)) [@raydouglass](https://github.com/raydouglass)
- Increase dask-related timeouts for CI testing ([#3907](https://github.com/rapidsai/cugraph/pull/3907)) [@jnke2016](https://github.com/jnke2016)
- Remove `dask_cudf` dataframe for the `_make_plc_graph` while creating `cugraph.Graph` ([#3895](https://github.com/rapidsai/cugraph/pull/3895)) [@VibhuJawa](https://github.com/VibhuJawa)
- Adds logic to handle isolated vertices at python layer ([#3886](https://github.com/rapidsai/cugraph/pull/3886)) [@naimnv](https://github.com/naimnv)
- Update Allocator Selection in cuGraph-DGL Example ([#3877](https://github.com/rapidsai/cugraph/pull/3877)) [@alexbarghi-nv](https://github.com/alexbarghi-nv)
- Add file to update-version.sh ([#3870](https://github.com/rapidsai/cugraph/pull/3870)) [@raydouglass](https://github.com/raydouglass)
- Fix torch seed in `cugraph-dgl` and `-pyg` tests for conv layers ([#3869](https://github.com/rapidsai/cugraph/pull/3869)) [@tingyu66](https://github.com/tingyu66)
- MFG C++ code bug fix ([#3865](https://github.com/rapidsai/cugraph/pull/3865)) [@seunghwak](https://github.com/seunghwak)
- Fix subtle memory leak in nbr_intersection primitive ([#3858](https://github.com/rapidsai/cugraph/pull/3858)) [@ChuckHastings](https://github.com/ChuckHastings)
- Uses `conda mambabuild` rather than `mamba mambabuild` ([#3853](https://github.com/rapidsai/cugraph/pull/3853)) [@rlratzel](https://github.com/rlratzel)
- Remove the assumption made on the client data's keys ([#3835](https://github.com/rapidsai/cugraph/pull/3835)) [@jnke2016](https://github.com/jnke2016)
- Disable mg tests ([#3833](https://github.com/rapidsai/cugraph/pull/3833)) [@naimnv](https://github.com/naimnv)
- Refactor python code for similarity algos to use latest CAPI ([#3828](https://github.com/rapidsai/cugraph/pull/3828)) [@naimnv](https://github.com/naimnv)
- [BUG] Fix Batch Renumbering of Empty Batches ([#3823](https://github.com/rapidsai/cugraph/pull/3823)) [@alexbarghi-nv](https://github.com/alexbarghi-nv)
- Temporarily disable the deletion of the dask dataframe ([#3814](https://github.com/rapidsai/cugraph/pull/3814)) [@jnke2016](https://github.com/jnke2016)
- Fix OD shortest distance matrix computation test failures. ([#3813](https://github.com/rapidsai/cugraph/pull/3813)) [@seunghwak](https://github.com/seunghwak)
- Use rapidsai/ci:cuda11.8.0-ubuntu22.04-py3.10 for docs build ([#3811](https://github.com/rapidsai/cugraph/pull/3811)) [@naimnv](https://github.com/naimnv)
- Fix ValueError Caused By Batches With No Samples ([#3789](https://github.com/rapidsai/cugraph/pull/3789)) [@alexbarghi-nv](https://github.com/alexbarghi-nv)
- Update `python_run_cugraph` in `dependencies.yaml` ([#3781](https://github.com/rapidsai/cugraph/pull/3781)) [@nv-rliu](https://github.com/nv-rliu)
- Fixes `KeyError` for `get_two_hop_neighbors` when called with a small start vertices list ([#3778](https://github.com/rapidsai/cugraph/pull/3778)) [@rlratzel](https://github.com/rlratzel)
## 📖 Documentation
- Update the docstrings of the similarity algorithms ([#3817](https://github.com/rapidsai/cugraph/pull/3817)) [@jnke2016](https://github.com/jnke2016)
## 🚀 New Features
- WholeGraph Feature Store for cuGraph-PyG and cuGraph-DGL ([#3874](https://github.com/rapidsai/cugraph/pull/3874)) [@alexbarghi-nv](https://github.com/alexbarghi-nv)
- similarity notebook to compare link prediction algos ([#3868](https://github.com/rapidsai/cugraph/pull/3868)) [@acostadon](https://github.com/acostadon)
- adding dining preference dataset ([#3866](https://github.com/rapidsai/cugraph/pull/3866)) [@acostadon](https://github.com/acostadon)
- Integrate C++ Renumbering and Compression ([#3841](https://github.com/rapidsai/cugraph/pull/3841)) [@alexbarghi-nv](https://github.com/alexbarghi-nv)
- Sampling post processing functions to accelerate MFG creation. ([#3815](https://github.com/rapidsai/cugraph/pull/3815)) [@seunghwak](https://github.com/seunghwak)
- [REVIEW] Add Pure DGL Dataloading benchmark ([#3660](https://github.com/rapidsai/cugraph/pull/3660)) [@VibhuJawa](https://github.com/VibhuJawa)
## 🛠️ Improvements
- nx-cugraph: handle louvain with isolated nodes ([#3897](https://github.com/rapidsai/cugraph/pull/3897)) [@eriknw](https://github.com/eriknw)
- Pin `dask` and `distributed` for `23.10` release ([#3896](https://github.com/rapidsai/cugraph/pull/3896)) [@galipremsagar](https://github.com/galipremsagar)
- Updates the source build docs to include libcugraphops as a build prerequisite ([#3893](https://github.com/rapidsai/cugraph/pull/3893)) [@rlratzel](https://github.com/rlratzel)
- fixes force atlas to allow string as vertex names ([#3891](https://github.com/rapidsai/cugraph/pull/3891)) [@acostadon](https://github.com/acostadon)
- Integrate renumbering and compression to `cugraph-dgl` to accelerate MFG creation ([#3887](https://github.com/rapidsai/cugraph/pull/3887)) [@tingyu66](https://github.com/tingyu66)
- Enable weights for MG similarity algorithms ([#3879](https://github.com/rapidsai/cugraph/pull/3879)) [@jnke2016](https://github.com/jnke2016)
- cuGraph-PyG MFG Creation and Conversion ([#3873](https://github.com/rapidsai/cugraph/pull/3873)) [@alexbarghi-nv](https://github.com/alexbarghi-nv)
- Update image names ([#3867](https://github.com/rapidsai/cugraph/pull/3867)) [@AyodeAwe](https://github.com/AyodeAwe)
- Update to clang 16.0.6. ([#3859](https://github.com/rapidsai/cugraph/pull/3859)) [@bdice](https://github.com/bdice)
- Updates to build and test `nx-cugraph` wheel as part of CI and nightly workflows ([#3852](https://github.com/rapidsai/cugraph/pull/3852)) [@rlratzel](https://github.com/rlratzel)
- Update `cugraph-dgl` conv layers to use improved graph class ([#3849](https://github.com/rapidsai/cugraph/pull/3849)) [@tingyu66](https://github.com/tingyu66)
- Add entry point to tell NetworkX about nx-cugraph without importing it. ([#3848](https://github.com/rapidsai/cugraph/pull/3848)) [@eriknw](https://github.com/eriknw)
- [IMP] Add ability to get batch size from the loader in cuGraph-PyG ([#3846](https://github.com/rapidsai/cugraph/pull/3846)) [@alexbarghi-nv](https://github.com/alexbarghi-nv)
- Refactor legacy k truss ([#3843](https://github.com/rapidsai/cugraph/pull/3843)) [@jnke2016](https://github.com/jnke2016)
- Use new `raft::compiled_static` targets ([#3842](https://github.com/rapidsai/cugraph/pull/3842)) [@divyegala](https://github.com/divyegala)
- Rename `cugraph-nx` to `nx-cugraph` ([#3840](https://github.com/rapidsai/cugraph/pull/3840)) [@eriknw](https://github.com/eriknw)
- Add cuGraph devcontainers ([#3838](https://github.com/rapidsai/cugraph/pull/3838)) [@trxcllnt](https://github.com/trxcllnt)
- Enable temporarily disabled MG tests ([#3837](https://github.com/rapidsai/cugraph/pull/3837)) [@naimnv](https://github.com/naimnv)
- Remove legacy betweenness centrality ([#3829](https://github.com/rapidsai/cugraph/pull/3829)) [@jnke2016](https://github.com/jnke2016)
- Use `copy-pr-bot` ([#3827](https://github.com/rapidsai/cugraph/pull/3827)) [@ajschmidt8](https://github.com/ajschmidt8)
- Update README.md ([#3826](https://github.com/rapidsai/cugraph/pull/3826)) [@lmeyerov](https://github.com/lmeyerov)
- Adding metadata getter methods to datasets API ([#3821](https://github.com/rapidsai/cugraph/pull/3821)) [@nv-rliu](https://github.com/nv-rliu)
- Unpin `dask` and `distributed` for `23.10` development ([#3818](https://github.com/rapidsai/cugraph/pull/3818)) [@galipremsagar](https://github.com/galipremsagar)
- Remove Deprecated Sampling Options ([#3816](https://github.com/rapidsai/cugraph/pull/3816)) [@alexbarghi-nv](https://github.com/alexbarghi-nv)
- [REVIEW] Cugraph dgl block improvements ([#3810](https://github.com/rapidsai/cugraph/pull/3810)) [@VibhuJawa](https://github.com/VibhuJawa)
- Simplify wheel build scripts and allow alphas of RAPIDS dependencies ([#3809](https://github.com/rapidsai/cugraph/pull/3809)) [@vyasr](https://github.com/vyasr)
- Allow cugraph-nx to run networkx tests for nx versions 3.0, 3.1, and 3.2 ([#3808](https://github.com/rapidsai/cugraph/pull/3808)) [@eriknw](https://github.com/eriknw)
- Add `louvain_communities` to cugraph-nx ([#3803](https://github.com/rapidsai/cugraph/pull/3803)) [@eriknw](https://github.com/eriknw)
- Adds missing copyright and license text to __init__.py package files ([#3799](https://github.com/rapidsai/cugraph/pull/3799)) [@rlratzel](https://github.com/rlratzel)
- cuGraph-PyG Loader Improvements ([#3795](https://github.com/rapidsai/cugraph/pull/3795)) [@alexbarghi-nv](https://github.com/alexbarghi-nv)
- Adds updates to build wheel and conda packages for `cugraph-nx` ([#3793](https://github.com/rapidsai/cugraph/pull/3793)) [@rlratzel](https://github.com/rlratzel)
- Expose threshold in louvain ([#3792](https://github.com/rapidsai/cugraph/pull/3792)) [@ChuckHastings](https://github.com/ChuckHastings)
- Allow models to use a lightweight sparse structure ([#3782](https://github.com/rapidsai/cugraph/pull/3782)) [@tingyu66](https://github.com/tingyu66)
- Clean-up old testing conventions in `test_ecg.py` ([#3779](https://github.com/rapidsai/cugraph/pull/3779)) [@nv-rliu](https://github.com/nv-rliu)
- Calling `dataset.get_edgelist()` returns a copy of an edge list instead of global ([#3777](https://github.com/rapidsai/cugraph/pull/3777)) [@nv-rliu](https://github.com/nv-rliu)
- Update dgl benchmarks ([#3775](https://github.com/rapidsai/cugraph/pull/3775)) [@VibhuJawa](https://github.com/VibhuJawa)
- Forward-merge branch-23.08 to branch-23.10 ([#3774](https://github.com/rapidsai/cugraph/pull/3774)) [@nv-rliu](https://github.com/nv-rliu)
- Migrate upstream models to `cugraph-pyg` ([#3763](https://github.com/rapidsai/cugraph/pull/3763)) [@tingyu66](https://github.com/tingyu66)
- Branch 23.10 merge 23.08 ([#3743](https://github.com/rapidsai/cugraph/pull/3743)) [@vyasr](https://github.com/vyasr)
- Update to Cython 3.0.0 ([#3716](https://github.com/rapidsai/cugraph/pull/3716)) [@vyasr](https://github.com/vyasr)
- Testing util improvements and refactoring ([#3705](https://github.com/rapidsai/cugraph/pull/3705)) [@betochimas](https://github.com/betochimas)
- Add new cugraph-nx package (networkx backend using pylibcugraph) ([#3614](https://github.com/rapidsai/cugraph/pull/3614)) [@eriknw](https://github.com/eriknw)
- New mtmg API for integration ([#3521](https://github.com/rapidsai/cugraph/pull/3521)) [@ChuckHastings](https://github.com/ChuckHastings)
# cuGraph 23.08.00 (9 Aug 2023)
## 🚨 Breaking Changes
- Change the renumber_sampled_edgelist function behavior. ([#3762](https://github.com/rapidsai/cugraph/pull/3762)) [@seunghwak](https://github.com/seunghwak)
- PLC and Python Support for Sample-Side MFG Creation ([#3734](https://github.com/rapidsai/cugraph/pull/3734)) [@alexbarghi-nv](https://github.com/alexbarghi-nv)
- Stop using setup.py in build.sh ([#3704](https://github.com/rapidsai/cugraph/pull/3704)) [@vyasr](https://github.com/vyasr)
- Refactor edge betweenness centrality ([#3672](https://github.com/rapidsai/cugraph/pull/3672)) [@jnke2016](https://github.com/jnke2016)
- [FIX] Fix the hang in cuGraph Python Uniform Neighbor Sample, Add Logging to Bulk Sampler ([#3669](https://github.com/rapidsai/cugraph/pull/3669)) [@alexbarghi-nv](https://github.com/alexbarghi-nv)
## 🐛 Bug Fixes
- Change the renumber_sampled_edgelist function behavior. ([#3762](https://github.com/rapidsai/cugraph/pull/3762)) [@seunghwak](https://github.com/seunghwak)
- Fix bug discovered in Jaccard testing ([#3758](https://github.com/rapidsai/cugraph/pull/3758)) [@ChuckHastings](https://github.com/ChuckHastings)
- fix inconsistent graph properties between the SG and the MG API ([#3757](https://github.com/rapidsai/cugraph/pull/3757)) [@jnke2016](https://github.com/jnke2016)
- Fixes options for `--pydevelop` to remove unneeded CWD path ("."), restores use of `setup.py` temporarily for develop builds ([#3747](https://github.com/rapidsai/cugraph/pull/3747)) [@rlratzel](https://github.com/rlratzel)
- Fix sampling call parameters if compiled with -DNO_CUGRAPH_OPS ([#3729](https://github.com/rapidsai/cugraph/pull/3729)) [@ChuckHastings](https://github.com/ChuckHastings)
- Fix primitive bug discovered in MG edge betweenness centrality testing ([#3723](https://github.com/rapidsai/cugraph/pull/3723)) [@ChuckHastings](https://github.com/ChuckHastings)
- Reorder dependencies.yaml channels ([#3721](https://github.com/rapidsai/cugraph/pull/3721)) [@raydouglass](https://github.com/raydouglass)
- [BUG] Fix namesapce to default_hash and hash_functions ([#3711](https://github.com/rapidsai/cugraph/pull/3711)) [@naimnv](https://github.com/naimnv)
- [BUG] Fix Bulk Sampling Test Issue ([#3701](https://github.com/rapidsai/cugraph/pull/3701)) [@alexbarghi-nv](https://github.com/alexbarghi-nv)
- Make `pylibcugraphops` optional imports in `cugraph-dgl` and `-pyg` ([#3693](https://github.com/rapidsai/cugraph/pull/3693)) [@tingyu66](https://github.com/tingyu66)
- [FIX] Rename `cugraph-ops` symbols (refactoring) and update GHA workflows to call pytest via `python -m pytest` ([#3688](https://github.com/rapidsai/cugraph/pull/3688)) [@naimnv](https://github.com/naimnv)
- [FIX] Fix the hang in cuGraph Python Uniform Neighbor Sample, Add Logging to Bulk Sampler ([#3669](https://github.com/rapidsai/cugraph/pull/3669)) [@alexbarghi-nv](https://github.com/alexbarghi-nv)
- force atlas notebook changes to run in cugraph 23.08 container. ([#3656](https://github.com/rapidsai/cugraph/pull/3656)) [@acostadon](https://github.com/acostadon)
## 📖 Documentation
- this fixes github links in cugraph, cugraph-dgl and cugraph-pyg ([#3650](https://github.com/rapidsai/cugraph/pull/3650)) [@acostadon](https://github.com/acostadon)
- Fix minor typo in README.md ([#3636](https://github.com/rapidsai/cugraph/pull/3636)) [@akasper](https://github.com/akasper)
- Created landing spot for centrality and similarity algorithms ([#3620](https://github.com/rapidsai/cugraph/pull/3620)) [@acostadon](https://github.com/acostadon)
## 🚀 New Features
- Compute shortest distances between given sets of origins and destinations for large diameter graphs ([#3741](https://github.com/rapidsai/cugraph/pull/3741)) [@seunghwak](https://github.com/seunghwak)
- Update primitive to compute weighted Jaccard, Sorensen and Overlap similarity ([#3728](https://github.com/rapidsai/cugraph/pull/3728)) [@naimnv](https://github.com/naimnv)
- Add CUDA 12.0 conda environment. ([#3725](https://github.com/rapidsai/cugraph/pull/3725)) [@bdice](https://github.com/bdice)
- Renumber utility function for sampling output ([#3707](https://github.com/rapidsai/cugraph/pull/3707)) [@seunghwak](https://github.com/seunghwak)
- Integrate C++ Sampling Source Behavior Updates ([#3699](https://github.com/rapidsai/cugraph/pull/3699)) [@alexbarghi-nv](https://github.com/alexbarghi-nv)
- Adds `fail_on_nonconvergence` option to `pagerank` to provide pagerank results even on non-convergence ([#3639](https://github.com/rapidsai/cugraph/pull/3639)) [@rlratzel](https://github.com/rlratzel)
- Add Benchmark for Bulk Sampling ([#3628](https://github.com/rapidsai/cugraph/pull/3628)) [@alexbarghi-nv](https://github.com/alexbarghi-nv)
- cugraph: Build CUDA 12 packages ([#3456](https://github.com/rapidsai/cugraph/pull/3456)) [@vyasr](https://github.com/vyasr)
## 🛠️ Improvements
- Pin `dask` and `distributed` for `23.08` release ([#3761](https://github.com/rapidsai/cugraph/pull/3761)) [@galipremsagar](https://github.com/galipremsagar)
- Fix `build.yaml` workflow ([#3756](https://github.com/rapidsai/cugraph/pull/3756)) [@ajschmidt8](https://github.com/ajschmidt8)
- Support MFG creation on sampling gpus for cugraph dgl ([#3742](https://github.com/rapidsai/cugraph/pull/3742)) [@VibhuJawa](https://github.com/VibhuJawa)
- PLC and Python Support for Sample-Side MFG Creation ([#3734](https://github.com/rapidsai/cugraph/pull/3734)) [@alexbarghi-nv](https://github.com/alexbarghi-nv)
- Switch to new wheel building pipeline ([#3731](https://github.com/rapidsai/cugraph/pull/3731)) [@vyasr](https://github.com/vyasr)
- Remove RAFT specialization. ([#3727](https://github.com/rapidsai/cugraph/pull/3727)) [@bdice](https://github.com/bdice)
- C API for renumbering the samples ([#3724](https://github.com/rapidsai/cugraph/pull/3724)) [@ChuckHastings](https://github.com/ChuckHastings)
- Only run cugraph conda CI for CUDA 11. ([#3713](https://github.com/rapidsai/cugraph/pull/3713)) [@bdice](https://github.com/bdice)
- Promote `Datasets` to stable and clean-up unit tests ([#3712](https://github.com/rapidsai/cugraph/pull/3712)) [@nv-rliu](https://github.com/nv-rliu)
- [BUG] Unsupported graph for similiarity algos ([#3710](https://github.com/rapidsai/cugraph/pull/3710)) [@jnke2016](https://github.com/jnke2016)
- Stop using setup.py in build.sh ([#3704](https://github.com/rapidsai/cugraph/pull/3704)) [@vyasr](https://github.com/vyasr)
- [WIP] Make edge ids optional ([#3702](https://github.com/rapidsai/cugraph/pull/3702)) [@VibhuJawa](https://github.com/VibhuJawa)
- Use rapids-cmake testing to run tests in parallel ([#3697](https://github.com/rapidsai/cugraph/pull/3697)) [@robertmaynard](https://github.com/robertmaynard)
- Sampling modifications to support PyG and DGL options ([#3696](https://github.com/rapidsai/cugraph/pull/3696)) [@ChuckHastings](https://github.com/ChuckHastings)
- Include cuCollection public header for hash functions ([#3694](https://github.com/rapidsai/cugraph/pull/3694)) [@seunghwak](https://github.com/seunghwak)
- Refactor edge betweenness centrality ([#3672](https://github.com/rapidsai/cugraph/pull/3672)) [@jnke2016](https://github.com/jnke2016)
- Refactor RMAT ([#3662](https://github.com/rapidsai/cugraph/pull/3662)) [@jnke2016](https://github.com/jnke2016)
- [REVIEW] Optimize bulk sampling ([#3661](https://github.com/rapidsai/cugraph/pull/3661)) [@VibhuJawa](https://github.com/VibhuJawa)
- Update to CMake 3.26.4 ([#3648](https://github.com/rapidsai/cugraph/pull/3648)) [@vyasr](https://github.com/vyasr)
- Optimize cugraph-dgl MFG creation ([#3646](https://github.com/rapidsai/cugraph/pull/3646)) [@VibhuJawa](https://github.com/VibhuJawa)
- use rapids-upload-docs script ([#3640](https://github.com/rapidsai/cugraph/pull/3640)) [@AyodeAwe](https://github.com/AyodeAwe)
- Fix dependency versions for `23.08` ([#3638](https://github.com/rapidsai/cugraph/pull/3638)) [@ajschmidt8](https://github.com/ajschmidt8)
- Unpin `dask` and `distributed` for development ([#3634](https://github.com/rapidsai/cugraph/pull/3634)) [@galipremsagar](https://github.com/galipremsagar)
- Remove documentation build scripts for Jenkins ([#3627](https://github.com/rapidsai/cugraph/pull/3627)) [@ajschmidt8](https://github.com/ajschmidt8)
- Unpin scikit-build upper bound ([#3609](https://github.com/rapidsai/cugraph/pull/3609)) [@vyasr](https://github.com/vyasr)
- Implement C++ Edge Betweenness Centrality ([#3602](https://github.com/rapidsai/cugraph/pull/3602)) [@ChuckHastings](https://github.com/ChuckHastings)
# cuGraph 23.06.00 (7 Jun 2023)
## 🚨 Breaking Changes
- [BUG] Fix Incorrect File Selection in cuGraph-PyG Loader ([#3599](https://github.com/rapidsai/cugraph/pull/3599)) [@alexbarghi-nv](https://github.com/alexbarghi-nv)
- Remove legacy leiden ([#3581](https://github.com/rapidsai/cugraph/pull/3581)) [@ChuckHastings](https://github.com/ChuckHastings)
- [IMP] Match Default PyG Hop ID Behavior in cuGraph-PyG ([#3565](https://github.com/rapidsai/cugraph/pull/3565)) [@alexbarghi-nv](https://github.com/alexbarghi-nv)
- [IMP] Sample with Offsets in the Bulk Sampler ([#3524](https://github.com/rapidsai/cugraph/pull/3524)) [@alexbarghi-nv](https://github.com/alexbarghi-nv)
- Dropping Python 3.8 ([#3505](https://github.com/rapidsai/cugraph/pull/3505)) [@divyegala](https://github.com/divyegala)
- Remove legacy renumber and shuffle calls from cython.cu ([#3467](https://github.com/rapidsai/cugraph/pull/3467)) [@ChuckHastings](https://github.com/ChuckHastings)
- Remove legacy implementation of induce subgraph ([#3464](https://github.com/rapidsai/cugraph/pull/3464)) [@ChuckHastings](https://github.com/ChuckHastings)
## 🐛 Bug Fixes
- Fix MG Test Failing due to Removal of np.float ([#3621](https://github.com/rapidsai/cugraph/pull/3621)) [@alexbarghi-nv](https://github.com/alexbarghi-nv)
- fix logic for shuffling results ([#3619](https://github.com/rapidsai/cugraph/pull/3619)) [@ChuckHastings](https://github.com/ChuckHastings)
- [BUG] Fix Calls to cudf.DataFrame/Series.unique that relied on old behavior ([#3616](https://github.com/rapidsai/cugraph/pull/3616)) [@alexbarghi-nv](https://github.com/alexbarghi-nv)
- correct dgl version in `cugraph-dgl` conda recipe ([#3612](https://github.com/rapidsai/cugraph/pull/3612)) [@tingyu66](https://github.com/tingyu66)
- [BUG] Fix Issue in cuGraph-PyG Tests Blocking CI ([#3607](https://github.com/rapidsai/cugraph/pull/3607)) [@alexbarghi-nv](https://github.com/alexbarghi-nv)
- [BUG] Critical: Fix cuGraph-PyG Edge Index Renumbering for Single-Edge Graphs ([#3605](https://github.com/rapidsai/cugraph/pull/3605)) [@alexbarghi-nv](https://github.com/alexbarghi-nv)
- [BUG] Skip Empty Partitions in Bulk Sample Writing ([#3600](https://github.com/rapidsai/cugraph/pull/3600)) [@alexbarghi-nv](https://github.com/alexbarghi-nv)
- [BUG] Fix Incorrect File Selection in cuGraph-PyG Loader ([#3599](https://github.com/rapidsai/cugraph/pull/3599)) [@alexbarghi-nv](https://github.com/alexbarghi-nv)
- Fix SSSP bug ([#3597](https://github.com/rapidsai/cugraph/pull/3597)) [@jnke2016](https://github.com/jnke2016)
- update cudf column constructor calls ([#3592](https://github.com/rapidsai/cugraph/pull/3592)) [@ChuckHastings](https://github.com/ChuckHastings)
- Fix one more path to cugraphops in build workflow ([#3554](https://github.com/rapidsai/cugraph/pull/3554)) [@vyasr](https://github.com/vyasr)
- Fix path to cugraphops in build workflow ([#3547](https://github.com/rapidsai/cugraph/pull/3547)) [@vyasr](https://github.com/vyasr)
- Update dgl APIs for v1.1.0 ([#3546](https://github.com/rapidsai/cugraph/pull/3546)) [@tingyu66](https://github.com/tingyu66)
- Pin to scikit-build<17.2 ([#3538](https://github.com/rapidsai/cugraph/pull/3538)) [@vyasr](https://github.com/vyasr)
- Correct results from sampling when grouping batches on specific GPUs ([#3517](https://github.com/rapidsai/cugraph/pull/3517)) [@ChuckHastings](https://github.com/ChuckHastings)
- [FIX] Match the PyG API for Node Input to the Loader ([#3514](https://github.com/rapidsai/cugraph/pull/3514)) [@alexbarghi-nv](https://github.com/alexbarghi-nv)
- Correct MG Leiden and SCC tests ([#3509](https://github.com/rapidsai/cugraph/pull/3509)) [@ChuckHastings](https://github.com/ChuckHastings)
- per_v_transform_reduce_incoming|outgoing_e bug fix (when we're using (key, value) pairs to store edge src|dst property values) ([#3508](https://github.com/rapidsai/cugraph/pull/3508)) [@seunghwak](https://github.com/seunghwak)
- Updates to allow python benchmarks to run on additional datasets by default ([#3506](https://github.com/rapidsai/cugraph/pull/3506)) [@rlratzel](https://github.com/rlratzel)
- [BUG] Fix Intermittent Error when Converting cuDF DataFrame to Tensor by Converting to cuPy Array First ([#3498](https://github.com/rapidsai/cugraph/pull/3498)) [@alexbarghi-nv](https://github.com/alexbarghi-nv)
- [FIX] Update cugraph-PyG Dependencies to include cuGraph ([#3497](https://github.com/rapidsai/cugraph/pull/3497)) [@alexbarghi-nv](https://github.com/alexbarghi-nv)
- Fix graph_properties_t's members order ([#3484](https://github.com/rapidsai/cugraph/pull/3484)) [@naimnv](https://github.com/naimnv)
- Fix issue with latest rapids-make ([#3481](https://github.com/rapidsai/cugraph/pull/3481)) [@ChuckHastings](https://github.com/ChuckHastings)
- Branch 23.06 Fix Forward Merge ([#3462](https://github.com/rapidsai/cugraph/pull/3462)) [@alexbarghi-nv](https://github.com/alexbarghi-nv)
- Update raft dependency to 23.06 ([#3410](https://github.com/rapidsai/cugraph/pull/3410)) [@ChuckHastings](https://github.com/ChuckHastings)
## 📖 Documentation
- updated cugraph Demo notebooks for 23.06 ([#3558](https://github.com/rapidsai/cugraph/pull/3558)) [@acostadon](https://github.com/acostadon)
- cugraph-ops license ([#3553](https://github.com/rapidsai/cugraph/pull/3553)) [@BradReesWork](https://github.com/BradReesWork)
- Notebook clean-up and run verification ([#3551](https://github.com/rapidsai/cugraph/pull/3551)) [@acostadon](https://github.com/acostadon)
- Updates contributing steps to add copyright and license text inclusion instruction ([#3519](https://github.com/rapidsai/cugraph/pull/3519)) [@rlratzel](https://github.com/rlratzel)
- Fixed notebook links in algorithm and cugraph notebook pages ([#3515](https://github.com/rapidsai/cugraph/pull/3515)) [@acostadon](https://github.com/acostadon)
- adding cugraph-ops ([#3488](https://github.com/rapidsai/cugraph/pull/3488)) [@BradReesWork](https://github.com/BradReesWork)
- Sphinx updates ([#3468](https://github.com/rapidsai/cugraph/pull/3468)) [@BradReesWork](https://github.com/BradReesWork)
## 🚀 New Features
- [REVIEW] Add MNMG with training ([#3603](https://github.com/rapidsai/cugraph/pull/3603)) [@VibhuJawa](https://github.com/VibhuJawa)
- MG Leiden and MG MIS ([#3582](https://github.com/rapidsai/cugraph/pull/3582)) [@naimnv](https://github.com/naimnv)
- graph primitive transform_e ([#3548](https://github.com/rapidsai/cugraph/pull/3548)) [@seunghwak](https://github.com/seunghwak)
- Support CUDA 12.0 for pip wheels ([#3544](https://github.com/rapidsai/cugraph/pull/3544)) [@divyegala](https://github.com/divyegala)
- Updates pytest benchmarks to use synthetic data and multi-GPUs ([#3540](https://github.com/rapidsai/cugraph/pull/3540)) [@rlratzel](https://github.com/rlratzel)
- Enable edge masking ([#3522](https://github.com/rapidsai/cugraph/pull/3522)) [@seunghwak](https://github.com/seunghwak)
- [REVIEW] Profile graph creation runtime and memory footprint ([#3518](https://github.com/rapidsai/cugraph/pull/3518)) [@VibhuJawa](https://github.com/VibhuJawa)
- Bipartite R-mat graph generation. ([#3512](https://github.com/rapidsai/cugraph/pull/3512)) [@seunghwak](https://github.com/seunghwak)
- Dropping Python 3.8 ([#3505](https://github.com/rapidsai/cugraph/pull/3505)) [@divyegala](https://github.com/divyegala)
- Creates Notebook that runs Multi-GPU versions of Jaccard, Sorenson and overlap. ([#3504](https://github.com/rapidsai/cugraph/pull/3504)) [@acostadon](https://github.com/acostadon)
- [cugraph-dgl] Add support for bipartite node features and optional edge features in GATConv ([#3503](https://github.com/rapidsai/cugraph/pull/3503)) [@tingyu66](https://github.com/tingyu66)
- [cugraph-dgl] Add TransformerConv ([#3501](https://github.com/rapidsai/cugraph/pull/3501)) [@tingyu66](https://github.com/tingyu66)
- [cugraph-pyg] Add TransformerConv and support for bipartite node features in GATConv ([#3489](https://github.com/rapidsai/cugraph/pull/3489)) [@tingyu66](https://github.com/tingyu66)
- Branch 23.06 resolve merge conflict for forward merge ([#3409](https://github.com/rapidsai/cugraph/pull/3409)) [@alexbarghi-nv](https://github.com/alexbarghi-nv)
- Refactor Leiden ([#3327](https://github.com/rapidsai/cugraph/pull/3327)) [@jnke2016](https://github.com/jnke2016)
## 🛠️ Improvements
- Refresh requirements ([#3622](https://github.com/rapidsai/cugraph/pull/3622)) [@jakirkham](https://github.com/jakirkham)
- Pr3266 continue (optional arg for weight attribute for Nx graphs in `sssp`) ([#3611](https://github.com/rapidsai/cugraph/pull/3611)) [@eriknw](https://github.com/eriknw)
- Enables MG python tests using a single-GPU LocalCUDACluster in CI ([#3596](https://github.com/rapidsai/cugraph/pull/3596)) [@rlratzel](https://github.com/rlratzel)
- UVM notebook update and add tracker for notebooks to readme ([#3595](https://github.com/rapidsai/cugraph/pull/3595)) [@acostadon](https://github.com/acostadon)
- [REVIEW] Skip adding edge types, edge weights ([#3583](https://github.com/rapidsai/cugraph/pull/3583)) [@VibhuJawa](https://github.com/VibhuJawa)
- Remove legacy leiden ([#3581](https://github.com/rapidsai/cugraph/pull/3581)) [@ChuckHastings](https://github.com/ChuckHastings)
- run docs nightly too ([#3568](https://github.com/rapidsai/cugraph/pull/3568)) [@AyodeAwe](https://github.com/AyodeAwe)
- include hop as part of the sort criteria for sampling results ([#3567](https://github.com/rapidsai/cugraph/pull/3567)) [@ChuckHastings](https://github.com/ChuckHastings)
- Add MG python implementation of Leiden ([#3566](https://github.com/rapidsai/cugraph/pull/3566)) [@jnke2016](https://github.com/jnke2016)
- [IMP] Match Default PyG Hop ID Behavior in cuGraph-PyG ([#3565](https://github.com/rapidsai/cugraph/pull/3565)) [@alexbarghi-nv](https://github.com/alexbarghi-nv)
- Switch back to using primary shared-action-workflows branch ([#3562](https://github.com/rapidsai/cugraph/pull/3562)) [@vyasr](https://github.com/vyasr)
- removed deprecated calls and modified demo notebooks to run with 23.06 ([#3561](https://github.com/rapidsai/cugraph/pull/3561)) [@acostadon](https://github.com/acostadon)
- add unit test for checking is_symmetric is valid, update documentatio… ([#3559](https://github.com/rapidsai/cugraph/pull/3559)) [@ChuckHastings](https://github.com/ChuckHastings)
- Update recipes to GTest version >=1.13.0 ([#3549](https://github.com/rapidsai/cugraph/pull/3549)) [@bdice](https://github.com/bdice)
- Improve memory footprint and performance of graph creation ([#3542](https://github.com/rapidsai/cugraph/pull/3542)) [@VibhuJawa](https://github.com/VibhuJawa)
- Update cupy dependency ([#3539](https://github.com/rapidsai/cugraph/pull/3539)) [@vyasr](https://github.com/vyasr)
- Perform expensive edge list check in create_graph_from_edgelist() ([#3533](https://github.com/rapidsai/cugraph/pull/3533)) [@seunghwak](https://github.com/seunghwak)
- Enable sccache hits from local builds ([#3526](https://github.com/rapidsai/cugraph/pull/3526)) [@AyodeAwe](https://github.com/AyodeAwe)
- Build wheels using new single image workflow ([#3525](https://github.com/rapidsai/cugraph/pull/3525)) [@vyasr](https://github.com/vyasr)
- [IMP] Sample with Offsets in the Bulk Sampler ([#3524](https://github.com/rapidsai/cugraph/pull/3524)) [@alexbarghi-nv](https://github.com/alexbarghi-nv)
- Revert shared-action-workflows pin ([#3523](https://github.com/rapidsai/cugraph/pull/3523)) [@divyegala](https://github.com/divyegala)
- [FIX] fix cugraphops namespace ([#3520](https://github.com/rapidsai/cugraph/pull/3520)) [@stadlmax](https://github.com/stadlmax)
- Add support in C API for handling unweighted graphs in algorithms that expect weights ([#3513](https://github.com/rapidsai/cugraph/pull/3513)) [@ChuckHastings](https://github.com/ChuckHastings)
- Changes to support gtest version 1.11 ([#3511](https://github.com/rapidsai/cugraph/pull/3511)) [@ChuckHastings](https://github.com/ChuckHastings)
- update docs ([#3510](https://github.com/rapidsai/cugraph/pull/3510)) [@BradReesWork](https://github.com/BradReesWork)
- Remove usage of rapids-get-rapids-version-from-git ([#3502](https://github.com/rapidsai/cugraph/pull/3502)) [@jjacobelli](https://github.com/jjacobelli)
- Remove Dummy Edge Weights, Support Specifying Edge Ids/Edge Types/Weights Separately ([#3495](https://github.com/rapidsai/cugraph/pull/3495)) [@alexbarghi-nv](https://github.com/alexbarghi-nv)
- [ENH] Add missing include of thrust/optional.h ([#3493](https://github.com/rapidsai/cugraph/pull/3493)) [@ahendriksen](https://github.com/ahendriksen)
- Remove wheel pytest verbosity ([#3492](https://github.com/rapidsai/cugraph/pull/3492)) [@sevagh](https://github.com/sevagh)
- Update clang-format to 16.0.1. ([#3485](https://github.com/rapidsai/cugraph/pull/3485)) [@bdice](https://github.com/bdice)
- Use ARC V2 self-hosted runners for GPU jobs ([#3483](https://github.com/rapidsai/cugraph/pull/3483)) [@jjacobelli](https://github.com/jjacobelli)
- packed bool specialization to store edge endpoint|edge properties ([#3482](https://github.com/rapidsai/cugraph/pull/3482)) [@seunghwak](https://github.com/seunghwak)
- Remove legacy renumber and shuffle calls from cython.cu ([#3467](https://github.com/rapidsai/cugraph/pull/3467)) [@ChuckHastings](https://github.com/ChuckHastings)
- Remove legacy implementation of induce subgraph ([#3464](https://github.com/rapidsai/cugraph/pull/3464)) [@ChuckHastings](https://github.com/ChuckHastings)
- Remove uses-setup-env-vars ([#3463](https://github.com/rapidsai/cugraph/pull/3463)) [@vyasr](https://github.com/vyasr)
- Optimize random walks ([#3460](https://github.com/rapidsai/cugraph/pull/3460)) [@jnke2016](https://github.com/jnke2016)
- Update select_random_vertices to sample from a given distributed set or from (0, V] ([#3455](https://github.com/rapidsai/cugraph/pull/3455)) [@naimnv](https://github.com/naimnv)

# cuGraph 23.04.00 (6 Apr 2023)

## 🚨 Breaking Changes

- Pin `dask` and `distributed` for release ([#3427](https://github.com/rapidsai/cugraph/pull/3427)) [@galipremsagar](https://github.com/galipremsagar)
- Use Correct Searchsorted Function and Drop cupy from CuGraphStore in cugraph-pyg ([#3382](https://github.com/rapidsai/cugraph/pull/3382)) [@alexbarghi-nv](https://github.com/alexbarghi-nv)
- New PyG End-to-End Examples ([#3326](https://github.com/rapidsai/cugraph/pull/3326)) [@alexbarghi-nv](https://github.com/alexbarghi-nv)
- Update cugraph-pyg Recipe and CI Script ([#3288](https://github.com/rapidsai/cugraph/pull/3288)) [@alexbarghi-nv](https://github.com/alexbarghi-nv)
- remove legacy WCC code ([#3283](https://github.com/rapidsai/cugraph/pull/3283)) [@ChuckHastings](https://github.com/ChuckHastings)
- API improvements for end-to-end MG sampling performance ([#3269](https://github.com/rapidsai/cugraph/pull/3269)) [@ChuckHastings](https://github.com/ChuckHastings)
- Cleanup obsolete visitor code ([#3268](https://github.com/rapidsai/cugraph/pull/3268)) [@ChuckHastings](https://github.com/ChuckHastings)
- Remove legacy sampling implementation, no longer used ([#3252](https://github.com/rapidsai/cugraph/pull/3252)) [@ChuckHastings](https://github.com/ChuckHastings)
- Remove legacy mg bfs ([#3250](https://github.com/rapidsai/cugraph/pull/3250)) [@ChuckHastings](https://github.com/ChuckHastings)
- Remove legacy two_hop_neighbors function ([#3248](https://github.com/rapidsai/cugraph/pull/3248)) [@ChuckHastings](https://github.com/ChuckHastings)
- Remove legacy C++ code for k-core algorithms ([#3246](https://github.com/rapidsai/cugraph/pull/3246)) [@ChuckHastings](https://github.com/ChuckHastings)

## 🐛 Bug Fixes

- Support Minor Releases of PyG ([#3422](https://github.com/rapidsai/cugraph/pull/3422)) [@alexbarghi-nv](https://github.com/alexbarghi-nv)
- Critical: Force cudf.concat when passing in a cudf Series to MG Uniform Neighbor Sample ([#3416](https://github.com/rapidsai/cugraph/pull/3416)) [@alexbarghi-nv](https://github.com/alexbarghi-nv)
- Add back deleted version attribute ([#3411](https://github.com/rapidsai/cugraph/pull/3411)) [@vyasr](https://github.com/vyasr)
- Reindex Start Vertices and Batch Ids Prior to Sampling Call ([#3393](https://github.com/rapidsai/cugraph/pull/3393)) [@alexbarghi-nv](https://github.com/alexbarghi-nv)
- Replace CUDA_TRY with RAFT_CUDA_TRY ([#3389](https://github.com/rapidsai/cugraph/pull/3389)) [@naimnv](https://github.com/naimnv)
- Use Correct Searchsorted Function and Drop cupy from CuGraphStore in cugraph-pyg ([#3382](https://github.com/rapidsai/cugraph/pull/3382)) [@alexbarghi-nv](https://github.com/alexbarghi-nv)
- Properly handle dask change ([#3361](https://github.com/rapidsai/cugraph/pull/3361)) [@jnke2016](https://github.com/jnke2016)
- Missing indentation leading to an UnboundLocalError ([#3354](https://github.com/rapidsai/cugraph/pull/3354)) [@AmroAlJundi](https://github.com/AmroAlJundi)
- Remove MANIFEST.in use auto-generated one for sdists and package_data for wheels ([#3342](https://github.com/rapidsai/cugraph/pull/3342)) [@vyasr](https://github.com/vyasr)
- Remove unused RAFT import causing `ImportError` ([#3306](https://github.com/rapidsai/cugraph/pull/3306)) [@jnke2016](https://github.com/jnke2016)
- Add missing cugraph-ops conditional ([#3270](https://github.com/rapidsai/cugraph/pull/3270)) [@vyasr](https://github.com/vyasr)
- Bug fix to BulkSampler ([#3249](https://github.com/rapidsai/cugraph/pull/3249)) [@VibhuJawa](https://github.com/VibhuJawa)
- Bug Fixes to DGL Dataloader ([#3247](https://github.com/rapidsai/cugraph/pull/3247)) [@VibhuJawa](https://github.com/VibhuJawa)
- Fix `libraft-distance` version in `23.04` ([#3241](https://github.com/rapidsai/cugraph/pull/3241)) [@galipremsagar](https://github.com/galipremsagar)
- Fix Edge case in Bulk Sampler ([#3229](https://github.com/rapidsai/cugraph/pull/3229)) [@VibhuJawa](https://github.com/VibhuJawa)
- Fix libcugraph debug build warnings/errors ([#3214](https://github.com/rapidsai/cugraph/pull/3214)) [@seunghwak](https://github.com/seunghwak)

## 📖 Documentation

- New cugraph site structure ([#3343](https://github.com/rapidsai/cugraph/pull/3343)) [@acostadon](https://github.com/acostadon)
- docs: Typo on Leiden docstring ([#3329](https://github.com/rapidsai/cugraph/pull/3329)) [@lvxhnat](https://github.com/lvxhnat)
- Changed docs to reflect need for undirected graph in wcc algo ([#3322](https://github.com/rapidsai/cugraph/pull/3322)) [@acostadon](https://github.com/acostadon)
- docs: RMAT doc string typo ([#3308](https://github.com/rapidsai/cugraph/pull/3308)) [@ArturKasymov](https://github.com/ArturKasymov)
- Doc fix and change to Louvain notebook ([#3224](https://github.com/rapidsai/cugraph/pull/3224)) [@acostadon](https://github.com/acostadon)

## 🚀 New Features

- Allow adding data to PropertyGraph that already has indices set ([#3175](https://github.com/rapidsai/cugraph/pull/3175)) [@eriknw](https://github.com/eriknw)
- SG tested Leiden ([#2980](https://github.com/rapidsai/cugraph/pull/2980)) [@naimnv](https://github.com/naimnv)

## 🛠️ Improvements

- Pin `dask` and `distributed` for release ([#3427](https://github.com/rapidsai/cugraph/pull/3427)) [@galipremsagar](https://github.com/galipremsagar)
- Doc Updates ([#3418](https://github.com/rapidsai/cugraph/pull/3418)) [@BradReesWork](https://github.com/BradReesWork)
- Pin cupy in wheel tests to supported versions ([#3400](https://github.com/rapidsai/cugraph/pull/3400)) [@vyasr](https://github.com/vyasr)
- Add MG implementation of induced subgraph ([#3391](https://github.com/rapidsai/cugraph/pull/3391)) [@jnke2016](https://github.com/jnke2016)
- Properly retrieve the dask worker from client calls ([#3379](https://github.com/rapidsai/cugraph/pull/3379)) [@jnke2016](https://github.com/jnke2016)
- MG C++ test updates ([#3371](https://github.com/rapidsai/cugraph/pull/3371)) [@seunghwak](https://github.com/seunghwak)
- update conv layers in cugraph-dgl for pylibcugraphops 23.04 ([#3360](https://github.com/rapidsai/cugraph/pull/3360)) [@tingyu66](https://github.com/tingyu66)
- Generate pyproject dependencies using dfg ([#3355](https://github.com/rapidsai/cugraph/pull/3355)) [@vyasr](https://github.com/vyasr)
- Fix `PropertyGraph.renumber_*_by_type` with only default types ([#3352](https://github.com/rapidsai/cugraph/pull/3352)) [@eriknw](https://github.com/eriknw)
- Stop setting package version attribute in wheels ([#3350](https://github.com/rapidsai/cugraph/pull/3350)) [@vyasr](https://github.com/vyasr)
- Create a subgraph as a PropertyGraph via `extract_subgraph` ([#3349](https://github.com/rapidsai/cugraph/pull/3349)) [@eriknw](https://github.com/eriknw)
- Updating cugraph to use consolidated libraft target ([#3348](https://github.com/rapidsai/cugraph/pull/3348)) [@cjnolet](https://github.com/cjnolet)
- Random vertex sampling utility function for C++ tests ([#3347](https://github.com/rapidsai/cugraph/pull/3347)) [@seunghwak](https://github.com/seunghwak)
- Add c api for several legacy algorithms ([#3346](https://github.com/rapidsai/cugraph/pull/3346)) [@ChuckHastings](https://github.com/ChuckHastings)
- elementwise_min|max reduction op ([#3341](https://github.com/rapidsai/cugraph/pull/3341)) [@seunghwak](https://github.com/seunghwak)
- New PyG End-to-End Examples ([#3326](https://github.com/rapidsai/cugraph/pull/3326)) [@alexbarghi-nv](https://github.com/alexbarghi-nv)
- Pass `AWS_SESSION_TOKEN` and `SCCACHE_S3_USE_SSL` vars to conda build ([#3324](https://github.com/rapidsai/cugraph/pull/3324)) [@ajschmidt8](https://github.com/ajschmidt8)
- Update extract_if_e to extract_transform_e ([#3323](https://github.com/rapidsai/cugraph/pull/3323)) [@seunghwak](https://github.com/seunghwak)
- Update aarch64 to GCC 11 ([#3319](https://github.com/rapidsai/cugraph/pull/3319)) [@bdice](https://github.com/bdice)
- Migrate as much as possible to pyproject.toml ([#3317](https://github.com/rapidsai/cugraph/pull/3317)) [@vyasr](https://github.com/vyasr)
- Add CI to cugraph_dgl ([#3312](https://github.com/rapidsai/cugraph/pull/3312)) [@VibhuJawa](https://github.com/VibhuJawa)
- Update to GCC 11 ([#3307](https://github.com/rapidsai/cugraph/pull/3307)) [@bdice](https://github.com/bdice)
- Update datasets download URL ([#3305](https://github.com/rapidsai/cugraph/pull/3305)) [@jjacobelli](https://github.com/jjacobelli)
- Adapt to rapidsai/rmm#1221 which moves allocator callbacks ([#3300](https://github.com/rapidsai/cugraph/pull/3300)) [@wence-](https://github.com/wence-)
- Update datasets download URL ([#3299](https://github.com/rapidsai/cugraph/pull/3299)) [@jjacobelli](https://github.com/jjacobelli)
- Stop using versioneer to manage versions ([#3298](https://github.com/rapidsai/cugraph/pull/3298)) [@vyasr](https://github.com/vyasr)
- Disable dataset downloads in ARM smoke tests. ([#3295](https://github.com/rapidsai/cugraph/pull/3295)) [@bdice](https://github.com/bdice)
- Add dfg as a pre-commit hook. ([#3294](https://github.com/rapidsai/cugraph/pull/3294)) [@vyasr](https://github.com/vyasr)
- Refactoring tests ([#3292](https://github.com/rapidsai/cugraph/pull/3292)) [@BradReesWork](https://github.com/BradReesWork)
- Remove dead-code from cugraph-dgl ([#3291](https://github.com/rapidsai/cugraph/pull/3291)) [@VibhuJawa](https://github.com/VibhuJawa)
- Update cuGraph-PyG Tests and Support Loading Saved Bulk Samples ([#3289](https://github.com/rapidsai/cugraph/pull/3289)) [@alexbarghi-nv](https://github.com/alexbarghi-nv)
- Update cugraph-pyg Recipe and CI Script ([#3288](https://github.com/rapidsai/cugraph/pull/3288)) [@alexbarghi-nv](https://github.com/alexbarghi-nv)
- Define and implement C API for RMAT generators ([#3285](https://github.com/rapidsai/cugraph/pull/3285)) [@ChuckHastings](https://github.com/ChuckHastings)
- Flexible mapping between graph 2D partitioning and GPU 2D partitioning. ([#3284](https://github.com/rapidsai/cugraph/pull/3284)) [@seunghwak](https://github.com/seunghwak)
- remove legacy WCC code ([#3283](https://github.com/rapidsai/cugraph/pull/3283)) [@ChuckHastings](https://github.com/ChuckHastings)
- API improvements for end-to-end MG sampling performance ([#3269](https://github.com/rapidsai/cugraph/pull/3269)) [@ChuckHastings](https://github.com/ChuckHastings)
- Cleanup obsolete visitor code ([#3268](https://github.com/rapidsai/cugraph/pull/3268)) [@ChuckHastings](https://github.com/ChuckHastings)
- Download datasets in ARM wheel tests. ([#3267](https://github.com/rapidsai/cugraph/pull/3267)) [@bdice](https://github.com/bdice)
- Remove cublas from the link dependencies ([#3265](https://github.com/rapidsai/cugraph/pull/3265)) [@ChuckHastings](https://github.com/ChuckHastings)
- Reduce error handling verbosity in CI tests scripts ([#3258](https://github.com/rapidsai/cugraph/pull/3258)) [@AjayThorve](https://github.com/AjayThorve)
- Bump pinned pip wheel deps to 23.4 ([#3253](https://github.com/rapidsai/cugraph/pull/3253)) [@sevagh](https://github.com/sevagh)
- Remove legacy sampling implementation, no longer used ([#3252](https://github.com/rapidsai/cugraph/pull/3252)) [@ChuckHastings](https://github.com/ChuckHastings)
- Update shared workflow branches ([#3251](https://github.com/rapidsai/cugraph/pull/3251)) [@ajschmidt8](https://github.com/ajschmidt8)
- Remove legacy mg bfs ([#3250](https://github.com/rapidsai/cugraph/pull/3250)) [@ChuckHastings](https://github.com/ChuckHastings)
- Remove legacy two_hop_neighbors function ([#3248](https://github.com/rapidsai/cugraph/pull/3248)) [@ChuckHastings](https://github.com/ChuckHastings)
- Remove legacy C++ code for k-core algorithms ([#3246](https://github.com/rapidsai/cugraph/pull/3246)) [@ChuckHastings](https://github.com/ChuckHastings)
- Unpin `dask` and `distributed` for development ([#3243](https://github.com/rapidsai/cugraph/pull/3243)) [@galipremsagar](https://github.com/galipremsagar)
- Remove gpuCI scripts. ([#3242](https://github.com/rapidsai/cugraph/pull/3242)) [@bdice](https://github.com/bdice)
- Resolve auto merger ([#3240](https://github.com/rapidsai/cugraph/pull/3240)) [@galipremsagar](https://github.com/galipremsagar)
- Uniform sampling code cleanup and minor performance tuning ([#3238](https://github.com/rapidsai/cugraph/pull/3238)) [@seunghwak](https://github.com/seunghwak)
- Minor code clean-up ([#3237](https://github.com/rapidsai/cugraph/pull/3237)) [@seunghwak](https://github.com/seunghwak)
- Move date to build string in `conda` recipe ([#3222](https://github.com/rapidsai/cugraph/pull/3222)) [@ajschmidt8](https://github.com/ajschmidt8)
- Multi-trainers cugraph-DGL examples ([#3212](https://github.com/rapidsai/cugraph/pull/3212)) [@VibhuJawa](https://github.com/VibhuJawa)
- Fix merge conflicts ([#3183](https://github.com/rapidsai/cugraph/pull/3183)) [@ajschmidt8](https://github.com/ajschmidt8)
- Performance tuning the sampling primitive for multi-node multi-GPU systems. ([#3169](https://github.com/rapidsai/cugraph/pull/3169)) [@seunghwak](https://github.com/seunghwak)
- Initial implementation of the Leiden C API ([#3165](https://github.com/rapidsai/cugraph/pull/3165)) [@ChuckHastings](https://github.com/ChuckHastings)
- Implement Vertex betweenness centrality ([#3160](https://github.com/rapidsai/cugraph/pull/3160)) [@ChuckHastings](https://github.com/ChuckHastings)
- Add docs build job ([#3157](https://github.com/rapidsai/cugraph/pull/3157)) [@AyodeAwe](https://github.com/AyodeAwe)
- Refactor betweenness centrality ([#2971](https://github.com/rapidsai/cugraph/pull/2971)) [@jnke2016](https://github.com/jnke2016)
- Remove legacy renumbering ([#2949](https://github.com/rapidsai/cugraph/pull/2949)) [@jnke2016](https://github.com/jnke2016)
- Graph sage example ([#2925](https://github.com/rapidsai/cugraph/pull/2925)) [@VibhuJawa](https://github.com/VibhuJawa)

# cuGraph 23.02.00 (9 Feb 2023)

## 🚨 Breaking Changes

- Pin `dask` and `distributed` for release ([#3232](https://github.com/rapidsai/cugraph/pull/3232)) [@galipremsagar](https://github.com/galipremsagar)
- Replace PropertyGraph in cugraph-PyG with FeatureStore ([#3159](https://github.com/rapidsai/cugraph/pull/3159)) [@alexbarghi-nv](https://github.com/alexbarghi-nv)
- Remove CGS from cuGraph-PyG ([#3155](https://github.com/rapidsai/cugraph/pull/3155)) [@alexbarghi-nv](https://github.com/alexbarghi-nv)
- Update cugraph_dgl to use the new FeatureStore ([#3143](https://github.com/rapidsai/cugraph/pull/3143)) [@VibhuJawa](https://github.com/VibhuJawa)
- Implement New Sampling API in Python ([#3082](https://github.com/rapidsai/cugraph/pull/3082)) [@alexbarghi-nv](https://github.com/alexbarghi-nv)
- Adds parameterized benchmarks for `uniform_neighbor_sampling`, updates `benchmarks` dir for future additions ([#3048](https://github.com/rapidsai/cugraph/pull/3048)) [@rlratzel](https://github.com/rlratzel)

## 🐛 Bug Fixes

- Import handle from core ([#3190](https://github.com/rapidsai/cugraph/pull/3190)) [@vyasr](https://github.com/vyasr)
- Pin gcc to 9.x. ([#3174](https://github.com/rapidsai/cugraph/pull/3174)) [@vyasr](https://github.com/vyasr)
- Fixes devices vector alloc to fix seg fault, removes unused RAFT code in PLC, re-enables full CI testing ([#3167](https://github.com/rapidsai/cugraph/pull/3167)) [@rlratzel](https://github.com/rlratzel)
- TEMPORARILY allows python and notebook tests that return exit code 139 to pass. ([#3132](https://github.com/rapidsai/cugraph/pull/3132)) [@rlratzel](https://github.com/rlratzel)
- Bug fix in the C++ CSV file reader (used in C++ testing only). ([#3055](https://github.com/rapidsai/cugraph/pull/3055)) [@seunghwak](https://github.com/seunghwak)

## 📖 Documentation

- Create a notebook comparing nx and cuGraph using synthetic data ([#3135](https://github.com/rapidsai/cugraph/pull/3135)) [@acostadon](https://github.com/acostadon)
- Add API's for dgl, pyg, cugraph service (server and client) to sphinx ([#3075](https://github.com/rapidsai/cugraph/pull/3075)) [@acostadon](https://github.com/acostadon)
- redo cuGraph main docs ([#3060](https://github.com/rapidsai/cugraph/pull/3060)) [@acostadon](https://github.com/acostadon)

## 🚀 New Features

- Bulk Loading Support for cuGraph-PyG ([#3170](https://github.com/rapidsai/cugraph/pull/3170)) [@alexbarghi-nv](https://github.com/alexbarghi-nv)
- Feature storage ([#3139](https://github.com/rapidsai/cugraph/pull/3139)) [@VibhuJawa](https://github.com/VibhuJawa)
- Add `RelGraphConv`, `GATConv` and `SAGEConv` models to `cugraph_dgl` ([#3131](https://github.com/rapidsai/cugraph/pull/3131)) [@tingyu66](https://github.com/tingyu66)
- Created notebook for running louvain algorithm on a Multi-GPU Property Graph ([#3130](https://github.com/rapidsai/cugraph/pull/3130)) [@acostadon](https://github.com/acostadon)
- cugraph_dgl benchmarks ([#3092](https://github.com/rapidsai/cugraph/pull/3092)) [@VibhuJawa](https://github.com/VibhuJawa)
- Add DGL benchmarks ([#3089](https://github.com/rapidsai/cugraph/pull/3089)) [@VibhuJawa](https://github.com/VibhuJawa)
- Add cugraph+UCX build instructions ([#3088](https://github.com/rapidsai/cugraph/pull/3088)) [@VibhuJawa](https://github.com/VibhuJawa)
- Implement New Sampling API in Python ([#3082](https://github.com/rapidsai/cugraph/pull/3082)) [@alexbarghi-nv](https://github.com/alexbarghi-nv)
- Update per_v_transform_reduce_incoming|outgoing_e to take a reduction operator. ([#2975](https://github.com/rapidsai/cugraph/pull/2975)) [@seunghwak](https://github.com/seunghwak)

## 🛠️ Improvements

- Pin `dask` and `distributed` for release ([#3232](https://github.com/rapidsai/cugraph/pull/3232)) [@galipremsagar](https://github.com/galipremsagar)
- Update shared workflow branches ([#3231](https://github.com/rapidsai/cugraph/pull/3231)) [@ajschmidt8](https://github.com/ajschmidt8)
- Updates dependency to latest DGL ([#3211](https://github.com/rapidsai/cugraph/pull/3211)) [@rlratzel](https://github.com/rlratzel)
- Make graph objects accessible across multiple clients ([#3192](https://github.com/rapidsai/cugraph/pull/3192)) [@VibhuJawa](https://github.com/VibhuJawa)
- Drop extraneous columns that were appearing in MGPropertyGraph ([#3191](https://github.com/rapidsai/cugraph/pull/3191)) [@eriknw](https://github.com/eriknw)
- Enable using cugraph uniform sampling in multi client environments ([#3184](https://github.com/rapidsai/cugraph/pull/3184)) [@VibhuJawa](https://github.com/VibhuJawa)
- DGL Dataloader ([#3181](https://github.com/rapidsai/cugraph/pull/3181)) [@VibhuJawa](https://github.com/VibhuJawa)
- Update cuhornet to fix `using namespace rmm;`. ([#3171](https://github.com/rapidsai/cugraph/pull/3171)) [@bdice](https://github.com/bdice)
- add type annotations to `cugraph_dgl` nn modules ([#3166](https://github.com/rapidsai/cugraph/pull/3166)) [@tingyu66](https://github.com/tingyu66)
- Replace Raft header ([#3162](https://github.com/rapidsai/cugraph/pull/3162)) [@lowener](https://github.com/lowener)
- Update to support NetworkX 3.0 (and handle other deprecations) ([#3161](https://github.com/rapidsai/cugraph/pull/3161)) [@eriknw](https://github.com/eriknw)
- Replace PropertyGraph in cugraph-PyG with FeatureStore ([#3159](https://github.com/rapidsai/cugraph/pull/3159)) [@alexbarghi-nv](https://github.com/alexbarghi-nv)
- Adding density algorithm and test ([#3156](https://github.com/rapidsai/cugraph/pull/3156)) [@BradReesWork](https://github.com/BradReesWork)
- Remove CGS from cuGraph-PyG ([#3155](https://github.com/rapidsai/cugraph/pull/3155)) [@alexbarghi-nv](https://github.com/alexbarghi-nv)
- Update cugraph_dgl to use the new FeatureStore ([#3143](https://github.com/rapidsai/cugraph/pull/3143)) [@VibhuJawa](https://github.com/VibhuJawa)
- Fix documentation author ([#3128](https://github.com/rapidsai/cugraph/pull/3128)) [@bdice](https://github.com/bdice)
- build.sh switch to use `RAPIDS` magic value ([#3127](https://github.com/rapidsai/cugraph/pull/3127)) [@robertmaynard](https://github.com/robertmaynard)
- Drop DiGraph ([#3126](https://github.com/rapidsai/cugraph/pull/3126)) [@BradReesWork](https://github.com/BradReesWork)
- MGPropertyGraph: fix OOM when renumbering by type ([#3123](https://github.com/rapidsai/cugraph/pull/3123)) [@eriknw](https://github.com/eriknw)
- Build CUDA 11.8 and Python 3.10 Packages ([#3120](https://github.com/rapidsai/cugraph/pull/3120)) [@bdice](https://github.com/bdice)
- Updates README for cugraph-service to provide an up-to-date quickstart ([#3119](https://github.com/rapidsai/cugraph/pull/3119)) [@rlratzel](https://github.com/rlratzel)
- Speed Improvements for cuGraph-PyG (Short Circuit, Use Type Indices) ([#3101](https://github.com/rapidsai/cugraph/pull/3101)) [@alexbarghi-nv](https://github.com/alexbarghi-nv)
- Update workflows for nightly tests ([#3098](https://github.com/rapidsai/cugraph/pull/3098)) [@ajschmidt8](https://github.com/ajschmidt8)
- GH Actions Notebook Testing Fixes ([#3097](https://github.com/rapidsai/cugraph/pull/3097)) [@ajschmidt8](https://github.com/ajschmidt8)
- Build pip wheels alongside conda CI ([#3096](https://github.com/rapidsai/cugraph/pull/3096)) [@sevagh](https://github.com/sevagh)
- Add notebooks testing to GH Actions PR Workflow ([#3095](https://github.com/rapidsai/cugraph/pull/3095)) [@ajschmidt8](https://github.com/ajschmidt8)
- Fix C++ Bugs in Graph Creation with Edge Properties ([#3093](https://github.com/rapidsai/cugraph/pull/3093)) [@alexbarghi-nv](https://github.com/alexbarghi-nv)
- Update `cugraph` recipes ([#3091](https://github.com/rapidsai/cugraph/pull/3091)) [@ajschmidt8](https://github.com/ajschmidt8)
- Fix tests for MG property graph ([#3090](https://github.com/rapidsai/cugraph/pull/3090)) [@eriknw](https://github.com/eriknw)
- Adds initial cugraph-service client scaling benchmark, refactorings, performance config updates ([#3087](https://github.com/rapidsai/cugraph/pull/3087)) [@rlratzel](https://github.com/rlratzel)
- Optimize pg.get_x_data APIs ([#3086](https://github.com/rapidsai/cugraph/pull/3086)) [@VibhuJawa](https://github.com/VibhuJawa)
- Add GitHub Actions Workflows ([#3076](https://github.com/rapidsai/cugraph/pull/3076)) [@bdice](https://github.com/bdice)
- Updates conda versioning to install correct dependencies, changes CI script to better track deps from individual build installs ([#3066](https://github.com/rapidsai/cugraph/pull/3066)) [@seunghwak](https://github.com/seunghwak)
- Use pre-commit for CI style checks. ([#3062](https://github.com/rapidsai/cugraph/pull/3062)) [@bdice](https://github.com/bdice)
- Sampling primitive performance optimization. ([#3061](https://github.com/rapidsai/cugraph/pull/3061)) [@seunghwak](https://github.com/seunghwak)
- Replace clock_gettime with std::chrono::steady_clock ([#3049](https://github.com/rapidsai/cugraph/pull/3049)) [@seunghwak](https://github.com/seunghwak)
- Adds parameterized benchmarks for `uniform_neighbor_sampling`, updates `benchmarks` dir for future additions ([#3048](https://github.com/rapidsai/cugraph/pull/3048)) [@rlratzel](https://github.com/rlratzel)
- Add dependencies.yaml for rapids-dependency-file-generator ([#3042](https://github.com/rapidsai/cugraph/pull/3042)) [@ChuckHastings](https://github.com/ChuckHastings)
- Unpin `dask` and `distributed` for development ([#3036](https://github.com/rapidsai/cugraph/pull/3036)) [@galipremsagar](https://github.com/galipremsagar)
- Forward merge 22.12 into 23.02 ([#3033](https://github.com/rapidsai/cugraph/pull/3033)) [@vyasr](https://github.com/vyasr)
- Optimize pg.add_data for vector properties ([#3022](https://github.com/rapidsai/cugraph/pull/3022)) [@VibhuJawa](https://github.com/VibhuJawa)
- Adds better reporting of server subprocess errors during testing ([#3012](https://github.com/rapidsai/cugraph/pull/3012)) [@rlratzel](https://github.com/rlratzel)
- Update cugraph_dgl to use vector_properties ([#3000](https://github.com/rapidsai/cugraph/pull/3000)) [@VibhuJawa](https://github.com/VibhuJawa)
- Fix MG C++ Jaccard/Overlap/Sorensen coefficients tests. ([#2999](https://github.com/rapidsai/cugraph/pull/2999)) [@seunghwak](https://github.com/seunghwak)
- Update Uniform Neighborhood Sampling API ([#2997](https://github.com/rapidsai/cugraph/pull/2997)) [@ChuckHastings](https://github.com/ChuckHastings)
- Use Vertex ID Offsets in CuGraphStorage ([#2996](https://github.com/rapidsai/cugraph/pull/2996)) [@alexbarghi-nv](https://github.com/alexbarghi-nv)
- Replace deprecated raft headers ([#2978](https://github.com/rapidsai/cugraph/pull/2978)) [@lowener](https://github.com/lowener)

# cuGraph 22.12.00 (8 Dec 2022)

## 🚨 Breaking Changes

- remove all algorithms from cython.cu ([#2955](https://github.com/rapidsai/cugraph/pull/2955)) [@ChuckHastings](https://github.com/ChuckHastings)
- PyG Monorepo Refactor ([#2905](https://github.com/rapidsai/cugraph/pull/2905)) [@alexbarghi-nv](https://github.com/alexbarghi-nv)
- Fix PyG Loaders by properly supporting `multi_get_tensor` ([#2860](https://github.com/rapidsai/cugraph/pull/2860)) [@alexbarghi-nv](https://github.com/alexbarghi-nv)
- Adds arbitrary server extension support to cugraph-service ([#2850](https://github.com/rapidsai/cugraph/pull/2850)) [@rlratzel](https://github.com/rlratzel)
- Separate edge weights from graph objects and update primitives to support general edge properties. ([#2843](https://github.com/rapidsai/cugraph/pull/2843)) [@seunghwak](https://github.com/seunghwak)
- Move weight-related graph_t and graph_view_t member functions to standalone functions ([#2841](https://github.com/rapidsai/cugraph/pull/2841)) [@seunghwak](https://github.com/seunghwak)
- Avoid directly calling graph constructor (as code cleanup before edge property support in primitives) ([#2834](https://github.com/rapidsai/cugraph/pull/2834)) [@seunghwak](https://github.com/seunghwak)
- Split Sampler from Graph Store to Support New PyG Sampling API ([#2803](https://github.com/rapidsai/cugraph/pull/2803)) [@alexbarghi-nv](https://github.com/alexbarghi-nv)
- Code cleanup (remove dead code and move legacy files to the legacy directory) ([#2798](https://github.com/rapidsai/cugraph/pull/2798)) [@seunghwak](https://github.com/seunghwak)
- remove graph broadcast and serialization object, not used ([#2783](https://github.com/rapidsai/cugraph/pull/2783)) [@ChuckHastings](https://github.com/ChuckHastings)
- Multi-GPU induced subgraph tests code ([#2602](https://github.com/rapidsai/cugraph/pull/2602)) [@yang-hu-nv](https://github.com/yang-hu-nv)

## 🐛 Bug Fixes

- Always build without isolation ([#3052](https://github.com/rapidsai/cugraph/pull/3052)) [@vyasr](https://github.com/vyasr)
- Makes `cugraph-pyg` an optional dependency for `cugraph-service` tests ([#3051](https://github.com/rapidsai/cugraph/pull/3051)) [@rlratzel](https://github.com/rlratzel)
- Fix cugraph_c target name in Python builds ([#3045](https://github.com/rapidsai/cugraph/pull/3045)) [@vyasr](https://github.com/vyasr)
- Initialize CUDA architectures for all Python cugraph builds ([#3041](https://github.com/rapidsai/cugraph/pull/3041)) [@vyasr](https://github.com/vyasr)
- Update the python API to create a PLC graph from a CSR ([#3027](https://github.com/rapidsai/cugraph/pull/3027)) [@jnke2016](https://github.com/jnke2016)
- Updates experimental warning wrapper and PropertyGraph docs for correct experimental namespace name ([#3007](https://github.com/rapidsai/cugraph/pull/3007)) [@rlratzel](https://github.com/rlratzel)
- Fix cluster startup script ([#2977](https://github.com/rapidsai/cugraph/pull/2977)) [@VibhuJawa](https://github.com/VibhuJawa)
- Don't use CMake 3.25.0 as it has a FindCUDAToolkit show stopping bug ([#2957](https://github.com/rapidsai/cugraph/pull/2957)) [@robertmaynard](https://github.com/robertmaynard)
- Fix build script to install dask main ([#2943](https://github.com/rapidsai/cugraph/pull/2943)) [@galipremsagar](https://github.com/galipremsagar)
- Fixes options added to build.sh for building without cugraph-ops that were dropped in a merge mistake. ([#2935](https://github.com/rapidsai/cugraph/pull/2935)) [@rlratzel](https://github.com/rlratzel)
- Update dgl dependency to dglcuda=11.6 ([#2929](https://github.com/rapidsai/cugraph/pull/2929)) [@VibhuJawa](https://github.com/VibhuJawa)
- Adds option to build.sh to build without cugraphops, updates docs ([#2904](https://github.com/rapidsai/cugraph/pull/2904)) [@rlratzel](https://github.com/rlratzel)
- Fix bug in how is_symmetric is set when transposing storage ([#2898](https://github.com/rapidsai/cugraph/pull/2898)) [@ChuckHastings](https://github.com/ChuckHastings)
- Correct build failures when doing a local build ([#2895](https://github.com/rapidsai/cugraph/pull/2895)) [@robertmaynard](https://github.com/robertmaynard)
- Update `cuda-python` dependency to 11.7.1 ([#2865](https://github.com/rapidsai/cugraph/pull/2865)) [@galipremsagar](https://github.com/galipremsagar)
- Add package to the list of dependencies ([#2858](https://github.com/rapidsai/cugraph/pull/2858)) [@jnke2016](https://github.com/jnke2016)
- Add parameter checks to BFS and SSSP in C API ([#2844](https://github.com/rapidsai/cugraph/pull/2844)) [@ChuckHastings](https://github.com/ChuckHastings)
- Fix uniform neighborhood sampling memory leak ([#2835](https://github.com/rapidsai/cugraph/pull/2835)) [@ChuckHastings](https://github.com/ChuckHastings)
- Fix out of index errors encountered with sampling on out of index samples ([#2825](https://github.com/rapidsai/cugraph/pull/2825)) [@VibhuJawa](https://github.com/VibhuJawa)
- Fix MG tests bugs ([#2819](https://github.com/rapidsai/cugraph/pull/2819)) [@jnke2016](https://github.com/jnke2016)
- Fix MNMG failures in mg_dgl_extensions ([#2786](https://github.com/rapidsai/cugraph/pull/2786)) [@VibhuJawa](https://github.com/VibhuJawa)
- Bug fix when -1 is used as a valid external vertex ID ([#2776](https://github.com/rapidsai/cugraph/pull/2776)) [@seunghwak](https://github.com/seunghwak)

## 📖 Documentation

- Update dgl-cuda conda installation instructions ([#2972](https://github.com/rapidsai/cugraph/pull/2972)) [@VibhuJawa](https://github.com/VibhuJawa)
- cuGraph Readme pages and Documentation API structure refactoring ([#2894](https://github.com/rapidsai/cugraph/pull/2894)) [@acostadon](https://github.com/acostadon)
- Create a page on why we do not support cascading ([#2842](https://github.com/rapidsai/cugraph/pull/2842)) [@BradReesWork](https://github.com/BradReesWork)
- Add ProperyGraph to doc generation and update docstrings ([#2826](https://github.com/rapidsai/cugraph/pull/2826)) [@acostadon](https://github.com/acostadon)
- Updated Release Notebook for changes in latest cuGraph release ([#2800](https://github.com/rapidsai/cugraph/pull/2800)) [@acostadon](https://github.com/acostadon)

## 🚀 New Features

- Add wheel builds ([#2964](https://github.com/rapidsai/cugraph/pull/2964)) [@vyasr](https://github.com/vyasr)
- Reenable copy_prs ([#2959](https://github.com/rapidsai/cugraph/pull/2959)) [@vyasr](https://github.com/vyasr)
- Provide option to keep original vertex/edge IDs when renumbering ([#2951](https://github.com/rapidsai/cugraph/pull/2951)) [@eriknw](https://github.com/eriknw)
- Support cuGraph-Service in cuGraph-PyG ([#2946](https://github.com/rapidsai/cugraph/pull/2946)) [@alexbarghi-nv](https://github.com/alexbarghi-nv)
- Add conda yml for `cugraph+torch+DGL` dev ([#2919](https://github.com/rapidsai/cugraph/pull/2919)) [@VibhuJawa](https://github.com/VibhuJawa)
- Bring up cugraph_dgl_repo ([#2896](https://github.com/rapidsai/cugraph/pull/2896)) [@VibhuJawa](https://github.com/VibhuJawa)
- Adds setup.py files and conda recipes for cugraph-service ([#2862](https://github.com/rapidsai/cugraph/pull/2862)) [@BradReesWork](https://github.com/BradReesWork)
- Add remote storage support ([#2859](https://github.com/rapidsai/cugraph/pull/2859)) [@VibhuJawa](https://github.com/VibhuJawa)
- Separate edge weights from graph objects and update primitives to support general edge properties. ([#2843](https://github.com/rapidsai/cugraph/pull/2843)) [@seunghwak](https://github.com/seunghwak)
- GitHub Action adding issues/prs to project board ([#2837](https://github.com/rapidsai/cugraph/pull/2837)) [@jarmak-nv](https://github.com/jarmak-nv)
- Replacing markdown issue templates with yml forms ([#2836](https://github.com/rapidsai/cugraph/pull/2836)) [@jarmak-nv](https://github.com/jarmak-nv)
- Cugraph-Service Remote Graphs and Algorithm Dispatch ([#2832](https://github.com/rapidsai/cugraph/pull/2832)) [@alexbarghi-nv](https://github.com/alexbarghi-nv)
- Remote Graph Wrappers for cuGraph-Service ([#2821](https://github.com/rapidsai/cugraph/pull/2821)) [@alexbarghi-nv](https://github.com/alexbarghi-nv)
- Update transform_reduce_e_by_src|dst_key to take a custom reduction op ([#2813](https://github.com/rapidsai/cugraph/pull/2813)) [@seunghwak](https://github.com/seunghwak)
- C++ minimal CSV reader ([#2791](https://github.com/rapidsai/cugraph/pull/2791)) [@seunghwak](https://github.com/seunghwak)
- K-hop neighbors ([#2782](https://github.com/rapidsai/cugraph/pull/2782)) [@seunghwak](https://github.com/seunghwak)
## 🛠️ Improvements
- Update dask-cuda version and disable wheel builds in CI ([#3009](https://github.com/rapidsai/cugraph/pull/3009)) [@vyasr](https://github.com/vyasr)
- Branch 22.12 merge 22.10 ([#3008](https://github.com/rapidsai/cugraph/pull/3008)) [@rlratzel](https://github.com/rlratzel)
- Shuffle the vertex pair ([#3002](https://github.com/rapidsai/cugraph/pull/3002)) [@jnke2016](https://github.com/jnke2016)
- remove all algorithms from cython.cu ([#2955](https://github.com/rapidsai/cugraph/pull/2955)) [@ChuckHastings](https://github.com/ChuckHastings)
- Update gitignore to Exclude Egg Files ([#2948](https://github.com/rapidsai/cugraph/pull/2948)) [@alexbarghi-nv](https://github.com/alexbarghi-nv)
- Pin `dask` and `distributed` for release ([#2940](https://github.com/rapidsai/cugraph/pull/2940)) [@galipremsagar](https://github.com/galipremsagar)
- Make dgl, pytorch optional imports for cugraph_dgl package ([#2936](https://github.com/rapidsai/cugraph/pull/2936)) [@VibhuJawa](https://github.com/VibhuJawa)
- Implement k core ([#2933](https://github.com/rapidsai/cugraph/pull/2933)) [@ChuckHastings](https://github.com/ChuckHastings)
- CuGraph-Service Asyncio Fix ([#2932](https://github.com/rapidsai/cugraph/pull/2932)) [@alexbarghi-nv](https://github.com/alexbarghi-nv)
- Debug MG egonet issues ([#2926](https://github.com/rapidsai/cugraph/pull/2926)) [@ChuckHastings](https://github.com/ChuckHastings)
- Optimize `PG.add_data` ([#2924](https://github.com/rapidsai/cugraph/pull/2924)) [@VibhuJawa](https://github.com/VibhuJawa)
- Implement C API Similarity ([#2923](https://github.com/rapidsai/cugraph/pull/2923)) [@ChuckHastings](https://github.com/ChuckHastings)
- Adds `cugraph-dgl` conda package, updates CI scripts to build and upload it ([#2921](https://github.com/rapidsai/cugraph/pull/2921)) [@rlratzel](https://github.com/rlratzel)
- key, value store abstraction ([#2920](https://github.com/rapidsai/cugraph/pull/2920)) [@seunghwak](https://github.com/seunghwak)
- Implement two_hop_neighbors C API ([#2915](https://github.com/rapidsai/cugraph/pull/2915)) [@ChuckHastings](https://github.com/ChuckHastings)
- PyG Monorepo Refactor ([#2905](https://github.com/rapidsai/cugraph/pull/2905)) [@alexbarghi-nv](https://github.com/alexbarghi-nv)
- Update cugraph to support building for Ada and Hopper ([#2889](https://github.com/rapidsai/cugraph/pull/2889)) [@robertmaynard](https://github.com/robertmaynard)
- Optimize dask.uniform_neighbor_sample ([#2887](https://github.com/rapidsai/cugraph/pull/2887)) [@VibhuJawa](https://github.com/VibhuJawa)
- Add vector properties ([#2882](https://github.com/rapidsai/cugraph/pull/2882)) [@eriknw](https://github.com/eriknw)
- Add view_concat for edge_minor_property_view_t and update transform_reduce_e_by_dst_key to support reduce_op on tuple types ([#2879](https://github.com/rapidsai/cugraph/pull/2879)) [@naimnv](https://github.com/naimnv)
- Update egonet implementation ([#2874](https://github.com/rapidsai/cugraph/pull/2874)) [@jnke2016](https://github.com/jnke2016)
- Use new rapids-cmake functionality for rpath handling. ([#2868](https://github.com/rapidsai/cugraph/pull/2868)) [@vyasr](https://github.com/vyasr)
- Update python WCC to leverage the CAPI ([#2866](https://github.com/rapidsai/cugraph/pull/2866)) [@jnke2016](https://github.com/jnke2016)
- Define and implement C/C++ for MNMG Egonet ([#2864](https://github.com/rapidsai/cugraph/pull/2864)) [@ChuckHastings](https://github.com/ChuckHastings)
- Update uniform random walks implementation ([#2861](https://github.com/rapidsai/cugraph/pull/2861)) [@jnke2016](https://github.com/jnke2016)
- Fix PyG Loaders by properly supporting `multi_get_tensor` ([#2860](https://github.com/rapidsai/cugraph/pull/2860)) [@alexbarghi-nv](https://github.com/alexbarghi-nv)
- CAPI create graph from CSR ([#2856](https://github.com/rapidsai/cugraph/pull/2856)) [@ChuckHastings](https://github.com/ChuckHastings)
- Remove pg dependency from cugraph store.py ([#2855](https://github.com/rapidsai/cugraph/pull/2855)) [@VibhuJawa](https://github.com/VibhuJawa)
- Define C API and implement induced subgraph ([#2854](https://github.com/rapidsai/cugraph/pull/2854)) [@ChuckHastings](https://github.com/ChuckHastings)
- Adds arbitrary server extension support to cugraph-service ([#2850](https://github.com/rapidsai/cugraph/pull/2850)) [@rlratzel](https://github.com/rlratzel)
- Remove stale labeler ([#2849](https://github.com/rapidsai/cugraph/pull/2849)) [@raydouglass](https://github.com/raydouglass)
- Ensure correct data type ([#2847](https://github.com/rapidsai/cugraph/pull/2847)) [@jnke2016](https://github.com/jnke2016)
- Move weight-related graph_t and graph_view_t member functions to standalone functions ([#2841](https://github.com/rapidsai/cugraph/pull/2841)) [@seunghwak](https://github.com/seunghwak)
- Move 'graph_store.py' under dgl_extensions ([#2839](https://github.com/rapidsai/cugraph/pull/2839)) [@VibhuJawa](https://github.com/VibhuJawa)
- Avoid directly calling graph constructor (as code cleanup before edge property support in primitives) ([#2834](https://github.com/rapidsai/cugraph/pull/2834)) [@seunghwak](https://github.com/seunghwak)
- removed docs from cugraph build defaults and updated docs clean ([#2831](https://github.com/rapidsai/cugraph/pull/2831)) [@acostadon](https://github.com/acostadon)
- Define API for Betweenness Centrality ([#2823](https://github.com/rapidsai/cugraph/pull/2823)) [@ChuckHastings](https://github.com/ChuckHastings)
- Adds `.git-blame-ignore-revs` for recent .py files reformatting by `black` ([#2809](https://github.com/rapidsai/cugraph/pull/2809)) [@rlratzel](https://github.com/rlratzel)
- Delete dead code in cython.cu ([#2807](https://github.com/rapidsai/cugraph/pull/2807)) [@seunghwak](https://github.com/seunghwak)
- Persist more in MGPropertyGraph ([#2805](https://github.com/rapidsai/cugraph/pull/2805)) [@eriknw](https://github.com/eriknw)
- Fix concat with different index dtypes in SG PropertyGraph ([#2804](https://github.com/rapidsai/cugraph/pull/2804)) [@eriknw](https://github.com/eriknw)
- Split Sampler from Graph Store to Support New PyG Sampling API ([#2803](https://github.com/rapidsai/cugraph/pull/2803)) [@alexbarghi-nv](https://github.com/alexbarghi-nv)
- added a passthrough for storing transposed ([#2799](https://github.com/rapidsai/cugraph/pull/2799)) [@BradReesWork](https://github.com/BradReesWork)
- Code cleanup (remove dead code and move legacy files to the legacy directory) ([#2798](https://github.com/rapidsai/cugraph/pull/2798)) [@seunghwak](https://github.com/seunghwak)
- PG: join new vertex data by vertex ids ([#2796](https://github.com/rapidsai/cugraph/pull/2796)) [@eriknw](https://github.com/eriknw)
- Allow passing a dict in feat_name for add_edge_data and add_node_data ([#2795](https://github.com/rapidsai/cugraph/pull/2795)) [@VibhuJawa](https://github.com/VibhuJawa)
- remove graph broadcast and serialization object, not used ([#2783](https://github.com/rapidsai/cugraph/pull/2783)) [@ChuckHastings](https://github.com/ChuckHastings)
- Format Python code with black ([#2778](https://github.com/rapidsai/cugraph/pull/2778)) [@eriknw](https://github.com/eriknw)
- remove unused mechanism for calling Louvain ([#2777](https://github.com/rapidsai/cugraph/pull/2777)) [@ChuckHastings](https://github.com/ChuckHastings)
- Unpin `dask` and `distributed` for development ([#2772](https://github.com/rapidsai/cugraph/pull/2772)) [@galipremsagar](https://github.com/galipremsagar)
- Fix auto-merger ([#2771](https://github.com/rapidsai/cugraph/pull/2771)) [@galipremsagar](https://github.com/galipremsagar)
- Fix library version in yml files ([#2764](https://github.com/rapidsai/cugraph/pull/2764)) [@galipremsagar](https://github.com/galipremsagar)
- Refactor k-core ([#2731](https://github.com/rapidsai/cugraph/pull/2731)) [@jnke2016](https://github.com/jnke2016)
- Adds API option to `uniform_neighbor_sample()` and UCX-Py infrastructure to allow for a client-side device to directly receive results ([#2715](https://github.com/rapidsai/cugraph/pull/2715)) [@rlratzel](https://github.com/rlratzel)
- Add or Update Similarity algorithms ([#2704](https://github.com/rapidsai/cugraph/pull/2704)) [@jnke2016](https://github.com/jnke2016)
- Define a C API for data masking ([#2630](https://github.com/rapidsai/cugraph/pull/2630)) [@ChuckHastings](https://github.com/ChuckHastings)
- Multi-GPU induced subgraph tests code ([#2602](https://github.com/rapidsai/cugraph/pull/2602)) [@yang-hu-nv](https://github.com/yang-hu-nv)
# cuGraph 22.10.00 (12 Oct 2022)
## 🚨 Breaking Changes
- Add `is_multigraph` to PG and change `has_duplicate_edges` to use types ([#2708](https://github.com/rapidsai/cugraph/pull/2708)) [@eriknw](https://github.com/eriknw)
- Enable PLC algos to leverage the PLC graph ([#2682](https://github.com/rapidsai/cugraph/pull/2682)) [@jnke2016](https://github.com/jnke2016)
- Reduce cuGraph Sampling Overhead for PyG ([#2653](https://github.com/rapidsai/cugraph/pull/2653)) [@alexbarghi-nv](https://github.com/alexbarghi-nv)
- Code cleanup ([#2617](https://github.com/rapidsai/cugraph/pull/2617)) [@seunghwak](https://github.com/seunghwak)
- Update vertex_frontier_t to take unsorted (tagged-)vertex list with possible duplicates ([#2584](https://github.com/rapidsai/cugraph/pull/2584)) [@seunghwak](https://github.com/seunghwak)
- CuGraph+PyG Wrappers and Loaders ([#2567](https://github.com/rapidsai/cugraph/pull/2567)) [@alexbarghi-nv](https://github.com/alexbarghi-nv)
- Rename multiple .cuh (.cu) files to .hpp (.cpp) ([#2501](https://github.com/rapidsai/cugraph/pull/2501)) [@seunghwak](https://github.com/seunghwak)
## 🐛 Bug Fixes
- Properly Distribute Start Vertices for MG Uniform Neighbor Sample ([#2765](https://github.com/rapidsai/cugraph/pull/2765)) [@alexbarghi-nv](https://github.com/alexbarghi-nv)
- Removes unneeded test dependency on cugraph from pylibcugraph tests ([#2738](https://github.com/rapidsai/cugraph/pull/2738)) [@rlratzel](https://github.com/rlratzel)
- Add modularity to return result for louvain ([#2706](https://github.com/rapidsai/cugraph/pull/2706)) [@ChuckHastings](https://github.com/ChuckHastings)
- Fixes bug in `NumberMap` preventing use of string vertex IDs for MG graphs ([#2688](https://github.com/rapidsai/cugraph/pull/2688)) [@rlratzel](https://github.com/rlratzel)
- Release all inactive futures ([#2659](https://github.com/rapidsai/cugraph/pull/2659)) [@jnke2016](https://github.com/jnke2016)
- Fix MG PLC algos intermittent hang ([#2607](https://github.com/rapidsai/cugraph/pull/2607)) [@jnke2016](https://github.com/jnke2016)
- Fix MG Louvain C API test ([#2588](https://github.com/rapidsai/cugraph/pull/2588)) [@ChuckHastings](https://github.com/ChuckHastings)
## 📖 Documentation
- Adding new classes to api docs ([#2754](https://github.com/rapidsai/cugraph/pull/2754)) [@acostadon](https://github.com/acostadon)
- Removed reference to hard limit of 2 billion vertices for dask cugraph ([#2680](https://github.com/rapidsai/cugraph/pull/2680)) [@acostadon](https://github.com/acostadon)
- updated list of conferences ([#2672](https://github.com/rapidsai/cugraph/pull/2672)) [@BradReesWork](https://github.com/BradReesWork)
- Refactor Sampling, Structure and Traversal Notebooks ([#2628](https://github.com/rapidsai/cugraph/pull/2628)) [@acostadon](https://github.com/acostadon)
## 🚀 New Features
- Implement a vertex pair intersection primitive ([#2728](https://github.com/rapidsai/cugraph/pull/2728)) [@seunghwak](https://github.com/seunghwak)
- Implement a random selection primitive ([#2703](https://github.com/rapidsai/cugraph/pull/2703)) [@seunghwak](https://github.com/seunghwak)
- adds mechanism to skip notebook directories for different run types ([#2693](https://github.com/rapidsai/cugraph/pull/2693)) [@acostadon](https://github.com/acostadon)
- Create graph with edge property values ([#2660](https://github.com/rapidsai/cugraph/pull/2660)) [@seunghwak](https://github.com/seunghwak)
- Reduce cuGraph Sampling Overhead for PyG ([#2653](https://github.com/rapidsai/cugraph/pull/2653)) [@alexbarghi-nv](https://github.com/alexbarghi-nv)
- Primitive to support gathering one hop neighbors ([#2623](https://github.com/rapidsai/cugraph/pull/2623)) [@seunghwak](https://github.com/seunghwak)
- Define a selection primitive API ([#2586](https://github.com/rapidsai/cugraph/pull/2586)) [@seunghwak](https://github.com/seunghwak)
- Leiden C++ API ([#2569](https://github.com/rapidsai/cugraph/pull/2569)) [@naimnv](https://github.com/naimnv)
- CuGraph+PyG Wrappers and Loaders ([#2567](https://github.com/rapidsai/cugraph/pull/2567)) [@alexbarghi-nv](https://github.com/alexbarghi-nv)
- create a graph with additional edge properties ([#2521](https://github.com/rapidsai/cugraph/pull/2521)) [@seunghwak](https://github.com/seunghwak)
## 🛠️ Improvements
- Add missing entries in `update-version.sh` ([#2763](https://github.com/rapidsai/cugraph/pull/2763)) [@galipremsagar](https://github.com/galipremsagar)
- Pin `dask` and `distributed` for release ([#2758](https://github.com/rapidsai/cugraph/pull/2758)) [@galipremsagar](https://github.com/galipremsagar)
- Allow users to provide their own edge IDS to PropertyGraph ([#2757](https://github.com/rapidsai/cugraph/pull/2757)) [@eriknw](https://github.com/eriknw)
- Raise a warning for certain algorithms ([#2756](https://github.com/rapidsai/cugraph/pull/2756)) [@jnke2016](https://github.com/jnke2016)
- Fix cuGraph compile-time warnings. ([#2755](https://github.com/rapidsai/cugraph/pull/2755)) [@seunghwak](https://github.com/seunghwak)
- Use new sampling primitives ([#2751](https://github.com/rapidsai/cugraph/pull/2751)) [@ChuckHastings](https://github.com/ChuckHastings)
- C++ implementation for unweighted Jaccard/Sorensen/Overlap ([#2750](https://github.com/rapidsai/cugraph/pull/2750)) [@ChuckHastings](https://github.com/ChuckHastings)
- suppress expansion of unused raft spectral templates ([#2739](https://github.com/rapidsai/cugraph/pull/2739)) [@cjnolet](https://github.com/cjnolet)
- Update unit tests to leverage the datasets API ([#2733](https://github.com/rapidsai/cugraph/pull/2733)) [@jnke2016](https://github.com/jnke2016)
- Update raft import ([#2729](https://github.com/rapidsai/cugraph/pull/2729)) [@jnke2016](https://github.com/jnke2016)
- Document that minimum required CMake version is now 3.23.1 ([#2725](https://github.com/rapidsai/cugraph/pull/2725)) [@robertmaynard](https://github.com/robertmaynard)
- fix Comms import ([#2717](https://github.com/rapidsai/cugraph/pull/2717)) [@BradReesWork](https://github.com/BradReesWork)
- added tests for triangle count on unweighted graphs and graphs with int64 vertex types ([#2716](https://github.com/rapidsai/cugraph/pull/2716)) [@acostadon](https://github.com/acostadon)
- Define k-core API and tests ([#2712](https://github.com/rapidsai/cugraph/pull/2712)) [@ChuckHastings](https://github.com/ChuckHastings)
- Add `is_multigraph` to PG and change `has_duplicate_edges` to use types ([#2708](https://github.com/rapidsai/cugraph/pull/2708)) [@eriknw](https://github.com/eriknw)
- Refactor louvain ([#2705](https://github.com/rapidsai/cugraph/pull/2705)) [@jnke2016](https://github.com/jnke2016)
- new notebook for loading mag240m ([#2701](https://github.com/rapidsai/cugraph/pull/2701)) [@BradReesWork](https://github.com/BradReesWork)
- PG allow get_vertex_data to accept single type or id ([#2698](https://github.com/rapidsai/cugraph/pull/2698)) [@eriknw](https://github.com/eriknw)
- Renumber PG to be contiguous per type ([#2697](https://github.com/rapidsai/cugraph/pull/2697)) [@eriknw](https://github.com/eriknw)
- Added `SamplingResult` cdef class to return cupy "views" for PLC sampling algos instead of copying result data ([#2684](https://github.com/rapidsai/cugraph/pull/2684)) [@rlratzel](https://github.com/rlratzel)
- Enable PLC algos to leverage the PLC graph ([#2682](https://github.com/rapidsai/cugraph/pull/2682)) [@jnke2016](https://github.com/jnke2016)
- `graph_mask_t` and separating raft includes for `host_span` and `device_span` ([#2679](https://github.com/rapidsai/cugraph/pull/2679)) [@cjnolet](https://github.com/cjnolet)
- Promote triangle count from experimental ([#2671](https://github.com/rapidsai/cugraph/pull/2671)) [@jnke2016](https://github.com/jnke2016)
- Small fix to the MG PyG Test to Account for Current Sampling Behavior ([#2666](https://github.com/rapidsai/cugraph/pull/2666)) [@alexbarghi-nv](https://github.com/alexbarghi-nv)
- Move GaaS sources, tests, docs, scripts from the rapidsai/GaaS repo to the cugraph repo ([#2661](https://github.com/rapidsai/cugraph/pull/2661)) [@rlratzel](https://github.com/rlratzel)
- C, Pylibcugraph, and Python API Updates for Edge Types ([#2629](https://github.com/rapidsai/cugraph/pull/2629)) [@alexbarghi-nv](https://github.com/alexbarghi-nv)
- Add coverage for uniform neighbor sampling ([#2625](https://github.com/rapidsai/cugraph/pull/2625)) [@jnke2016](https://github.com/jnke2016)
- Define C and C++ APIs for Jaccard/Sorensen/Overlap ([#2624](https://github.com/rapidsai/cugraph/pull/2624)) [@ChuckHastings](https://github.com/ChuckHastings)
- Code cleanup ([#2617](https://github.com/rapidsai/cugraph/pull/2617)) [@seunghwak](https://github.com/seunghwak)
- Branch 22.10 merge 22.08 ([#2599](https://github.com/rapidsai/cugraph/pull/2599)) [@rlratzel](https://github.com/rlratzel)
- Restructure Louvain to be more like other algorithms ([#2594](https://github.com/rapidsai/cugraph/pull/2594)) [@ChuckHastings](https://github.com/ChuckHastings)
- Heterograph and dask_cudf support ([#2592](https://github.com/rapidsai/cugraph/pull/2592)) [@VibhuJawa](https://github.com/VibhuJawa)
- remove pagerank from cython.cu ([#2587](https://github.com/rapidsai/cugraph/pull/2587)) [@ChuckHastings](https://github.com/ChuckHastings)
- MG uniform random walk implementation ([#2585](https://github.com/rapidsai/cugraph/pull/2585)) [@ChuckHastings](https://github.com/ChuckHastings)
- Update vertex_frontier_t to take unsorted (tagged-)vertex list with possible duplicates ([#2584](https://github.com/rapidsai/cugraph/pull/2584)) [@seunghwak](https://github.com/seunghwak)
- Use edge_ids directly in uniform sampling call to prevent cost of edge_id lookup ([#2550](https://github.com/rapidsai/cugraph/pull/2550)) [@VibhuJawa](https://github.com/VibhuJawa)
- PropertyGraph set index to vertex and edge ids ([#2523](https://github.com/rapidsai/cugraph/pull/2523)) [@eriknw](https://github.com/eriknw)
- Use rapids-cmake 22.10 best practice for RAPIDS.cmake location ([#2518](https://github.com/rapidsai/cugraph/pull/2518)) [@robertmaynard](https://github.com/robertmaynard)
- Unpin `dask` and `distributed` for development ([#2517](https://github.com/rapidsai/cugraph/pull/2517)) [@galipremsagar](https://github.com/galipremsagar)
- Use category dtype for type in PropertyGraph ([#2510](https://github.com/rapidsai/cugraph/pull/2510)) [@eriknw](https://github.com/eriknw)
- Split edge_partition_src_dst_property.cuh to .hpp and .cuh files. ([#2503](https://github.com/rapidsai/cugraph/pull/2503)) [@seunghwak](https://github.com/seunghwak)
- Rename multiple .cuh (.cu) files to .hpp (.cpp) ([#2501](https://github.com/rapidsai/cugraph/pull/2501)) [@seunghwak](https://github.com/seunghwak)
- Fix Forward-Merger Conflicts ([#2474](https://github.com/rapidsai/cugraph/pull/2474)) [@ajschmidt8](https://github.com/ajschmidt8)
- Add tests for reading edge and vertex data from single input in PG, implementation to follow. ([#2154](https://github.com/rapidsai/cugraph/pull/2154)) [@rlratzel](https://github.com/rlratzel)
# cuGraph 22.08.00 (17 Aug 2022)
## 🚨 Breaking Changes
- Change default return type `PropertyGraph.extract_subgraph() -> cugraph.Graph(directed=True)` ([#2460](https://github.com/rapidsai/cugraph/pull/2460)) [@eriknw](https://github.com/eriknw)
- cuGraph code cleanup ([#2431](https://github.com/rapidsai/cugraph/pull/2431)) [@seunghwak](https://github.com/seunghwak)
- Clean up public api ([#2398](https://github.com/rapidsai/cugraph/pull/2398)) [@ChuckHastings](https://github.com/ChuckHastings)
- Delete old nbr sampling software ([#2371](https://github.com/rapidsai/cugraph/pull/2371)) [@ChuckHastings](https://github.com/ChuckHastings)
- Remove GraphCSC/GraphCSCView object, no longer used ([#2354](https://github.com/rapidsai/cugraph/pull/2354)) [@ChuckHastings](https://github.com/ChuckHastings)
- Replace raw pointers with device_span in induced subgraph ([#2348](https://github.com/rapidsai/cugraph/pull/2348)) [@yang-hu-nv](https://github.com/yang-hu-nv)
- Clean up some unused code in the C API (and beyond) ([#2339](https://github.com/rapidsai/cugraph/pull/2339)) [@ChuckHastings](https://github.com/ChuckHastings)
- Performance-optimize storing edge partition source/destination properties in (key, value) pairs ([#2328](https://github.com/rapidsai/cugraph/pull/2328)) [@seunghwak](https://github.com/seunghwak)
- Remove legacy katz ([#2324](https://github.com/rapidsai/cugraph/pull/2324)) [@ChuckHastings](https://github.com/ChuckHastings)
## 🐛 Bug Fixes
- Fix PropertyGraph MG tests ([#2511](https://github.com/rapidsai/cugraph/pull/2511)) [@eriknw](https://github.com/eriknw)
- Update `k_core.py` to Check for Graph Direction ([#2507](https://github.com/rapidsai/cugraph/pull/2507)) [@oorliu](https://github.com/oorliu)
- fix non-deterministic bug in uniform neighborhood sampling ([#2477](https://github.com/rapidsai/cugraph/pull/2477)) [@ChuckHastings](https://github.com/ChuckHastings)
- Fix typos in Python CMakeLists CUDA arch file ([#2475](https://github.com/rapidsai/cugraph/pull/2475)) [@vyasr](https://github.com/vyasr)
- Updated imports to be compatible with latest version of cupy ([#2473](https://github.com/rapidsai/cugraph/pull/2473)) [@rlratzel](https://github.com/rlratzel)
- Fix pandas SettingWithCopyWarning, which really shouldn't be ignored. ([#2447](https://github.com/rapidsai/cugraph/pull/2447)) [@eriknw](https://github.com/eriknw)
- fix handling of fanout == -1 ([#2435](https://github.com/rapidsai/cugraph/pull/2435)) [@ChuckHastings](https://github.com/ChuckHastings)
- Add options to `extract_subgraph()` to bypass renumbering and adding edge_data, exclude internal `_WEIGHT_` column from `edge_property_names`, added `num_vertices_with_properties` attr ([#2419](https://github.com/rapidsai/cugraph/pull/2419)) [@rlratzel](https://github.com/rlratzel)
- Remove the comms import from cugraph's init file ([#2402](https://github.com/rapidsai/cugraph/pull/2402)) [@jnke2016](https://github.com/jnke2016)
- Bug fix (providing invalid sentinel value for cuCollection). ([#2382](https://github.com/rapidsai/cugraph/pull/2382)) [@seunghwak](https://github.com/seunghwak)
- add debug print for betweenness centrality, fix typo ([#2369](https://github.com/rapidsai/cugraph/pull/2369)) [@jnke2016](https://github.com/jnke2016)
- Bug fix for decompressing partial edge list and using (key, value) pairs for major properties. ([#2366](https://github.com/rapidsai/cugraph/pull/2366)) [@seunghwak](https://github.com/seunghwak)
- Fix Fanout -1 ([#2358](https://github.com/rapidsai/cugraph/pull/2358)) [@VibhuJawa](https://github.com/VibhuJawa)
- Update sampling primitive again, fix hypersparse computations ([#2353](https://github.com/rapidsai/cugraph/pull/2353)) [@ChuckHastings](https://github.com/ChuckHastings)
- added test cases and verified that algorithm works for undirected graphs ([#2349](https://github.com/rapidsai/cugraph/pull/2349)) [@acostadon](https://github.com/acostadon)
- Fix sampling bug ([#2343](https://github.com/rapidsai/cugraph/pull/2343)) [@ChuckHastings](https://github.com/ChuckHastings)
- Fix triangle count ([#2325](https://github.com/rapidsai/cugraph/pull/2325)) [@ChuckHastings](https://github.com/ChuckHastings)
## 📖 Documentation
- Defer loading of `custom.js` ([#2506](https://github.com/rapidsai/cugraph/pull/2506)) [@galipremsagar](https://github.com/galipremsagar)
- Centralize common `css` & `js` code in docs ([#2472](https://github.com/rapidsai/cugraph/pull/2472)) [@galipremsagar](https://github.com/galipremsagar)
- Fix issues with day & night modes in python docs ([#2471](https://github.com/rapidsai/cugraph/pull/2471)) [@galipremsagar](https://github.com/galipremsagar)
- Use Datasets API to Update Docstring Examples ([#2441](https://github.com/rapidsai/cugraph/pull/2441)) [@oorliu](https://github.com/oorliu)
- README updates ([#2395](https://github.com/rapidsai/cugraph/pull/2395)) [@BradReesWork](https://github.com/BradReesWork)
- Switch `language` from `None` to `"en"` in docs build ([#2368](https://github.com/rapidsai/cugraph/pull/2368)) [@galipremsagar](https://github.com/galipremsagar)
- Doxygen improvements to improve documentation of C API ([#2355](https://github.com/rapidsai/cugraph/pull/2355)) [@ChuckHastings](https://github.com/ChuckHastings)
- Update multi-GPU example to include data generation ([#2345](https://github.com/rapidsai/cugraph/pull/2345)) [@charlesbluca](https://github.com/charlesbluca)
## 🚀 New Features
- Cost Matrix first version ([#2377](https://github.com/rapidsai/cugraph/pull/2377)) [@acostadon](https://github.com/acostadon)
## 🛠️ Improvements
- Pin `dask` & `distributed` for release ([#2478](https://github.com/rapidsai/cugraph/pull/2478)) [@galipremsagar](https://github.com/galipremsagar)
- Update PageRank to leverage pylibcugraph ([#2467](https://github.com/rapidsai/cugraph/pull/2467)) [@jnke2016](https://github.com/jnke2016)
- Change default return type `PropertyGraph.extract_subgraph() -> cugraph.Graph(directed=True)` ([#2460](https://github.com/rapidsai/cugraph/pull/2460)) [@eriknw](https://github.com/eriknw)
- Updates to Link Notebooks ([#2456](https://github.com/rapidsai/cugraph/pull/2456)) [@acostadon](https://github.com/acostadon)
- Only build cugraphmgtestutil when requested ([#2454](https://github.com/rapidsai/cugraph/pull/2454)) [@robertmaynard](https://github.com/robertmaynard)
- Datasets API Update: Add Extra Params and Improve Testing ([#2453](https://github.com/rapidsai/cugraph/pull/2453)) [@oorliu](https://github.com/oorliu)
- Uniform neighbor sample ([#2450](https://github.com/rapidsai/cugraph/pull/2450)) [@VibhuJawa](https://github.com/VibhuJawa)
- Don't store redundant columns in PropertyGraph Dataframes ([#2449](https://github.com/rapidsai/cugraph/pull/2449)) [@eriknw](https://github.com/eriknw)
- Changes to Cores, components and layout notebooks ([#2448](https://github.com/rapidsai/cugraph/pull/2448)) [@acostadon](https://github.com/acostadon)
- Added `get_vertex_data()` and `get_edge_data()` to SG/MG PropertyGraph ([#2444](https://github.com/rapidsai/cugraph/pull/2444)) [@rlratzel](https://github.com/rlratzel)
- Remove OpenMP dependencies from CMake ([#2443](https://github.com/rapidsai/cugraph/pull/2443)) [@seunghwak](https://github.com/seunghwak)
- Use Datasets API to Update Notebook Examples ([#2440](https://github.com/rapidsai/cugraph/pull/2440)) [@oorliu](https://github.com/oorliu)
- Refactor MG C++ tests (handle initialization) ([#2439](https://github.com/rapidsai/cugraph/pull/2439)) [@seunghwak](https://github.com/seunghwak)
- Branch 22.08 merge 22.06 ([#2436](https://github.com/rapidsai/cugraph/pull/2436)) [@rlratzel](https://github.com/rlratzel)
- Add get_num_vertices and get_num_edges methods to PropertyGraph. ([#2434](https://github.com/rapidsai/cugraph/pull/2434)) [@eriknw](https://github.com/eriknw)
- Make cuco a private dependency and leverage rapids-cmake ([#2432](https://github.com/rapidsai/cugraph/pull/2432)) [@vyasr](https://github.com/vyasr)
- cuGraph code cleanup ([#2431](https://github.com/rapidsai/cugraph/pull/2431)) [@seunghwak](https://github.com/seunghwak)
- Add core number to the python API ([#2414](https://github.com/rapidsai/cugraph/pull/2414)) [@jnke2016](https://github.com/jnke2016)
- Enable concurrent broadcasts in update_edge_partition_minor_property() ([#2413](https://github.com/rapidsai/cugraph/pull/2413)) [@seunghwak](https://github.com/seunghwak)
- Optimize has_duplicate_edges ([#2409](https://github.com/rapidsai/cugraph/pull/2409)) [@VibhuJawa](https://github.com/VibhuJawa)
- Define API for MG random walk ([#2407](https://github.com/rapidsai/cugraph/pull/2407)) [@ChuckHastings](https://github.com/ChuckHastings)
- Support building without cugraph-ops ([#2405](https://github.com/rapidsai/cugraph/pull/2405)) [@ChuckHastings](https://github.com/ChuckHastings)
- Clean up public api ([#2398](https://github.com/rapidsai/cugraph/pull/2398)) [@ChuckHastings](https://github.com/ChuckHastings)
- Community notebook updates structure/testing/improvement ([#2397](https://github.com/rapidsai/cugraph/pull/2397)) [@acostadon](https://github.com/acostadon)
- Run relevant CI tests based on what's changed in the ChangeList ([#2396](https://github.com/rapidsai/cugraph/pull/2396)) [@anandhkb](https://github.com/anandhkb)
- Update `Graph` to store a Pylibcugraph Graph (SG/MG Graph) ([#2394](https://github.com/rapidsai/cugraph/pull/2394)) [@alexbarghi-nv](https://github.com/alexbarghi-nv)
- Moving Centrality notebooks to new structure and updating/testing ([#2388](https://github.com/rapidsai/cugraph/pull/2388)) [@acostadon](https://github.com/acostadon)
- Add conda compilers to env file ([#2384](https://github.com/rapidsai/cugraph/pull/2384)) [@vyasr](https://github.com/vyasr)
- Add get_node_storage and get_edge_storage to CuGraphStorage ([#2381](https://github.com/rapidsai/cugraph/pull/2381)) [@VibhuJawa](https://github.com/VibhuJawa)
- Pin max version of `cuda-python` to `11.7.0` ([#2380](https://github.com/rapidsai/cugraph/pull/2380)) [@Ethyling](https://github.com/Ethyling)
- Update cugraph python build ([#2378](https://github.com/rapidsai/cugraph/pull/2378)) [@jnke2016](https://github.com/jnke2016)
- Delete old nbr sampling software ([#2371](https://github.com/rapidsai/cugraph/pull/2371)) [@ChuckHastings](https://github.com/ChuckHastings)
- Add datasets API to import graph data from configuration/metadata files ([#2367](https://github.com/rapidsai/cugraph/pull/2367)) [@betochimas](https://github.com/betochimas)
- Skip reduction for zero (in|out-)degree vertices. ([#2365](https://github.com/rapidsai/cugraph/pull/2365)) [@seunghwak](https://github.com/seunghwak)
- Update Python version support. ([#2363](https://github.com/rapidsai/cugraph/pull/2363)) [@bdice](https://github.com/bdice)
- Branch 22.08 merge 22.06 ([#2362](https://github.com/rapidsai/cugraph/pull/2362)) [@rlratzel](https://github.com/rlratzel)
- Support raft updating to new version of cuco ([#2360](https://github.com/rapidsai/cugraph/pull/2360)) [@ChuckHastings](https://github.com/ChuckHastings)
- Branch 22.08 merge 22.06 ([#2359](https://github.com/rapidsai/cugraph/pull/2359)) [@rlratzel](https://github.com/rlratzel)
- Remove topology header ([#2357](https://github.com/rapidsai/cugraph/pull/2357)) [@ChuckHastings](https://github.com/ChuckHastings)
- Switch back to PC generator ([#2356](https://github.com/rapidsai/cugraph/pull/2356)) [@ChuckHastings](https://github.com/ChuckHastings)
- Remove GraphCSC/GraphCSCView object, no longer used ([#2354](https://github.com/rapidsai/cugraph/pull/2354)) [@ChuckHastings](https://github.com/ChuckHastings)
- Resolve Forward merging of branch-22.06 into branch-22.08 ([#2350](https://github.com/rapidsai/cugraph/pull/2350)) [@jnke2016](https://github.com/jnke2016)
- Replace raw pointers with device_span in induced subgraph ([#2348](https://github.com/rapidsai/cugraph/pull/2348)) [@yang-hu-nv](https://github.com/yang-hu-nv)
- Some legacy BFS cleanup ([#2347](https://github.com/rapidsai/cugraph/pull/2347)) [@ChuckHastings](https://github.com/ChuckHastings)
- Remove legacy sssp implementation ([#2344](https://github.com/rapidsai/cugraph/pull/2344)) [@ChuckHastings](https://github.com/ChuckHastings)
- Unpin `dask` & `distributed` for development ([#2342](https://github.com/rapidsai/cugraph/pull/2342)) [@galipremsagar](https://github.com/galipremsagar)
- Release notebook: Nx Generators & Adding Perf_counter ([#2341](https://github.com/rapidsai/cugraph/pull/2341)) [@oorliu](https://github.com/oorliu)
- Clean up some unused code in the C API (and beyond) ([#2339](https://github.com/rapidsai/cugraph/pull/2339)) [@ChuckHastings](https://github.com/ChuckHastings)
- Add core number to the C API ([#2338](https://github.com/rapidsai/cugraph/pull/2338)) [@betochimas](https://github.com/betochimas)
- Update the list of algos to benchmark ([#2337](https://github.com/rapidsai/cugraph/pull/2337)) [@jnke2016](https://github.com/jnke2016)
- Default GPU_COUNT to 1 in cmake file ([#2336](https://github.com/rapidsai/cugraph/pull/2336)) [@ChuckHastings](https://github.com/ChuckHastings)
- DOC Fix for Renumber-2.ipynb ([#2335](https://github.com/rapidsai/cugraph/pull/2335)) [@oorliu](https://github.com/oorliu)
- Resolve conflicts for merge from branch-22.06 to branch-22.08 ([#2334](https://github.com/rapidsai/cugraph/pull/2334)) [@rlratzel](https://github.com/rlratzel)
- update versions to 22.08 ([#2332](https://github.com/rapidsai/cugraph/pull/2332)) [@ChuckHastings](https://github.com/ChuckHastings)
- Fix experimental labels ([#2331](https://github.com/rapidsai/cugraph/pull/2331)) [@alexbarghi-nv](https://github.com/alexbarghi-nv)
- Performance-optimize storing edge partition source/destination properties in (key, value) pairs ([#2328](https://github.com/rapidsai/cugraph/pull/2328)) [@seunghwak](https://github.com/seunghwak)
- Remove legacy katz ([#2324](https://github.com/rapidsai/cugraph/pull/2324)) [@ChuckHastings](https://github.com/ChuckHastings)
- Add missing Thrust includes ([#2310](https://github.com/rapidsai/cugraph/pull/2310)) [@bdice](https://github.com/bdice)
# cuGraph 22.06.00 (7 Jun 2022)
## 🚨 Breaking Changes
- Fix uniform neighborhood sampling remove duplicates ([#2301](https://github.com/rapidsai/cugraph/pull/2301)) [@ChuckHastings](https://github.com/ChuckHastings)
- Split update_v_frontier_from_outgoing_e to two simpler primitives ([#2290](https://github.com/rapidsai/cugraph/pull/2290)) [@seunghwak](https://github.com/seunghwak)
- Refactor MG neighborhood sampling and add SG implementation ([#2285](https://github.com/rapidsai/cugraph/pull/2285)) [@jnke2016](https://github.com/jnke2016)
- Resolve inconsistencies in reduction support in primitives ([#2257](https://github.com/rapidsai/cugraph/pull/2257)) [@seunghwak](https://github.com/seunghwak)
- Revert SG Katz API's signature to previous <22.04 version ([#2242](https://github.com/rapidsai/cugraph/pull/2242)) [@betochimas](https://github.com/betochimas)
- Rename primitive functions. ([#2234](https://github.com/rapidsai/cugraph/pull/2234)) [@seunghwak](https://github.com/seunghwak)
- Graph primitives API updates ([#2220](https://github.com/rapidsai/cugraph/pull/2220)) [@seunghwak](https://github.com/seunghwak)
- Add Katz Centrality to pylibcugraph, refactor Katz Centrality for cugraph ([#2201](https://github.com/rapidsai/cugraph/pull/2201)) [@betochimas](https://github.com/betochimas)
- Update graph/graph primitives API to consistently use vertex/edge centric terminologies instead of matrix centric terminologies ([#2187](https://github.com/rapidsai/cugraph/pull/2187)) [@seunghwak](https://github.com/seunghwak)
- Define C API for eigenvector centrality ([#2180](https://github.com/rapidsai/cugraph/pull/2180)) [@ChuckHastings](https://github.com/ChuckHastings)
## 🐛 Bug Fixes
- fix sampling handling of dscr region ([#2321](https://github.com/rapidsai/cugraph/pull/2321)) [@ChuckHastings](https://github.com/ChuckHastings)
- Add test to reproduce issue with double weights, fix issue (graph cre… ([#2305](https://github.com/rapidsai/cugraph/pull/2305)) [@ChuckHastings](https://github.com/ChuckHastings)
- Fix MG BFS through C API ([#2291](https://github.com/rapidsai/cugraph/pull/2291)) [@ChuckHastings](https://github.com/ChuckHastings)
- fixes BUG 2275 ([#2279](https://github.com/rapidsai/cugraph/pull/2279)) [@BradReesWork](https://github.com/BradReesWork)
- Refactored SG `hits` and MG `katz_centrality` ([#2276](https://github.com/rapidsai/cugraph/pull/2276)) [@betochimas](https://github.com/betochimas)
- Multi-GPU reduce_v & transform_reduce_v bug fix. ([#2269](https://github.com/rapidsai/cugraph/pull/2269)) [@seunghwak](https://github.com/seunghwak)
- Update BFS and SSSP to check start/source vertex for validity ([#2268](https://github.com/rapidsai/cugraph/pull/2268)) [@alexbarghi-nv](https://github.com/alexbarghi-nv)
- Update some clustering algos to only support undirected graphs ([#2267](https://github.com/rapidsai/cugraph/pull/2267)) [@jnke2016](https://github.com/jnke2016)
- Resolves maximum spanning tree bug when using Edgelist instead of Adjlist ([#2256](https://github.com/rapidsai/cugraph/pull/2256)) [@betochimas](https://github.com/betochimas)
- cudf moved the default_hash into the cudf::detail namespace ([#2244](https://github.com/rapidsai/cugraph/pull/2244)) [@ChuckHastings](https://github.com/ChuckHastings)
- Allow `cugraph` to be imported in an SG env for SG algorithms ([#2241](https://github.com/rapidsai/cugraph/pull/2241)) [@betochimas](https://github.com/betochimas)
- Address some MNMG issues in cython.cu ([#2224](https://github.com/rapidsai/cugraph/pull/2224)) [@ChuckHastings](https://github.com/ChuckHastings)
- Fix error from two conflicting merges ([#2219](https://github.com/rapidsai/cugraph/pull/2219)) [@ChuckHastings](https://github.com/ChuckHastings)
- Branch 22.06 MNMG bug work and support for Undirected Graphs ([#2215](https://github.com/rapidsai/cugraph/pull/2215)) [@acostadon](https://github.com/acostadon)
- Branch 22.06 merge 22.04 ([#2190](https://github.com/rapidsai/cugraph/pull/2190)) [@rlratzel](https://github.com/rlratzel)
## 📖 Documentation
- Fix BFS Docstring ([#2318](https://github.com/rapidsai/cugraph/pull/2318)) [@alexbarghi-nv](https://github.com/alexbarghi-nv)
- small typo ([#2250](https://github.com/rapidsai/cugraph/pull/2250)) [@hoosierEE](https://github.com/hoosierEE)
- Updating issue template and missing docs ([#2211](https://github.com/rapidsai/cugraph/pull/2211)) [@BradReesWork](https://github.com/BradReesWork)
- Python code cleanup across docs, wrappers, testing ([#2194](https://github.com/rapidsai/cugraph/pull/2194)) [@betochimas](https://github.com/betochimas)
## 🚀 New Features
- Multi GPU Property Graph with basic creation support ([#2286](https://github.com/rapidsai/cugraph/pull/2286)) [@acostadon](https://github.com/acostadon)
- Triangle Counting ([#2253](https://github.com/rapidsai/cugraph/pull/2253)) [@seunghwak](https://github.com/seunghwak)
- Triangle Counts C++ API ([#2233](https://github.com/rapidsai/cugraph/pull/2233)) [@seunghwak](https://github.com/seunghwak)
- Define C API for eigenvector centrality ([#2180](https://github.com/rapidsai/cugraph/pull/2180)) [@ChuckHastings](https://github.com/ChuckHastings)
## 🛠️ Improvements
- Pin `dask` and `distributed` for release ([#2317](https://github.com/rapidsai/cugraph/pull/2317)) [@galipremsagar](https://github.com/galipremsagar)
- Pin `dask` & `distributed` for release ([#2312](https://github.com/rapidsai/cugraph/pull/2312)) [@galipremsagar](https://github.com/galipremsagar)
- Triangle counting C API implementation ([#2302](https://github.com/rapidsai/cugraph/pull/2302)) [@ChuckHastings](https://github.com/ChuckHastings)
- Fix uniform neighborhood sampling remove duplicates ([#2301](https://github.com/rapidsai/cugraph/pull/2301)) [@ChuckHastings](https://github.com/ChuckHastings)
- Migrate SG and MG SSSP to pylibcugraph ([#2295](https://github.com/rapidsai/cugraph/pull/2295)) [@alexbarghi-nv](https://github.com/alexbarghi-nv)
- Add Louvain to the C API ([#2292](https://github.com/rapidsai/cugraph/pull/2292)) [@ChuckHastings](https://github.com/ChuckHastings)
- Split update_v_frontier_from_outgoing_e to two simpler primitives ([#2290](https://github.com/rapidsai/cugraph/pull/2290)) [@seunghwak](https://github.com/seunghwak)
- Add and test mechanism for creating graph with edge index as weight ([#2288](https://github.com/rapidsai/cugraph/pull/2288)) [@ChuckHastings](https://github.com/ChuckHastings)
- Implement eigenvector centrality ([#2287](https://github.com/rapidsai/cugraph/pull/2287)) [@ChuckHastings](https://github.com/ChuckHastings)
- Refactor MG neighborhood sampling and add SG implementation ([#2285](https://github.com/rapidsai/cugraph/pull/2285)) [@jnke2016](https://github.com/jnke2016)
- Migrate SG and MG BFS to pylibcugraph ([#2284](https://github.com/rapidsai/cugraph/pull/2284)) [@alexbarghi-nv](https://github.com/alexbarghi-nv)
- Optimize Sampling for graph_store ([#2283](https://github.com/rapidsai/cugraph/pull/2283)) [@VibhuJawa](https://github.com/VibhuJawa)
- Refactor mg symmetrize tests ([#2278](https://github.com/rapidsai/cugraph/pull/2278)) [@jnke2016](https://github.com/jnke2016)
- Add do_expensive_check to graph primitives ([#2274](https://github.com/rapidsai/cugraph/pull/2274)) [@seunghwak](https://github.com/seunghwak)
- add bindings for triangle counting ([#2273](https://github.com/rapidsai/cugraph/pull/2273)) [@jnke2016](https://github.com/jnke2016)
- Define triangle_count C API ([#2271](https://github.com/rapidsai/cugraph/pull/2271)) [@ChuckHastings](https://github.com/ChuckHastings)
- Revert old pattern of SG cugraph testing for CI purposes ([#2262](https://github.com/rapidsai/cugraph/pull/2262)) [@betochimas](https://github.com/betochimas)
- Branch 22.06 bug fixes + update imports ([#2261](https://github.com/rapidsai/cugraph/pull/2261)) [@betochimas](https://github.com/betochimas)
- Raft RNG updated API ([#2260](https://github.com/rapidsai/cugraph/pull/2260)) [@MatthiasKohl](https://github.com/MatthiasKohl)
- Add Degree Centrality to cugraph ([#2259](https://github.com/rapidsai/cugraph/pull/2259)) [@betochimas](https://github.com/betochimas)
- Refactor Uniform Neighborhood Sampling ([#2258](https://github.com/rapidsai/cugraph/pull/2258)) [@ChuckHastings](https://github.com/ChuckHastings)
- Resolve inconsistencies in reduction support in primitives ([#2257](https://github.com/rapidsai/cugraph/pull/2257)) [@seunghwak](https://github.com/seunghwak)
- Add Eigenvector Centrality to pylibcugraph, cugraph APIs ([#2255](https://github.com/rapidsai/cugraph/pull/2255)) [@betochimas](https://github.com/betochimas)
- Add MG Hits and MG Neighborhood_sampling to benchmarks ([#2254](https://github.com/rapidsai/cugraph/pull/2254)) [@jnke2016](https://github.com/jnke2016)
- Undirected graph support for MG graphs ([#2247](https://github.com/rapidsai/cugraph/pull/2247)) [@jnke2016](https://github.com/jnke2016)
- Branch 22.06 bugs ([#2245](https://github.com/rapidsai/cugraph/pull/2245)) [@BradReesWork](https://github.com/BradReesWork)
- Revert SG Katz API's signature to previous <22.04 version ([#2242](https://github.com/rapidsai/cugraph/pull/2242)) [@betochimas](https://github.com/betochimas)
- add API for the new uniform neighborhood sampling ([#2236](https://github.com/rapidsai/cugraph/pull/2236)) [@ChuckHastings](https://github.com/ChuckHastings)
- Reverting raft pinned tag ([#2235](https://github.com/rapidsai/cugraph/pull/2235)) [@cjnolet](https://github.com/cjnolet)
- Rename primitive functions. ([#2234](https://github.com/rapidsai/cugraph/pull/2234)) [@seunghwak](https://github.com/seunghwak)
- Moves pylibcugraph APIS from 22.04 and earlier out of `experimental` namespace ([#2232](https://github.com/rapidsai/cugraph/pull/2232)) [@betochimas](https://github.com/betochimas)
- Use conda to build python packages during GPU tests ([#2230](https://github.com/rapidsai/cugraph/pull/2230)) [@Ethyling](https://github.com/Ethyling)
- Fix typos in documentation ([#2225](https://github.com/rapidsai/cugraph/pull/2225)) [@seunghwak](https://github.com/seunghwak)
- Update CMake pinning to allow newer CMake versions. ([#2221](https://github.com/rapidsai/cugraph/pull/2221)) [@vyasr](https://github.com/vyasr)
- Graph primitives API updates ([#2220](https://github.com/rapidsai/cugraph/pull/2220)) [@seunghwak](https://github.com/seunghwak)
- Enable MG support for small datasets ([#2216](https://github.com/rapidsai/cugraph/pull/2216)) [@jnke2016](https://github.com/jnke2016)
- Unpin `dask` & `distributed` for development ([#2214](https://github.com/rapidsai/cugraph/pull/2214)) [@galipremsagar](https://github.com/galipremsagar)
- Updated MG test code to not use DiGraph ([#2213](https://github.com/rapidsai/cugraph/pull/2213)) [@BradReesWork](https://github.com/BradReesWork)
- renaming detail space functions ([#2212](https://github.com/rapidsai/cugraph/pull/2212)) [@seunghwak](https://github.com/seunghwak)
- Make diagram and caption consistent in Pagerank.ipynb ([#2207](https://github.com/rapidsai/cugraph/pull/2207)) [@charlesbluca](https://github.com/charlesbluca)
- Add Katz Centrality to pylibcugraph, refactor Katz Centrality for cugraph ([#2201](https://github.com/rapidsai/cugraph/pull/2201)) [@betochimas](https://github.com/betochimas)
- Resolve Forward merging of branch-22.04 into branch-22.06 ([#2197](https://github.com/rapidsai/cugraph/pull/2197)) [@jnke2016](https://github.com/jnke2016)
- Add Katz Centrality to the C API ([#2192](https://github.com/rapidsai/cugraph/pull/2192)) [@ChuckHastings](https://github.com/ChuckHastings)
- Update graph/graph primitives API to consistently use vertex/edge centric terminologies instead of matrix centric terminologies ([#2187](https://github.com/rapidsai/cugraph/pull/2187)) [@seunghwak](https://github.com/seunghwak)
- Labeling algorithm updates for C API ([#2185](https://github.com/rapidsai/cugraph/pull/2185)) [@ChuckHastings](https://github.com/ChuckHastings)
- Added GraphStore Function ([#2183](https://github.com/rapidsai/cugraph/pull/2183)) [@wangxiaoyunNV](https://github.com/wangxiaoyunNV)
- Enable building static libs ([#2179](https://github.com/rapidsai/cugraph/pull/2179)) [@trxcllnt](https://github.com/trxcllnt)
- Fix merge conflicts ([#2155](https://github.com/rapidsai/cugraph/pull/2155)) [@ajschmidt8](https://github.com/ajschmidt8)
- Remove unused code (gunrock HITS) ([#2152](https://github.com/rapidsai/cugraph/pull/2152)) [@seunghwak](https://github.com/seunghwak)
- Turn off cuco dependency in RAFT. Re-establish explicit `cuco` and `libcuxx` cmake dependencies ([#2132](https://github.com/rapidsai/cugraph/pull/2132)) [@cjnolet](https://github.com/cjnolet)
- Consolidate C++ conda recipes and add `libcugraph-tests` package ([#2124](https://github.com/rapidsai/cugraph/pull/2124)) [@Ethyling](https://github.com/Ethyling)
- Use conda compilers ([#2101](https://github.com/rapidsai/cugraph/pull/2101)) [@Ethyling](https://github.com/Ethyling)
- Use mamba to build packages ([#2051](https://github.com/rapidsai/cugraph/pull/2051)) [@Ethyling](https://github.com/Ethyling)
# cuGraph 22.04.00 (6 Apr 2022)
## 🚨 Breaking Changes
- Remove major/minor from renumber_edgelist public functions. ([#2116](https://github.com/rapidsai/cugraph/pull/2116)) [@seunghwak](https://github.com/seunghwak)
- Add MG support to the C API ([#2110](https://github.com/rapidsai/cugraph/pull/2110)) [@ChuckHastings](https://github.com/ChuckHastings)
- Graph primitives API update ([#2100](https://github.com/rapidsai/cugraph/pull/2100)) [@seunghwak](https://github.com/seunghwak)
- Reduce peak memory requirement in graph creation (part 1/2) ([#2070](https://github.com/rapidsai/cugraph/pull/2070)) [@seunghwak](https://github.com/seunghwak)
## 🐛 Bug Fixes
- Pin cmake in conda recipe to <3.23 ([#2176](https://github.com/rapidsai/cugraph/pull/2176)) [@dantegd](https://github.com/dantegd)
- Remove unused cython code referencing RAFT APIs that are no longer present ([#2125](https://github.com/rapidsai/cugraph/pull/2125)) [@rlratzel](https://github.com/rlratzel)
- Add pylibcugraph as a run dep to the cugraph conda package ([#2121](https://github.com/rapidsai/cugraph/pull/2121)) [@rlratzel](https://github.com/rlratzel)
- update_frontier_v_push_if_out_nbr C++ test bug fix ([#2097](https://github.com/rapidsai/cugraph/pull/2097)) [@seunghwak](https://github.com/seunghwak)
- extract_if_e bug fix. ([#2096](https://github.com/rapidsai/cugraph/pull/2096)) [@seunghwak](https://github.com/seunghwak)
- Fix bug Random Walk in array sizes ([#2089](https://github.com/rapidsai/cugraph/pull/2089)) [@ChuckHastings](https://github.com/ChuckHastings)
- Coarsening symmetric graphs leads to slightly asymmetric edge weights ([#2080](https://github.com/rapidsai/cugraph/pull/2080)) [@seunghwak](https://github.com/seunghwak)
- Skips ktruss docstring example for CUDA version 11.4 ([#2074](https://github.com/rapidsai/cugraph/pull/2074)) [@betochimas](https://github.com/betochimas)
- Branch 22.04 merge 22.02 ([#2072](https://github.com/rapidsai/cugraph/pull/2072)) [@rlratzel](https://github.com/rlratzel)
- MG Louvain C++ test R-mat usecase parameters ([#2061](https://github.com/rapidsai/cugraph/pull/2061)) [@seunghwak](https://github.com/seunghwak)
- Updates to enable NumberMap to generate unique src/dst column names ([#2050](https://github.com/rapidsai/cugraph/pull/2050)) [@rlratzel](https://github.com/rlratzel)
- Allow class types to be properly represented in the `experimental_warning_wrapper()` return value ([#2048](https://github.com/rapidsai/cugraph/pull/2048)) [@rlratzel](https://github.com/rlratzel)
- Improve MG graph creation ([#2044](https://github.com/rapidsai/cugraph/pull/2044)) [@seunghwak](https://github.com/seunghwak)
## 📖 Documentation
- 22.04 Update docs ([#2171](https://github.com/rapidsai/cugraph/pull/2171)) [@BradReesWork](https://github.com/BradReesWork)
- Corrected image in Hits notebook so right node was highlighted. Issue 2079 ([#2106](https://github.com/rapidsai/cugraph/pull/2106)) [@acostadon](https://github.com/acostadon)
- API Doc Namespace Edits + SimpleGraphImpl methods ([#2086](https://github.com/rapidsai/cugraph/pull/2086)) [@betochimas](https://github.com/betochimas)
## 🚀 New Features
- Gather one hop neighbors ([#2117](https://github.com/rapidsai/cugraph/pull/2117)) [@kaatish](https://github.com/kaatish)
- Define the uniform neighbor sampling C API ([#2112](https://github.com/rapidsai/cugraph/pull/2112)) [@ChuckHastings](https://github.com/ChuckHastings)
- Add `node2vec` wrapper to cugraph ([#2093](https://github.com/rapidsai/cugraph/pull/2093)) [@betochimas](https://github.com/betochimas)
- Add `node2vec` wrappers to pylibcugraph ([#2085](https://github.com/rapidsai/cugraph/pull/2085)) [@betochimas](https://github.com/betochimas)
- Multi gpu sample edges utilities ([#2064](https://github.com/rapidsai/cugraph/pull/2064)) [@kaatish](https://github.com/kaatish)
- add libcugraphops as a dependency of cugraph ([#2019](https://github.com/rapidsai/cugraph/pull/2019)) [@MatthiasKohl](https://github.com/MatthiasKohl)
## 🛠️ Improvements
- Updated random_walk_benchmark notebook for API change in cudf ([#2164](https://github.com/rapidsai/cugraph/pull/2164)) [@mmccarty](https://github.com/mmccarty)
- Neighborhood sampling C API implementation ([#2156](https://github.com/rapidsai/cugraph/pull/2156)) [@ChuckHastings](https://github.com/ChuckHastings)
- Enhancement on uniform random sampling of indices near zero. ([#2153](https://github.com/rapidsai/cugraph/pull/2153)) [@aschaffer](https://github.com/aschaffer)
- Temporarily disable new `ops-bot` functionality ([#2151](https://github.com/rapidsai/cugraph/pull/2151)) [@ajschmidt8](https://github.com/ajschmidt8)
- HITS C API implementation ([#2150](https://github.com/rapidsai/cugraph/pull/2150)) [@ChuckHastings](https://github.com/ChuckHastings)
- Use `rapids_find_package` to get `cugraph-ops` ([#2148](https://github.com/rapidsai/cugraph/pull/2148)) [@trxcllnt](https://github.com/trxcllnt)
- Pin `dask` and `distributed` versions ([#2147](https://github.com/rapidsai/cugraph/pull/2147)) [@galipremsagar](https://github.com/galipremsagar)
- Pin gtest/gmock to 1.10.0 in dev envs ([#2127](https://github.com/rapidsai/cugraph/pull/2127)) [@trxcllnt](https://github.com/trxcllnt)
- Add HITS to the C API ([#2123](https://github.com/rapidsai/cugraph/pull/2123)) [@ChuckHastings](https://github.com/ChuckHastings)
- node2vec Python wrapper API changes and refactoring, with improved testing coverage ([#2120](https://github.com/rapidsai/cugraph/pull/2120)) [@betochimas](https://github.com/betochimas)
- Add MG neighborhood sampling to pylibcugraph & cugraph APIs ([#2118](https://github.com/rapidsai/cugraph/pull/2118)) [@betochimas](https://github.com/betochimas)
- Remove major/minor from renumber_edgelist public functions. ([#2116](https://github.com/rapidsai/cugraph/pull/2116)) [@seunghwak](https://github.com/seunghwak)
- Upgrade `dask` and `distributed` ([#2115](https://github.com/rapidsai/cugraph/pull/2115)) [@galipremsagar](https://github.com/galipremsagar)
- Remove references to gmock ([#2114](https://github.com/rapidsai/cugraph/pull/2114)) [@ChuckHastings](https://github.com/ChuckHastings)
- Add `.github/ops-bot.yaml` config file ([#2111](https://github.com/rapidsai/cugraph/pull/2111)) [@ajschmidt8](https://github.com/ajschmidt8)
- Add MG support to the C API ([#2110](https://github.com/rapidsai/cugraph/pull/2110)) [@ChuckHastings](https://github.com/ChuckHastings)
- Graph primitives API update ([#2100](https://github.com/rapidsai/cugraph/pull/2100)) [@seunghwak](https://github.com/seunghwak)
- Nx compatibility based on making Graph subclass and calling Cugraph algos ([#2099](https://github.com/rapidsai/cugraph/pull/2099)) [@acostadon](https://github.com/acostadon)
- Fix cugraph-ops header names ([#2095](https://github.com/rapidsai/cugraph/pull/2095)) [@kaatish](https://github.com/kaatish)
- Updating a few headers that have been renamed in raft ([#2090](https://github.com/rapidsai/cugraph/pull/2090)) [@cjnolet](https://github.com/cjnolet)
- Add MG wrapper for HITS ([#2088](https://github.com/rapidsai/cugraph/pull/2088)) [@jnke2016](https://github.com/jnke2016)
- Automatically clone raft when the raft pinned tag changes ([#2087](https://github.com/rapidsai/cugraph/pull/2087)) [@cjnolet](https://github.com/cjnolet)
- Updated release performance notebook to also measure using Nx as input ([#2083](https://github.com/rapidsai/cugraph/pull/2083)) [@BradReesWork](https://github.com/BradReesWork)
- Reduce peak memory requirement in graph creation (part 2/2) ([#2081](https://github.com/rapidsai/cugraph/pull/2081)) [@seunghwak](https://github.com/seunghwak)
- C API code cleanup ([#2077](https://github.com/rapidsai/cugraph/pull/2077)) [@ChuckHastings](https://github.com/ChuckHastings)
- Remove usage of RAFT memory management ([#2076](https://github.com/rapidsai/cugraph/pull/2076)) [@viclafargue](https://github.com/viclafargue)
- MNMG Neighborhood Sampling ([#2073](https://github.com/rapidsai/cugraph/pull/2073)) [@aschaffer](https://github.com/aschaffer)
- Allow PropertyGraph `default_edge_weight` to be used to add an edge weight value on extracted Graphs even when a weight property wasn't specified ([#2071](https://github.com/rapidsai/cugraph/pull/2071)) [@rlratzel](https://github.com/rlratzel)
- Reduce peak memory requirement in graph creation (part 1/2) ([#2070](https://github.com/rapidsai/cugraph/pull/2070)) [@seunghwak](https://github.com/seunghwak)
- add node2vec C API implementation ([#2069](https://github.com/rapidsai/cugraph/pull/2069)) [@ChuckHastings](https://github.com/ChuckHastings)
- Fixing cugraph for RAFT spectral/lap API changes ([#2067](https://github.com/rapidsai/cugraph/pull/2067)) [@cjnolet](https://github.com/cjnolet)
- remove unused spmv functions ([#2066](https://github.com/rapidsai/cugraph/pull/2066)) [@ChuckHastings](https://github.com/ChuckHastings)
- Improve MG Louvain scalability ([#2062](https://github.com/rapidsai/cugraph/pull/2062)) [@seunghwak](https://github.com/seunghwak)
- Added `pylibcugraph` utility for setting up return array values ([#2060](https://github.com/rapidsai/cugraph/pull/2060)) [@rlratzel](https://github.com/rlratzel)
- Add node2vec to C API - API PR ([#2059](https://github.com/rapidsai/cugraph/pull/2059)) [@ChuckHastings](https://github.com/ChuckHastings)
- Add CMake `install` rules for tests ([#2057](https://github.com/rapidsai/cugraph/pull/2057)) [@ajschmidt8](https://github.com/ajschmidt8)
- PropertyGraph updates: added features for DGL, improved `extract_subgraph()` and `num_vertices` performance ([#2056](https://github.com/rapidsai/cugraph/pull/2056)) [@rlratzel](https://github.com/rlratzel)
- Update C++ SG and MG Louvain tests to support Rmat and benchmark tests ([#2054](https://github.com/rapidsai/cugraph/pull/2054)) [@ChuckHastings](https://github.com/ChuckHastings)
- Unpin max `dask` and `distributed` versions ([#2053](https://github.com/rapidsai/cugraph/pull/2053)) [@galipremsagar](https://github.com/galipremsagar)
- Removal of remaining DiGraph Python mentions ([#2049](https://github.com/rapidsai/cugraph/pull/2049)) [@betochimas](https://github.com/betochimas)
- Dgl graph store ([#2046](https://github.com/rapidsai/cugraph/pull/2046)) [@BradReesWork](https://github.com/BradReesWork)
- replace `ccache` with `sccache` ([#2045](https://github.com/rapidsai/cugraph/pull/2045)) [@AyodeAwe](https://github.com/AyodeAwe)
- Fix Merge Conflicts for `2024` ([#2040](https://github.com/rapidsai/cugraph/pull/2040)) [@ajschmidt8](https://github.com/ajschmidt8)
- Improve MG PageRank scalability ([#2038](https://github.com/rapidsai/cugraph/pull/2038)) [@seunghwak](https://github.com/seunghwak)
- Created initial list of simple Graph creation tests for nx compatibility ([#2035](https://github.com/rapidsai/cugraph/pull/2035)) [@acostadon](https://github.com/acostadon)
- neighbor sampling in COO/CSR format ([#1982](https://github.com/rapidsai/cugraph/pull/1982)) [@MatthiasKohl](https://github.com/MatthiasKohl)
# cuGraph 22.02.00 (2 Feb 2022)
## 🐛 Bug Fixes
- Always upload libcugraph ([#2041](https://github.com/rapidsai/cugraph/pull/2041)) [@raydouglass](https://github.com/raydouglass)
- Fix Louvain hang in multi-GPU testing ([#2028](https://github.com/rapidsai/cugraph/pull/2028)) [@seunghwak](https://github.com/seunghwak)
- fix bug when calculating the number of vertices ([#1992](https://github.com/rapidsai/cugraph/pull/1992)) [@jnke2016](https://github.com/jnke2016)
- update cuda 11.5 configuration to use clang format 11.1.0 ([#1990](https://github.com/rapidsai/cugraph/pull/1990)) [@ChuckHastings](https://github.com/ChuckHastings)
- Update version in libcugraph_etl CMakeLists.txt to 22.02.00 to match libcugraph ([#1966](https://github.com/rapidsai/cugraph/pull/1966)) [@rlratzel](https://github.com/rlratzel)
## 📖 Documentation
- Initial automated doctest, all current examples now pass, other documentation edits ([#2014](https://github.com/rapidsai/cugraph/pull/2014)) [@betochimas](https://github.com/betochimas)
- Fix README example ([#1981](https://github.com/rapidsai/cugraph/pull/1981)) [@gitbuda](https://github.com/gitbuda)
## 🚀 New Features
- Add SSSP API, test and implementation ([#2016](https://github.com/rapidsai/cugraph/pull/2016)) [@ChuckHastings](https://github.com/ChuckHastings)
- Propose extract_bfs_paths C API ([#1955](https://github.com/rapidsai/cugraph/pull/1955)) [@ChuckHastings](https://github.com/ChuckHastings)
## 🛠️ Improvements
- Do not build CUDA libs in Python jobs ([#2039](https://github.com/rapidsai/cugraph/pull/2039)) [@Ethyling](https://github.com/Ethyling)
- updated for release 22.02 ([#2034](https://github.com/rapidsai/cugraph/pull/2034)) [@BradReesWork](https://github.com/BradReesWork)
- Fix raft git ref ([#2032](https://github.com/rapidsai/cugraph/pull/2032)) [@Ethyling](https://github.com/Ethyling)
- Pin `dask` & `distributed` ([#2031](https://github.com/rapidsai/cugraph/pull/2031)) [@galipremsagar](https://github.com/galipremsagar)
- Fix build script ([#2029](https://github.com/rapidsai/cugraph/pull/2029)) [@Ethyling](https://github.com/Ethyling)
- Prepare upload scripts for Python 3.7 removal ([#2027](https://github.com/rapidsai/cugraph/pull/2027)) [@Ethyling](https://github.com/Ethyling)
- Python API updates to enable explicit control of internal `graph_t` creation and deletion ([#2023](https://github.com/rapidsai/cugraph/pull/2023)) [@rlratzel](https://github.com/rlratzel)
- Updated build.sh help text and test execution steps in SOURCEBUILD.md ([#2020](https://github.com/rapidsai/cugraph/pull/2020)) [@acostadon](https://github.com/acostadon)
- Removed unused CI files ([#2017](https://github.com/rapidsai/cugraph/pull/2017)) [@rlratzel](https://github.com/rlratzel)
- Unpin `dask` and `distributed` ([#2010](https://github.com/rapidsai/cugraph/pull/2010)) [@galipremsagar](https://github.com/galipremsagar)
- Fix call to `getDeviceAttribute` following API change in RMM. ([#2008](https://github.com/rapidsai/cugraph/pull/2008)) [@shwina](https://github.com/shwina)
- drop fa2 cpu code ([#2007](https://github.com/rapidsai/cugraph/pull/2007)) [@BradReesWork](https://github.com/BradReesWork)
- Branch 22.02 merge 21.12 ([#2002](https://github.com/rapidsai/cugraph/pull/2002)) [@rlratzel](https://github.com/rlratzel)
- Update references to CHECK_CUDA, CUDA_CHECK and CUDA_TRY to use new RAFT_ names ([#2000](https://github.com/rapidsai/cugraph/pull/2000)) [@ChuckHastings](https://github.com/ChuckHastings)
- Initial PropertyGraph implementation and tests ([#1999](https://github.com/rapidsai/cugraph/pull/1999)) [@rlratzel](https://github.com/rlratzel)
- Fix optional and cstddef includes ([#1998](https://github.com/rapidsai/cugraph/pull/1998)) [@gitbuda](https://github.com/gitbuda)
- Add optimized 2x string column renumbering code ([#1996](https://github.com/rapidsai/cugraph/pull/1996)) [@chirayuG-nvidia](https://github.com/chirayuG-nvidia)
- Pass RMM memory allocator to cuco ([#1994](https://github.com/rapidsai/cugraph/pull/1994)) [@seunghwak](https://github.com/seunghwak)
- Add missing imports tests ([#1993](https://github.com/rapidsai/cugraph/pull/1993)) [@Ethyling](https://github.com/Ethyling)
- Update ucx-py version on release using rvc ([#1991](https://github.com/rapidsai/cugraph/pull/1991)) [@Ethyling](https://github.com/Ethyling)
- make C++ tests run faster (fewer tests) ([#1989](https://github.com/rapidsai/cugraph/pull/1989)) [@ChuckHastings](https://github.com/ChuckHastings)
- Update the update_frontier_v_push_if_out_nbr primitive & BFS performance ([#1988](https://github.com/rapidsai/cugraph/pull/1988)) [@seunghwak](https://github.com/seunghwak)
- Remove `IncludeCategories` from `.clang-format` ([#1987](https://github.com/rapidsai/cugraph/pull/1987)) [@codereport](https://github.com/codereport)
- Update frontier v push if out nbr prim test ([#1985](https://github.com/rapidsai/cugraph/pull/1985)) [@kaatish](https://github.com/kaatish)
- Pass stream to cuco::static_map ([#1984](https://github.com/rapidsai/cugraph/pull/1984)) [@seunghwak](https://github.com/seunghwak)
- Shutdown the connected scheduler and workers ([#1980](https://github.com/rapidsai/cugraph/pull/1980)) [@jnke2016](https://github.com/jnke2016)
- Use CUB 1.15.0's new segmented sort ([#1977](https://github.com/rapidsai/cugraph/pull/1977)) [@seunghwak](https://github.com/seunghwak)
- Improve consistency in C++ test case names and add R-mat tests to graph coarsening ([#1976](https://github.com/rapidsai/cugraph/pull/1976)) [@seunghwak](https://github.com/seunghwak)
- 22.02 dep fix ([#1974](https://github.com/rapidsai/cugraph/pull/1974)) [@BradReesWork](https://github.com/BradReesWork)
- Extract paths C API implementation ([#1973](https://github.com/rapidsai/cugraph/pull/1973)) [@ChuckHastings](https://github.com/ChuckHastings)
- Add rmat tests to Louvain C++ unit tests ([#1971](https://github.com/rapidsai/cugraph/pull/1971)) [@ChuckHastings](https://github.com/ChuckHastings)
- Branch 22.02 merge 21.12 ([#1965](https://github.com/rapidsai/cugraph/pull/1965)) [@rlratzel](https://github.com/rlratzel)
- Update to UCX-Py 0.24 ([#1962](https://github.com/rapidsai/cugraph/pull/1962)) [@pentschev](https://github.com/pentschev)
- add rmm pool option for SNMG runs ([#1957](https://github.com/rapidsai/cugraph/pull/1957)) [@jnke2016](https://github.com/jnke2016)
- Branch 22.02 merge 21.12 ([#1953](https://github.com/rapidsai/cugraph/pull/1953)) [@rlratzel](https://github.com/rlratzel)
- Update probability params for RMAT call to match Graph500 ([#1952](https://github.com/rapidsai/cugraph/pull/1952)) [@rlratzel](https://github.com/rlratzel)
- Fix the difference in 2D partitioning of GPUs in python and C++ ([#1950](https://github.com/rapidsai/cugraph/pull/1950)) [@seunghwak](https://github.com/seunghwak)
- Raft Handle Updates to cuGraph ([#1894](https://github.com/rapidsai/cugraph/pull/1894)) [@divyegala](https://github.com/divyegala)
- Remove FAISS dependency, inherit other common dependencies from raft ([#1863](https://github.com/rapidsai/cugraph/pull/1863)) [@trxcllnt](https://github.com/trxcllnt)
# cuGraph 21.12.00 (9 Dec 2021)
## 🚨 Breaking Changes
- Disable HITS and setup 11.5 env ([#1930](https://github.com/rapidsai/cugraph/pull/1930)) [@BradReesWork](https://github.com/BradReesWork)
## 🐛 Bug Fixes
- Updates to `libcugraph_etl` conda recipe for CUDA Enhanced Compatibility ([#1968](https://github.com/rapidsai/cugraph/pull/1968)) [@rlratzel](https://github.com/rlratzel)
- Enforce renumbering for MNMG algos ([#1943](https://github.com/rapidsai/cugraph/pull/1943)) [@jnke2016](https://github.com/jnke2016)
- Bug fix in the R-mat generator ([#1929](https://github.com/rapidsai/cugraph/pull/1929)) [@seunghwak](https://github.com/seunghwak)
- Updates to support correct comparisons of cuDF Series with different names ([#1928](https://github.com/rapidsai/cugraph/pull/1928)) [@rlratzel](https://github.com/rlratzel)
- Updated error message and using a proper TypeError exception when an invalid MultiGraph is passed in ([#1925](https://github.com/rapidsai/cugraph/pull/1925)) [@rlratzel](https://github.com/rlratzel)
- Update calls to cuDF Series ctors, bug fix to `cugraph.subgraph()` for handling non-renumbered Graphs ([#1901](https://github.com/rapidsai/cugraph/pull/1901)) [@rlratzel](https://github.com/rlratzel)
- Fix MG test bug ([#1897](https://github.com/rapidsai/cugraph/pull/1897)) [@seunghwak](https://github.com/seunghwak)
- Temporary workaround for CI issues with 11.0 ([#1883](https://github.com/rapidsai/cugraph/pull/1883)) [@ChuckHastings](https://github.com/ChuckHastings)
- Ensuring dask workers are using local space ([#1879](https://github.com/rapidsai/cugraph/pull/1879)) [@jnke2016](https://github.com/jnke2016)
- Disable WCC test until we get on an A100 to debug on ([#1870](https://github.com/rapidsai/cugraph/pull/1870)) [@ChuckHastings](https://github.com/ChuckHastings)
## 📖 Documentation
- Enable crosslink to rmm ([#1918](https://github.com/rapidsai/cugraph/pull/1918)) [@AyodeAwe](https://github.com/AyodeAwe)
## 🚀 New Features
- C API Create Graph Implementation ([#1940](https://github.com/rapidsai/cugraph/pull/1940)) [@ChuckHastings](https://github.com/ChuckHastings)
- Count self-loops and multi-edges ([#1939](https://github.com/rapidsai/cugraph/pull/1939)) [@seunghwak](https://github.com/seunghwak)
- Add a new graph primitive to filter edges (extract_if_e) ([#1938](https://github.com/rapidsai/cugraph/pull/1938)) [@seunghwak](https://github.com/seunghwak)
- Add options to drop self-loops & multi_edges in C++ test graph generation ([#1934](https://github.com/rapidsai/cugraph/pull/1934)) [@seunghwak](https://github.com/seunghwak)
- K-core implementation for undirected graphs ([#1933](https://github.com/rapidsai/cugraph/pull/1933)) [@seunghwak](https://github.com/seunghwak)
- K-core decomposition API update ([#1924](https://github.com/rapidsai/cugraph/pull/1924)) [@seunghwak](https://github.com/seunghwak)
- Transpose ([#1834](https://github.com/rapidsai/cugraph/pull/1834)) [@seunghwak](https://github.com/seunghwak)
- Symmetrize ([#1833](https://github.com/rapidsai/cugraph/pull/1833)) [@seunghwak](https://github.com/seunghwak)
## 🛠️ Improvements
- Fix Changelog Merge Conflicts for `branch-21.12` ([#1960](https://github.com/rapidsai/cugraph/pull/1960)) [@ajschmidt8](https://github.com/ajschmidt8)
- Pin max `dask` & `distributed` to `2021.11.2` ([#1958](https://github.com/rapidsai/cugraph/pull/1958)) [@galipremsagar](https://github.com/galipremsagar)
- Explicitly install cusolver version with the correct ABI version ([#1954](https://github.com/rapidsai/cugraph/pull/1954)) [@robertmaynard](https://github.com/robertmaynard)
- Upgrade `clang` to `11.1.0` ([#1949](https://github.com/rapidsai/cugraph/pull/1949)) [@galipremsagar](https://github.com/galipremsagar)
- cugraph bring in the same cuco as raft and cudf ([#1945](https://github.com/rapidsai/cugraph/pull/1945)) [@robertmaynard](https://github.com/robertmaynard)
- Re-enable HITS in the python API using the new primitive-based implementation ([#1941](https://github.com/rapidsai/cugraph/pull/1941)) [@rlratzel](https://github.com/rlratzel)
- Accounting for raft::random detail changes ([#1937](https://github.com/rapidsai/cugraph/pull/1937)) [@divyegala](https://github.com/divyegala)
- Use collections.abc.Sequence instead of deprecated collections.Sequence. ([#1932](https://github.com/rapidsai/cugraph/pull/1932)) [@bdice](https://github.com/bdice)
- Update rapids-cmake to 21.12 ([#1931](https://github.com/rapidsai/cugraph/pull/1931)) [@dantegd](https://github.com/dantegd)
- Disable HITS and setup 11.5 env ([#1930](https://github.com/rapidsai/cugraph/pull/1930)) [@BradReesWork](https://github.com/BradReesWork)
- add new demo notebook for louvain ([#1927](https://github.com/rapidsai/cugraph/pull/1927)) [@ChuckHastings](https://github.com/ChuckHastings)
- Ensure empty shuffled columns have the appropriate dtype ([#1926](https://github.com/rapidsai/cugraph/pull/1926)) [@jnke2016](https://github.com/jnke2016)
- improved Nx conversion performance ([#1921](https://github.com/rapidsai/cugraph/pull/1921)) [@BradReesWork](https://github.com/BradReesWork)
- Fix metadata mismatch ([#1920](https://github.com/rapidsai/cugraph/pull/1920)) [@jnke2016](https://github.com/jnke2016)
- Additional improvements to support (key, value) pairs when E/V is small and P is large ([#1919](https://github.com/rapidsai/cugraph/pull/1919)) [@seunghwak](https://github.com/seunghwak)
- Remove unnecessary host barrier synchronization ([#1917](https://github.com/rapidsai/cugraph/pull/1917)) [@seunghwak](https://github.com/seunghwak)
- Reduce MNMG memory requirements ([#1916](https://github.com/rapidsai/cugraph/pull/1916)) [@seunghwak](https://github.com/seunghwak)
- Added separate helpers for moving buffers to either cudf column and series objects ([#1915](https://github.com/rapidsai/cugraph/pull/1915)) [@rlratzel](https://github.com/rlratzel)
- C API for creating a graph ([#1907](https://github.com/rapidsai/cugraph/pull/1907)) [@ChuckHastings](https://github.com/ChuckHastings)
- Add raft ops for reduce_v and transform_reduce_v ([#1902](https://github.com/rapidsai/cugraph/pull/1902)) [@kaatish](https://github.com/kaatish)
- Store benchmark results in json files ([#1900](https://github.com/rapidsai/cugraph/pull/1900)) [@jnke2016](https://github.com/jnke2016)
- HITS primitive based implementation ([#1898](https://github.com/rapidsai/cugraph/pull/1898)) [@kaatish](https://github.com/kaatish)
- Update to UCX-Py 0.23 ([#1895](https://github.com/rapidsai/cugraph/pull/1895)) [@Ethyling](https://github.com/Ethyling)
- Updating WCC/SCC notebook ([#1893](https://github.com/rapidsai/cugraph/pull/1893)) [@BradReesWork](https://github.com/BradReesWork)
- Update input argument check for graph_t constructor and remove expensive input argument check for graph_view_t ([#1890](https://github.com/rapidsai/cugraph/pull/1890)) [@seunghwak](https://github.com/seunghwak)
- Update `conda` recipes for Enhanced Compatibility effort ([#1889](https://github.com/rapidsai/cugraph/pull/1889)) [@ajschmidt8](https://github.com/ajschmidt8)
- Minor code clean-up ([#1888](https://github.com/rapidsai/cugraph/pull/1888)) [@seunghwak](https://github.com/seunghwak)
- Sort local neighbors in the graph adjacency list. ([#1886](https://github.com/rapidsai/cugraph/pull/1886)) [@seunghwak](https://github.com/seunghwak)
- initial creation of libcugraph_etl.so ([#1885](https://github.com/rapidsai/cugraph/pull/1885)) [@ChuckHastings](https://github.com/ChuckHastings)
- Fixing Nx and Graph/DiGraph issues ([#1882](https://github.com/rapidsai/cugraph/pull/1882)) [@BradReesWork](https://github.com/BradReesWork)
- Remove unnecessary explicit template instantiation ([#1878](https://github.com/rapidsai/cugraph/pull/1878)) [@seunghwak](https://github.com/seunghwak)
- node2vec Sampling Implementation ([#1875](https://github.com/rapidsai/cugraph/pull/1875)) [@aschaffer](https://github.com/aschaffer)
- update docstring and examples ([#1866](https://github.com/rapidsai/cugraph/pull/1866)) [@jnke2016](https://github.com/jnke2016)
- Copy v transform reduce out test ([#1856](https://github.com/rapidsai/cugraph/pull/1856)) [@kaatish](https://github.com/kaatish)
- Unpin `dask` & `distributed` ([#1849](https://github.com/rapidsai/cugraph/pull/1849)) [@galipremsagar](https://github.com/galipremsagar)
- Fix automerger for `branch-21.12` ([#1848](https://github.com/rapidsai/cugraph/pull/1848)) [@galipremsagar](https://github.com/galipremsagar)
- Extract BFS paths SG implementation ([#1838](https://github.com/rapidsai/cugraph/pull/1838)) [@ChuckHastings](https://github.com/ChuckHastings)
- Initial cuGraph C API - biased RW, C tests, script updates, cmake files, C library helpers ([#1799](https://github.com/rapidsai/cugraph/pull/1799)) [@aschaffer](https://github.com/aschaffer)
# cuGraph 21.10.00 (7 Oct 2021)
## 🚨 Breaking Changes
- remove tsp implementation from 21.10 ([#1812](https://github.com/rapidsai/cugraph/pull/1812)) [@ChuckHastings](https://github.com/ChuckHastings)
- multi seeds BFS with one seed per component ([#1591](https://github.com/rapidsai/cugraph/pull/1591)) [@afender](https://github.com/afender)
## 🐛 Bug Fixes
- make_zip_iterator should be on a make_tuple ([#1857](https://github.com/rapidsai/cugraph/pull/1857)) [@ChuckHastings](https://github.com/ChuckHastings)
- Removed NetworkX requirement for type checks, fixed docstring, added new docstrings, import cleanups ([#1853](https://github.com/rapidsai/cugraph/pull/1853)) [@rlratzel](https://github.com/rlratzel)
- Temporarily disable input argument checks for a currently disabled feature ([#1840](https://github.com/rapidsai/cugraph/pull/1840)) [@seunghwak](https://github.com/seunghwak)
- Changed value of the expensive check param to `false` in `populate_graph_container` ([#1839](https://github.com/rapidsai/cugraph/pull/1839)) [@rlratzel](https://github.com/rlratzel)
- Accommodate cudf change to is_string_dtype method ([#1827](https://github.com/rapidsai/cugraph/pull/1827)) [@ChuckHastings](https://github.com/ChuckHastings)
- Changed code to disable `k_truss` on CUDA 11.4 differently ([#1811](https://github.com/rapidsai/cugraph/pull/1811)) [@rlratzel](https://github.com/rlratzel)
- Clean-up artifacts from the multi-source BFS PR (#1591) ([#1804](https://github.com/rapidsai/cugraph/pull/1804)) [@seunghwak](https://github.com/seunghwak)
- MG WCC bug fix ([#1802](https://github.com/rapidsai/cugraph/pull/1802)) [@seunghwak](https://github.com/seunghwak)
- Fix MG Louvain test compile errors ([#1797](https://github.com/rapidsai/cugraph/pull/1797)) [@seunghwak](https://github.com/seunghwak)
- force_atlas2 to support nx hypercube_graph ([#1779](https://github.com/rapidsai/cugraph/pull/1779)) [@jnke2016](https://github.com/jnke2016)
- Bug louvain reverted fix ([#1766](https://github.com/rapidsai/cugraph/pull/1766)) [@ChuckHastings](https://github.com/ChuckHastings)
- Bug dask cudf personalization ([#1764](https://github.com/rapidsai/cugraph/pull/1764)) [@Iroy30](https://github.com/Iroy30)
## 📖 Documentation
- updated to new doc theme ([#1793](https://github.com/rapidsai/cugraph/pull/1793)) [@BradReesWork](https://github.com/BradReesWork)
- Change python docs to pydata theme ([#1785](https://github.com/rapidsai/cugraph/pull/1785)) [@galipremsagar](https://github.com/galipremsagar)
- Initial doc update for running the python E2E benchmarks in a MNMG environment. ([#1781](https://github.com/rapidsai/cugraph/pull/1781)) [@rlratzel](https://github.com/rlratzel)
## 🚀 New Features
- C++ benchmarking for additional algorithms ([#1762](https://github.com/rapidsai/cugraph/pull/1762)) [@seunghwak](https://github.com/seunghwak)
## 🛠️ Improvements
- Updating cuco to latest ([#1859](https://github.com/rapidsai/cugraph/pull/1859)) [@BradReesWork](https://github.com/BradReesWork)
- fix benchmark exit status ([#1850](https://github.com/rapidsai/cugraph/pull/1850)) [@jnke2016](https://github.com/jnke2016)
- add try/catch for python-louvain ([#1842](https://github.com/rapidsai/cugraph/pull/1842)) [@BradReesWork](https://github.com/BradReesWork)
- Pin max dask and distributed versions to 2021.09.1 ([#1841](https://github.com/rapidsai/cugraph/pull/1841)) [@galipremsagar](https://github.com/galipremsagar)
- add compiler version checks to cmake to fail early ([#1836](https://github.com/rapidsai/cugraph/pull/1836)) [@ChuckHastings](https://github.com/ChuckHastings)
- Make sure we keep the rapids-cmake and cugraph cal version in sync ([#1830](https://github.com/rapidsai/cugraph/pull/1830)) [@robertmaynard](https://github.com/robertmaynard)
- Remove obsolete file ([#1829](https://github.com/rapidsai/cugraph/pull/1829)) [@ChuckHastings](https://github.com/ChuckHastings)
- Improve memory scaling for low average vertex degree graphs & many GPUs ([#1823](https://github.com/rapidsai/cugraph/pull/1823)) [@seunghwak](https://github.com/seunghwak)
- Added the reduction op input parameter to host_scalar_(all)reduce utility functions. ([#1822](https://github.com/rapidsai/cugraph/pull/1822)) [@seunghwak](https://github.com/seunghwak)
- Count if e test ([#1821](https://github.com/rapidsai/cugraph/pull/1821)) [@kaatish](https://github.com/kaatish)
- Added Sorensen algorithm to Python API ([#1820](https://github.com/rapidsai/cugraph/pull/1820)) [@jnke2016](https://github.com/jnke2016)
- Updated to enforce only supported dtypes, changed to use legacy connected_components API ([#1817](https://github.com/rapidsai/cugraph/pull/1817)) [@rlratzel](https://github.com/rlratzel)
- Group return values of renumber_edgelist and input parameters of graph_t & graph_view_t constructors. ([#1816](https://github.com/rapidsai/cugraph/pull/1816)) [@seunghwak](https://github.com/seunghwak)
- remove tsp implementation from 21.10 ([#1812](https://github.com/rapidsai/cugraph/pull/1812)) [@ChuckHastings](https://github.com/ChuckHastings)
- Changed pylibcugraph connected_components APIs to use duck typing for CAI inputs, added doc placeholders ([#1810](https://github.com/rapidsai/cugraph/pull/1810)) [@rlratzel](https://github.com/rlratzel)
- Add new raft symlink path to .gitignore ([#1808](https://github.com/rapidsai/cugraph/pull/1808)) [@trxcllnt](https://github.com/trxcllnt)
- Initial version of `pylibcugraph` conda package and CI build script updates ([#1806](https://github.com/rapidsai/cugraph/pull/1806)) [@rlratzel](https://github.com/rlratzel)
- Also building cpp MG tests as part of conda/CI libcugraph builds ([#1805](https://github.com/rapidsai/cugraph/pull/1805)) [@rlratzel](https://github.com/rlratzel)
- Split many files to separate SG from MG template instantiations ([#1803](https://github.com/rapidsai/cugraph/pull/1803)) [@ChuckHastings](https://github.com/ChuckHastings)
- Graph primitives memory scaling improvements for low average vertex degree graphs and many GPUs (Part 1) ([#1801](https://github.com/rapidsai/cugraph/pull/1801)) [@seunghwak](https://github.com/seunghwak)
- Pylibcugraph connected components ([#1800](https://github.com/rapidsai/cugraph/pull/1800)) [@Iroy30](https://github.com/Iroy30)
- Transform Reduce E test ([#1798](https://github.com/rapidsai/cugraph/pull/1798)) [@kaatish](https://github.com/kaatish)
- Update with rapids cmake new features ([#1790](https://github.com/rapidsai/cugraph/pull/1790)) [@robertmaynard](https://github.com/robertmaynard)
- Update thrust/RMM deprecated calls ([#1789](https://github.com/rapidsai/cugraph/pull/1789)) [@dantegd](https://github.com/dantegd)
- Update UCX-Py to 0.22 ([#1788](https://github.com/rapidsai/cugraph/pull/1788)) [@pentschev](https://github.com/pentschev)
- Initial version of `pylibcugraph` source tree and build script updates ([#1787](https://github.com/rapidsai/cugraph/pull/1787)) [@rlratzel](https://github.com/rlratzel)
- Fix Forward-Merge Conflicts ([#1786](https://github.com/rapidsai/cugraph/pull/1786)) [@ajschmidt8](https://github.com/ajschmidt8)
- add conda environment for CUDA 11.4 ([#1784](https://github.com/rapidsai/cugraph/pull/1784)) [@seunghwak](https://github.com/seunghwak)
- Temporarily pin RMM while refactor removes deprecated calls ([#1775](https://github.com/rapidsai/cugraph/pull/1775)) [@dantegd](https://github.com/dantegd)
- MNMG memory footprint improvement for low average vertex degree graphs (part 2) ([#1774](https://github.com/rapidsai/cugraph/pull/1774)) [@seunghwak](https://github.com/seunghwak)
- Fix unused variables/parameters warnings ([#1772](https://github.com/rapidsai/cugraph/pull/1772)) [@seunghwak](https://github.com/seunghwak)
- MNMG memory footprint improvement for low average vertex degree graphs (part 1) ([#1769](https://github.com/rapidsai/cugraph/pull/1769)) [@seunghwak](https://github.com/seunghwak)
- Transform reduce v test ([#1768](https://github.com/rapidsai/cugraph/pull/1768)) [@kaatish](https://github.com/kaatish)
- Move experimental source files and a few implementation headers ([#1763](https://github.com/rapidsai/cugraph/pull/1763)) [@ChuckHastings](https://github.com/ChuckHastings)
- updating notebooks ([#1761](https://github.com/rapidsai/cugraph/pull/1761)) [@BradReesWork](https://github.com/BradReesWork)
- consolidate tests to use the fixture dask_client ([#1758](https://github.com/rapidsai/cugraph/pull/1758)) [@jnke2016](https://github.com/jnke2016)
- Move all new graph objects out of experimental namespace ([#1757](https://github.com/rapidsai/cugraph/pull/1757)) [@ChuckHastings](https://github.com/ChuckHastings)
- C++ benchmarking for MG PageRank ([#1755](https://github.com/rapidsai/cugraph/pull/1755)) [@seunghwak](https://github.com/seunghwak)
- Move legacy implementations into legacy directories ([#1752](https://github.com/rapidsai/cugraph/pull/1752)) [@ChuckHastings](https://github.com/ChuckHastings)
- Remove hardcoded Pagerank dtype ([#1751](https://github.com/rapidsai/cugraph/pull/1751)) [@jnke2016](https://github.com/jnke2016)
- Add python end to end benchmark and create new directories ([#1750](https://github.com/rapidsai/cugraph/pull/1750)) [@jnke2016](https://github.com/jnke2016)
- Modify MNMG louvain to support an empty vertex partition ([#1744](https://github.com/rapidsai/cugraph/pull/1744)) [@ChuckHastings](https://github.com/ChuckHastings)
- Fea renumbering test ([#1742](https://github.com/rapidsai/cugraph/pull/1742)) [@ChuckHastings](https://github.com/ChuckHastings)
- Fix auto-merger for Branch 21.10 coming from 21.08 ([#1740](https://github.com/rapidsai/cugraph/pull/1740)) [@galipremsagar](https://github.com/galipremsagar)
- Use the new RAPIDS.cmake to fetch rapids-cmake ([#1734](https://github.com/rapidsai/cugraph/pull/1734)) [@robertmaynard](https://github.com/robertmaynard)
- Biased Random Walks for GNN ([#1732](https://github.com/rapidsai/cugraph/pull/1732)) [@aschaffer](https://github.com/aschaffer)
- Updated MG python tests to run in single and multi-node environments ([#1731](https://github.com/rapidsai/cugraph/pull/1731)) [@rlratzel](https://github.com/rlratzel)
- ENH Replace gpuci_conda_retry with gpuci_mamba_retry ([#1720](https://github.com/rapidsai/cugraph/pull/1720)) [@dillon-cullinan](https://github.com/dillon-cullinan)
- Apply modifications to account for RAFT changes ([#1707](https://github.com/rapidsai/cugraph/pull/1707)) [@viclafargue](https://github.com/viclafargue)
- multi seeds BFS with one seed per component ([#1591](https://github.com/rapidsai/cugraph/pull/1591)) [@afender](https://github.com/afender)
# cuGraph 21.08.00 (4 Aug 2021)
## 🚨 Breaking Changes
- Removed deprecated code ([#1705](https://github.com/rapidsai/cugraph/pull/1705)) [@BradReesWork](https://github.com/BradReesWork)
- Delete legacy renumbering implementation ([#1681](https://github.com/rapidsai/cugraph/pull/1681)) [@ChuckHastings](https://github.com/ChuckHastings)
- Migrate old graph to legacy directory/namespace ([#1675](https://github.com/rapidsai/cugraph/pull/1675)) [@ChuckHastings](https://github.com/ChuckHastings)
## 🐛 Bug Fixes
- Changed cuco cmake function to return early if cuco has already been added as a target ([#1746](https://github.com/rapidsai/cugraph/pull/1746)) [@rlratzel](https://github.com/rlratzel)
- revert cuco to latest dev branch, issues should be fixed ([#1721](https://github.com/rapidsai/cugraph/pull/1721)) [@ChuckHastings](https://github.com/ChuckHastings)
- Fix `conda` uploads ([#1712](https://github.com/rapidsai/cugraph/pull/1712)) [@ajschmidt8](https://github.com/ajschmidt8)
- Updated for CUDA-specific py packages ([#1709](https://github.com/rapidsai/cugraph/pull/1709)) [@rlratzel](https://github.com/rlratzel)
- Use `library_dirs` for cython linking, link cudatoolkit libs, allow setting UCX install location ([#1698](https://github.com/rapidsai/cugraph/pull/1698)) [@trxcllnt](https://github.com/trxcllnt)
- Fix the Louvain failure with 64 bit vertex IDs ([#1696](https://github.com/rapidsai/cugraph/pull/1696)) [@seunghwak](https://github.com/seunghwak)
- Use nested include in destination of install headers to avoid docker permission issues ([#1656](https://github.com/rapidsai/cugraph/pull/1656)) [@dantegd](https://github.com/dantegd)
- Added accidentally-removed cpp-mgtests target back to the valid args list ([#1652](https://github.com/rapidsai/cugraph/pull/1652)) [@rlratzel](https://github.com/rlratzel)
- Update UCX-Py version to 0.21 ([#1650](https://github.com/rapidsai/cugraph/pull/1650)) [@pentschev](https://github.com/pentschev)
## 📖 Documentation
- Docs for RMAT ([#1735](https://github.com/rapidsai/cugraph/pull/1735)) [@BradReesWork](https://github.com/BradReesWork)
- Doc updates ([#1719](https://github.com/rapidsai/cugraph/pull/1719)) [@BradReesWork](https://github.com/BradReesWork)
## 🚀 New Features
- Fea cleanup stream part1 ([#1653](https://github.com/rapidsai/cugraph/pull/1653)) [@ChuckHastings](https://github.com/ChuckHastings)
## 🛠️ Improvements
- Pinning cuco to a specific commit hash for release ([#1741](https://github.com/rapidsai/cugraph/pull/1741)) [@rlratzel](https://github.com/rlratzel)
- Pin max version for `dask` & `distributed` ([#1736](https://github.com/rapidsai/cugraph/pull/1736)) [@galipremsagar](https://github.com/galipremsagar)
- Fix libfaiss dependency to not expressly depend on conda-forge ([#1728](https://github.com/rapidsai/cugraph/pull/1728)) [@Ethyling](https://github.com/Ethyling)
- Fix MG_test bug ([#1718](https://github.com/rapidsai/cugraph/pull/1718)) [@jnke2016](https://github.com/jnke2016)
- Cascaded dispatch for type-erased API ([#1711](https://github.com/rapidsai/cugraph/pull/1711)) [@aschaffer](https://github.com/aschaffer)
- ReduceV test ([#1710](https://github.com/rapidsai/cugraph/pull/1710)) [@kaatish](https://github.com/kaatish)
- Removed deprecated code ([#1705](https://github.com/rapidsai/cugraph/pull/1705)) [@BradReesWork](https://github.com/BradReesWork)
- Delete unused/out-dated primitives ([#1704](https://github.com/rapidsai/cugraph/pull/1704)) [@seunghwak](https://github.com/seunghwak)
- Update primitives to support DCSR (DCSC) segments (Part 2/2) ([#1703](https://github.com/rapidsai/cugraph/pull/1703)) [@seunghwak](https://github.com/seunghwak)
- Fea speedup compile ([#1702](https://github.com/rapidsai/cugraph/pull/1702)) [@ChuckHastings](https://github.com/ChuckHastings)
- Update `conda` environment name for CI ([#1699](https://github.com/rapidsai/cugraph/pull/1699)) [@ajschmidt8](https://github.com/ajschmidt8)
- Count if test ([#1697](https://github.com/rapidsai/cugraph/pull/1697)) [@kaatish](https://github.com/kaatish)
- replace cudf assert_eq ([#1693](https://github.com/rapidsai/cugraph/pull/1693)) [@jnke2016](https://github.com/jnke2016)
- Fix int64 vertex_t ([#1691](https://github.com/rapidsai/cugraph/pull/1691)) [@Iroy30](https://github.com/Iroy30)
- Update primitives to support DCSR (DCSC) segments (Part 1) ([#1690](https://github.com/rapidsai/cugraph/pull/1690)) [@seunghwak](https://github.com/seunghwak)
- remove hardcoded dtype ([#1689](https://github.com/rapidsai/cugraph/pull/1689)) [@Iroy30](https://github.com/Iroy30)
- Updating Clang Version to 11.0.0 ([#1688](https://github.com/rapidsai/cugraph/pull/1688)) [@codereport](https://github.com/codereport)
- `CHECK_CUDA` macros in debug builds ([#1687](https://github.com/rapidsai/cugraph/pull/1687)) [@trxcllnt](https://github.com/trxcllnt)
- fixing symmetrize_ddf ([#1686](https://github.com/rapidsai/cugraph/pull/1686)) [@jnke2016](https://github.com/jnke2016)
- Improve Random Walks performance ([#1685](https://github.com/rapidsai/cugraph/pull/1685)) [@aschaffer](https://github.com/aschaffer)
- Use the 21.08 branch of rapids-cmake as rmm requires it ([#1683](https://github.com/rapidsai/cugraph/pull/1683)) [@robertmaynard](https://github.com/robertmaynard)
- Delete legacy renumbering implementation ([#1681](https://github.com/rapidsai/cugraph/pull/1681)) [@ChuckHastings](https://github.com/ChuckHastings)
- Fix vertex partition offsets ([#1680](https://github.com/rapidsai/cugraph/pull/1680)) [@Iroy30](https://github.com/Iroy30)
- Use std::optional (or thrust::optional) for optional parameters & first part of DCSR (DCSC) implementation. ([#1676](https://github.com/rapidsai/cugraph/pull/1676)) [@seunghwak](https://github.com/seunghwak)
- Migrate old graph to legacy directory/namespace ([#1675](https://github.com/rapidsai/cugraph/pull/1675)) [@ChuckHastings](https://github.com/ChuckHastings)
- Expose epsilon parameter (precision) through python layer ([#1674](https://github.com/rapidsai/cugraph/pull/1674)) [@ChuckHastings](https://github.com/ChuckHastings)
- Fea hungarian expose precision ([#1673](https://github.com/rapidsai/cugraph/pull/1673)) [@ChuckHastings](https://github.com/ChuckHastings)
- Branch 21.08 merge 21.06 ([#1672](https://github.com/rapidsai/cugraph/pull/1672)) [@BradReesWork](https://github.com/BradReesWork)
- Update pins to Dask/Distributed >= 2021.6.0 ([#1666](https://github.com/rapidsai/cugraph/pull/1666)) [@pentschev](https://github.com/pentschev)
- Fix conflicts in `1643` ([#1651](https://github.com/rapidsai/cugraph/pull/1651)) [@ajschmidt8](https://github.com/ajschmidt8)
- Rename include/cugraph/patterns to include/cugraph/prims ([#1644](https://github.com/rapidsai/cugraph/pull/1644)) [@seunghwak](https://github.com/seunghwak)
- Fix merge conflicts in 1631 ([#1639](https://github.com/rapidsai/cugraph/pull/1639)) [@ajschmidt8](https://github.com/ajschmidt8)
- Update to changed `rmm::device_scalar` API ([#1637](https://github.com/rapidsai/cugraph/pull/1637)) [@harrism](https://github.com/harrism)
- Fix merge conflicts ([#1614](https://github.com/rapidsai/cugraph/pull/1614)) [@ajschmidt8](https://github.com/ajschmidt8)
# cuGraph 21.06.00 (9 Jun 2021)
## 🐛 Bug Fixes
- Delete CUDA_ARCHITECTURES=OFF ([#1638](https://github.com/rapidsai/cugraph/pull/1638)) [@seunghwak](https://github.com/seunghwak)
- transform_reduce_e bug fixes ([#1633](https://github.com/rapidsai/cugraph/pull/1633)) [@ChuckHastings](https://github.com/ChuckHastings)
- Correct install path for include folder to avoid double nesting ([#1630](https://github.com/rapidsai/cugraph/pull/1630)) [@dantegd](https://github.com/dantegd)
- Remove thread local thrust::sort (thrust::sort with the execution policy thrust::seq) from copy_v_transform_reduce_key_aggregated_out_nbr ([#1627](https://github.com/rapidsai/cugraph/pull/1627)) [@seunghwak](https://github.com/seunghwak)
## 🚀 New Features
- SG & MG Weakly Connected Components ([#1604](https://github.com/rapidsai/cugraph/pull/1604)) [@seunghwak](https://github.com/seunghwak)
## 🛠️ Improvements
- Remove Pascal guard and test cuGraph use of cuco::static_map on Pascal ([#1640](https://github.com/rapidsai/cugraph/pull/1640)) [@seunghwak](https://github.com/seunghwak)
- Upgraded recipe and dev envs to NCCL 2.9.9 ([#1636](https://github.com/rapidsai/cugraph/pull/1636)) [@rlratzel](https://github.com/rlratzel)
- Use UCX-Py 0.20 ([#1634](https://github.com/rapidsai/cugraph/pull/1634)) [@jakirkham](https://github.com/jakirkham)
- Updated dependencies for CalVer ([#1629](https://github.com/rapidsai/cugraph/pull/1629)) [@rlratzel](https://github.com/rlratzel)
- MG WCC improvements ([#1628](https://github.com/rapidsai/cugraph/pull/1628)) [@seunghwak](https://github.com/seunghwak)
- Initialize force_atlas2 `old_forces` device_uvector, use new `rmm::exec_policy` ([#1625](https://github.com/rapidsai/cugraph/pull/1625)) [@trxcllnt](https://github.com/trxcllnt)
- Fix developer guide examples for device_buffer ([#1619](https://github.com/rapidsai/cugraph/pull/1619)) [@harrism](https://github.com/harrism)
- Pass rmm memory allocator to cuco::static_map ([#1617](https://github.com/rapidsai/cugraph/pull/1617)) [@seunghwak](https://github.com/seunghwak)
- Undo disabling MG C++ testing outputs for non-root processes ([#1615](https://github.com/rapidsai/cugraph/pull/1615)) [@seunghwak](https://github.com/seunghwak)
- WCC bindings ([#1612](https://github.com/rapidsai/cugraph/pull/1612)) [@Iroy30](https://github.com/Iroy30)
- address 'ValueError: Series contains NULL values' from from_cudf_edge… ([#1610](https://github.com/rapidsai/cugraph/pull/1610)) [@mattf](https://github.com/mattf)
- Fea rmm device buffer change ([#1609](https://github.com/rapidsai/cugraph/pull/1609)) [@ChuckHastings](https://github.com/ChuckHastings)
- Update `CHANGELOG.md` links for calver ([#1608](https://github.com/rapidsai/cugraph/pull/1608)) [@ajschmidt8](https://github.com/ajschmidt8)
- Handle int64 in force atlas wrapper and update to uvector ([#1607](https://github.com/rapidsai/cugraph/pull/1607)) [@hlinsen](https://github.com/hlinsen)
- Update docs build script ([#1606](https://github.com/rapidsai/cugraph/pull/1606)) [@ajschmidt8](https://github.com/ajschmidt8)
- WCC performance/memory footprint optimization ([#1605](https://github.com/rapidsai/cugraph/pull/1605)) [@seunghwak](https://github.com/seunghwak)
- adding test graphs - part 2 ([#1603](https://github.com/rapidsai/cugraph/pull/1603)) [@ChuckHastings](https://github.com/ChuckHastings)
- Update the Random Walk binding ([#1599](https://github.com/rapidsai/cugraph/pull/1599)) [@Iroy30](https://github.com/Iroy30)
- Add mnmg out degree ([#1592](https://github.com/rapidsai/cugraph/pull/1592)) [@Iroy30](https://github.com/Iroy30)
- Update `cugraph` to use the newest CMake features, including CPM for dependencies ([#1585](https://github.com/rapidsai/cugraph/pull/1585)) [@robertmaynard](https://github.com/robertmaynard)
- Implement Graph Batching functionality ([#1580](https://github.com/rapidsai/cugraph/pull/1580)) [@aschaffer](https://github.com/aschaffer)
- add multi-column support in algorithms - part 2 ([#1571](https://github.com/rapidsai/cugraph/pull/1571)) [@Iroy30](https://github.com/Iroy30)
# cuGraph 0.19.0 (21 Apr 2021)
## 🐛 Bug Fixes
- Fixed copyright date and format ([#1526](https://github.com/rapidsai/cugraph/pull/1526)) [@rlratzel](https://github.com/rlratzel)
- fix mg_renumber non-deterministic errors ([#1523](https://github.com/rapidsai/cugraph/pull/1523)) [@Iroy30](https://github.com/Iroy30)
- Updated NetworkX version to 2.5.1 ([#1510](https://github.com/rapidsai/cugraph/pull/1510)) [@rlratzel](https://github.com/rlratzel)
- pascal renumbering fix ([#1505](https://github.com/rapidsai/cugraph/pull/1505)) [@Iroy30](https://github.com/Iroy30)
- Fix MNMG test failures and skip tests that are not supported on Pascal ([#1498](https://github.com/rapidsai/cugraph/pull/1498)) [@jnke2016](https://github.com/jnke2016)
- Revert "Update conda recipes pinning of repo dependencies" ([#1493](https://github.com/rapidsai/cugraph/pull/1493)) [@raydouglass](https://github.com/raydouglass)
- Update conda recipes pinning of repo dependencies ([#1485](https://github.com/rapidsai/cugraph/pull/1485)) [@mike-wendt](https://github.com/mike-wendt)
- Update to make notebook_list.py compatible with numba 0.53 ([#1455](https://github.com/rapidsai/cugraph/pull/1455)) [@rlratzel](https://github.com/rlratzel)
- Fix bugs in copy_v_transform_reduce_key_aggregated_out_nbr & groupby_gpuid_and_shuffle ([#1434](https://github.com/rapidsai/cugraph/pull/1434)) [@seunghwak](https://github.com/seunghwak)
- update default path of setup to use the new directory paths in build … ([#1425](https://github.com/rapidsai/cugraph/pull/1425)) [@ChuckHastings](https://github.com/ChuckHastings)
## 📖 Documentation
- Create C++ documentation ([#1489](https://github.com/rapidsai/cugraph/pull/1489)) [@ChuckHastings](https://github.com/ChuckHastings)
- Create cuGraph developers guide ([#1431](https://github.com/rapidsai/cugraph/pull/1431)) [@ChuckHastings](https://github.com/ChuckHastings)
- Add boost 1.0 license file. ([#1401](https://github.com/rapidsai/cugraph/pull/1401)) [@seunghwak](https://github.com/seunghwak)
## 🚀 New Features
- Implement C/CUDA RandomWalks functionality ([#1439](https://github.com/rapidsai/cugraph/pull/1439)) [@aschaffer](https://github.com/aschaffer)
- Add R-mat generator ([#1411](https://github.com/rapidsai/cugraph/pull/1411)) [@seunghwak](https://github.com/seunghwak)
## 🛠️ Improvements
- Random Walks - Python Bindings ([#1516](https://github.com/rapidsai/cugraph/pull/1516)) [@jnke2016](https://github.com/jnke2016)
- Updating RAFT tag ([#1509](https://github.com/rapidsai/cugraph/pull/1509)) [@afender](https://github.com/afender)
- Clean up nullptr cuda_stream_view arguments ([#1504](https://github.com/rapidsai/cugraph/pull/1504)) [@hlinsen](https://github.com/hlinsen)
- Reduce the size of the cugraph libraries ([#1503](https://github.com/rapidsai/cugraph/pull/1503)) [@robertmaynard](https://github.com/robertmaynard)
- Add indirection and replace algorithms with new renumbering ([#1484](https://github.com/rapidsai/cugraph/pull/1484)) [@Iroy30](https://github.com/Iroy30)
- Multiple graph generator with power law distribution on sizes ([#1483](https://github.com/rapidsai/cugraph/pull/1483)) [@afender](https://github.com/afender)
- TSP solver bug fix ([#1480](https://github.com/rapidsai/cugraph/pull/1480)) [@hlinsen](https://github.com/hlinsen)
- Added cmake function and .hpp template for generating version_config.hpp file. ([#1476](https://github.com/rapidsai/cugraph/pull/1476)) [@rlratzel](https://github.com/rlratzel)
- Fix for bug in SCC on self-loops ([#1475](https://github.com/rapidsai/cugraph/pull/1475)) [@aschaffer](https://github.com/aschaffer)
- MS BFS python APIs + EgoNet updates ([#1469](https://github.com/rapidsai/cugraph/pull/1469)) [@afender](https://github.com/afender)
- Removed unused dependencies from libcugraph recipe, moved non-test script code from test script to gpu build script ([#1468](https://github.com/rapidsai/cugraph/pull/1468)) [@rlratzel](https://github.com/rlratzel)
- Remove literals passed to `device_uvector::set_element_async` ([#1453](https://github.com/rapidsai/cugraph/pull/1453)) [@harrism](https://github.com/harrism)
- ENH Change conda build directories to work with ccache ([#1452](https://github.com/rapidsai/cugraph/pull/1452)) [@dillon-cullinan](https://github.com/dillon-cullinan)
- Updating docs ([#1448](https://github.com/rapidsai/cugraph/pull/1448)) [@BradReesWork](https://github.com/BradReesWork)
- Improve graph primitives performance on graphs with widely varying vertex degrees ([#1447](https://github.com/rapidsai/cugraph/pull/1447)) [@seunghwak](https://github.com/seunghwak)
- Update Changelog Link ([#1446](https://github.com/rapidsai/cugraph/pull/1446)) [@ajschmidt8](https://github.com/ajschmidt8)
- Updated NCCL to version 2.8.4 ([#1445](https://github.com/rapidsai/cugraph/pull/1445)) [@BradReesWork](https://github.com/BradReesWork)
- Update FAISS to 1.7.0 ([#1444](https://github.com/rapidsai/cugraph/pull/1444)) [@BradReesWork](https://github.com/BradReesWork)
- Update graph partitioning scheme ([#1443](https://github.com/rapidsai/cugraph/pull/1443)) [@seunghwak](https://github.com/seunghwak)
- Add additional datasets to improve coverage ([#1441](https://github.com/rapidsai/cugraph/pull/1441)) [@jnke2016](https://github.com/jnke2016)
- Update C++ MG PageRank and SG PageRank, Katz Centrality, BFS, and SSSP to use the new R-mat graph generator ([#1438](https://github.com/rapidsai/cugraph/pull/1438)) [@seunghwak](https://github.com/seunghwak)
- Remove raft handle duplication ([#1436](https://github.com/rapidsai/cugraph/pull/1436)) [@Iroy30](https://github.com/Iroy30)
- Streams infra + support in egonet ([#1435](https://github.com/rapidsai/cugraph/pull/1435)) [@afender](https://github.com/afender)
- Prepare Changelog for Automation ([#1433](https://github.com/rapidsai/cugraph/pull/1433)) [@ajschmidt8](https://github.com/ajschmidt8)
- Update 0.18 changelog entry ([#1429](https://github.com/rapidsai/cugraph/pull/1429)) [@ajschmidt8](https://github.com/ajschmidt8)
- Update and Test Renumber bindings ([#1427](https://github.com/rapidsai/cugraph/pull/1427)) [@Iroy30](https://github.com/Iroy30)
- Update Louvain to use new graph primitives and pattern accelerators ([#1423](https://github.com/rapidsai/cugraph/pull/1423)) [@ChuckHastings](https://github.com/ChuckHastings)
- Replace rmm::device_vector & thrust::host_vector with rmm::device_uvector & std::vector, respectively. ([#1421](https://github.com/rapidsai/cugraph/pull/1421)) [@seunghwak](https://github.com/seunghwak)
- Update C++ MG PageRank test ([#1419](https://github.com/rapidsai/cugraph/pull/1419)) [@seunghwak](https://github.com/seunghwak)
- ENH Build with `cmake --build` & Pass ccache variables to conda recipe & use Ninja in CI ([#1415](https://github.com/rapidsai/cugraph/pull/1415)) [@Ethyling](https://github.com/Ethyling)
- Adding new primitives: copy_v_transform_reduce_key_aggregated_out_nbr & transform_reduce_by_adj_matrix_row|col_key_e bug fixes ([#1399](https://github.com/rapidsai/cugraph/pull/1399)) [@seunghwak](https://github.com/seunghwak)
- Add new primitives: compute_in|out_degrees, compute_in|out_weight_sums to graph_view_t ([#1394](https://github.com/rapidsai/cugraph/pull/1394)) [@seunghwak](https://github.com/seunghwak)
- Rename sort_and_shuffle to groupby_gpuid_and_shuffle ([#1392](https://github.com/rapidsai/cugraph/pull/1392)) [@seunghwak](https://github.com/seunghwak)
- Matching updates for RAFT comms updates (device_sendrecv, device_multicast_sendrecv, gather, gatherv) ([#1391](https://github.com/rapidsai/cugraph/pull/1391)) [@seunghwak](https://github.com/seunghwak)
- Fix forward-merge conflicts for #1370 ([#1377](https://github.com/rapidsai/cugraph/pull/1377)) [@ajschmidt8](https://github.com/ajschmidt8)
- Add utility function for computing a secondary cost for BFS and SSSP output ([#1376](https://github.com/rapidsai/cugraph/pull/1376)) [@hlinsen](https://github.com/hlinsen)
# cuGraph 0.18.0 (24 Feb 2021)
## Bug Fixes 🐛
- Fixed TSP returned routes (#1412) @hlinsen
- Updated CI scripts to use a different error handling convention, updated LD_LIBRARY_PATH for project flash runs (#1386) @rlratzel
- Bug fixes for MNMG coarsen_graph, renumber_edgelist, relabel (#1364) @seunghwak
- Set a specific known working commit hash for gunrock instead of "dev" (#1336) @rlratzel
- Updated git utils used by copyright.py for compatibility with current CI env (#1325) @rlratzel
- Fix MNMG Louvain tests on Pascal architecture (#1322) @ChuckHastings
- FIX Set bash trap after PATH is updated (#1321) @dillon-cullinan
- Fix graph nodes function and renumbering from series (#1319) @Iroy30
- Fix Branch 0.18 merge 0.17 (#1314) @BradReesWork
- Fix EXPERIMENTAL_LOUVAIN_TEST on Pascal (#1312) @ChuckHastings
- Updated cuxfilter to 0.18, removed datashader indirect dependency in conda dev .yml files (#1311) @rlratzel
- Update SG PageRank C++ tests (#1307) @seunghwak
## Documentation 📖
- Enabled MultiGraph class and tests, updated SOURCEBUILD.md to include the latest build.sh options (#1351) @rlratzel
## New Features 🚀
- New EgoNet extractor (#1365) @afender
- Implement induced subgraph extraction primitive (SG C++) (#1354) @seunghwak
## Improvements 🛠️
- Update stale GHA with exemptions & new labels (#1413) @mike-wendt
- Add GHA to mark issues/prs as stale/rotten (#1408) @Ethyling
- update subgraph tests and remove legacy pagerank (#1378) @Iroy30
- Update the conda environments and README file (#1369) @BradReesWork
- Prepare Changelog for Automation (#1368) @ajschmidt8
- Update CMakeLists.txt files for consistency with RAPIDS and to support cugraph as an external project and other tech debt removal (#1367) @rlratzel
- Use new coarsen_graph primitive in Louvain (#1362) @ChuckHastings
- Added initial infrastructure for MG C++ testing and a Pagerank MG test using it (#1361) @rlratzel
- Add SG TSP (#1360) @hlinsen
- Build a Dendrogram class, adapt Louvain/Leiden/ECG to use it (#1359) @ChuckHastings
- Auto-label PRs based on their content (#1358) @jolorunyomi
- Implement MNMG Renumber (#1355) @aschaffer
- Enabling pytest code coverage output by default (#1352) @jnke2016
- Added configuration for new cugraph-doc-codeowners review group (#1344) @rlratzel
- API update to match RAFT PR #120 (#1343) @drobison00
- Pin gunrock to v1.2 for version 0.18 (#1342) @ChuckHastings
- Fix #1340 - Use generic from_edgelist() methods (#1341) @miguelusque
- Using RAPIDS_DATASET_ROOT_DIR env var in place of absolute path to datasets in tests (#1337) @jnke2016
- Expose dense implementation of Hungarian algorithm (#1333) @ChuckHastings
- SG Pagerank transition (#1332) @Iroy30
- improving error checking and docs (#1327) @BradReesWork
- Fix MNMG cleanup exceptions (#1326) @Iroy30
- Create labeler.yml (#1318) @jolorunyomi
- Updates to support nightly MG test automation (#1308) @rlratzel
- Add C++ graph functions (coarsen_graph, renumber_edgelist, relabel) and primitives (transform_reduce_by_adj_matrix_row_key, transform_reduce_by_adj_matrix_col_key, copy_v_transform_reduce_key_aggregated_out_nbr) (#1257) @seunghwak
# cuGraph 0.17.0 (10 Dec 2020)
## New Features
- PR #1276 MST
- PR #1245 Add functions to add pandas and numpy compatibility
- PR #1260 Add katz_centrality mnmg wrapper
- PR #1264 CuPy sparse matrix input support for WCC, SCC, SSSP, and BFS
- PR #1265 Implement Hungarian Algorithm
- PR #1274 Add generic from_edgelist() and from_adjlist() APIs
- PR #1279 Add self loop check variable in graph
- PR #1277 SciPy sparse matrix input support for WCC, SCC, SSSP, and BFS
- PR #1278 Add support for shortest_path_length and fix graph vertex checks
- PR #1280 Add Multi(Di)Graph support
## Improvements
- PR #1227 Pin cmake policies to cmake 3.17 version
- PR #1267 Compile time improvements via Explicit Instantiation Declarations.
- PR #1269 Removed old db code that was not being used
- PR #1271 Add extra check to make SG Louvain deterministic
- PR #1273 Update Force Atlas 2 notebook, wrapper and coding style
- PR #1289 Update api.rst for MST
- PR #1281 Update README
- PR #1293 Updating RAFT to latest
## Bug Fixes
- PR #1237 update tests for asymmetric graphs, enable personalization pagerank
- PR #1242 Calling gunrock cmake using explicit -D options, re-enabling C++ tests
- PR #1246 Use latest Gunrock, update HITS implementation
- PR #1250 Updated cuco commit hash to latest as of 2020-10-30 and removed unneeded GIT_SHALLOW param
- PR #1251 Changed the MG context testing class to use updated parameters passed in from the individual tests
- PR #1253 MG test fixes: updated additional comms.initialize() calls, fixed dask DataFrame comparisons
- PR #1270 Raise exception for p2p, disable bottom up approach for bfs
- PR #1275 Force local artifact conda install
- PR #1285 Move codecov upload to gpu build script
- PR #1290 Update weights check in bc and graph prims wrappers
- PR #1299 Update doc and notebook
- PR #1304 Enable all GPU archs for test builds
# cuGraph 0.16.0 (21 Oct 2020)
## New Features
- PR #1098 Add new graph classes to support 2D partitioning
- PR #1124 Sub-communicator initialization for 2D partitioning support
- PR #838 Add pattern accelerator API functions and pattern accelerator API based implementations of PageRank, Katz Centrality, BFS, and SSSP
- PR #1147 Added support for NetworkX graphs as input type
- PR #1157 Louvain API update to use graph_container_t
- PR #1151 MNMG extension for pattern accelerator based PageRank, Katz Centrality, BFS, and SSSP implementations (C++ part)
- PR #1163 Integrated 2D shuffling and Louvain updates
- PR #1178 Refactored cython graph factory code to scale to additional data types
- PR #1175 Integrated 2D pagerank python/cython infra
- PR #1177 Integrated 2D bfs and sssp python/cython infra
- PR #1172 MNMG Louvain implementation
## Improvements
- PR #1081 MNMG Renumbering - sort partitions by degree
- PR #1115 Replace deprecated rmm::mr::get_default_resource with rmm::mr::get_current_device_resource
- PR #1133 added python 2D shuffling
- PR #1129 Refactored test to use common dataset and added additional doc pages
- PR #1135 SG Updates to Louvain et al.
- PR #1132 Upgrade Thrust to latest commit
- PR #1145 Simple edge list generator
- PR #1144 updated documentation and APIs
- PR #1139 MNMG Louvain Python updates, Cython cleanup
- PR #1156 Add aarch64 gencode support
- PR #1149 Parquet read and concat within workers
- PR #1152 graph container cleanup, added arg for instantiating legacy types and switch statements to factory function
- PR #1164 MG symmetrize and conda env updates
- PR #1162 enhanced networkx testing
- PR #1169 Added RAPIDS cpp packages to cugraph dev env
- PR #1165 updated remaining algorithms to be NetworkX compatible
- PR #1176 Update ci/local/README.md
- PR #1184 BLD getting latest tags
- PR #1222 Added min CUDA version check to MG Louvain
- PR #1217 NetworkX Transition doc
- PR #1223 Update mnmg docs
- PR #1230 Improve gpuCI scripts
## Bug Fixes
- PR #1131 Show style checker errors with set +e
- PR #1150 Update RAFT git tag
- PR #1155 Remove RMM library dependency and CXX11 ABI handling
- PR #1158 Pass size_t* & size_t* instead of size_t[] & int[] for raft allgatherv's input parameters recvcounts & displs
- PR #1168 Disabled MG tests on single GPU
- PR #1166 Fix misspelling of function calls in asserts causing debug build to fail
- PR #1180 BLD Adopt RAFT model for cuhornet dependency
- PR #1181 Fix notebook error handling in CI
- PR #1199 BUG segfault in python test suite
- PR #1186 BLD Installing raft headers under cugraph
- PR #1192 Fix benchmark notes and documentation issues in graph.py
- PR #1196 Move subcomms init outside of individual algorithm functions
- PR #1198 Remove deprecated call to from_gpu_matrix
- PR #1174 Fix bugs in MNMG pattern accelerators and pattern accelerator based implementations of MNMG PageRank, BFS, and SSSP
- PR #1233 Temporarily disabling C++ tests for 0.16
- PR #1240 Require `ucx-proc=*=gpu`
- PR #1241 Fix a bug in personalized PageRank with the new graph primitives API.
- PR #1249 Fix upload script syntax
# cuGraph 0.15.0 (26 Aug 2020)
## New Features
- PR #940 Add MG Batch BC
- PR #937 Add wrapper for gunrock HITS algorithm
- PR #939 Updated Notebooks to include new features and benchmarks
- PR #944 MG pagerank (dask)
- PR #947 MG pagerank (CUDA)
- PR #826 Bipartite Graph python API
- PR #963 Renumbering refactor, add multi GPU support
- PR #964 MG BFS (CUDA)
- PR #990 MG Consolidation
- PR #993 Add persistent Handle for Comms
- PR #979 Add hypergraph implementation to convert DataFrames into Graphs
- PR #1010 MG BFS (dask)
- PR #1018 MG personalized pagerank
- PR #1047 Updated select tests to use new dataset list that includes asymmetric directed graph
- PR #1090 Add experimental Leiden function
- PR #1077 Updated/added copyright notices, added copyright CI check from cuml
- PR #1100 Add support for new build process (Project Flash)
- PR #1093 New benchmarking notebook
## Improvements
- PR #898 Add Edge Betweenness Centrality, and endpoints to BC
- PR #913 Eliminate `rmm.device_array` usage
- PR #903 Add short commit hash to conda package
- PR #920 modify bfs test, update graph number_of_edges, update storage of transposedAdjList in Graph
- PR #933 Update mg_degree to use raft, add python tests
- PR #930 rename test_utils.h to utilities/test_utils.hpp and remove thrust dependency
- PR #934 Update conda dev environment.yml dependencies to 0.15
- PR #942 Removed references to deprecated RMM headers.
- PR #941 Regression python/cudf fix
- PR #945 Simplified benchmark --no-rmm-reinit option, updated default options
- PR #946 Install meta packages for dependencies
- PR #952 Updated get_test_data.sh to also (optionally) download and install datasets for benchmark runs
- PR #953 fix setting RAFT_DIR from the RAFT_PATH env var
- PR #954 Update cuGraph error handling to use RAFT
- PR #968 Add build script for CI benchmark integration
- PR #959 Add support for uint32_t and int64_t types for BFS (cpp side)
- PR #962 Update dask pagerank
- PR #975 Upgrade GitHub template
- PR #976 Fix error in Graph.edges(), update cuDF rename() calls
- PR #977 Update force_atlas2 to call on_train_end after iterating
- PR #980 Replace nvgraph Spectral Clustering (SC) functionality with RAFT SC
- PR #987 Move graph out of experimental namespace
- PR #984 Removing codecov until we figure out how to interpret failures that block CI
- PR #985 Add raft handle to BFS, BC and edge BC
- PR #991 Update conda upload versions for new supported CUDA/Python
- PR #988 Add clang and clang tools to the conda env
- PR #997 Update setup.cfg to run pytests under cugraph tests directory only
- PR #1007 Add tolerance support to MG Pagerank and fix
- PR #1009 Update benchmarks script to include requirements used
- PR #1014 Fix benchmarks script variable name
- PR #1021 Update cuGraph to use RAFT CUDA utilities
- PR #1019 Remove deprecated CUDA library calls
- PR #1024 Updated conda environment YML files
- PR #1026 update chunksize for mnmg, remove files and unused code
- PR #1028 Update benchmarks script to use ASV_LABEL
- PR #1030 MG directory org and documentation
- PR #1020 Updated Louvain to honor max_level, ECG now calls Louvain for 1 level, then full run.
- PR #1031 MG notebook
- PR #1034 Expose resolution (gamma) parameter in Louvain
- PR #1037 Centralize test main function and replace usage of deprecated `cnmem_memory_resource`
- PR #1041 Use S3 bucket directly for benchmark plugin
- PR #1056 Fix MG BFS performance
- PR #1062 Compute max_vertex_id in mnmg local data computation
- PR #1068 Remove unused thirdparty code
- PR #1105 Update `master` references to `main`
## Bug Fixes
- PR #936 Update Force Atlas 2 doc and wrapper
- PR #938 Quote conda installs to avoid bash interpretation
- PR #966 Fix build error (debug mode)
- PR #983 Fix offset calculation in COO to CSR
- PR #989 Fix issue with incorrect docker image being used in local build script
- PR #992 Fix unrenumber of predecessor
- PR #1008 Fix for cudf updates disabling iteration of Series/Columns/Index
- PR #1012 Fix Local build script README
- PR #1017 Fix more mg bugs
- PR #1022 Fix support for using a cudf.DataFrame with a MG graph
- PR #1025 Explicitly skip raft test folder for pytest 6.0.0
- PR #1027 Fix documentation
- PR #1033 Fix repartition error in big datasets, updated coroutine, fixed warnings
- PR #1036 Fixed benchmarks for new renumbering API, updated comments, added quick test-only benchmark run to CI
- PR #1040 Fix spectral clustering renumbering issue
- PR #1057 Updated raft dependency to pull fixes on cusparse selection in CUDA 11
- PR #1066 Update cugunrock to not build for unsupported CUDA architectures
- PR #1069 Fixed CUDA 11 Pagerank crash, by replacing CUB's SpMV with raft's.
- PR #1083 Fix NBs to run in nightly test run, update renumbering text, cleanup
- PR #1087 Updated benchmarks README to better describe how to get plugin, added rapids-pytest-benchmark plugin to conda dev environments
- PR #1101 Removed unnecessary device-to-host copy which caused a performance regression
- PR #1106 Added new release.ipynb to notebook test skip list
- PR #1125 Patch Thrust to workaround `CUDA_CUB_RET_IF_FAIL` macro clearing CUDA errors
# cuGraph 0.14.0 (03 Jun 2020)
## New Features
- PR #756 Add Force Atlas 2 layout
- PR #822 Added new functions in python graph class, similar to networkx
- PR #840 MG degree
- PR #875 UVM notebook
- PR #881 Raft integration infrastructure
## Improvements
- PR #917 Remove gunrock option from Betweenness Centrality
- PR #764 Updated sssp and bfs with GraphCSR, removed gdf_column, added nullptr weights test for sssp
- PR #765 Remove gdf_column from connected components
- PR #780 Remove gdf_column from cuhornet features
- PR #781 Fix compiler argument syntax for ccache
- PR #782 Use Cython's `new_build_ext` (if available)
- PR #788 Added options and config file to enable codecov
- PR #793 Fix legacy cudf imports/cimports
- PR #798 Edit return graph type in algorithms return graphs
- PR #799 Refactored graph class with RAII
- PR #802 Removed use of gdf_column from db code
- PR #803 Enable Ninja build
- PR #804 Cythonize in parallel
- PR #807 Updating the Python docs
- PR #817 Add native Betweenness Centrality with sources subset
- PR #818 Initial version of new "benchmarks" folder
- PR #820 MG infra and all-gather smoke test
- PR #823 Remove gdf column from nvgraph
- PR #829 Updated README and CONTRIBUTING docs
- PR #831 Updated Notebook - Added K-Truss, ECG, and Betweenness Centrality
- PR #832 Removed RMM ALLOC from db subtree
- PR #833 Update graph functions to use new Graph class
- PR #834 Updated local gpuci build
- PR #836 Remove SNMG code
- PR #845 Add .clang-format & format all files
- PR #859 Updated main docs
- PR #862 Katz Centrality : Auto calculation of alpha parameter if set to none
- PR #865 Added C++ docs
- PR #866 Use RAII graph class in KTruss
- PR #867 Updates to support the latest flake8 version
- PR #874 Update setup.py to use custom clean command
- PR #876 Add BFS C++ tests
- PR #878 Updated build script
- PR #887 Updates test to common datasets
- PR #879 Add docs build script to repository
- PR #880 Remove remaining gdf_column references
- PR #882 Add Force Atlas 2 to benchmarks
- PR #891 A few gdf_column stragglers
- PR #893 Add external_repositories dir and raft symlink to .gitignore
- PR #897 Remove RMM ALLOC calls
- PR #899 Update include paths to remove deleted cudf headers
- PR #906 Update Louvain notebook
- PR #948 Move doc customization scripts to Jenkins
## Bug Fixes
- PR #927 Update scikit learn dependency
- PR #916 Fix CI error on Force Atlas 2 test
- PR #763 Update RAPIDS conda dependencies to v0.14
- PR #795 Fix some documentation
- PR #800 Fix bfs error in optimization path
- PR #825 Fix outdated CONTRIBUTING.md
- PR #827 Fix indexing CI errors due to cudf updates
- PR #844 Fixing tests, converting __getitem__ calls to .iloc
- PR #851 Removed RMM from tests
- PR #852 Fix BFS Notebook
- PR #855 Missed a file in the original SNMG PR
- PR #860 Fix all Notebooks
- PR #870 Fix Louvain
- PR #889 Added missing conftest.py file to benchmarks dir
- PR #896 mg dask infrastructure fixes
- PR #907 Fix bfs directed missing vertices
- PR #911 Env and changelog update
- PR #923 Updated pagerank with @afender 's temp fix for double-free crash
- PR #928 Fix scikit learn test install to work with libgcc-ng 7.3
- PR #935 Merge
- PR #956 Use new gpuCI image in local build script
# cuGraph 0.13.0 (31 Mar 2020)
## New Features
- PR #736 cuHornet KTruss integration
- PR #735 Integration gunrock's betweenness centrality
- PR #760 cuHornet Weighted KTruss
## Improvements
- PR #688 Cleanup datasets after testing on gpuCI
- PR #694 Replace the expensive cudaGetDeviceProperties call in triangle counting with cheaper cudaDeviceGetAttribute calls
- PR #701 Add option to filter datasets and tests when run from CI
- PR #715 Added new YML file for CUDA 10.2
- PR #719 Updated docs to remove CUDA 9.2 and add CUDA 10.2
- PR #720 Updated error messages
- PR #722 Refactor graph to remove gdf_column
- PR #723 Added notebook testing to gpuCI gpu build
- PR #734 Updated view_edge_list for Graph, added unrenumbering test, fixed column access issues
- PR #738 Move tests directory up a level
- PR #739 Updated Notebooks
- PR #740 added utility to extract paths from SSSP/BFS results
- PR #742 Remove gdf column from jaccard
- PR #741 Added documentation for running and adding new benchmarks and shell script to automate
- PR #747 updated viewing of graph, data type casting and two hop neighbor unrenumbering for multi column
- PR #766 benchmark script improvements/refactorings: separate ETL steps, averaging, cleanup
## Bug Fixes
- PR #697 Updated versions in conda environments.
- PR #692 Add check after opening golden result files in C++ Katz Centrality tests.
- PR #702 Add libcypher include path to target_include_directories
- PR #716 Fixed bug due to disappearing get_column_data_ptr function in cudf
- PR #726 Fixed SSSP notebook issues in last cell
- PR #728 Temporary fix for dask attribute error issue
- PR #733 Fixed multi-column renumbering issues with indexes
- PR #746 Dask + Distributed 2.12.0+
- PR #753 ECG Error
- PR #758 Fix for graph comparison failure
- PR #761 Added flag to not treat deprecation warnings as errors, for now
- PR #771 Added unrenumbering in wcc and scc. Updated tests to compare vertices of largest component
- PR #774 Raise TypeError if a DiGraph is used with spectral*Clustering()
# cuGraph 0.12.0 (04 Feb 2020)
## New Features
- PR #628 Add (Di)Graph constructor from Multi(Di)Graph
- PR #630 Added ECG clustering
- PR #636 Added Multi-column renumbering support
## Improvements
- PR #640 remove gdf_column in sssp
- PR #629 get rid of gdf_column in pagerank
- PR #641 Add codeowners
- PR #646 Skipping all tests in test_bfs_bsp.py since SG BFS is not formally supported
- PR #652 Remove gdf_column in BFS
- PR #660 enable auto renumbering
- PR #664 Added support for Louvain early termination.
- PR #667 Drop `cython` from run requirements in conda recipe
- PR #666 Incorporate multicolumn renumbering in python graph class for Multi(Di)Graph
- PR #685 Avoid deep copy in index reset
## Bug Fixes
- PR #634 renumber vertex ids passed in analytics
- PR #649 Change variable names in wjaccard and woverlap to avoid exception
- PR #651 fix cudf error in katz wrapper and test nstart
- PR #663 Replaced use of cudf._lib.gdf_dtype_from_value based on cudf refactoring
- PR #670 Use cudf pandas version
- PR #672 fix snmg pagerank based on cudf Buffer changes
- PR #681 fix column length mismatch cudf issue
- PR #684 Deprecated cudf calls
- PR #686 Balanced cut fix
- PR #689 Check graph input type, disable Multi(Di)Graph, add cugraph.from_cudf_edgelist
# cuGraph 0.11.0 (11 Dec 2019)
## New Features
- PR #588 Python graph class and related changes
- PR #630 Adds ECG clustering functionality
## Improvements
- PR #569 Added exceptions
- PR #554 Upgraded namespace so that cugraph can be used for the API.
- PR #564 Update cudf type aliases
- PR #562 Remove pyarrow dependency so we inherit the one cudf uses
- PR #576 Remove adj list conversion automation from c++
- PR #587 API upgrade
- PR #585 Remove BUILD_ABI references from CI scripts
- PR #591 Adding initial GPU metrics to benchmark utils
- PR #599 Pregel BFS
- PR #601 add test for type conversion, edit createGraph_nvgraph
- PR #614 Remove unused CUDA conda labels
- PR #616 Remove c_ prefix
- PR #618 Updated Docs
- PR #619 Transition guide
## Bug Fixes
- PR #570 Temporarily disabling 2 DB tests
- PR #573 Fix pagerank test and symmetrize for cudf 0.11
- PR #574 dev env update
- PR #580 Changed hardcoded test output file to a generated tempfile file name
- PR #595 Updates to use the new RMM Python reinitialize() API
- PR #625 use destination instead of target when adding edgelist
# cuGraph 0.10.0 (16 Oct 2019)
## New Features
- PR #469 Symmetrize a COO
- PR #477 Add cuHornet as a submodule
- PR #483 Katz Centrality
- PR #524 Integrated libcypher-parser conda package into project.
- PR #493 Added C++ findMatches operator for OpenCypher query.
- PR #527 Add testing with asymmetric graph (where appropriate)
- PR #520 KCore and CoreNumber
- PR #496 Gunrock submodule + SM prelims.
- PR #575 Added updated benchmark files that use new func wrapper pattern and asvdb
## Improvements
- PR #466 Add file splitting test; Update to reduce dask overhead
- PR #468 Remove unnecessary print statement
- PR #464 Limit initial RMM pool allocator size to 128mb so pytest can run in parallel
- PR #474 Add csv file writing, lazy compute - snmg pagerank
- PR #481 Run bfs on unweighted graphs when calling sssp
- PR #491 Use YYMMDD tag in nightly build
- PR #487 Add woverlap test, add namespace in snmg COO2CSR
- PR #531 Use new rmm python package
## Bug Fixes
- PR #458 Fix potential race condition in SSSP
- PR #471 Remove nvidia driver installation from ci/cpu/build.sh
- PR #473 Re-sync cugraph with cudf (cudf renamed the bindings directory to _lib).
- PR #480 Fixed DASK CI build script
- PR #478 Remove requirements and setup for pi
- PR #495 Fixed cuhornet and cmake for Turing cards
- PR #489 Handle negative vertex ids in renumber
- PR #519 Removed deprecated cusparse calls
- PR #522 Added the conda dev env file for 10.1
- PR #525 Update build scripts and YYMMDD tagging for nightly builds
- PR #548 Added missing cores documentation
- PR #556 Fixed recursive remote options for submodules
- PR #559 Added RMM init check so RMM free APIs are not called if not initialized
# cuGraph 0.9.0 (21 Aug 2019)
## New Features
- PR #361 Prototypes for cusort functions
- PR #357 Pagerank cpp API
- PR #366 Adds graph.degrees() function returning both in and out degree.
- PR #380 First implementation of cusort - SNMG key/value sorting
- PR #416 OpenCypher: Added C++ implementation of db_object class and assorted other classes
- PR #411 Integrate dask-cugraph in cugraph
- PR #418 Update cusort to handle SNMG key-only sorting
- PR #423 Add Strongly Connected Components (GEMM); Weakly CC updates;
- PR #437 Streamline CUDA_REL environment variable
- PR #449 Fix local build generated file ownerships
- PR #454 Initial version of updated script to run benchmarks
## Improvements
- PR #353 Change snmg python wrapper in accordance with the cpp API
- PR #362 Restructured python/cython directories and files.
- PR #365 Updates for setting device and vertex ids for snmg pagerank
- PR #383 Exposed MG pagerank solver parameters
- PR #399 Example Prototype of Strongly Connected Components using primitives
- PR #419 Version test
- PR #420 drop duplicates, remove print, compute/wait read_csv in pagerank.py
- PR #439 More efficient computation of number of vertices from edge list
- PR #445 Update view_edge_list, view_adj_list, and view_transposed_adj_list to return edge weights.
- PR #450 Add a multi-GPU section in cuGraph documentation.
## Bug Fixes
- PR #368 Bump cudf dependency versions for cugraph conda packages
- PR #354 Fixed bug in building a debug version
- PR #360 Fixed bug in snmg coo2csr causing intermittent test failures.
- PR #364 Fixed bug building or installing cugraph when conda isn't installed
- PR #375 Added a function to initialize gdf columns in cugraph
- PR #378 cugraph was unable to import device_of_gpu_pointer
- PR #384 Fixed bug in snmg coo2csr causing error in dask-cugraph tests.
- PR #382 Disabled vertex id check to allow Azure deployment
- PR #410 Fixed overflow error in SNMG COO2CSR
- PR #395 run omp_get_num_threads in a parallel context
- PR #412 Fixed formatting issues in cuGraph documentation.
- PR #413 Updated python build instructions.
- PR #414 Add weights to wjaccrd.py
- PR #436 Fix Skip Test Functionality
- PR #438 Fix versions of packages in build script and conda yml
- PR #441 Import cudf_cpp.pxd instead of duplicating cudf definitions.
- PR #441 Removed redundant definitions of python dictionaries and functions.
- PR #442 Updated versions in conda environments.
- PR #442 Added except + to cython bindings to C(++) functions.
- PR #443 Fix accuracy loss issue for snmg pagerank
- PR #444 Fix warnings in strongly connected components
- PR #446 Fix permission for source (-x) and script (+x) files.
- PR #448 Import filter_unreachable
- PR #453 Re-sync cugraph with cudf (dependencies, type conversion & scatter functions).
- PR #463 Remove numba dependency and use the one from cudf
# cuGraph 0.8.0 (27 June 2019)
## New Features
- PR #287 SNMG power iteration step1
- PR #297 SNMG degree calculation
- PR #300 Personalized Page Rank
- PR #302 SNMG CSR Pagerank (cuda/C++)
- PR #315 Weakly Connected Components adapted from cuML (cuda/C++)
- PR #323 Add test skipping function to build.sh
- PR #308 SNMG python wrapper for pagerank
- PR #321 Added graph initialization functions for NetworkX compatibility.
- PR #332 Added C++ support for strings in renumbering function
- PR #325 Implement SSSP with predecessors (cuda/C++)
- PR #331 Python bindings and test for Weakly Connected Components.
- PR #339 SNMG COO2CSR (cuda/C++)
- PR #341 SSSP with predecessors (python) and function for filtering unreachable nodes in the traversal
- PR #348 Updated README for release
## Improvements
- PR #291 nvGraph is updated to use RMM instead of directly invoking cnmem functions.
- PR #286 Reorganized cugraph source directory
- PR #306 Integrated nvgraph to libcugraph.so (libnvgraph_rapids.so will not be built anymore).
- PR #306 Updated python test files to run pytest with all four RMM configurations.
- PR #321 Added check routines for input graph data vertex IDs and offsets (cugraph currently supports only 32-bit integers).
- PR #333 Various general improvements at the library level
## Bug Fixes
- PR #283 Automerge fix
- PR #291 Fixed a RMM memory allocation failure due to duplicate copies of cnmem.o
- PR #291 Fixed a cub CsrMV call error when RMM pool allocator is used.
- PR #306 Fixed cmake warnings due to library conflicts.
- PR #311 Fixed bug in SNMG degree causing failure for three gpus
- PR #309 Update conda build recipes
- PR #314 Added datasets to gitignore
- PR #322 Updates to accommodate new cudf include file locations
- PR #324 Fixed crash in WeakCC for larger graph and added adj matrix symmetry check
- PR #327 Implemented a temporary fix for the build failure due to gunrock updates.
- PR #345 Updated CMakeLists.txt to apply RUNPATH to transitive dependencies.
- PR #350 Configure Sphinx to render params correctly
- PR #359 Updates to remove libboost_system as a runtime dependency on libcugraph.so
# cuGraph 0.7.0 (10 May 2019)
## New Features
- PR #195 Added Graph.get_two_hop_neighbors() method
- PR #195 Updated Jaccard and Weighted Jaccard to accept lists of vertex pairs to compute for
- PR #202 Added methods to compute the overlap coefficient and weighted overlap coefficient
- PR #230 SNMG SPMV and helpers functions
- PR #210 Expose degree calculation kernel via python API
- PR #220 Added bindings for Nvgraph triangle counting
- PR #234 Added bindings for renumbering, modify renumbering to use RMM
- PR #246 Added bindings for subgraph extraction
- PR #250 Add local build script to mimic gpuCI
- PR #261 Add docs build script to cuGraph
- PR #301 Added build.sh script, updated CI scripts and documentation
## Improvements
- PR #157 Removed cudatoolkit dependency in setup.py
- PR #185 Update docs version
- PR #194 Open source nvgraph in cugraph repository
- PR #190 Added a copy option in graph creation
- PR #196 Fix typos in readme intro
- PR #207 mtx2csv script
- PR #203 Added small datasets directly in the repo
- PR #215 Simplified get_rapids_dataset_root_dir(), set a default value for the root dir
- PR #233 Added csv datasets and edited test to use cudf for reading graphs
- PR #247 Added some documentation for renumbering
- PR #252 cpp test upgrades for more convenient testing on large input
- PR #264 Add cudatoolkit conda dependency
- PR #267 Use latest release version in update-version CI script
- PR #270 Updated the README.md and CONTRIBUTING.md files
- PR #281 Updated README with algorithm list
## Bug Fixes
- PR #256 Add pip to the install, clean up conda instructions
- PR #253 Add rmm to conda configuration
- PR #226 Bump cudf dependencies to 0.7
- PR #169 Disable terminal output in sssp
- PR #191 Fix double upload bug
- PR #181 Fixed crash/rmm free error when edge values provided
- PR #193 Fixed segfault when edge values not provided
- PR #190 Fixed a memory reference counting error between cudf & cugraph
- PR #190 Fixed a language level warning (cython)
- PR #214 Removed throw exception from dtor in TC
- PR #211 Remove hardcoded dataset paths, replace with build var that can be overridden with an env var
- PR #206 Updated versions in conda envs
- PR #218 Update c_graph.pyx
- PR #224 Update erroneous comments in overlap_wrapper.pyx, woverlap_wrapper.pyx, test_louvain.py, and spectral_clustering.pyx
- PR #220 Fixed bugs in Nvgraph triangle counting
- PR #232 Fixed memory leaks in managing cudf columns.
- PR #236 Fixed issue with v0.7 nightly yml environment file. Also updated the README to remove pip
- PR #239 Added a check to prevent a cugraph object to store two different graphs.
- PR #244 Fixed issue with nvgraph's subgraph extraction if the first vertex in the vertex list is not incident on an edge in the extracted graph
- PR #249 Fix outdated cuDF version in gpu/build.sh
- PR #262 Removed networkx conda dependency for both build and runtime
- PR #271 Removed nvgraph conda dependency
- PR #276 Removed libgdf_cffi import from bindings
- PR #288 Add boost as a conda dependency
# cuGraph 0.6.0 (22 Mar 2019)
## New Features
- PR #73 Weighted Jaccard bindings
- PR #41 RMAT graph bindings
- PR #43 Louvain bindings
- PR #44 SSSP bindings
- PR #47 BFS bindings
- PR #53 New Repo structure
- PR #67 RMM Integration with rmm as a submodule
- PR #82 Spectral Clustering bindings
- PR #82 Clustering metrics binding
- PR #85 Helper functions on python Graph object
- PR #106 Add gpu/build.sh file for gpuCI
## Improvements
- PR #50 Reorganize directory structure to match cuDF
- PR #85 Deleted setup.py and setup.cfg which had been replaced
- PR #95 Code clean up
- PR #96 Relocated mmio.c and mmio.h (external files) to thirdparty/mmio
- PR #97 Updated python tests to speed them up
- PR #100 Added testing for returned vertex and edge identifiers
- PR #105 Updated python code to follow PEP8 (fixed flake8 complaints)
- PR #121 Cleaned up README file
- PR #130 Update conda build recipes
- PR #144 Documentation for top level functions
## Bug Fixes
- PR #48 ABI Fixes
- PR #72 Bug fix for segfault issue getting transpose from adjacency list
- PR #105 Bug fix for memory leaks and python test failures
- PR #110 Bug fix for segfault calling Louvain with only edge list
- PR #115 Fixes for changes in cudf 0.6, pick up RMM from cudf instead of thirdpary
- PR #116 Added netscience.mtx dataset to datasets.tar.gz
- PR #120 Bug fix for segfault calling spectral clustering with only edge list
- PR #123 Fixed weighted Jaccard to assume the input weights are given as a cudf.Series
- PR #152 Fix conda package version string
- PR #160 Added additional link directory to support building on CentOS-7
- PR #221 Moved two_hop_neighbors.cuh to src folder to prevent it from being installed
- PR #223 Fixed compiler warning in cpp/src/cugraph.cu
- PR #284 Commented out unit test code that fails due to a cudf bug
# cuGraph 0.5.0 (28 Jan 2019)
File: rapidsai_public_repos/cugraph/build.sh

#!/bin/bash
# Copyright (c) 2019-2023, NVIDIA CORPORATION.
# cugraph build script
# This script is used to build the component(s) in this repo from
# source, and can be called with various options to customize the
# build as needed (see the help output for details)
# Abort script on first error
set -e
NUMARGS=$#
ARGS=$*
# NOTE: ensure all dir changes are relative to the location of this
# script, and that this script resides in the repo dir!
REPODIR=$(cd $(dirname $0); pwd)
RAPIDS_VERSION=23.12
# Valid args to this script (all possible targets and options) - only one per line
VALIDARGS="
clean
uninstall
libcugraph
libcugraph_etl
pylibcugraph
cugraph
cugraph-service
cugraph-pyg
cugraph-dgl
nx-cugraph
cpp-mgtests
cpp-mtmgtests
docs
all
-v
-g
-n
--pydevelop
--allgpuarch
--skip_cpp_tests
--without_cugraphops
--cmake_default_generator
--clean
-h
--help
"
HELP="$0 [<target> ...] [<flag> ...]
where <target> is:
clean - remove all existing build artifacts and configuration (start over)
uninstall - uninstall libcugraph and cugraph from a prior build/install (see also -n)
libcugraph - build libcugraph.so and SG test binaries
libcugraph_etl - build libcugraph_etl.so and SG test binaries
pylibcugraph - build the pylibcugraph Python package
cugraph - build the cugraph Python package
cugraph-service - build the cugraph-service_client and cugraph-service_server Python package
cugraph-pyg - build the cugraph-pyg Python package
cugraph-dgl - build the cugraph-dgl extensions for DGL
nx-cugraph - build the nx-cugraph Python package
cpp-mgtests - build libcugraph and libcugraph_etl MG tests. Builds MPI communicator, adding MPI as a dependency.
cpp-mtmgtests - build libcugraph MTMG tests. Adds UCX as a dependency (temporary).
docs - build the docs
all - build everything
and <flag> is:
-v - verbose build mode
-g - build for debug
-n - do not install after a successful build (does not affect Python packages)
--pydevelop - use setup.py develop instead of install
--allgpuarch - build for all supported GPU architectures
--skip_cpp_tests - do not build the SG test binaries as part of the libcugraph and libcugraph_etl targets
--without_cugraphops - do not build algos that require cugraph-ops
--cmake_default_generator - use the default cmake generator instead of ninja
--clean - clean an individual target (note: to do a complete rebuild, use the clean target described above)
-h - print this text
 default action (no args) is to build and install the 'libcugraph', 'libcugraph_etl', 'pylibcugraph', and 'cugraph' targets
libcugraph build dir is: ${LIBCUGRAPH_BUILD_DIR}
Set env var LIBCUGRAPH_BUILD_DIR to override libcugraph build dir.
"
LIBCUGRAPH_BUILD_DIR=${LIBCUGRAPH_BUILD_DIR:=${REPODIR}/cpp/build}
LIBCUGRAPH_ETL_BUILD_DIR=${LIBCUGRAPH_ETL_BUILD_DIR:=${REPODIR}/cpp/libcugraph_etl/build}
PYLIBCUGRAPH_BUILD_DIR=${REPODIR}/python/pylibcugraph/_skbuild
CUGRAPH_BUILD_DIR=${REPODIR}/python/cugraph/_skbuild
CUGRAPH_SERVICE_BUILD_DIRS="${REPODIR}/python/cugraph-service/server/build
${REPODIR}/python/cugraph-service/client/build
"
CUGRAPH_DGL_BUILD_DIR=${REPODIR}/python/cugraph-dgl/build
# All python build dirs using _skbuild are handled by cleanPythonDir, but
# adding them here for completeness
BUILD_DIRS="${LIBCUGRAPH_BUILD_DIR}
${LIBCUGRAPH_ETL_BUILD_DIR}
${PYLIBCUGRAPH_BUILD_DIR}
${CUGRAPH_BUILD_DIR}
${CUGRAPH_SERVICE_BUILD_DIRS}
${CUGRAPH_DGL_BUILD_DIR}
"
# Set defaults for vars modified by flags to this script
VERBOSE_FLAG=""
CMAKE_VERBOSE_OPTION=""
BUILD_TYPE=Release
INSTALL_TARGET="--target install"
BUILD_CPP_TESTS=ON
BUILD_CPP_MG_TESTS=OFF
BUILD_CPP_MTMG_TESTS=OFF
BUILD_ALL_GPU_ARCH=0
BUILD_WITH_CUGRAPHOPS=ON
CMAKE_GENERATOR_OPTION="-G Ninja"
PYTHON_ARGS_FOR_INSTALL="-m pip install --no-build-isolation --no-deps"
# Set defaults for vars that may not have been defined externally
# FIXME: if PREFIX is not set, check CONDA_PREFIX, but there is no fallback
# from there!
INSTALL_PREFIX=${PREFIX:=${CONDA_PREFIX}}
PARALLEL_LEVEL=${PARALLEL_LEVEL:=`nproc`}
BUILD_ABI=${BUILD_ABI:=ON}
# Return success (0) if the given string was passed as an argument to this script.
function hasArg {
(( ${NUMARGS} != 0 )) && (echo " ${ARGS} " | grep -q " $1 ")
}
# Return success (0) if no build targets were given (no args at all, or only
# flags), in which case the default targets are built.
function buildDefault {
(( ${NUMARGS} == 0 )) || !(echo " ${ARGS} " | grep -q " [^-][a-zA-Z0-9\_\-]\+ ")
}
# Remove Python build artifacts (egg-info, __pycache__, _skbuild, generated
# .cpp files, compiled extension modules, etc.) from the given directory.
function cleanPythonDir {
pushd $1 > /dev/null
rm -rf dist dask-worker-space cugraph/raft *.egg-info
find . -type d -name __pycache__ -print | xargs rm -rf
find . -type d -name _skbuild -print | xargs rm -rf
find . -type d -name dist -print | xargs rm -rf
find . -type f -name "*.cpp" -delete
find . -type f -name "*.cpython*.so" -delete
find . -type d -name _external_repositories -print | xargs rm -rf
popd > /dev/null
}
if hasArg -h || hasArg --help; then
echo "${HELP}"
exit 0
fi
# Check for valid usage
if (( ${NUMARGS} != 0 )); then
for a in ${ARGS}; do
if ! (echo "${VALIDARGS}" | grep -q "^[[:blank:]]*${a}$"); then
echo "Invalid option: ${a}"
exit 1
fi
done
fi
# Process flags
if hasArg -v; then
VERBOSE_FLAG="-v"
CMAKE_VERBOSE_OPTION="--log-level=VERBOSE"
fi
if hasArg -g; then
BUILD_TYPE=Debug
fi
if hasArg -n; then
INSTALL_TARGET=""
fi
if hasArg --allgpuarch; then
BUILD_ALL_GPU_ARCH=1
fi
if hasArg --skip_cpp_tests; then
BUILD_CPP_TESTS=OFF
fi
if hasArg --without_cugraphops; then
BUILD_WITH_CUGRAPHOPS=OFF
fi
if hasArg cpp-mtmgtests; then
BUILD_CPP_MTMG_TESTS=ON
fi
if hasArg cpp-mgtests || hasArg all; then
BUILD_CPP_MG_TESTS=ON
fi
if hasArg --cmake_default_generator; then
CMAKE_GENERATOR_OPTION=""
fi
if hasArg --pydevelop; then
PYTHON_ARGS_FOR_INSTALL="-m pip install --no-build-isolation --no-deps -e"
fi
# Append `-DFIND_RAFT_CPP=ON` to EXTRA_CMAKE_ARGS unless a user specified the option.
SKBUILD_EXTRA_CMAKE_ARGS="${EXTRA_CMAKE_ARGS}"
if [[ "${EXTRA_CMAKE_ARGS}" != *"DFIND_CUGRAPH_CPP"* ]]; then
SKBUILD_EXTRA_CMAKE_ARGS="${SKBUILD_EXTRA_CMAKE_ARGS} -DFIND_CUGRAPH_CPP=ON"
fi
# If clean or uninstall targets given, run them prior to any other steps
if hasArg uninstall; then
if [[ "$INSTALL_PREFIX" != "" ]]; then
rm -rf ${INSTALL_PREFIX}/include/cugraph
rm -f ${INSTALL_PREFIX}/lib/libcugraph.so
rm -rf ${INSTALL_PREFIX}/include/cugraph_c
rm -f ${INSTALL_PREFIX}/lib/libcugraph_c.so
rm -rf ${INSTALL_PREFIX}/include/cugraph_etl
rm -f ${INSTALL_PREFIX}/lib/libcugraph_etl.so
rm -rf ${INSTALL_PREFIX}/lib/cmake/cugraph
rm -rf ${INSTALL_PREFIX}/lib/cmake/cugraph_etl
fi
# This may be redundant given the above, but can also be used in case
# there are other installed files outside of the locations above.
if [ -e ${LIBCUGRAPH_BUILD_DIR}/install_manifest.txt ]; then
xargs rm -f < ${LIBCUGRAPH_BUILD_DIR}/install_manifest.txt > /dev/null 2>&1
fi
# uninstall cugraph and pylibcugraph installed from a prior "setup.py
# install"
# FIXME: if multiple versions of these packages are installed, this only
# removes the latest one and leaves the others installed. build.sh uninstall
# can be run multiple times to remove all of them, but that is not obvious.
pip uninstall -y pylibcugraph cugraph cugraph-service-client cugraph-service-server \
cugraph-dgl cugraph-pyg nx-cugraph
fi
if hasArg clean; then
# Ignore errors for clean since missing files, etc. are not failures
set +e
# remove artifacts generated inplace
# FIXME: ideally the "setup.py clean" command would be used for this, but
# currently running any setup.py command has side effects (eg. cloning
# repos).
# (cd ${REPODIR}/python && python setup.py clean)
if [[ -d ${REPODIR}/python ]]; then
cleanPythonDir ${REPODIR}/python
fi
# If the dirs to clean are mounted dirs in a container, the contents should
# be removed but the mounted dirs will remain. The find removes all
# contents but leaves the dirs, the rmdir attempts to remove the dirs but
# can fail safely.
for bd in ${BUILD_DIRS}; do
if [ -d ${bd} ]; then
find ${bd} -mindepth 1 -delete
rmdir ${bd} || true
fi
done
# Go back to failing on first error for all other operations
set -e
fi
################################################################################
# Configure, build, and install libcugraph
if buildDefault || hasArg libcugraph || hasArg all; then
if hasArg --clean; then
if [ -d ${LIBCUGRAPH_BUILD_DIR} ]; then
find ${LIBCUGRAPH_BUILD_DIR} -mindepth 1 -delete
rmdir ${LIBCUGRAPH_BUILD_DIR} || true
fi
else
if (( ${BUILD_ALL_GPU_ARCH} == 0 )); then
CUGRAPH_CMAKE_CUDA_ARCHITECTURES="NATIVE"
echo "Building for the architecture of the GPU in the system..."
else
CUGRAPH_CMAKE_CUDA_ARCHITECTURES="RAPIDS"
echo "Building for *ALL* supported GPU architectures..."
fi
mkdir -p ${LIBCUGRAPH_BUILD_DIR}
cd ${LIBCUGRAPH_BUILD_DIR}
cmake -B "${LIBCUGRAPH_BUILD_DIR}" -S "${REPODIR}/cpp" \
-DCMAKE_INSTALL_PREFIX=${INSTALL_PREFIX} \
-DCMAKE_CUDA_ARCHITECTURES=${CUGRAPH_CMAKE_CUDA_ARCHITECTURES} \
-DCMAKE_BUILD_TYPE=${BUILD_TYPE} \
-DBUILD_TESTS=${BUILD_CPP_TESTS} \
-DBUILD_CUGRAPH_MG_TESTS=${BUILD_CPP_MG_TESTS} \
-DBUILD_CUGRAPH_MTMG_TESTS=${BUILD_CPP_MTMG_TESTS} \
-DUSE_CUGRAPH_OPS=${BUILD_WITH_CUGRAPHOPS} \
${CMAKE_GENERATOR_OPTION} \
${CMAKE_VERBOSE_OPTION}
cmake --build "${LIBCUGRAPH_BUILD_DIR}" -j${PARALLEL_LEVEL} ${INSTALL_TARGET} ${VERBOSE_FLAG}
fi
fi
# Configure, build, and install libcugraph_etl
if buildDefault || hasArg libcugraph_etl || hasArg all; then
if hasArg --clean; then
if [ -d ${LIBCUGRAPH_ETL_BUILD_DIR} ]; then
find ${LIBCUGRAPH_ETL_BUILD_DIR} -mindepth 1 -delete
rmdir ${LIBCUGRAPH_ETL_BUILD_DIR} || true
fi
else
if (( ${BUILD_ALL_GPU_ARCH} == 0 )); then
CUGRAPH_CMAKE_CUDA_ARCHITECTURES="NATIVE"
echo "Building for the architecture of the GPU in the system..."
else
CUGRAPH_CMAKE_CUDA_ARCHITECTURES="RAPIDS"
echo "Building for *ALL* supported GPU architectures..."
fi
mkdir -p ${LIBCUGRAPH_ETL_BUILD_DIR}
cd ${LIBCUGRAPH_ETL_BUILD_DIR}
cmake -DCMAKE_INSTALL_PREFIX=${INSTALL_PREFIX} \
-DCMAKE_CUDA_ARCHITECTURES=${CUGRAPH_CMAKE_CUDA_ARCHITECTURES} \
-DDISABLE_DEPRECATION_WARNING=${BUILD_DISABLE_DEPRECATION_WARNING} \
-DCMAKE_BUILD_TYPE=${BUILD_TYPE} \
-DBUILD_TESTS=${BUILD_CPP_TESTS} \
-DBUILD_CUGRAPH_MG_TESTS=${BUILD_CPP_MG_TESTS} \
-DBUILD_CUGRAPH_MTMG_TESTS=${BUILD_CPP_MTMG_TESTS} \
-DCMAKE_PREFIX_PATH=${LIBCUGRAPH_BUILD_DIR} \
${CMAKE_GENERATOR_OPTION} \
${CMAKE_VERBOSE_OPTION} \
${REPODIR}/cpp/libcugraph_etl
cmake --build "${LIBCUGRAPH_ETL_BUILD_DIR}" -j${PARALLEL_LEVEL} ${INSTALL_TARGET} ${VERBOSE_FLAG}
fi
fi
# Build, and install pylibcugraph
if buildDefault || hasArg pylibcugraph || hasArg all; then
if hasArg --clean; then
cleanPythonDir ${REPODIR}/python/pylibcugraph
else
# FIXME: skbuild with setuptools>=64 has a bug when called from a "pip
# install -e" command, resulting in a broken editable wheel. Continue
# to use "setup.py bdist_ext --inplace" for a develop build until
# https://github.com/scikit-build/scikit-build/issues/981 is closed.
if hasArg --pydevelop; then
cd ${REPODIR}/python/pylibcugraph
python setup.py build_ext \
--inplace \
-- \
-DFIND_CUGRAPH_CPP=ON \
-DUSE_CUGRAPH_OPS=${BUILD_WITH_CUGRAPHOPS} \
-Dcugraph_ROOT=${LIBCUGRAPH_BUILD_DIR} \
-- \
-j${PARALLEL_LEVEL:-1}
cd -
fi
SKBUILD_CONFIGURE_OPTIONS="${SKBUILD_EXTRA_CMAKE_ARGS} -DUSE_CUGRAPH_OPS=${BUILD_WITH_CUGRAPHOPS}" \
SKBUILD_BUILD_OPTIONS="-j${PARALLEL_LEVEL}" \
python ${PYTHON_ARGS_FOR_INSTALL} ${REPODIR}/python/pylibcugraph
fi
fi
# Build and install the cugraph Python package
if buildDefault || hasArg cugraph || hasArg all; then
if hasArg --clean; then
cleanPythonDir ${REPODIR}/python/cugraph
else
# FIXME: skbuild with setuptools>=64 has a bug when called from a "pip
# install -e" command, resulting in a broken editable wheel. Continue
# to use "setup.py bdist_ext --inplace" for a develop build until
# https://github.com/scikit-build/scikit-build/issues/981 is closed.
if hasArg --pydevelop; then
cd ${REPODIR}/python/cugraph
python setup.py build_ext \
--inplace \
-- \
-DFIND_CUGRAPH_CPP=ON \
-DUSE_CUGRAPH_OPS=${BUILD_WITH_CUGRAPHOPS} \
-Dcugraph_ROOT=${LIBCUGRAPH_BUILD_DIR} \
-- \
-j${PARALLEL_LEVEL:-1}
cd -
fi
SKBUILD_CONFIGURE_OPTIONS="${SKBUILD_EXTRA_CMAKE_ARGS} -DUSE_CUGRAPH_OPS=${BUILD_WITH_CUGRAPHOPS}" \
SKBUILD_BUILD_OPTIONS="-j${PARALLEL_LEVEL}" \
python ${PYTHON_ARGS_FOR_INSTALL} ${REPODIR}/python/cugraph
fi
fi
# Install the cugraph-service-client and cugraph-service-server Python packages
if hasArg cugraph-service || hasArg all; then
if hasArg --clean; then
cleanPythonDir ${REPODIR}/python/cugraph-service
else
python ${PYTHON_ARGS_FOR_INSTALL} ${REPODIR}/python/cugraph-service/client
python ${PYTHON_ARGS_FOR_INSTALL} ${REPODIR}/python/cugraph-service/server
fi
fi
# Build and install the cugraph-pyg Python package
if hasArg cugraph-pyg || hasArg all; then
if hasArg --clean; then
cleanPythonDir ${REPODIR}/python/cugraph-pyg
else
python ${PYTHON_ARGS_FOR_INSTALL} ${REPODIR}/python/cugraph-pyg
fi
fi
# Install the cugraph-dgl extensions for DGL
if hasArg cugraph-dgl || hasArg all; then
if hasArg --clean; then
cleanPythonDir ${REPODIR}/python/cugraph-dgl
else
python ${PYTHON_ARGS_FOR_INSTALL} ${REPODIR}/python/cugraph-dgl
fi
fi
# Build and install the nx-cugraph Python package
if hasArg nx-cugraph || hasArg all; then
if hasArg --clean; then
cleanPythonDir ${REPODIR}/python/nx-cugraph
else
python ${PYTHON_ARGS_FOR_INSTALL} ${REPODIR}/python/nx-cugraph
fi
fi
# Build the docs
if hasArg docs || hasArg all; then
if [ ! -d ${LIBCUGRAPH_BUILD_DIR} ]; then
mkdir -p ${LIBCUGRAPH_BUILD_DIR}
cd ${LIBCUGRAPH_BUILD_DIR}
cmake -B "${LIBCUGRAPH_BUILD_DIR}" -S "${REPODIR}/cpp" \
-DCMAKE_INSTALL_PREFIX=${INSTALL_PREFIX} \
-DCMAKE_BUILD_TYPE=${BUILD_TYPE} \
${CMAKE_GENERATOR_OPTION} \
${CMAKE_VERBOSE_OPTION}
fi
for PROJECT in libcugraphops libwholegraph; do
XML_DIR="${REPODIR}/docs/cugraph/${PROJECT}"
rm -rf "${XML_DIR}"
mkdir -p "${XML_DIR}"
export XML_DIR_${PROJECT^^}="$XML_DIR"
echo "downloading xml for ${PROJECT} into ${XML_DIR}. Environment variable XML_DIR_${PROJECT^^} is set to ${XML_DIR}"
curl -O "https://d1664dvumjb44w.cloudfront.net/${PROJECT}/xml_tar/${RAPIDS_VERSION}/xml.tar.gz"
tar -xzf xml.tar.gz -C "${XML_DIR}"
rm "./xml.tar.gz"
done
cd ${LIBCUGRAPH_BUILD_DIR}
cmake --build "${LIBCUGRAPH_BUILD_DIR}" -j${PARALLEL_LEVEL} --target docs_cugraph ${VERBOSE_FLAG}
echo "making libcugraph doc dir"
rm -rf ${REPODIR}/docs/cugraph/libcugraph
mkdir -p ${REPODIR}/docs/cugraph/libcugraph
export XML_DIR_LIBCUGRAPH="${REPODIR}/cpp/doxygen/xml"
cd ${REPODIR}/docs/cugraph
make html
fi
File: rapidsai_public_repos/cugraph/codecov.yml

# Configuration file for CodeCov
coverage:
status:
project: off
patch: off
File: rapidsai_public_repos/cugraph/.dockerignore

# Ignore cmake builds from the local machine that might have occurred before
# attempting a Docker build. Including these files will cause CMake cache
# conflict issues.
/cpp/build

File: rapidsai_public_repos/cugraph/dependencies.yaml

# Dependency list for https://github.com/rapidsai/dependency-file-generator
files:
all:
output: [conda]
matrix:
cuda: ["11.8", "12.0"]
arch: [x86_64]
includes:
- checks
- common_build
- cpp_build
- cudatoolkit
- docs
- python_build_wheel
- python_build_cythonize
- depends_on_rmm
- depends_on_cudf
- depends_on_dask_cudf
- depends_on_pylibraft
- depends_on_raft_dask
- depends_on_pylibcugraphops
- depends_on_cupy
- python_run_cugraph
- python_run_nx_cugraph
- python_run_cugraph_dgl
- python_run_cugraph_pyg
- test_notebook
- test_python_common
- test_python_cugraph
- test_python_pylibcugraph
- test_python_nx_cugraph
checks:
output: none
includes:
- checks
- py_version
docs:
output: none
includes:
- cudatoolkit
- docs
- py_version
- depends_on_pylibcugraphops
test_cpp:
output: none
includes:
- cudatoolkit
- test_cpp
test_notebooks:
output: none
includes:
- cudatoolkit
- py_version
- test_notebook
- test_python_common
- test_python_cugraph
test_python:
output: none
includes:
- cudatoolkit
- depends_on_cudf
- py_version
- test_python_common
- test_python_cugraph
- test_python_pylibcugraph
py_build_cugraph:
output: pyproject
pyproject_dir: python/cugraph
extras:
table: build-system
includes:
- common_build
- python_build_wheel
- depends_on_rmm
- depends_on_pylibraft
- depends_on_pylibcugraph
- python_build_cythonize
py_run_cugraph:
output: pyproject
pyproject_dir: python/cugraph
extras:
table: project
includes:
- depends_on_rmm
- depends_on_cudf
- depends_on_dask_cudf
- depends_on_raft_dask
- depends_on_pylibcugraph
- depends_on_cupy
- python_run_cugraph
py_test_cugraph:
output: pyproject
pyproject_dir: python/cugraph
extras:
table: project.optional-dependencies
key: test
includes:
- test_python_common
- test_python_cugraph
py_build_pylibcugraph:
output: pyproject
pyproject_dir: python/pylibcugraph
extras:
table: build-system
includes:
- common_build
- python_build_wheel
- depends_on_rmm
- depends_on_pylibraft
- python_build_cythonize
py_run_pylibcugraph:
output: pyproject
pyproject_dir: python/pylibcugraph
extras:
table: project
includes:
- depends_on_rmm
- depends_on_pylibraft
py_test_pylibcugraph:
output: pyproject
pyproject_dir: python/pylibcugraph
extras:
table: project.optional-dependencies
key: test
includes:
- depends_on_cudf
- test_python_common
- test_python_pylibcugraph
py_build_nx_cugraph:
output: pyproject
pyproject_dir: python/nx-cugraph
extras:
table: build-system
includes:
- python_build_wheel
py_run_nx_cugraph:
output: pyproject
pyproject_dir: python/nx-cugraph
extras:
table: project
includes:
- depends_on_pylibcugraph
- depends_on_cupy
- python_run_nx_cugraph
py_test_nx_cugraph:
output: pyproject
pyproject_dir: python/nx-cugraph
extras:
table: project.optional-dependencies
key: test
includes:
- test_python_common
- test_python_nx_cugraph
py_build_cugraph_dgl:
output: pyproject
pyproject_dir: python/cugraph-dgl
extras:
table: build-system
includes:
- python_build_wheel
py_run_cugraph_dgl:
output: pyproject
pyproject_dir: python/cugraph-dgl
extras:
table: project
includes:
- python_run_cugraph_dgl
py_build_cugraph_pyg:
output: pyproject
pyproject_dir: python/cugraph-pyg
extras:
table: build-system
includes:
- python_build_wheel
py_run_cugraph_pyg:
output: pyproject
pyproject_dir: python/cugraph-pyg
extras:
table: project
includes:
- python_run_cugraph_pyg
py_build_cugraph_service_client:
output: pyproject
pyproject_dir: python/cugraph-service/client
extras:
table: build-system
includes:
- python_build_wheel
py_run_cugraph_service_client:
output: pyproject
pyproject_dir: python/cugraph-service/client
extras:
table: project
includes:
- python_run_cugraph_service_client
py_build_cugraph_service_server:
output: pyproject
pyproject_dir: python/cugraph-service/server
extras:
table: build-system
includes:
- python_build_wheel
py_run_cugraph_service_server:
output: pyproject
pyproject_dir: python/cugraph-service/server
extras:
table: project
includes:
- depends_on_rmm
- depends_on_cudf
- depends_on_dask_cudf
- depends_on_cupy
- python_run_cugraph_service_server
py_test_cugraph_service_server:
output: pyproject
pyproject_dir: python/cugraph-service/server
extras:
table: project.optional-dependencies
key: test
includes:
- test_python_common
- test_python_cugraph
cugraph_dgl_dev:
matrix:
cuda: ["11.8"]
output: conda
conda_dir: python/cugraph-dgl/conda
includes:
- checks
- depends_on_pylibcugraphops
- cugraph_dgl_dev
- test_python_common
cugraph_pyg_dev:
matrix:
cuda: ["11.8"]
output: conda
conda_dir: python/cugraph-pyg/conda
includes:
- checks
- depends_on_pylibcugraphops
- cugraph_pyg_dev
- test_python_common
channels:
- rapidsai
- rapidsai-nightly
- dask/label/dev
- pytorch
- pyg
- dglteam/label/cu118
- conda-forge
- nvidia
dependencies:
checks:
common:
- output_types: [conda, requirements]
packages:
- pre-commit
cudatoolkit:
specific:
- output_types: [conda]
matrices:
- matrix:
cuda: "12.0"
packages:
- cuda-version=12.0
- matrix:
cuda: "11.8"
packages:
- cuda-version=11.8
- cudatoolkit
- matrix:
cuda: "11.5"
packages:
- cuda-version=11.5
- cudatoolkit
- matrix:
cuda: "11.4"
packages:
- cuda-version=11.4
- cudatoolkit
- matrix:
cuda: "11.2"
packages:
- cuda-version=11.2
- cudatoolkit
common_build:
common:
- output_types: [conda, pyproject]
packages:
- &cmake_ver cmake>=3.26.4
- ninja
cpp_build:
common:
- output_types: [conda]
packages:
- c-compiler
- cxx-compiler
- gmock>=1.13.0
- gtest>=1.13.0
- libcugraphops==23.12.*
- libraft-headers==23.12.*
- libraft==23.12.*
- librmm==23.12.*
- openmpi # Required for building cpp-mgtests (multi-GPU tests)
specific:
- output_types: [conda]
matrices:
- matrix:
arch: x86_64
packages:
- gcc_linux-64=11.*
- matrix:
arch: aarch64
packages:
- gcc_linux-aarch64=11.*
- output_types: [conda]
matrices:
- matrix:
arch: x86_64
cuda: "11.8"
packages:
- nvcc_linux-64=11.8
- matrix:
arch: aarch64
cuda: "11.8"
packages:
- nvcc_linux-aarch64=11.8
- matrix:
cuda: "12.0"
packages:
- cuda-version=12.0
- cuda-nvcc
docs:
common:
- output_types: [conda]
packages:
- breathe
- doxygen
- graphviz
- ipython
- nbsphinx
- numpydoc
- pydata-sphinx-theme
- recommonmark
- sphinx-copybutton
- sphinx-markdown-tables
- sphinx<6
- sphinxcontrib-websupport
py_version:
specific:
- output_types: [conda]
matrices:
- matrix:
py: "3.9"
packages:
- python=3.9
- matrix:
py: "3.10"
packages:
- python=3.10
- matrix:
packages:
- python>=3.9,<3.11
python_build_wheel:
common:
- output_types: [conda, pyproject, requirements]
packages:
- setuptools>=61.0.0
- wheel
python_build_cythonize:
common:
- output_types: [conda, pyproject, requirements]
packages:
- cython>=3.0.0
- scikit-build>=0.13.1
python_run_cugraph:
common:
- output_types: [conda, pyproject]
packages:
- &dask rapids-dask-dependency==23.12.*
- &dask_cuda dask-cuda==23.12.*
- &numba numba>=0.57
- &numpy numpy>=1.21
- &ucx_py ucx-py==0.35.*
- output_types: conda
packages:
- aiohttp
- fsspec>=0.6.0
- libcudf==23.12.*
- requests
- nccl>=2.9.9
- ucx-proc=*=gpu
- output_types: pyproject
packages:
# cudf uses fsspec but is protocol independent. cugraph
# dataset APIs require [http] extras for use with cudf.
- fsspec[http]>=0.6.0
python_run_nx_cugraph:
common:
- output_types: [conda, pyproject]
packages:
- networkx>=3.0
- *numpy
python_run_cugraph_dgl:
common:
- output_types: [conda, pyproject]
packages:
- *numba
- *numpy
- output_types: [pyproject]
packages:
- &cugraph cugraph==23.12.*
python_run_cugraph_pyg:
common:
- output_types: [conda, pyproject]
packages:
- *numba
- *numpy
- output_types: [pyproject]
packages:
- *cugraph
python_run_cugraph_service_client:
common:
- output_types: [conda, pyproject]
packages:
- &thrift thriftpy2
python_run_cugraph_service_server:
common:
- output_types: [conda, pyproject]
packages:
- *dask
- *dask_cuda
- *numba
- *numpy
- *thrift
- *ucx_py
- output_types: pyproject
packages:
- *cugraph
- cugraph-service-client==23.12.*
test_cpp:
common:
- output_types: conda
packages:
- *cmake_ver
test_notebook:
common:
- output_types: [conda, requirements]
packages:
- ipython
- notebook>=0.5.0
- output_types: [conda]
packages:
- wget
test_python_common:
common:
- output_types: [conda, pyproject]
packages:
- pandas
- pytest
- pytest-benchmark
- pytest-cov
- pytest-xdist
- scipy
test_python_cugraph:
common:
- output_types: [conda, pyproject]
packages:
- networkx>=2.5.1
- *numpy
- python-louvain
- scikit-learn>=0.23.1
- output_types: [conda]
packages:
- pylibwholegraph==23.12.*
test_python_pylibcugraph:
common:
- output_types: [conda, pyproject]
packages:
- *numpy
test_python_nx_cugraph:
common:
- output_types: [conda, pyproject]
packages:
- packaging>=21
# not needed by nx-cugraph tests, but is required for running networkx tests
- pytest-mpl
cugraph_dgl_dev:
common:
- output_types: [conda]
packages:
- cugraph==23.12.*
- pytorch>=2.0
- pytorch-cuda==11.8
- dgl>=1.1.0.cu*
cugraph_pyg_dev:
common:
- output_types: [conda]
packages:
- cugraph==23.12.*
- pytorch>=2.0
- pytorch-cuda==11.8
- pyg>=2.4.0
depends_on_rmm:
common:
- output_types: conda
packages:
- &rmm_conda rmm==23.12.*
- output_types: requirements
packages:
# pip recognizes the index as a global option for the requirements.txt file
- --extra-index-url=https://pypi.nvidia.com
specific:
- output_types: [requirements, pyproject]
matrices:
- matrix: {cuda: "12.2"}
packages: &rmm_packages_pip_cu12
- rmm-cu12==23.12.*
- {matrix: {cuda: "12.1"}, packages: *rmm_packages_pip_cu12}
- {matrix: {cuda: "12.0"}, packages: *rmm_packages_pip_cu12}
- matrix: {cuda: "11.8"}
packages: &rmm_packages_pip_cu11
- rmm-cu11==23.12.*
- {matrix: {cuda: "11.5"}, packages: *rmm_packages_pip_cu11}
- {matrix: {cuda: "11.4"}, packages: *rmm_packages_pip_cu11}
- {matrix: {cuda: "11.2"}, packages: *rmm_packages_pip_cu11}
- {matrix: null, packages: [*rmm_conda]}
depends_on_cudf:
common:
- output_types: conda
packages:
- &cudf_conda cudf==23.12.*
- output_types: requirements
packages:
# pip recognizes the index as a global option for the requirements.txt file
- --extra-index-url=https://pypi.nvidia.com
specific:
- output_types: [requirements, pyproject]
matrices:
- matrix: {cuda: "12.2"}
packages: &cudf_packages_pip_cu12
- cudf-cu12==23.12.*
- {matrix: {cuda: "12.1"}, packages: *cudf_packages_pip_cu12}
- {matrix: {cuda: "12.0"}, packages: *cudf_packages_pip_cu12}
- matrix: {cuda: "11.8"}
packages: &cudf_packages_pip_cu11
- cudf-cu11==23.12.*
- {matrix: {cuda: "11.5"}, packages: *cudf_packages_pip_cu11}
- {matrix: {cuda: "11.4"}, packages: *cudf_packages_pip_cu11}
- {matrix: {cuda: "11.2"}, packages: *cudf_packages_pip_cu11}
- {matrix: null, packages: [*cudf_conda]}
depends_on_dask_cudf:
common:
- output_types: conda
packages:
- &dask_cudf_conda dask-cudf==23.12.*
- output_types: requirements
packages:
# pip recognizes the index as a global option for the requirements.txt file
- --extra-index-url=https://pypi.nvidia.com
specific:
- output_types: [requirements, pyproject]
matrices:
- matrix: {cuda: "12.2"}
packages: &dask_cudf_packages_pip_cu12
- dask-cudf-cu12==23.12.*
- {matrix: {cuda: "12.1"}, packages: *dask_cudf_packages_pip_cu12}
- {matrix: {cuda: "12.0"}, packages: *dask_cudf_packages_pip_cu12}
- matrix: {cuda: "11.8"}
packages: &dask_cudf_packages_pip_cu11
- dask-cudf-cu11==23.12.*
- {matrix: {cuda: "11.5"}, packages: *dask_cudf_packages_pip_cu11}
- {matrix: {cuda: "11.4"}, packages: *dask_cudf_packages_pip_cu11}
- {matrix: {cuda: "11.2"}, packages: *dask_cudf_packages_pip_cu11}
- {matrix: null, packages: [*dask_cudf_conda]}
depends_on_pylibraft:
common:
- output_types: conda
packages:
- &pylibraft_conda pylibraft==23.12.*
- output_types: requirements
packages:
# pip recognizes the index as a global option for the requirements.txt file
- --extra-index-url=https://pypi.nvidia.com
specific:
- output_types: [requirements, pyproject]
matrices:
- matrix: {cuda: "12.2"}
packages: &pylibraft_packages_pip_cu12
- pylibraft-cu12==23.12.*
- {matrix: {cuda: "12.1"}, packages: *pylibraft_packages_pip_cu12}
- {matrix: {cuda: "12.0"}, packages: *pylibraft_packages_pip_cu12}
- matrix: {cuda: "11.8"}
packages: &pylibraft_packages_pip_cu11
- pylibraft-cu11==23.12.*
- {matrix: {cuda: "11.5"}, packages: *pylibraft_packages_pip_cu11}
- {matrix: {cuda: "11.4"}, packages: *pylibraft_packages_pip_cu11}
- {matrix: {cuda: "11.2"}, packages: *pylibraft_packages_pip_cu11}
- {matrix: null, packages: [*pylibraft_conda]}
depends_on_raft_dask:
common:
- output_types: conda
packages:
- &raft_dask_conda raft-dask==23.12.*
- output_types: requirements
packages:
# pip recognizes the index as a global option for the requirements.txt file
- --extra-index-url=https://pypi.nvidia.com
specific:
- output_types: [requirements, pyproject]
matrices:
- matrix: {cuda: "12.2"}
packages: &raft_dask_packages_pip_cu12
- raft-dask-cu12==23.12.*
- {matrix: {cuda: "12.1"}, packages: *raft_dask_packages_pip_cu12}
- {matrix: {cuda: "12.0"}, packages: *raft_dask_packages_pip_cu12}
- matrix: {cuda: "11.8"}
packages: &raft_dask_packages_pip_cu11
- raft-dask-cu11==23.12.*
- {matrix: {cuda: "11.5"}, packages: *raft_dask_packages_pip_cu11}
- {matrix: {cuda: "11.4"}, packages: *raft_dask_packages_pip_cu11}
- {matrix: {cuda: "11.2"}, packages: *raft_dask_packages_pip_cu11}
- {matrix: null, packages: [*raft_dask_conda]}
depends_on_pylibcugraph:
common:
- output_types: conda
packages:
- &pylibcugraph_conda pylibcugraph==23.12.*
- output_types: requirements
packages:
# pip recognizes the index as a global option for the requirements.txt file
- --extra-index-url=https://pypi.nvidia.com
specific:
- output_types: [requirements, pyproject]
matrices:
- matrix: {cuda: "12.2"}
packages: &pylibcugraph_packages_pip_cu12
- pylibcugraph-cu12==23.12.*
- {matrix: {cuda: "12.1"}, packages: *pylibcugraph_packages_pip_cu12}
- {matrix: {cuda: "12.0"}, packages: *pylibcugraph_packages_pip_cu12}
- matrix: {cuda: "11.8"}
packages: &pylibcugraph_packages_pip_cu11
- pylibcugraph-cu11==23.12.*
- {matrix: {cuda: "11.5"}, packages: *pylibcugraph_packages_pip_cu11}
- {matrix: {cuda: "11.4"}, packages: *pylibcugraph_packages_pip_cu11}
- {matrix: {cuda: "11.2"}, packages: *pylibcugraph_packages_pip_cu11}
- {matrix: null, packages: [*pylibcugraph_conda]}
depends_on_pylibcugraphops:
common:
- output_types: conda
packages:
- &pylibcugraphops_conda pylibcugraphops==23.12.*
- output_types: requirements
packages:
# pip recognizes the index as a global option for the requirements.txt file
- --extra-index-url=https://pypi.nvidia.com
specific:
- output_types: [requirements, pyproject]
matrices:
- matrix: {cuda: "12.2"}
packages: &pylibcugraphops_packages_pip_cu12
- pylibcugraphops-cu12==23.12.*
- {matrix: {cuda: "12.1"}, packages: *pylibcugraphops_packages_pip_cu12}
- {matrix: {cuda: "12.0"}, packages: *pylibcugraphops_packages_pip_cu12}
- matrix: {cuda: "11.8"}
packages: &pylibcugraphops_packages_pip_cu11
- pylibcugraphops-cu11==23.12.*
- {matrix: {cuda: "11.5"}, packages: *pylibcugraphops_packages_pip_cu11}
- {matrix: {cuda: "11.4"}, packages: *pylibcugraphops_packages_pip_cu11}
- {matrix: {cuda: "11.2"}, packages: *pylibcugraphops_packages_pip_cu11}
- {matrix: null, packages: [*pylibcugraphops_conda]}
depends_on_cupy:
common:
- output_types: conda
packages:
- cupy>=12.0.0
specific:
- output_types: [requirements, pyproject]
matrices:
# All CUDA 12 + x86_64 versions
- matrix: {cuda: "12.2", arch: x86_64}
packages: &cupy_packages_cu12_x86_64
- cupy-cuda12x>=12.0.0
- {matrix: {cuda: "12.1", arch: x86_64}, packages: *cupy_packages_cu12_x86_64}
- {matrix: {cuda: "12.0", arch: x86_64}, packages: *cupy_packages_cu12_x86_64}
# All CUDA 12 + aarch64 versions
- matrix: {cuda: "12.2", arch: aarch64}
packages: &cupy_packages_cu12_aarch64
- cupy-cuda12x -f https://pip.cupy.dev/aarch64 # TODO: Verify that this works.
- {matrix: {cuda: "12.1", arch: aarch64}, packages: *cupy_packages_cu12_aarch64}
- {matrix: {cuda: "12.0", arch: aarch64}, packages: *cupy_packages_cu12_aarch64}
# All CUDA 11 + x86_64 versions
- matrix: {cuda: "11.8", arch: x86_64}
packages: &cupy_packages_cu11_x86_64
- cupy-cuda11x>=12.0.0
- {matrix: {cuda: "11.5", arch: x86_64}, packages: *cupy_packages_cu11_x86_64}
- {matrix: {cuda: "11.4", arch: x86_64}, packages: *cupy_packages_cu11_x86_64}
- {matrix: {cuda: "11.2", arch: x86_64}, packages: *cupy_packages_cu11_x86_64}
# All CUDA 11 + aarch64 versions
- matrix: {cuda: "11.8", arch: aarch64}
packages: &cupy_packages_cu11_aarch64
- cupy-cuda11x -f https://pip.cupy.dev/aarch64 # TODO: Verify that this works.
- {matrix: {cuda: "11.5", arch: aarch64}, packages: *cupy_packages_cu11_aarch64}
- {matrix: {cuda: "11.4", arch: aarch64}, packages: *cupy_packages_cu11_aarch64}
- {matrix: {cuda: "11.2", arch: aarch64}, packages: *cupy_packages_cu11_aarch64}
- {matrix: null, packages: [cupy-cuda11x>=12.0.0]}
rapidsai_public_repos/cugraph/conda_build.sh
#!/usr/bin/env bash
# Copyright (c) 2021-2022, NVIDIA CORPORATION
set -xe
CUDA_REL=${CUDA_VERSION%.*}
conda install conda-build anaconda-client conda-verify -y
conda build -c rapidsai -c rapidsai-nightly/label/cuda${CUDA_REL} -c conda-forge -c nvidia --python=${PYTHON} conda/recipes/cugraph
if [ "$UPLOAD_PACKAGE" == '1' ]; then
export UPLOADFILE=`conda build -c rapidsai -c conda-forge -c nvidia --python=${PYTHON} conda/recipes/cugraph --output`
SOURCE_BRANCH=main
test -e ${UPLOADFILE}
LABEL_OPTION="--label dev"
if [ "${LABEL_MAIN}" == '1' ]; then
LABEL_OPTION="--label main"
fi
echo "LABEL_OPTION=${LABEL_OPTION}"
if [ -z "$MY_UPLOAD_KEY" ]; then
echo "No upload key"
exit 0
fi
echo "Upload"
echo ${UPLOADFILE}
anaconda -t ${MY_UPLOAD_KEY} upload -u ${CONDA_USERNAME:-rapidsai} ${LABEL_OPTION} --force ${UPLOADFILE} --no-progress
else
echo "Skipping upload"
fi
rapidsai_public_repos/cugraph/LICENSE
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS
APPENDIX: How to apply the Apache License to your work.
To apply the Apache License to your work, attach the following
boilerplate notice, with the fields enclosed by brackets "{}"
replaced with your own identifying information. (Don't include
the brackets!) The text should be enclosed in the appropriate
comment syntax for the file format. We also recommend that a
file or class name and description of purpose be included on the
same "printed page" as the copyright notice for easier
identification within third-party archives.
Copyright 2018 NVIDIA CORPORATION
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
rapidsai_public_repos/cugraph/VERSION
23.12.00
rapidsai_public_repos/cugraph/print_env.sh
#!/usr/bin/env bash
# Reports relevant environment information useful for diagnosing and
# debugging cuGraph issues.
# Usage:
# "./print_env.sh" - prints to stdout
# "./print_env.sh > env.txt" - prints to file "env.txt"
print_env() {
echo "**git***"
if [ "$(git rev-parse --is-inside-work-tree 2>/dev/null)" == "true" ]; then
git log --decorate -n 1
echo "**git submodules***"
git submodule status --recursive
else
echo "Not inside a git repository"
fi
echo
echo "***OS Information***"
cat /etc/*-release
uname -a
echo
echo "***GPU Information***"
nvidia-smi
echo
echo "***CPU***"
lscpu
echo
echo "***CMake***"
which cmake && cmake --version
echo
echo "***g++***"
which g++ && g++ --version
echo
echo "***nvcc***"
which nvcc && nvcc --version
echo
echo "***Python***"
which python && python -c "import sys; print('Python {0}.{1}.{2}'.format(sys.version_info[0], sys.version_info[1], sys.version_info[2]))"
echo
echo "***Environment Variables***"
printf '%-32s: %s\n' PATH $PATH
printf '%-32s: %s\n' LD_LIBRARY_PATH $LD_LIBRARY_PATH
printf '%-32s: %s\n' NUMBAPRO_NVVM $NUMBAPRO_NVVM
printf '%-32s: %s\n' NUMBAPRO_LIBDEVICE $NUMBAPRO_LIBDEVICE
printf '%-32s: %s\n' CONDA_PREFIX $CONDA_PREFIX
printf '%-32s: %s\n' PYTHON_PATH $PYTHON_PATH
echo
# Print conda packages if conda exists
if type "conda" &> /dev/null; then
echo '***conda packages***'
which conda && conda list
echo
# Print pip packages if pip exists
elif type "pip" &> /dev/null; then
echo "conda not found"
echo "***pip packages***"
which pip && pip list
echo
else
echo "conda not found"
echo "pip not found"
fi
}
echo "<details><summary>Click here to see environment details</summary><pre>"
echo " "
print_env | while read -r line; do
echo " $line"
done
echo "</pre></details>"
rapidsai_public_repos/cugraph/github/workflows/labeler.yml
name: "Pull Request Labeler"
on:
- pull_request_target
jobs:
triage:
runs-on: ubuntu-latest
steps:
- uses: actions/labeler@main
with:
repo-token: "${{ secrets.GITHUB_TOKEN }}"
rapidsai_public_repos/cugraph/mg_utils/default-config.sh
# Copyright (c) 2022-2023, NVIDIA CORPORATION.
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
THIS_DIR=$(cd $(dirname ${BASH_SOURCE[0]}) && pwd)
# Most are defined using the bash := or :- syntax, which means they
# will be set only if they were previously unset. The project config
# is loaded first, which gives it the opportunity to override anything
# in this file that uses that syntax. If there are variables in this
# file that should not be overridden by a project, they simply do not
# use that syntax, so the values set here always win since this file
# is read last.
SCRIPTS_DIR=$THIS_DIR
WORKSPACE=$THIS_DIR
# These really should be overridden by the project config!
CONDA_ENV=${CONDA_ENV:-rapids}
GPUS_PER_NODE=${GPUS_PER_NODE:-8}
WORKER_RMM_POOL_SIZE=${WORKER_RMM_POOL_SIZE:-12G}
DASK_CUDA_INTERFACE=${DASK_CUDA_INTERFACE:-ib0}
DASK_SCHEDULER_PORT=${DASK_SCHEDULER_PORT:-8792}
DASK_DEVICE_MEMORY_LIMIT=${DASK_DEVICE_MEMORY_LIMIT:-auto}
DASK_HOST_MEMORY_LIMIT=${DASK_HOST_MEMORY_LIMIT:-auto}
BUILD_LOG_FILE=${BUILD_LOG_FILE:-${RESULTS_DIR}/build_log.txt}
SCHEDULER_FILE=${SCHEDULER_FILE:-${WORKSPACE}/dask-scheduler.json}
DATE=${DATE:-$(date --utc "+%Y-%m-%d_%H:%M:%S")_UTC}
ENV_EXPORT_FILE=${ENV_EXPORT_FILE:-${WORKSPACE}/$(basename ${CONDA_ENV})-${DATE}.txt}
rapidsai_public_repos/cugraph/mg_utils/README.md
This directory contains various scripts helpful for cugraph users and developers.
The following scripts were copied from https://github.com/rapidsai/multi-gpu-tools and are useful for starting a dask cluster, which is needed by cugraph for multi-GPU support.
* `run-dask-process.sh`
* `functions.sh`
* `default-config.sh`
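A typical multi-node launch, sketched from the help text built into `run-dask-process.sh` (the `scheduler`/`workers` arguments, the cluster-type flags, and the `CLUSTER_CONFIG_TYPE` environment variable below all come from that script; the scheduler-file location is governed by `SCHEDULER_FILE` in `default-config.sh`):

```shell
# On the node that will host the scheduler (TCP is the default protocol):
./run-dask-process.sh scheduler

# On each worker node, once the scheduler file is visible (a shared
# filesystem is assumed for SCHEDULER_FILE):
./run-dask-process.sh workers

# The cluster type can also be selected with a flag (--tcp, --ucx,
# --ucxib) or via the environment, e.g. UCX with InfiniBand:
CLUSTER_CONFIG_TYPE=UCXIB ./run-dask-process.sh scheduler workers
```

Logs are written under `LOGS_DIR` (default `dask_logs-<pid>`), and workers poll for the scheduler JSON file before starting.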
rapidsai_public_repos/cugraph/mg_utils/functions.sh
# Copyright (c) 2022-2023, NVIDIA CORPORATION.
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# This file is source'd from script-env.sh to add functions to the
# calling environment, hence no #!/bin/bash as the first line. This
# also assumes the variables used in this file have been defined
# elsewhere.
NUMARGS=$#
ARGS=$*
function hasArg {
(( ${NUMARGS} != 0 )) && (echo " ${ARGS} " | grep -q " $1 ")
}
function logger {
echo -e ">>>> $@"
}
# Calling "setTee outfile" will cause all stdout and stderr of the
# current script to be output to "tee", which outputs to stdout and
# "outfile" simultaneously. This is useful by allowing a script to
# "tee" itself at any point without being called with tee.
_origFileDescriptorsSaved=0
function setTee {
if [[ $_origFileDescriptorsSaved == 0 ]]; then
# Save off the original file descr 1 and 2 as 3 and 4
exec 3>&1 4>&2
_origFileDescriptorsSaved=1
fi
teeFile=$1
# Create a named pipe.
pipeName=$(mktemp -u)
mkfifo $pipeName
# Close the current 1 and 2 and restore to original (3, 4) in the
# event this function is called repeatedly.
exec 1>&- 2>&-
exec 1>&3 2>&4
# Start a tee process reading from the named pipe. Redirect stdout
# and stderr to the named pipe which goes to the tee process. The
# named pipe "file" can be removed and the tee process stays alive
# until the fd is closed.
tee -a < $pipeName $teeFile &
exec > $pipeName 2>&1
rm $pipeName
}
# Call this to stop script output from going to "tee" after a prior
# call to setTee.
function unsetTee {
if [[ $_origFileDescriptorsSaved == 1 ]]; then
# Close the current fd 1 and 2 which should stop the tee
# process, then restore 1 and 2 to original (saved as 3, 4).
exec 1>&- 2>&-
exec 1>&3 2>&4
fi
}
rapidsai_public_repos/cugraph/mg_utils/run-dask-process.sh
#!/bin/bash
# Copyright (c) 2022-2023, NVIDIA CORPORATION.
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
THIS_DIR=$(cd $(dirname ${BASH_SOURCE[0]}) && pwd)
source ${THIS_DIR}/default-config.sh
source ${THIS_DIR}/functions.sh
# Logs can be written to a specific location by setting the LOGS_DIR
# env var.
LOGS_DIR=${LOGS_DIR:-dask_logs-$$}
########################################
NUMARGS=$#
ARGS=$*
function hasArg {
(( ${NUMARGS} != 0 )) && (echo " ${ARGS} " | grep -q " $1 ")
}
VALIDARGS="-h --help scheduler workers --tcp --ucx --ucxib --ucx-ib"
HELP="$0 [<app> ...] [<flag> ...]
where <app> is:
scheduler - start dask scheduler
workers - start dask workers
and <flag> is:
--tcp - initialize a tcp cluster (default)
--ucx - initialize a ucx cluster with NVLink
--ucxib | --ucx-ib - initialize a ucx cluster with IB+NVLink
-h | --help - print this text
The cluster config order of precedence is any specification on the
command line (--tcp, --ucx, etc.) if provided, then the value of the
env var CLUSTER_CONFIG_TYPE if set, then the default value of tcp.
"
# CLUSTER_CONFIG_TYPE defaults to the env var value if set, else TCP
CLUSTER_CONFIG_TYPE=${CLUSTER_CONFIG_TYPE:-TCP}
START_SCHEDULER=0
START_WORKERS=0
if (( ${NUMARGS} == 0 )); then
echo "${HELP}"
exit 0
else
if hasArg -h || hasArg --help; then
echo "${HELP}"
exit 0
fi
for a in ${ARGS}; do
if ! (echo " ${VALIDARGS} " | grep -q " ${a} "); then
echo "Invalid option: ${a}"
exit 1
fi
done
fi
if hasArg scheduler; then
START_SCHEDULER=1
fi
if hasArg workers; then
START_WORKERS=1
fi
# Allow the command line to take precedence
if hasArg --tcp; then
CLUSTER_CONFIG_TYPE=TCP
elif hasArg --ucx; then
CLUSTER_CONFIG_TYPE=UCX
elif hasArg --ucxib || hasArg --ucx-ib; then
CLUSTER_CONFIG_TYPE=UCXIB
fi
########################################
#export DASK_LOGGING__DISTRIBUTED="DEBUG"
#ulimit -n 100000
SCHEDULER_LOG=${LOGS_DIR}/scheduler_log.txt
WORKERS_LOG=${LOGS_DIR}/worker-${HOSTNAME}_log.txt
function buildTcpArgs {
export DASK_DISTRIBUTED__COMM__TIMEOUTS__CONNECT="100s"
export DASK_DISTRIBUTED__COMM__TIMEOUTS__TCP="600s"
export DASK_DISTRIBUTED__COMM__RETRY__DELAY__MIN="1s"
export DASK_DISTRIBUTED__COMM__RETRY__DELAY__MAX="60s"
export DASK_DISTRIBUTED__WORKER__MEMORY__Terminate="False"
SCHEDULER_ARGS="--protocol=tcp
--scheduler-file $SCHEDULER_FILE
"
WORKER_ARGS="--rmm-pool-size=$WORKER_RMM_POOL_SIZE
--rmm-async
--local-directory=/tmp/$LOGNAME
--scheduler-file=$SCHEDULER_FILE
--memory-limit=$DASK_HOST_MEMORY_LIMIT
--device-memory-limit=$DASK_DEVICE_MEMORY_LIMIT
"
}
function buildUCXWithInfinibandArgs {
export UCX_MAX_RNDV_RAILS=1
export UCX_MEMTYPE_REG_WHOLE_ALLOC_TYPES=cuda
export DASK_RMM__POOL_SIZE=0.5GB
export DASK_DISTRIBUTED__COMM__UCX__CREATE_CUDA_CONTEXT=True
SCHEDULER_ARGS="--protocol=ucx
--interface=$DASK_CUDA_INTERFACE
--scheduler-file $SCHEDULER_FILE
"
WORKER_ARGS="--interface=$DASK_CUDA_INTERFACE
--rmm-pool-size=$WORKER_RMM_POOL_SIZE
--rmm-maximum-pool-size=$WORKER_RMM_POOL_SIZE
--local-directory=/tmp/$LOGNAME
--scheduler-file=$SCHEDULER_FILE
--memory-limit=$DASK_HOST_MEMORY_LIMIT
--device-memory-limit=$DASK_DEVICE_MEMORY_LIMIT
--enable-jit-unspill
"
}
function buildUCXwithoutInfinibandArgs {
export UCX_TCP_CM_REUSEADDR=y
export UCX_MAX_RNDV_RAILS=1
export UCX_TCP_TX_SEG_SIZE=8M
export UCX_TCP_RX_SEG_SIZE=8M
export DASK_DISTRIBUTED__COMM__UCX__CUDA_COPY=True
export DASK_DISTRIBUTED__COMM__UCX__TCP=True
export DASK_DISTRIBUTED__COMM__UCX__NVLINK=True
export DASK_DISTRIBUTED__COMM__UCX__INFINIBAND=False
export DASK_DISTRIBUTED__COMM__UCX__RDMACM=False
export DASK_RMM__POOL_SIZE=0.5GB
SCHEDULER_ARGS="--protocol=ucx
--scheduler-file $SCHEDULER_FILE
"
WORKER_ARGS="--enable-tcp-over-ucx
--enable-nvlink
--disable-infiniband
--disable-rdmacm
--rmm-pool-size=$WORKER_RMM_POOL_SIZE
--rmm-maximum-pool-size=$WORKER_RMM_POOL_SIZE
--local-directory=/tmp/$LOGNAME
--scheduler-file=$SCHEDULER_FILE
--memory-limit=$DASK_HOST_MEMORY_LIMIT
--device-memory-limit=$DASK_DEVICE_MEMORY_LIMIT
--enable-jit-unspill
"
}
if [[ "$CLUSTER_CONFIG_TYPE" == "UCX" ]]; then
    logger "Using cluster configuration for UCX"
    buildUCXwithoutInfinibandArgs
elif [[ "$CLUSTER_CONFIG_TYPE" == "UCXIB" ]]; then
    logger "Using cluster configuration for UCX with Infiniband"
    buildUCXWithInfinibandArgs
else
    logger "Using cluster configuration for TCP"
    buildTcpArgs
fi
########################################
scheduler_pid=""
worker_pid=""
num_scheduler_tries=0
function startScheduler {
mkdir -p $(dirname $SCHEDULER_FILE)
    echo "RUNNING: \"dask-scheduler $SCHEDULER_ARGS\"" > $SCHEDULER_LOG
    dask-scheduler $SCHEDULER_ARGS >> $SCHEDULER_LOG 2>&1 &
scheduler_pid=$!
}
mkdir -p $LOGS_DIR
logger "Logs written to: $LOGS_DIR"
if [[ $START_SCHEDULER == 1 ]]; then
rm -f $SCHEDULER_FILE $SCHEDULER_LOG $WORKERS_LOG
startScheduler
sleep 6
num_scheduler_tries=$(python -c "print($num_scheduler_tries+1)")
# Wait for the scheduler to start first before proceeding, since
# it may require several retries (if prior run left ports open
# that need time to close, etc.)
while [ ! -f "$SCHEDULER_FILE" ]; do
scheduler_alive=$(ps -p $scheduler_pid > /dev/null ; echo $?)
if [[ $scheduler_alive != 0 ]]; then
if [[ $num_scheduler_tries != 30 ]]; then
echo "scheduler failed to start, retry #$num_scheduler_tries"
startScheduler
sleep 6
num_scheduler_tries=$(echo $num_scheduler_tries+1 | bc)
else
echo "could not start scheduler, exiting."
exit 1
fi
fi
done
echo "scheduler started."
fi
if [[ $START_WORKERS == 1 ]]; then
rm -f $WORKERS_LOG
while [ ! -f "$SCHEDULER_FILE" ]; do
echo "run-dask-process.sh: $SCHEDULER_FILE not present - waiting to start workers..."
sleep 2
done
    echo "RUNNING: \"dask-cuda-worker $WORKER_ARGS\"" > $WORKERS_LOG
    dask-cuda-worker $WORKER_ARGS >> $WORKERS_LOG 2>&1 &
worker_pid=$!
echo "worker(s) started."
fi
# This script will not return until the following background processes
# have completed or been killed.
if [[ $worker_pid != "" ]]; then
echo "waiting for worker pid $worker_pid to finish before exiting script..."
wait $worker_pid
fi
if [[ $scheduler_pid != "" ]]; then
echo "waiting for scheduler pid $scheduler_pid to finish before exiting script..."
wait $scheduler_pid
fi
rapidsai_public_repos/cugraph/python/.coveragerc
# Configuration file for Python coverage tests
[run]
include = cugraph/cugraph/*
cugraph-pyg/cugraph_pyg/*
cugraph-service/*
pylibcugraph/pylibcugraph/*
omit = cugraph/cugraph/tests/*
cugraph-pyg/cugraph_pyg/tests/*
cugraph-service/tests/*
pylibcugraph/pylibcugraph/tests/*
rapidsai_public_repos/cugraph/python/utils/gpu_metric_poller.py
# Copyright (c) 2018-2022, NVIDIA CORPORATION.
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# GPUMetricPoller
# Utility class and helpers for retrieving GPU metrics for a specific section
# of code.
#
"""
# Example:
# Create/start a GPUMetricPoller object, run code to measure, stop poller:
gpuPollObj = startGpuMetricPolling()
run_cuml_algo(data, **{**param_overrides, **cuml_param_overrides})
stopGpuMetricPolling(gpuPollObj)
# Retrieve measurements from the object:
print("Max GPU memory used: %s" % gpuPollObj.maxGpuMemUsed)
print("Max GPU utilization: %s" % gpuPollObj.maxGpuUtil)
"""
import os
import sys
import threading
from pynvml import smi
class GPUMetricPoller(threading.Thread):
"""
Polls smi in a forked child process, saves measurements to instance vars
"""
def __init__(self, *args, **kwargs):
self.__stop = False
super().__init__(*args, **kwargs)
self.maxGpuUtil = 0
self.maxGpuMemUsed = 0
@staticmethod
def __waitForInput(fd):
# assume non-blocking fd
while True:
if not fd.closed:
line = fd.readline()
if line:
return line
else:
break
return None
@staticmethod
def __writeToPipe(fd, strToWrite):
fd.write(strToWrite)
fd.flush()
def __runParentLoop(self, readFileNo, writeFileNo):
parentReadPipe = os.fdopen(readFileNo)
parentWritePipe = os.fdopen(writeFileNo, "w")
self.__writeToPipe(parentWritePipe, "1")
gpuMetricsStr = self.__waitForInput(parentReadPipe)
while True:
# FIXME: this assumes the input received is perfect!
(memUsed, gpuUtil) = [int(x) for x in gpuMetricsStr.strip().split()]
if memUsed > self.maxGpuMemUsed:
self.maxGpuMemUsed = memUsed
if gpuUtil > self.maxGpuUtil:
self.maxGpuUtil = gpuUtil
if not self.__stop:
self.__writeToPipe(parentWritePipe, "1")
else:
self.__writeToPipe(parentWritePipe, "0")
break
gpuMetricsStr = self.__waitForInput(parentReadPipe)
parentReadPipe.close()
parentWritePipe.close()
def __runChildLoop(self, readFileNo, writeFileNo):
childReadPipe = os.fdopen(readFileNo)
childWritePipe = os.fdopen(writeFileNo, "w")
smi.nvmlInit()
# hack - get actual device ID somehow
devObj = smi.nvmlDeviceGetHandleByIndex(0)
memObj = smi.nvmlDeviceGetMemoryInfo(devObj)
utilObj = smi.nvmlDeviceGetUtilizationRates(devObj)
initialMemUsed = memObj.used
initialGpuUtil = utilObj.gpu
controlStr = self.__waitForInput(childReadPipe)
while True:
memObj = smi.nvmlDeviceGetMemoryInfo(devObj)
utilObj = smi.nvmlDeviceGetUtilizationRates(devObj)
memUsed = memObj.used - initialMemUsed
gpuUtil = utilObj.gpu - initialGpuUtil
if controlStr.strip() == "1":
self.__writeToPipe(childWritePipe, "%s %s\n" % (memUsed, gpuUtil))
elif controlStr.strip() == "0":
break
controlStr = self.__waitForInput(childReadPipe)
smi.nvmlShutdown()
childReadPipe.close()
childWritePipe.close()
def run(self):
(parentReadPipeFileNo, childWritePipeFileNo) = os.pipe2(os.O_NONBLOCK)
(childReadPipeFileNo, parentWritePipeFileNo) = os.pipe2(os.O_NONBLOCK)
pid = os.fork()
# parent
if pid:
os.close(childReadPipeFileNo)
os.close(childWritePipeFileNo)
self.__runParentLoop(parentReadPipeFileNo, parentWritePipeFileNo)
# child
else:
os.close(parentReadPipeFileNo)
os.close(parentWritePipeFileNo)
self.__runChildLoop(childReadPipeFileNo, childWritePipeFileNo)
sys.exit(0)
def stop(self):
self.__stop = True
def startGpuMetricPolling():
gpuPollObj = GPUMetricPoller()
gpuPollObj.start()
return gpuPollObj
def stopGpuMetricPolling(gpuPollObj):
gpuPollObj.stop()
gpuPollObj.join() # consider using timeout and reporting errors
"""
smi.nvmlInit()
# hack - get actual device ID somehow
devObj = smi.nvmlDeviceGetHandleByIndex(0)
memObj = smi.nvmlDeviceGetMemoryInfo(devObj)
utilObj = smi.nvmlDeviceGetUtilizationRates(devObj)
initialMemUsed = memObj.used
initialGpuUtil = utilObj.gpu
while not self.__stop:
time.sleep(0.01)
memObj = smi.nvmlDeviceGetMemoryInfo(devObj)
utilObj = smi.nvmlDeviceGetUtilizationRates(devObj)
memUsed = memObj.used - initialMemUsed
gpuUtil = utilObj.gpu - initialGpuUtil
if memUsed > self.maxGpuMemUsed:
self.maxGpuMemUsed = memUsed
if gpuUtil > self.maxGpuUtil:
self.maxGpuUtil = gpuUtil
smi.nvmlShutdown()
"""
# if __name__ == "__main__":
# sto=stopGpuMetricPolling
# po = startGpuMetricPolling()
rapidsai_public_repos/cugraph/python/utils/benchmark.py
# Copyright (c) 2018-2022, NVIDIA CORPORATION.
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# from time import process_time_ns # only in 3.7!
from time import clock_gettime, CLOCK_MONOTONIC_RAW
import numpy as np
from gpu_metric_poller import startGpuMetricPolling, stopGpuMetricPolling
class Benchmark:
resultsDict = {}
metricNameCellWidth = 20
valueCellWidth = 40
def __init__(self, func, name="", args=None):
"""
func = the callable to wrap
name = name of callable, needed mostly for bookkeeping
args = args to pass the callable (default is no args)
"""
self.func = func
self.name = name or func.__name__
self.args = args or ()
def run(self, n=1):
"""
        Run self.func() n times and compute the average of all runs for all
        metrics, after discarding values more than 2 std-deviations from the
        mean for each metric.
"""
retVal = None
# Return or create the results dict unique to the function name
funcResultsDict = self.resultsDict.setdefault(self.name, {})
# FIXME: use a proper logger
print("Running %s" % self.name, end="", flush=True)
try:
exeTimes = []
gpuMems = []
gpuUtils = []
if n > 1:
print(" - iteration ", end="", flush=True)
for i in range(n):
if n > 1:
print(i + 1, end="...", flush=True)
gpuPollObj = startGpuMetricPolling()
# st = process_time_ns()
st = clock_gettime(CLOCK_MONOTONIC_RAW)
retVal = self.func(*self.args)
stopGpuMetricPolling(gpuPollObj)
# exeTime = (process_time_ns() - st) / 1e9
exeTime = clock_gettime(CLOCK_MONOTONIC_RAW) - st
exeTimes.append(exeTime)
                gpuUtils.append(gpuPollObj.maxGpuUtil)
                gpuMems.append(gpuPollObj.maxGpuMemUsed)
            print(" - done running %s." % self.name, flush=True)
        except Exception as e:
            funcResultsDict["ERROR"] = str(e)
            print(
                " %s | %s"
                % (
                    "ERROR".ljust(self.metricNameCellWidth),
                    str(e).ljust(self.valueCellWidth),
                )
            )
            stopGpuMetricPolling(gpuPollObj)
            return
        funcResultsDict["exeTime"] = self.__computeValue(exeTimes)
        funcResultsDict["maxGpuUtil"] = self.__computeValue(gpuUtils)
        funcResultsDict["maxGpuMemUsed"] = self.__computeValue(gpuMems)
for metricName in ["exeTime", "maxGpuUtil", "maxGpuMemUsed"]:
val = funcResultsDict[metricName]
print(
" %s | %s"
% (
metricName.ljust(self.metricNameCellWidth),
str(val).ljust(self.valueCellWidth),
),
flush=True,
)
return retVal
def __computeValue(self, vals):
"""
        Return the average of the vals list after filtering out values more
        than 2 std-deviations from the original average.
"""
avg = np.mean(vals)
std = np.std(vals)
filtered = [x for x in vals if ((avg - (2 * std)) <= x <= (avg + (2 * std)))]
if len(filtered) != len(vals):
print("filtered outliers: %s" % (set(vals) - set(filtered)))
return np.average(filtered)
rapidsai_public_repos/cugraph/python/utils/asv_report.py
# Copyright (c) 2018-2022, NVIDIA CORPORATION.
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import platform
import psutil
from asvdb import BenchmarkInfo, BenchmarkResult, ASVDb
from utils import getCommitInfo, getRepoInfo
def cugraph_update_asv(
asvDir,
datasetName,
algoRunResults,
cudaVer="",
pythonVer="",
osType="",
machineName="",
repo="",
):
"""
    algoRunResults is a dict of {funcName: {metricName: value}} results
"""
(commitHash, commitTime) = getCommitInfo()
(actualRepo, branch) = getRepoInfo()
repo = repo or actualRepo
db = ASVDb(asvDir, repo, [branch])
uname = platform.uname()
prefixDict = dict(
maxGpuUtil="gpuutil",
maxGpuMemUsed="gpumem",
exeTime="time",
)
unitsDict = dict(
maxGpuUtil="percent",
maxGpuMemUsed="bytes",
exeTime="seconds",
)
bInfo = BenchmarkInfo(
machineName=machineName or uname.machine,
cudaVer=cudaVer or "unknown",
osType=osType or "%s %s" % (uname.system, uname.release),
pythonVer=pythonVer or platform.python_version(),
commitHash=commitHash,
commitTime=commitTime,
gpuType="unknown",
cpuType=uname.processor,
arch=uname.machine,
ram="%d" % psutil.virtual_memory().total,
)
validKeys = set(list(prefixDict.keys()) + list(unitsDict.keys()))
for (funcName, metricsDict) in algoRunResults.items():
for (metricName, val) in metricsDict.items():
# If an invalid metricName is present (likely due to a benchmark
# run error), skip
if metricName in validKeys:
bResult = BenchmarkResult(
funcName="%s_%s" % (funcName, prefixDict[metricName]),
argNameValuePairs=[("dataset", datasetName)],
result=val,
)
bResult.unit = unitsDict[metricName]
db.addResult(bInfo, bResult)
if __name__ == "__main__":
# Test ASVDb with some mock data (that just so happens to be very similar
# to actual data)
# FIXME: consider breaking this out to a proper test_whatever.py file!
asvDir = "asv"
datasetName = "dolphins.csv"
algoRunResults = [
("loadDataFile", 3.2228727098554373),
("createGraph", 3.00713360495865345),
("pagerank", 3.00899268127977848),
("bfs", 3.004273353144526482),
("sssp", 3.004624705761671066),
("jaccard", 3.0025573652237653732),
("louvain", 3.32631026208400726),
("weakly_connected_components", 3.0034315641969442368),
("overlap", 3.002147899940609932),
("triangles", 3.2544921860098839),
("spectralBalancedCutClustering", 3.03329935669898987),
("spectralModularityMaximizationClustering", 3.011258183047175407),
("renumber", 3.001620553433895111),
("view_adj_list", 3.000927431508898735),
("degree", 3.0016251634806394577),
("degrees", None),
]
cugraph_update_asv(
asvDir, datasetName, algoRunResults, machineName="MN", pythonVer="3.6"
)
# Same arg values (the "datasetName" is still named "dolphins.csv"), but
# different results - this should override just the results.
algoRunResults = [(a, r + 1) for (a, r) in algoRunResults]
cugraph_update_asv(
asvDir, datasetName, algoRunResults, machineName="MN", pythonVer="3.6"
)
# New arg values (changed "datasetName" to "dolphins2.csv") - this should
    # create a new set of arg values and results.
datasetName = "dolphins2.csv"
cugraph_update_asv(
asvDir, datasetName, algoRunResults, machineName="MN", pythonVer="3.6"
)
rapidsai_public_repos/cugraph/python/utils/run_benchmarks.sh
#!/bin/bash
# Copyright (c) 2018-2020, NVIDIA CORPORATION.
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
set -e
THISDIR=$(dirname $0)
VERSION=${VERSION:=0.0}
UTILS_DIR=${UTILS_DIR:=${THISDIR}}
DATASET_DIR=${DATASET_DIR:=${THISDIR}/../../datasets}
MACHINE_NAME=${MACHINE_NAME:="mymachine"}
CONDA=${CONDA:=conda}
# To output results for use with ASV, set
# ASV_OUTPUT_OPTION="--update_asv_dir=/asv/cugraph-e2e" (update /asv/cugraph-e2e
# to the desired results dir)
ASV_OUTPUT_OPTION=${ASV_OUTPUT_OPTION:=""}
ERROR=0
for ds in ${DATASET_DIR}/csv/undirected/*; do
echo "================================ ${ds}"
if [ "${ds}" == "${DATASET_DIR}/csv/undirected/soc-twitter-2010.csv" ]; then
echo
echo "SKIPPING ${ds}"
echo
else
python ${UTILS_DIR}/run_benchmarks.py \
${ASV_OUTPUT_OPTION} \
--report_cuda_ver=${CUDA_VERSION} \
--report_python_ver=${PYTHON_VERSION} \
--report_os_type=${LINUX_VERSION} \
--report_machine_name=${MACHINE_NAME} \
--compute_adj_list \
\
--algo=cugraph.bfs \
--algo=cugraph.sssp \
--algo=cugraph.jaccard \
--algo=cugraph.louvain \
--algo=cugraph.weakly_connected_components \
--algo=cugraph.overlap \
--algo=cugraph.triangles \
--algo=cugraph.spectralBalancedCutClustering \
--algo=cugraph.spectralModularityMaximizationClustering \
--algo=cugraph.renumber \
--algo=cugraph.graph.degree \
--algo=cugraph.graph.degrees \
\
${ds}
python ${UTILS_DIR}/run_benchmarks.py \
${ASV_OUTPUT_OPTION} \
--report_cuda_ver=${CUDA_VERSION} \
--report_python_ver=${PYTHON_VERSION} \
--report_os_type=${LINUX_VERSION} \
--report_machine_name=${MACHINE_NAME} \
--compute_transposed_adj_list \
\
--algo=cugraph.pagerank \
\
${ds}
exitcode=$?
if (( ${exitcode} != 0 )); then
ERROR=${exitcode}
echo "ERROR: ${ds}"
fi
fi
echo
done
for ds in ${DATASET_DIR}/csv/directed/*; do
echo "================================ ${ds}"
python ${UTILS_DIR}/run_benchmarks.py \
${ASV_OUTPUT_OPTION} \
--report_cuda_ver=${CUDA_VERSION} \
--report_python_ver=${PYTHON_VERSION} \
--report_os_type=${LINUX_VERSION} \
--report_machine_name=${MACHINE_NAME} \
--compute_adj_list \
--digraph \
\
--algo=cugraph.bfs \
--algo=cugraph.sssp \
--algo=cugraph.overlap \
--algo=cugraph.renumber \
--algo=cugraph.graph.degree \
--algo=cugraph.graph.degrees \
\
${ds}
python ${UTILS_DIR}/run_benchmarks.py \
${ASV_OUTPUT_OPTION} \
--report_cuda_ver=${CUDA_VERSION} \
--report_python_ver=${PYTHON_VERSION} \
--report_os_type=${LINUX_VERSION} \
--report_machine_name=${MACHINE_NAME} \
--compute_transposed_adj_list \
--digraph \
\
--algo=cugraph.pagerank \
\
${ds}
exitcode=$?
if (( ${exitcode} != 0 )); then
ERROR=${exitcode}
echo "ERROR: ${ds}"
fi
echo
done
exit ${ERROR}
rapidsai_public_repos/cugraph/python/utils/README-benchmark.md
# cuGraph Benchmarking
This directory contains utilities for writing and running benchmarks for cuGraph.
## Prerequisites
* An environment capable of running Python applications that use cuGraph. A
conda environment containing packages in the cuGraph env.yaml, or a RAPIDS
`runtime` or `devel` Docker container.
* NOTE: A RAPIDS `runtime` container contains the complete set of packages to
satisfy every RAPIDS component, installed in a conda environment named
    `rapids`. A `devel` container also contains the packages needed for RAPIDS
in a `rapids` conda environment, but also the complete toolchain used to
build RAPIDS from source, the source files, and the intermediate build
artifacts. `devel` containers are ideal for developers working on RAPIDS,
and `runtime` containers are better suited for users of RAPIDS that don't
need a toolchain or sources.
* For developers using benchmarks to investigate performance-oriented changes,
a `devel` container is probably a better choice.
* The existing benchmarks require datasets which can be obtained using the
script in `<cugraph src dir>/datasets/get_test_data.sh`
```
cd <cugraph src dir>/datasets
./get_test_data.sh
```
## Overview
The current benchmark running script, by default, assumes all benchmarks will be
run on the dataset name passed in. To run against multiple datasets, multiple
invocations of the script are required. The current implementation of the
script creates a single graph object from the dataset passed in and runs one or
more benchmarks on that - different datasets require new graphs to be created,
and the script currently only creates a single graph upfront. The script also
treates the dataset read and graph creation as individual benchmarks and reports
results for those steps too.
There are two scripts to be aware of when running benchmarks; a python script
named `<cugraph src dir>/python/utils/run_benchmarks.py`, and a shell script
named `<cugraph src dir>/python/utils/run_benchmarks.sh`, both described more
below. The python script is more general purpose in that it allows a user to
pass in a variety of different options and does not assume a particular set of
datasets exist, while the shell script makes common invocations easier by
assuming a specific set of options and datasets. The
shell script assumes the datasets downloaded and installed by the
`<cugraph src dir>/datasets/get_test_data.sh` script are in place.
## Running benchmarks
### Quick start
The examples below assume a bash shell in a RAPIDS `devel` container:
```
# get datasets
cd /rapids/cugraph/datasets
./get_test_data.sh
# run benchmarks
cd /rapids/cugraph/python/utils
./run_benchmarks.sh
```
### `<cugraph src dir>/python/utils/run_benchmarks.py`
The run_benchmarks.py script allows a user to run specific benchmarks on
specific datasets using different options. The options vary based on the
benchmark being run, and typically have reasonable defaults. For more
information, see the `--help` output of `run_benchmarks.py`.
## Writing new benchmarks
### Quick start
* Write a new function to run as a benchmark
* The function need not perform any measurements - those will be handled by
the `Benchmark` class which wraps it.
* The function can take args with the understanding that they will need to be
passed by the runner. The runner already has a Graph object (created by
reading the dataset) available. Any other arg will need to be provided by
either a custom command line arg, a global, or some other means.
```
def my_new_benchmark(graphObj, arg1):
graphObj.my_new_algo(arg1)
```
* The above is an oversimplified example, and in the case above, the
`my_new_algo()` method of the Graph object itself could serve as the
callable which is wrapped by a `Benchmark` object (this is how most of the
benchmarks are done in `run_benchmarks.py`). A separate function like the
above is only needed if a series of operations are to be benchmarked
together.
* Add the new function to the runner
* The easiest way is to write the function inside `run_benchmarks.py`. A more
scalable way would be to write it in a separate module and `import` it into
`run_benchmarks.py`.
* Update `getBenchmarks()` in `run_benchmarks.py` to add an instance of a
`Benchmark` object that wraps the new benchmark function - see
`run_benchmarks.py` for more details and examples.
* If the new benchmark function requires special args that are passed in via
the command line, also update `parseCLI()` to add the new options.
```
from my_module import my_new_benchmark
def getBenchmarks(G, edgelist_gdf, args):
benches = [
Benchmark(name="my_new_benchmark",
func=my_new_benchmark,
args=(G, args.arg1)),
...
```
if the new algo is the only operation to be benchmarked, and is perhaps
just a new method in the cugraph module (like most other algos), then an
easier approach could just be:
```
def getBenchmarks(G, edgelist_gdf, args):
benches = [
Benchmark(name="my_new_benchmark",
func=cugraph.my_new_algo,
args=(G, args.arg1)),
...
```
### benchmark runner
The `run_benchmarks.py` script sets up a standard way to read in command-line
options (in most cases to be used to provide options to the underlying algos),
read the dataset specified to create an instance of a Graph class, and run the
specified algos on the Graph instance. This script is intended to be modified
to customize the setup needed for different benchmarks, but ideally only the
`getBenchmarks()` and sometimes the `parseCLI()` functions will change.
### The cuGraph Benchmark class
The `Benchmark` class is defined in `benchmark.py`, and it simply uses a series
of decorators to wrap the algo function call in timers and other calls to take
measurements to be included in the benchmark output.
The current metrics included are execution time (using the system monotonic
timer), GPU memory, and GPU utilization. Each metric is defined in
`benchmark.py`, where new metrics can be added and applied. The `Benchmark`
class simply defines the standard set of metrics that will be applied to each
algo, like so:
```
class Benchmark(WrappedFunc):
wrappers = [logExeTime, logGpuMetrics, printLastResult]
```
See `benchmark.py` for more details about the `WrappedFunc` base class.
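Note that the value reported for each metric is not a raw mean: `__computeValue` in `benchmark.py` first drops measurements that fall more than two standard deviations from the original mean. A minimal standalone sketch of that filtering (using the stdlib `statistics` module here, where `benchmark.py` itself uses numpy):

```python
from statistics import mean, pstdev

def filtered_mean(vals):
    # Drop values more than 2 population std-devs from the original
    # mean, then average what remains (mirrors
    # Benchmark.__computeValue in benchmark.py).
    avg = mean(vals)
    std = pstdev(vals)
    kept = [x for x in vals if (avg - 2 * std) <= x <= (avg + 2 * std)]
    return mean(kept)

# A single extreme run is excluded from the reported average:
print(filtered_mean([1.0] * 9 + [100.0]))  # 1.0
```

With `n=1` the single measurement is always reported unchanged; with larger `n`, a one-off spike (e.g. a first-run warm-up) is excluded from the average.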
rapidsai_public_repos/cugraph/python/utils/analyse_mtx_sparsity.py
# Copyright (c) 2018-2022, NVIDIA CORPORATION.
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# Input: <matrix.mtx>
# Output: <mmFile, rows, cols, nnz, sparsity (%), empty rows (%),
# sparsity the largest row (%),
# sparsity at Q1 (%), sparsity at med (%), sparsity at Q3 (%),
# Gini coeff>
# <mmFile>_row_lengths_histogram.png (please comment plt.*
# at the end of the script if not needed)
import numpy as np
import sys
from scipy.io import mmread
import scipy.sparse
import networkx as nx
import matplotlib.pyplot as plt
def gini(v):
# zero denotes total equality between rows,
# and one denote the dominance of a single row.
# v = np.sort(v) #values must be sorted
index = np.arange(1, v.shape[0] + 1) # index per v element
n = v.shape[0]
return (np.sum((2 * index - n - 1) * v)) / (n * np.sum(v))
def count_consecutive(v):
# accumulate each time v[i] = v[i-1]+1
return np.sum((v[1:] - v[:-1]) == 1)
def consecutive_entries_per_row(M):
# count the number of consecutive column indicies
# for each row of a saprse CSR matrix sparse CSR.
# not to be mixed with the longest sequence or the number of sequences
v = [0] * M.shape[0]
for i in range(M.shape[0]):
v[i] = count_consecutive(M.indices[M.indptr[i] : M.indptr[i + 1]])
return np.array(v)
# Command line arguments
argc = len(sys.argv)
if argc <= 1:
print("Error: usage is : python analyse_mtx_sparsity.py matrix.mtx")
sys.exit()
mmFile = sys.argv[1]
# Read
M_in = mmread(mmFile)
if M_in is None:
raise TypeError("Could not read the input")
M = scipy.sparse.csr_matrix(M_in)
if not M.has_sorted_indices:
M.sort_indices()
# M = M.transpose()
M.sort_indices()
if M is None:
raise TypeError("Could not convert to csr")
# General properties
row = M.shape[0]
col = M.shape[1]
nnz = M.nnz
real_nnz = M.count_nonzero()
nnz_per_row = M.getnnz(1)
# Distribution info
nnz_per_row.sort()
row_max = nnz_per_row.max()
quartile1 = nnz_per_row[round(row / 4)]
median = nnz_per_row[round(row / 2)]
quartile3 = nnz_per_row[round(3 * (row / 4))]
empty_rows = row - np.count_nonzero(nnz_per_row)
gini_coef = gini(nnz_per_row)
G = nx.from_scipy_sparse_matrix(M)
print(nx.number_connected_components(G))
# Extras:
# row_min = nnz_per_row.min()
# cepr = consecutive_entries_per_row(M)
# pairs = np.sum(cepr) # consecutive elements (pairs)
# max_pairs = cepr.max()
# print (CSV)
print(
str(mmFile)
+ ","
+ str(row)
+ ","
+ str(col)
+ ","
+ str(nnz)
+ ","
+ str(round((1.0 - (nnz / (row * col))) * 100.0, 2))
+ ","
+ str(round((empty_rows / row) * 100.0, 2))
+ ","
+ str(round((1.0 - row_max / col) * 100.0, 2))
+ ","
+ str(round((1.0 - quartile1 / col) * 100.0, 2))
+ ","
+ str(round((1.0 - median / col) * 100.0, 2))
+ ","
+ str(round((1.0 - quartile3 / col) * 100.0, 2))
+ ","
+ str(round(gini_coef, 2))
)
# Extras:
# str(round(((2*pairs)/nnz)*100,2)) )
# str(round(nnz/row,2)) +','+
# str(real_nnz) +','+
# str(empty_rows)+','+
# str(row_min) +','+
# str(row_max) +','+
# str(quartile1) +','+
# str(median) +','+
# str(quartile3) +','+
# str(max_pairs) +','+
# str(round((1.0-(real_nnz/(row*col)))* 100.0,2)) +','+
# str(round((1.0-row_min/col)*100.0,2)) +','+
# historgam
plt.xlabel("Row lengths")
plt.ylabel("Occurences")
plt.hist(nnz_per_row, log=True)
plt.savefig(str(mmFile) + "_transposed_row_lengths_histogram.png")
plt.clf()
rapidsai_public_repos/cugraph/python/utils/mtx2csv.py
# Copyright (c) 2018-2022, NVIDIA CORPORATION.
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import os
import time
from scipy.io import mmread
import argparse
parser = argparse.ArgumentParser(
description="Convert the sparsity pattern \
of a MatrixMarket file into a CSV file. \
    Each directed edge is explicitly stored, \
edges are unsorted, IDs are 0-based."
)
parser.add_argument(
"file", type=argparse.FileType(), help="Path to the MatrixMarket file"
)
parser.add_argument(
"--csv_separator_name",
type=str,
default="space",
choices=["space", "tab", "comma"],
help="csv separator can be : \
space, tab or comma. Default is space",
)
args = parser.parse_args()
# Read
print("Reading " + str(args.file.name) + "...")
t1 = time.time()
M = mmread(args.file.name).asfptype()
read_time = time.time() - t1
print("Time (s) : " + str(round(read_time, 3)))
print("V =" + str(M.shape[0]) + ", E = " + str(M.nnz))
if args.csv_separator_name == "space":
separator = " "
elif args.csv_separator_name == "tab":
    separator = "\t"
elif args.csv_separator_name == "comma":
separator = ","
else:
parser.error("supported csv_separator_name values are space, tab, comma")
# Write
print(
"Writing CSV file: "
+ os.path.splitext(os.path.basename(args.file.name))[0]
+ ".csv ..."
)
t1 = time.time()
csv_file = open(os.path.splitext(os.path.basename(args.file.name))[0] + ".csv", "w")
for item in range(M.getnnz()):
csv_file.write("{}{}{}\n".format(M.row[item], separator, M.col[item]))
csv_file.close()
write_time = time.time() - t1
print("Time (s) : " + str(round(write_time, 3)))
rapidsai_public_repos/cugraph/python/utils/npz2mtx.py
# Copyright (c) 2018-2020, NVIDIA CORPORATION.
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import os
import time
import scipy.io
import scipy.sparse
import argparse
parser = argparse.ArgumentParser(
description="Convert the sparsity pattern \
of a NPZ file into a MatrixMarket file. \
    Each directed edge is explicitly stored, \
edges are unsorted, IDs are 0-based."
)
parser.add_argument(
    "file", type=argparse.FileType(), help="Path to the NPZ file"
)
parser.add_argument(
"--symmetry",
type=str,
default="general",
choices=["general", "symmetric"],
help="Pattern, either general or symmetric",
)
args = parser.parse_args()
# Read
print("Reading " + str(args.file.name) + "...")
t1 = time.time()
M = scipy.sparse.load_npz(args.file.name).tocoo()
read_time = time.time() - t1
print("Time (s) : " + str(round(read_time, 3)))
print("V =" + str(M.shape[0]) + ", E = " + str(M.nnz))
# Write
print(
"Writing mtx file: "
+ os.path.splitext(os.path.basename(args.file.name))[0]
    + ".mtx ..."
)
t1 = time.time()
scipy.io.mmwrite(
os.path.splitext(os.path.basename(args.file.name))[0] + ".mtx",
M,
symmetry=args.symmetry,
)
write_time = time.time() - t1
print("Time (s) : " + str(round(write_time, 3)))
rapidsai_public_repos/cugraph/python/utils/run_benchmarks.py
# Copyright (c) 2018-2022, NVIDIA CORPORATION.
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import argparse
import sys
from collections import OrderedDict
from scipy.io import mmread
import cugraph
import cudf
from benchmark import Benchmark
###############################################################################
# Update this function to add new algos
###############################################################################
def getBenchmarks(G, edgelist_gdf, args):
"""Returns a dictionary of benchmark name to Benchmark objs. This dictionary
is used when processing the command-line args to this script so the script
    can run a specific benchmark by name.
The "edgelist_gdf" and "args" args are common to many benchmark runs, and
provided to this function to make it easy to pass to individual Benchmark
objs. The "args" arg in particular is a dictionary built from processing
the command line args to this script, and allow special parameters to be
added to the command line for use by specific benchmarks.
To add a new benchmark to run, simply add an instance of a Benchmark to the
"benches" list below.
* The Benchmark instance ctor takes 3 args:
* "name" - the name of the benchmark which will show up in reports,
output, etc.
* "func" - the function object which the benchmark will call. This can
be any callable.
* "args" - a tuple of args that are to be passed to the func callable.
A Benchmark object will, by default, run the callable with the args
provided as-is, and log the execution time and various GPU metrics. The
callable provided is written independent of the benchmarking code (this is
good for separation of concerns, bad if you need to do a custom
measurement).
If a new benchmark needs a special command-line parameter, add a new flag
to the command-line processing function and access it via the "args"
dictionary when passing args to the Benchmark ctor.
"""
benches = [
Benchmark(
name="cugraph.pagerank",
func=cugraph.pagerank,
args=(G, args.damping_factor, None, args.max_iter, args.tolerance),
),
Benchmark(name="cugraph.bfs", func=cugraph.bfs, args=(G, args.source, True)),
Benchmark(name="cugraph.sssp", func=cugraph.sssp, args=(G, args.source)),
Benchmark(name="cugraph.jaccard", func=cugraph.jaccard, args=(G,)),
Benchmark(name="cugraph.louvain", func=cugraph.louvain, args=(G,)),
Benchmark(
name="cugraph.weakly_connected_components",
func=cugraph.weakly_connected_components,
args=(G,),
),
Benchmark(name="cugraph.overlap", func=cugraph.overlap, args=(G,)),
Benchmark(name="cugraph.triangles", func=cugraph.triangles, args=(G,)),
Benchmark(
name="cugraph.spectralBalancedCutClustering",
func=cugraph.spectralBalancedCutClustering,
args=(G, 2),
),
Benchmark(
name="cugraph.spectralModularityMaximizationClustering",
func=cugraph.spectralModularityMaximizationClustering,
args=(G, 2),
),
Benchmark(
name="cugraph.renumber",
func=cugraph.renumber,
args=(edgelist_gdf["src"], edgelist_gdf["dst"]),
),
Benchmark(name="cugraph.graph.degree", func=G.degree),
Benchmark(name="cugraph.graph.degrees", func=G.degrees),
]
# Return a dictionary of Benchmark name to Benchmark obj mappings
return dict([(b.name, b) for b in benches])
########################################
# cugraph benchmarking utilities
def loadDataFile(file_name, csv_delimiter=" "):
file_type = file_name.split(".")[-1]
if file_type == "mtx":
edgelist_gdf = read_mtx(file_name)
elif file_type == "csv":
edgelist_gdf = read_csv(file_name, csv_delimiter)
else:
raise ValueError(
"bad file type: '%s', %s " % (file_type, file_name)
+ "must have a .csv or .mtx extension"
)
return edgelist_gdf
def createGraph(edgelist_gdf, createDiGraph, renumber, symmetrized):
if createDiGraph:
G = cugraph.DiGraph()
else:
G = cugraph.Graph(symmetrized=symmetrized)
G.from_cudf_edgelist(
edgelist_gdf,
source="src",
destination="dst",
edge_attr="val",
renumber=renumber,
)
return G
def computeAdjList(graphObj, transposed=False):
"""
Compute the adjacency list (or transposed adjacency list if transposed is
True) on the graph obj. This can be run as a benchmark itself, and is often
run separately so adj list computation isn't factored into an algo
benchmark.
"""
    if transposed:
        graphObj.view_transposed_adj_list()
    else:
        graphObj.view_adj_list()
def read_mtx(mtx_file):
M = mmread(mtx_file).asfptype()
gdf = cudf.DataFrame()
gdf["src"] = cudf.Series(M.row)
gdf["dst"] = cudf.Series(M.col)
if M.data is None:
gdf["val"] = 1.0
else:
gdf["val"] = cudf.Series(M.data)
return gdf
def read_csv(csv_file, delimiter):
cols = ["src", "dst", "val"]
dtypes = OrderedDict(
[
("src", "int32"),
("dst", "int32"),
("val", "float32"),
]
)
gdf = cudf.read_csv(
csv_file, names=cols, delimiter=delimiter, dtype=list(dtypes.values())
)
if gdf["src"].null_count > 0:
print("The reader failed to parse the input")
if gdf["dst"].null_count > 0:
print("The reader failed to parse the input")
# Assume an edge weight of 1.0 if dataset does not provide it
if gdf["val"].null_count > 0:
gdf["val"] = 1.0
return gdf
def parseCLI(argv):
parser = argparse.ArgumentParser(description="CuGraph benchmark script.")
parser.add_argument("file", type=str, help="Path to the input file")
parser.add_argument(
"--algo",
type=str,
action="append",
help='Algorithm to run, must be one of %s, or "ALL"'
% ", ".join(['"%s"' % k for k in getAllPossibleAlgos()]),
)
parser.add_argument(
"--damping_factor",
type=float,
default=0.85,
help="Damping factor for pagerank algo. Default is " "0.85",
)
parser.add_argument(
"--max_iter",
type=int,
default=100,
help="Maximum number of iteration for any iterative " "algo. Default is 100",
)
parser.add_argument(
"--tolerance",
type=float,
default=1e-5,
help="Tolerance for any approximation algo. Default " "is 1e-5",
)
parser.add_argument(
"--source", type=int, default=0, help="Source for bfs or sssp. Default is 0"
)
parser.add_argument(
"--compute_adj_list",
action="store_true",
help="Compute and benchmark the adjacency list "
"computation separately. Default is to NOT compute "
"the adjacency list and allow the algo to compute it "
"if necessary.",
)
parser.add_argument(
"--compute_transposed_adj_list",
action="store_true",
help="Compute and benchmark the transposed adjacency "
"list computation separately. Default is to NOT "
"compute the transposed adjacency list and allow the "
"algo to compute it if necessary.",
)
parser.add_argument(
"--delimiter",
type=str,
choices=["tab", "space"],
default="space",
help="Delimiter for csv files (default is space)",
)
parser.add_argument(
"--update_results_dir",
type=str,
help="Add (and compare) results to the dir specified",
)
parser.add_argument(
"--update_asv_dir",
type=str,
help="Add results to the specified ASV dir in ASV " "format",
)
parser.add_argument(
"--report_cuda_ver",
type=str,
default="",
help="The CUDA version to include in reports",
)
parser.add_argument(
"--report_python_ver",
type=str,
default="",
help="The Python version to include in reports",
)
parser.add_argument(
"--report_os_type",
type=str,
default="",
help="The OS type to include in reports",
)
parser.add_argument(
"--report_machine_name",
type=str,
default="",
help="The machine name to include in reports",
)
parser.add_argument(
"--digraph",
action="store_true",
help="Create a directed graph (default is undirected)",
)
return parser.parse_args(argv)
def getAllPossibleAlgos():
# Use the getBenchmarks() function to generate a list of benchmark names
# from the keys of the dictionary getBenchmarks() returns. Use a "nop"
# object since getBenchmarks() will try to access attrs for the args passed
# in, and there's no point in keeping track of the actual objects needed
# here since all this needs is the keys (not the values).
class Nop:
def __getattr__(self, attr):
return Nop()
def __getitem__(self, key):
return Nop()
def __call__(self, *args, **kwargs):
return Nop()
nop = Nop()
return list(getBenchmarks(nop, nop, nop).keys())
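The null-object ("Nop") trick above can be sketched standalone: every attribute access, item access, and call returns another Nop, so arbitrarily chained expressions are absorbed without raising.

```python
# Null-object ("Nop") sketch: each dunder returns a fresh Nop, so chained
# attribute lookups, indexing, and calls all succeed and yield more Nops.
class Nop:
    def __getattr__(self, attr):
        return Nop()

    def __getitem__(self, key):
        return Nop()

    def __call__(self, *args, **kwargs):
        return Nop()


nop = Nop()
# Mimics how getBenchmarks() touches its args: attribute lookups and calls chain.
result = (nop.damping_factor, nop["src"], nop.degree())
assert all(isinstance(r, Nop) for r in result)
```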
###############################################################################
if __name__ == "__main__":
args = parseCLI(sys.argv[1:])
# set algosToRun based on the command line args
allPossibleAlgos = getAllPossibleAlgos()
if args.algo and ("ALL" not in args.algo):
allowedAlgoNames = allPossibleAlgos + ["ALL"]
if (set(args.algo) - set(allowedAlgoNames)) != set():
raise ValueError(
"bad algo(s): '%s', must be in set of %s"
% (args.algo, ", ".join(['"%s"' % a for a in allowedAlgoNames]))
)
algosToRun = args.algo
else:
algosToRun = allPossibleAlgos
# Load the data file and create a Graph, treat these as benchmarks too. The
# Benchmark run() method returns the result of the function being
# benchmarked. In this case, "loadDataFile" and "createGraph" return a
# Dataframe and Graph object respectively, so save those and use them for
# future benchmarks.
csvDelim = {"space": " ", "tab": "\t"}[args.delimiter]
    edgelist_gdf = Benchmark(
        func=loadDataFile, name="cugraph.loadDataFile", args=(args.file, csvDelim)
    ).run()
renumber = True
symmetrized = True
    G = Benchmark(
        func=createGraph,
        name="cugraph.createGraph",
        args=(edgelist_gdf, args.digraph, renumber, symmetrized),
    ).run()
if G is None:
raise RuntimeError("could not create graph!")
# compute the adjacency list upfront as a separate benchmark. Special case:
# if pagerank is being benchmarked and the transposed adj matrix is
# requested, compute that too or instead. It's recommended that a pagerank
# benchmark be performed in a separate run since there's only one Graph obj
# and both an adj list and transposed adj list are probably not needed.
if args.compute_adj_list:
        Benchmark(
            func=computeAdjList, name="cugraph.graph.view_adj_list", args=(G, False)
        ).run()
if args.compute_transposed_adj_list and ("cugraph.pagerank" in algosToRun):
        Benchmark(
            func=computeAdjList,
            name="cugraph.graph.view_transposed_adj_list",
            args=(G, True),
        ).run()
print("-" * 80)
# get the individual benchmark functions and run them
benches = getBenchmarks(G, edgelist_gdf, args)
for algo in algosToRun:
benches[algo].run(n=3) # mean of 3 runs
# reports ########################
if args.update_results_dir:
raise NotImplementedError
if args.update_asv_dir:
# import this here since it pulls in a 3rd party package (asvdb) which
# may not be appreciated by non-ASV users.
from asv_report import cugraph_update_asv
# special case: do not include the full path to the datasetName, since
# the leading parts are redundant and take up UI space.
datasetName = "/".join(args.file.split("/")[-3:])
cugraph_update_asv(
asvDir=args.update_asv_dir,
datasetName=datasetName,
algoRunResults=Benchmark.resultsDict,
cudaVer=args.report_cuda_ver,
pythonVer=args.report_python_ver,
osType=args.report_os_type,
machineName=args.report_machine_name,
)
| 0 |
rapidsai_public_repos/cugraph/python | rapidsai_public_repos/cugraph/python/utils/utils.py | # Copyright (c) 2018-2022, NVIDIA CORPORATION.
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import subprocess
def getRepoInfo():
out = getCommandOutput("git remote -v")
repo = out.split("\n")[-1].split()[1]
branch = getCommandOutput("git rev-parse --abbrev-ref HEAD")
return (repo, branch)
def getCommandOutput(cmd):
result = subprocess.run(
cmd, stdout=subprocess.PIPE, stderr=subprocess.PIPE, shell=True
)
stdout = result.stdout.decode().strip()
if result.returncode == 0:
return stdout
stderr = result.stderr.decode().strip()
raise RuntimeError(
"Problem running '%s' (STDOUT: '%s' STDERR: '%s')" % (cmd, stdout, stderr)
)
def getCommitInfo():
commitHash = getCommandOutput("git rev-parse HEAD")
commitTime = getCommandOutput("git log -n1 --pretty=%%ct %s" % commitHash)
    return (commitHash, str(int(commitTime) * 1000))
def getCudaVer():
# FIXME
return "10.0"
def getGPUModel():
# FIXME
return "some GPU"
| 0 |
rapidsai_public_repos/cugraph/python | rapidsai_public_repos/cugraph/python/pylibcugraph/pyproject.toml | # Copyright (c) 2022, NVIDIA CORPORATION.
[build-system]
requires = [
"cmake>=3.26.4",
"cython>=3.0.0",
"ninja",
"pylibraft==23.12.*",
"rmm==23.12.*",
"scikit-build>=0.13.1",
"setuptools>=61.0.0",
"wheel",
] # This list was generated by `rapids-dependency-file-generator`. To make changes, edit ../../dependencies.yaml and run `rapids-dependency-file-generator`.
build-backend = "setuptools.build_meta"
[tool.pytest.ini_options]
testpaths = ["pylibcugraph/tests"]
[project]
name = "pylibcugraph"
dynamic = ["version"]
description = "pylibcugraph - Python bindings for the libcugraph cuGraph C/C++/CUDA library"
readme = { file = "README.md", content-type = "text/markdown" }
authors = [
{ name = "NVIDIA Corporation" },
]
license = { text = "Apache 2.0" }
requires-python = ">=3.9"
dependencies = [
"pylibraft==23.12.*",
"rmm==23.12.*",
] # This list was generated by `rapids-dependency-file-generator`. To make changes, edit ../../dependencies.yaml and run `rapids-dependency-file-generator`.
classifiers = [
"Intended Audience :: Developers",
"Programming Language :: Python",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
]
[project.optional-dependencies]
test = [
"cudf==23.12.*",
"numpy>=1.21",
"pandas",
"pytest",
"pytest-benchmark",
"pytest-cov",
"pytest-xdist",
"scipy",
] # This list was generated by `rapids-dependency-file-generator`. To make changes, edit ../../dependencies.yaml and run `rapids-dependency-file-generator`.
[project.urls]
Homepage = "https://github.com/rapidsai/cugraph"
Documentation = "https://docs.rapids.ai/api/cugraph/stable/"
[tool.setuptools]
license-files = ["LICENSE"]
[tool.setuptools.dynamic]
version = {file = "pylibcugraph/VERSION"}
| 0 |
rapidsai_public_repos/cugraph/python | rapidsai_public_repos/cugraph/python/pylibcugraph/CMakeLists.txt | # =============================================================================
# Copyright (c) 2022-2023, NVIDIA CORPORATION.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except
# in compliance with the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software distributed under the License
# is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express
# or implied. See the License for the specific language governing permissions and limitations under
# the License.
# =============================================================================
cmake_minimum_required(VERSION 3.26.4 FATAL_ERROR)
set(pylibcugraph_version 23.12.00)
include(../../fetch_rapids.cmake)
# We always need CUDA for cugraph because the raft dependency brings in a
# header-only cuco dependency that enables CUDA unconditionally.
include(rapids-cuda)
rapids_cuda_init_architectures(pylibcugraph-python)
project(
pylibcugraph-python
VERSION ${pylibcugraph_version}
LANGUAGES # TODO: Building Python extension modules via the python_extension_module requires the C
# language to be enabled here. The test project that is built in scikit-build to verify
# various linking options for the python library is hardcoded to build with C, so until
# that is fixed we need to keep C.
C CXX CUDA
)
################################################################################
# - User Options --------------------------------------------------------------
option(FIND_CUGRAPH_CPP "Search for existing CUGRAPH C++ installations before defaulting to local files"
OFF
)
option(CUGRAPH_BUILD_WHEELS "Whether we're building a wheel for pypi" OFF)
option(USE_CUGRAPH_OPS "Enable all functions that call cugraph-ops" ON)
if(NOT USE_CUGRAPH_OPS)
message(STATUS "Disabling libcugraph functions that reference cugraph-ops")
add_compile_definitions(NO_CUGRAPH_OPS)
endif()
# If the user requested it we attempt to find CUGRAPH.
if(FIND_CUGRAPH_CPP)
find_package(cugraph ${pylibcugraph_version} REQUIRED)
else()
set(cugraph_FOUND OFF)
endif()
include(rapids-cython)
if (NOT cugraph_FOUND)
set(BUILD_TESTS OFF)
set(BUILD_CUGRAPH_MG_TESTS OFF)
set(BUILD_CUGRAPH_OPS_CPP_TESTS OFF)
set(_exclude_from_all "")
if(CUGRAPH_BUILD_WHEELS)
# Statically link dependencies if building wheels
set(CUDA_STATIC_RUNTIME ON)
set(USE_RAFT_STATIC ON)
set(CUGRAPH_COMPILE_RAFT_LIB ON)
set(CUGRAPH_USE_CUGRAPH_OPS_STATIC ON)
set(CUGRAPH_EXCLUDE_CUGRAPH_OPS_FROM_ALL ON)
set(ALLOW_CLONE_CUGRAPH_OPS ON)
    # Don't install the cugraph C++ targets into wheels
set(_exclude_from_all EXCLUDE_FROM_ALL)
endif()
add_subdirectory(../../cpp cugraph-cpp ${_exclude_from_all})
set(cython_lib_dir pylibcugraph)
install(TARGETS cugraph DESTINATION ${cython_lib_dir})
install(TARGETS cugraph_c DESTINATION ${cython_lib_dir})
endif()
rapids_cython_init()
add_subdirectory(pylibcugraph)
if(DEFINED cython_lib_dir)
rapids_cython_add_rpath_entries(TARGET cugraph PATHS "${cython_lib_dir}")
endif()
| 0 |
rapidsai_public_repos/cugraph/python | rapidsai_public_repos/cugraph/python/pylibcugraph/README.md | <h1 align="center"; style="font-style: italic";>
<br>
<img src="img/cugraph_logo_2.png" alt="cuGraph" width="500">
</h1>
<div align="center">
<a href="https://github.com/rapidsai/cugraph/blob/main/LICENSE">
<img src="https://img.shields.io/badge/License-Apache%202.0-blue.svg" alt="License"></a>
<img alt="GitHub tag (latest by date)" src="https://img.shields.io/github/v/tag/rapidsai/cugraph">
<a href="https://github.com/rapidsai/cugraph/stargazers">
<img src="https://img.shields.io/github/stars/rapidsai/cugraph"></a>
<img alt="Conda" src="https://img.shields.io/conda/dn/rapidsai/cugraph">
<img alt="GitHub last commit" src="https://img.shields.io/github/last-commit/rapidsai/cugraph">
<img alt="Conda" src="https://img.shields.io/conda/pn/rapidsai/cugraph" />
<a href="https://rapids.ai/"><img src="img/rapids_logo.png" alt="RAPIDS" width="125"></a>
</div>
<br>
[RAPIDS](https://rapids.ai) cuGraph is a monorepo containing a collection of packages focused on GPU-accelerated graph analytics, including support for property graphs, remote (graph as a service) operations, and graph neural networks (GNNs). cuGraph supports the creation and manipulation of graphs followed by the execution of scalable, fast graph algorithms.
<div align="center">
[Getting cuGraph](./docs/cugraph/source/installation/getting_cugraph.md) *
[Graph Algorithms](./docs/cugraph/source/graph_support/algorithms.md) *
[Graph Service](./readme_pages/cugraph_service.md) *
[Property Graph](./readme_pages/property_graph.md) *
[GNN Support](./readme_pages/gnn_support.md)
</div>
-----
## News
___NEW!___ _[nx-cugraph](./python/nx-cugraph/README.md)_, a NetworkX backend that provides GPU acceleration to NetworkX with zero code change.
```shell
> pip install nx-cugraph-cu11 --extra-index-url https://pypi.nvidia.com
> export NETWORKX_AUTOMATIC_BACKENDS=cugraph
```
That's it. NetworkX now leverages cuGraph for accelerated graph algorithms.
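For example, an unmodified NetworkX workflow (here on the bundled karate-club graph) is exactly what gets accelerated — only the environment variable above changes where it runs:

```python
import networkx as nx

# Plain NetworkX code; with NETWORKX_AUTOMATIC_BACKENDS=cugraph set, calls
# like pagerank are dispatched to the GPU-backed nx-cugraph implementation.
G = nx.karate_club_graph()
scores = nx.pagerank(G)
assert len(scores) == G.number_of_nodes()
```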
-----
## Table of contents
- Installation
- [Getting cuGraph Packages](./docs/cugraph/source/installation/getting_cugraph.md)
- [Building from Source](./docs/cugraph/source/installation/source_build.md)
- [Contributing to cuGraph](./readme_pages/CONTRIBUTING.md)
- General
- [Latest News](./readme_pages/news.md)
- [Current list of algorithms](./docs/cugraph/source/graph_support/algorithms.md)
- [Blogs and Presentation](./docs/cugraph/source/tutorials/cugraph_blogs.rst)
- [Performance](./readme_pages/performance/performance.md)
- Packages
- [cuGraph Python](./readme_pages/cugraph_python.md)
- [Property Graph](./readme_pages/property_graph.md)
- [External Data Types](./readme_pages/data_types.md)
- [pylibcugraph](./readme_pages/pylibcugraph.md)
- [libcugraph (C/C++/CUDA)](./readme_pages/libcugraph.md)
- [nx-cugraph](./python/nx-cugraph/README.md)
- [cugraph-service](./readme_pages/cugraph_service.md)
- [cugraph-dgl](./readme_pages/cugraph_dgl.md)
- [cugraph-ops](./readme_pages/cugraph_ops.md)
- API Docs
- Python
- [Python Nightly](https://docs.rapids.ai/api/cugraph/nightly/)
- [Python Stable](https://docs.rapids.ai/api/cugraph/stable/)
- C++
- [C++ Nightly](https://docs.rapids.ai/api/libcugraph/nightly/)
- [C++ Stable](https://docs.rapids.ai/api/libcugraph/stable/)
- References
- [RAPIDS](https://rapids.ai/)
- [ARROW](https://arrow.apache.org/)
- [DASK](https://www.dask.org/)
<br><br>
-----
<img src="img/Stack2.png" alt="Stack" width="800">
[RAPIDS](https://rapids.ai) cuGraph is a collection of GPU-accelerated graph algorithms and services. At the Python layer, cuGraph operates on [GPU DataFrames](https://github.com/rapidsai/cudf), thereby allowing for seamless passing of data between ETL tasks in [cuDF](https://github.com/rapidsai/cudf) and machine learning tasks in [cuML](https://github.com/rapidsai/cuml). Data scientists familiar with Python will quickly pick up how cuGraph integrates with the Pandas-like API of cuDF. Likewise, users familiar with NetworkX will quickly recognize the NetworkX-like API provided in cuGraph, with the goal of allowing existing code to be ported with minimal effort into RAPIDS. To simplify integration, cuGraph also supports data found in [Pandas DataFrame](https://pandas.pydata.org/), [NetworkX Graph Objects](https://networkx.org/) and several other formats.
While the high-level cugraph python API provides an easy-to-use and familiar interface for data scientists that's consistent with other RAPIDS libraries in their workflow, some use cases require access to lower-level graph theory concepts. For these users, we provide an additional Python API called pylibcugraph, intended for applications that require a tighter integration with cuGraph at the Python layer with fewer dependencies. Users familiar with C/C++/CUDA and graph structures can access libcugraph and libcugraph_c for low level integration outside of python.
**NOTE:** For the latest stable [README.md](https://github.com/rapidsai/cugraph/blob/main/README.md) ensure you are on the latest branch.
As an example, the following Python snippet loads graph data and computes PageRank:
```python
import cudf
import cugraph
# read data into a cuDF DataFrame using read_csv
gdf = cudf.read_csv("graph_data.csv", names=["src", "dst"], dtype=["int32", "int32"])
# We now have data as edge pairs
# create a Graph using the source (src) and destination (dst) vertex pairs
G = cugraph.Graph()
G.from_cudf_edgelist(gdf, source='src', destination='dst')
# Let's now get the PageRank score of each vertex by calling cugraph.pagerank
df_page = cugraph.pagerank(G)
# Let's look at the top 10 PageRank Score
df_page.sort_values('pagerank', ascending=False).head(10)
```
</br>
[Why cuGraph does not support Method Cascading](https://docs.rapids.ai/api/cugraph/nightly/basics/cugraph_cascading.html)
------
# Projects that use cuGraph
(alphabetical order)
* ArangoDB - a free and open-source native multi-model database system - https://www.arangodb.com/
* CuPy - "NumPy/SciPy-compatible Array Library for GPU-accelerated Computing with Python" - https://cupy.dev/
* Memgraph - In-memory Graph database - https://memgraph.com/
* NetworkX (via [nx-cugraph](./python/nx-cugraph/README.md) backend) - an extremely popular, free and open-source package for the creation, manipulation, and study of the structure, dynamics, and functions of complex networks - https://networkx.org/
* PyGraphistry - free and open-source GPU graph ETL, AI, and visualization, including native RAPIDS & cuGraph support - http://github.com/graphistry/pygraphistry
* ScanPy - a scalable toolkit for analyzing single-cell gene expression data - https://scanpy.readthedocs.io/en/stable/
(please post an issue if you have a project to add to this list)
------
<br>
## <div align="center"><img src="img/rapids_logo.png" width="265px"/></div> Open GPU Data Science <a name="rapids"></a>
The RAPIDS suite of open source software libraries aims to enable execution of end-to-end data science and analytics pipelines entirely on GPUs. It relies on NVIDIA® CUDA® primitives for low-level compute optimization but exposing that GPU parallelism and high-bandwidth memory speed through user-friendly Python interfaces.
<p align="center"><img src="img/rapids_arrow.png" width="50%"/></p>
For more project details, see [rapids.ai](https://rapids.ai/).
<br><br>
### Apache Arrow on GPU <a name="arrow"></a>
The GPU version of [Apache Arrow](https://arrow.apache.org/) is a common API that enables efficient interchange of tabular data between processes running on the GPU. End-to-end computation on the GPU avoids unnecessary copying and converting of data off the GPU, reducing compute time and cost for high-performance analytics common in artificial intelligence workloads. As the name implies, cuDF uses the Apache Arrow columnar data format on the GPU. Currently, a subset of the features in Apache Arrow are supported.
| 0 |
rapidsai_public_repos/cugraph/python | rapidsai_public_repos/cugraph/python/pylibcugraph/setup.py | # Copyright (c) 2018-2023, NVIDIA CORPORATION.
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import os
from setuptools import find_packages, Command
from skbuild import setup
class CleanCommand(Command):
"""Custom clean command to tidy up the project root."""
user_options = [
("all", None, None),
]
def initialize_options(self):
self.all = None
def finalize_options(self):
pass
def run(self):
setupFileDir = os.path.dirname(os.path.abspath(__file__))
os.chdir(setupFileDir)
os.system("rm -rf build")
os.system("rm -rf dist")
os.system("rm -rf dask-worker-space")
os.system('find . -name "__pycache__" -type d -exec rm -rf {} +')
os.system("rm -rf *.egg-info")
os.system('find . -name "*.cpp" -type f -delete')
os.system('find . -name "*.cpython*.so" -type f -delete')
os.system("rm -rf _skbuild")
def exclude_libcxx_symlink(cmake_manifest):
return list(
filter(
lambda name: not ("include/rapids/libcxx/include" in name), cmake_manifest
)
)
packages = find_packages(include=["pylibcugraph*"])
setup(
packages=packages,
package_data={key: ["VERSION", "*.pxd"] for key in packages},
cmake_process_manifest_hook=exclude_libcxx_symlink,
cmdclass={"clean": CleanCommand},
zip_safe=False,
)
| 0 |
rapidsai_public_repos/cugraph/python | rapidsai_public_repos/cugraph/python/pylibcugraph/pytest.ini | # Copyright (c) 2022, NVIDIA CORPORATION.
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
[pytest]
markers =
cugraph_ops: Tests requiring cugraph-ops
| 0 |
rapidsai_public_repos/cugraph/python | rapidsai_public_repos/cugraph/python/pylibcugraph/LICENSE | Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS
APPENDIX: How to apply the Apache License to your work.
To apply the Apache License to your work, attach the following
boilerplate notice, with the fields enclosed by brackets "{}"
replaced with your own identifying information. (Don't include
the brackets!) The text should be enclosed in the appropriate
comment syntax for the file format. We also recommend that a
file or class name and description of purpose be included on the
same "printed page" as the copyright notice for easier
identification within third-party archives.
Copyright 2018 NVIDIA CORPORATION
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
| 0 |
rapidsai_public_repos/cugraph/python/pylibcugraph | rapidsai_public_repos/cugraph/python/pylibcugraph/pylibcugraph/k_truss_subgraph.pyx | # Copyright (c) 2023, NVIDIA CORPORATION.
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# Have cython use python 3 syntax
# cython: language_level = 3
from pylibcugraph._cugraph_c.resource_handle cimport (
bool_t,
cugraph_resource_handle_t,
)
from pylibcugraph._cugraph_c.error cimport (
cugraph_error_code_t,
cugraph_error_t,
)
from pylibcugraph._cugraph_c.array cimport (
cugraph_type_erased_device_array_view_t,
)
from pylibcugraph._cugraph_c.graph cimport (
cugraph_graph_t,
)
from pylibcugraph._cugraph_c.graph_functions cimport (
cugraph_induced_subgraph_result_t,
cugraph_induced_subgraph_get_sources,
cugraph_induced_subgraph_get_destinations,
cugraph_induced_subgraph_get_edge_weights,
cugraph_induced_subgraph_get_subgraph_offsets,
cugraph_induced_subgraph_result_free,
)
from pylibcugraph._cugraph_c.community_algorithms cimport (
cugraph_k_truss_subgraph,
)
from pylibcugraph.resource_handle cimport (
ResourceHandle,
)
from pylibcugraph.graphs cimport (
_GPUGraph,
)
from pylibcugraph.utils cimport (
assert_success,
copy_to_cupy_array,
)
def k_truss_subgraph(ResourceHandle resource_handle,
_GPUGraph graph,
size_t k,
bool_t do_expensive_check):
"""
Extract k truss of a graph for a specific k.
Parameters
----------
resource_handle : ResourceHandle
Handle to the underlying device resources needed for referencing data
and running algorithms.
graph : SGGraph
The input graph.
k: size_t
The desired k to be used for extracting the k-truss subgraph.
do_expensive_check : bool_t
If True, performs more extensive tests on the inputs to ensure
validity, at the expense of increased run time.
Returns
-------
A tuple of device arrays containing the sources, destinations,
edge_weights and edge_offsets.
Examples
--------
>>> import pylibcugraph, cupy, numpy
>>> srcs = cupy.asarray([0, 1, 1, 3, 1, 4, 2, 0, 2, 1, 2,
... 3, 3, 4, 3, 5, 4, 5], dtype=numpy.int32)
>>> dsts = cupy.asarray([1, 0, 3, 1, 4, 1, 0, 2, 1, 2, 3,
... 2, 4, 3, 5, 3, 5, 4], dtype=numpy.int32)
>>> weights = cupy.asarray(
... [0.1, 0.1, 2.1, 2.1, 1.1, 1.1, 7.2, 7.2, 2.1, 2.1,
... 1.1, 1.1, 7.2, 7.2, 3.2, 3.2, 6.1, 6.1]
... ,dtype=numpy.float32)
>>> k = 2
>>> resource_handle = pylibcugraph.ResourceHandle()
>>> graph_props = pylibcugraph.GraphProperties(
... is_symmetric=True, is_multigraph=False)
>>> G = pylibcugraph.SGGraph(
... resource_handle, graph_props, srcs, dsts, weights,
... store_transposed=False, renumber=False, do_expensive_check=False)
>>> (sources, destinations, edge_weights, subgraph_offsets) = \
... pylibcugraph.k_truss_subgraph(resource_handle, G, k, False)
>>> sources
[0 0 1 1 1 1 2 2 2 3 3 3 3 4 4 4 5 5]
>>> destinations
[1 2 0 2 3 4 0 1 3 1 2 4 5 1 3 5 3 4]
>>> edge_weights
[0.1 7.2 0.1 2.1 2.1 1.1 7.2 2.1 1.1 2.1 1.1 7.2 3.2 1.1 7.2 6.1 3.2 6.1]
>>> subgraph_offsets
[0 18]
"""
cdef cugraph_resource_handle_t* c_resource_handle_ptr = \
resource_handle.c_resource_handle_ptr
cdef cugraph_graph_t* c_graph_ptr = graph.c_graph_ptr
cdef cugraph_induced_subgraph_result_t* result_ptr
cdef cugraph_error_code_t error_code
cdef cugraph_error_t* error_ptr
error_code = cugraph_k_truss_subgraph(c_resource_handle_ptr,
c_graph_ptr,
k,
do_expensive_check,
&result_ptr,
&error_ptr)
assert_success(error_code, error_ptr, "cugraph_k_truss_subgraph")
# Extract individual device array pointers from result and copy to cupy
# arrays for returning.
cdef cugraph_type_erased_device_array_view_t* sources_ptr = \
cugraph_induced_subgraph_get_sources(result_ptr)
cdef cugraph_type_erased_device_array_view_t* destinations_ptr = \
cugraph_induced_subgraph_get_destinations(result_ptr)
cdef cugraph_type_erased_device_array_view_t* edge_weights_ptr = \
cugraph_induced_subgraph_get_edge_weights(result_ptr)
cdef cugraph_type_erased_device_array_view_t* subgraph_offsets_ptr = \
cugraph_induced_subgraph_get_subgraph_offsets(result_ptr)
# FIXME: Get ownership of the result data instead of performing a copy
# for performance improvement
cupy_sources = copy_to_cupy_array(
c_resource_handle_ptr, sources_ptr)
cupy_destinations = copy_to_cupy_array(
c_resource_handle_ptr, destinations_ptr)
if edge_weights_ptr is not NULL:
cupy_edge_weights = copy_to_cupy_array(
c_resource_handle_ptr, edge_weights_ptr)
else:
cupy_edge_weights = None
# FIXME: Should we keep the offsets array or just drop it from the final
# solution?
cupy_subgraph_offsets = copy_to_cupy_array(
c_resource_handle_ptr, subgraph_offsets_ptr)
# Free pointer
cugraph_induced_subgraph_result_free(result_ptr)
return (cupy_sources, cupy_destinations, cupy_edge_weights, cupy_subgraph_offsets)
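The k-truss property this wrapper extracts — every retained edge must participate in at least k-2 triangles — can be sketched in plain Python for intuition. This is a CPU toy, not the libcugraph implementation, and the `k_truss` helper name is hypothetical:

```python
def k_truss(edge_list, k):
    # Iteratively drop every edge that closes fewer than k-2 triangles;
    # an edge (u, v) closes one triangle per common neighbor of u and v.
    edges = {frozenset(e) for e in edge_list}
    changed = True
    while changed:
        adj = {}
        for e in edges:
            u, v = tuple(e)
            adj.setdefault(u, set()).add(v)
            adj.setdefault(v, set()).add(u)
        keep = {e for e in edges if len(adj[min(e)] & adj[max(e)]) >= k - 2}
        changed = keep != edges
        edges = keep
    return edges

# Triangle 0-1-2 with a pendant edge 2-3: the 3-truss keeps the triangle
# (each edge closes one triangle) and removes the pendant edge.
result = k_truss([(0, 1), (1, 2), (0, 2), (2, 3)], k=3)
```

Note the loop must recompute adjacency after each pass, since removing one edge can destroy triangles that justified keeping another.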
| 0 |
rapidsai_public_repos/cugraph/python/pylibcugraph | rapidsai_public_repos/cugraph/python/pylibcugraph/pylibcugraph/spectral_modularity_maximization.pyx | # Copyright (c) 2023, NVIDIA CORPORATION.
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# Have cython use python 3 syntax
# cython: language_level = 3
from pylibcugraph._cugraph_c.resource_handle cimport (
bool_t,
cugraph_resource_handle_t,
)
from pylibcugraph._cugraph_c.error cimport (
cugraph_error_code_t,
cugraph_error_t,
)
from pylibcugraph._cugraph_c.array cimport (
cugraph_type_erased_device_array_view_t,
)
from pylibcugraph._cugraph_c.graph cimport (
cugraph_graph_t,
)
from pylibcugraph._cugraph_c.community_algorithms cimport (
cugraph_clustering_result_t,
cugraph_spectral_modularity_maximization,
cugraph_clustering_result_get_vertices,
cugraph_clustering_result_get_clusters,
cugraph_clustering_result_free,
)
from pylibcugraph.resource_handle cimport (
ResourceHandle,
)
from pylibcugraph.graphs cimport (
_GPUGraph,
)
from pylibcugraph.utils cimport (
assert_success,
copy_to_cupy_array,
)
def spectral_modularity_maximization(ResourceHandle resource_handle,
_GPUGraph graph,
num_clusters,
num_eigen_vects,
evs_tolerance,
evs_max_iter,
kmean_tolerance,
kmean_max_iter,
bool_t do_expensive_check
):
"""
Compute a clustering/partitioning of the given graph using the spectral
modularity maximization method.
Parameters
----------
resource_handle : ResourceHandle
Handle to the underlying device resources needed for referencing data
and running algorithms.
graph : SGGraph
The input graph.
num_clusters : size_t
Specifies the number of clusters to find, must be greater than 1
num_eigen_vects : size_t
Specifies the number of eigenvectors to use. Must be less than or equal
to num_clusters.
evs_tolerance: double
Specifies the tolerance to use in the eigensolver.
evs_max_iter: size_t
Specifies the maximum number of iterations for the eigensolver.
kmean_tolerance: double
Specifies the tolerance to use in the k-means solver.
kmean_max_iter: size_t
Specifies the maximum number of iterations for the k-means solver.
do_expensive_check : bool_t
If True, performs more extensive tests on the inputs to ensure
validity, at the expense of increased run time.
Returns
-------
A tuple containing the clustering vertices, clusters
Examples
--------
>>> import pylibcugraph, cupy, numpy
>>> srcs = cupy.asarray([0, 1, 2], dtype=numpy.int32)
>>> dsts = cupy.asarray([1, 2, 0], dtype=numpy.int32)
>>> weights = cupy.asarray([1.0, 1.0, 1.0], dtype=numpy.float32)
>>> resource_handle = pylibcugraph.ResourceHandle()
>>> graph_props = pylibcugraph.GraphProperties(
... is_symmetric=True, is_multigraph=False)
>>> G = pylibcugraph.SGGraph(
... resource_handle, graph_props, srcs, dsts, weights,
... store_transposed=True, renumber=False, do_expensive_check=False)
>>> (vertices, clusters) = pylibcugraph.spectral_modularity_maximization(
... resource_handle, G, num_clusters=5, num_eigen_vects=2,
... evs_tolerance=0.00001, evs_max_iter=100, kmean_tolerance=0.00001,
... kmean_max_iter=100, do_expensive_check=False)
# FIXME: Fix docstring result.
>>> vertices
############
>>> clusters
############
"""
cdef cugraph_resource_handle_t* c_resource_handle_ptr = \
resource_handle.c_resource_handle_ptr
cdef cugraph_graph_t* c_graph_ptr = graph.c_graph_ptr
cdef cugraph_clustering_result_t* result_ptr
cdef cugraph_error_code_t error_code
cdef cugraph_error_t* error_ptr
error_code = cugraph_spectral_modularity_maximization(c_resource_handle_ptr,
c_graph_ptr,
num_clusters,
num_eigen_vects,
evs_tolerance,
evs_max_iter,
kmean_tolerance,
kmean_max_iter,
do_expensive_check,
&result_ptr,
&error_ptr)
assert_success(error_code, error_ptr, "cugraph_spectral_modularity_maximization")
# Extract individual device array pointers from result and copy to cupy
# arrays for returning.
cdef cugraph_type_erased_device_array_view_t* vertices_ptr = \
cugraph_clustering_result_get_vertices(result_ptr)
cdef cugraph_type_erased_device_array_view_t* clusters_ptr = \
cugraph_clustering_result_get_clusters(result_ptr)
cupy_vertices = copy_to_cupy_array(c_resource_handle_ptr, vertices_ptr)
cupy_clusters = copy_to_cupy_array(c_resource_handle_ptr, clusters_ptr)
cugraph_clustering_result_free(result_ptr)
return (cupy_vertices, cupy_clusters)
| 0 |
rapidsai_public_repos/cugraph/python/pylibcugraph | rapidsai_public_repos/cugraph/python/pylibcugraph/pylibcugraph/betweenness_centrality.pyx | # Copyright (c) 2023, NVIDIA CORPORATION.
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# Have cython use python 3 syntax
# cython: language_level = 3
from libc.stdint cimport uintptr_t
from pylibcugraph._cugraph_c.resource_handle cimport (
bool_t,
cugraph_resource_handle_t,
)
from pylibcugraph._cugraph_c.error cimport (
cugraph_error_code_t,
cugraph_error_t,
)
from pylibcugraph._cugraph_c.array cimport (
cugraph_type_erased_device_array_view_t,
cugraph_type_erased_device_array_view_free,
)
from pylibcugraph._cugraph_c.graph cimport (
cugraph_graph_t,
)
from pylibcugraph._cugraph_c.centrality_algorithms cimport (
cugraph_centrality_result_t,
cugraph_betweenness_centrality,
cugraph_centrality_result_get_vertices,
cugraph_centrality_result_get_values,
cugraph_centrality_result_free,
)
from pylibcugraph.resource_handle cimport (
ResourceHandle,
)
from pylibcugraph.graphs cimport (
_GPUGraph,
)
from pylibcugraph.utils cimport (
assert_success,
copy_to_cupy_array,
assert_CAI_type,
create_cugraph_type_erased_device_array_view_from_py_obj,
)
from pylibcugraph.select_random_vertices import (
select_random_vertices
)
def betweenness_centrality(ResourceHandle resource_handle,
_GPUGraph graph,
k,
random_state,
bool_t normalized,
bool_t include_endpoints,
bool_t do_expensive_check):
"""
Compute the betweenness centrality for all vertices of the graph G.
Betweenness centrality is a measure of the number of shortest paths that
pass through a vertex. A vertex with a high betweenness centrality score
has more paths passing through it and is therefore believed to be more
important.
Parameters
----------
resource_handle : ResourceHandle
Handle to the underlying device resources needed for referencing data
and running algorithms.
graph : SGGraph or MGGraph
The input graph, for either Single or Multi-GPU operations.
k : int or device array type or None, optional (default=None)
If k is not None, use k node samples to estimate betweenness. Higher
values give better approximation. If k is a device array type,
use the content of the list for estimation: the list should contain
vertex identifiers. If k is None (the default), all the vertices are
used to estimate betweenness. Vertices obtained through sampling or
defined as a list will be used as sources for traversals inside the
algorithm.
random_state : int, optional (default=None)
If k is specified and is an integer, use random_state to initialize the
random number generator that samples the k source vertices.
Using None defaults to a hash of process id, time, and hostname.
If k is None or a device array of vertices, the random_state parameter
is ignored.
normalized : bool_t
Normalization will ensure that values are in [0, 1].
include_endpoints : bool_t
If true, include the endpoints in the shortest path counts.
do_expensive_check : bool_t
A flag to run expensive checks for input arguments if True.
Returns
-------
A tuple of device arrays containing the vertices and their corresponding
betweenness centrality values.
Examples
--------
>>> import pylibcugraph, cupy, numpy
>>> srcs = cupy.asarray([0, 1, 2], dtype=numpy.int32)
>>> dsts = cupy.asarray([1, 2, 3], dtype=numpy.int32)
>>> weights = cupy.asarray([1.0, 1.0, 1.0], dtype=numpy.float32)
>>> resource_handle = pylibcugraph.ResourceHandle()
>>> graph_props = pylibcugraph.GraphProperties(
... is_symmetric=False, is_multigraph=False)
>>> G = pylibcugraph.SGGraph(
... resource_handle, graph_props, srcs, dsts, weights,
... store_transposed=False, renumber=False, do_expensive_check=False)
>>> (vertices, values) = pylibcugraph.betweenness_centrality(
... resource_handle, G, k=None, random_state=None, normalized=True,
... include_endpoints=False, do_expensive_check=False)
"""
if isinstance(k, int):
# randomly select vertices
#'select_random_vertices' internally creates a
# 'pylibcugraph.random.CuGraphRandomState'
vertex_list = select_random_vertices(
resource_handle, graph, random_state, k)
else:
vertex_list = k
cdef cugraph_resource_handle_t* c_resource_handle_ptr = \
resource_handle.c_resource_handle_ptr
cdef cugraph_graph_t* c_graph_ptr = graph.c_graph_ptr
cdef cugraph_centrality_result_t* result_ptr
cdef cugraph_error_code_t error_code
cdef cugraph_error_t* error_ptr
cdef cugraph_type_erased_device_array_view_t* \
vertex_list_view_ptr = \
create_cugraph_type_erased_device_array_view_from_py_obj(
vertex_list)
error_code = cugraph_betweenness_centrality(c_resource_handle_ptr,
c_graph_ptr,
vertex_list_view_ptr,
normalized,
include_endpoints,
do_expensive_check,
&result_ptr,
&error_ptr)
assert_success(error_code, error_ptr, "cugraph_betweenness_centrality")
# Extract individual device array pointers from result and copy to cupy
# arrays for returning.
cdef cugraph_type_erased_device_array_view_t* vertices_ptr = \
cugraph_centrality_result_get_vertices(result_ptr)
cdef cugraph_type_erased_device_array_view_t* values_ptr = \
cugraph_centrality_result_get_values(result_ptr)
cupy_vertices = copy_to_cupy_array(c_resource_handle_ptr, vertices_ptr)
cupy_values = copy_to_cupy_array(c_resource_handle_ptr, values_ptr)
cugraph_centrality_result_free(result_ptr)
cugraph_type_erased_device_array_view_free(vertex_list_view_ptr)
return (cupy_vertices, cupy_values)
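The quantity this wrapper computes — the number of shortest paths passing through each vertex — is classically obtained with Brandes' algorithm. A minimal pure-Python sketch for unweighted graphs follows; it is a CPU illustration of the idea, not the GPU implementation, and the `betweenness` helper name is hypothetical:

```python
from collections import deque

def betweenness(adj):
    # Brandes' algorithm on an unweighted adjacency dict; returns the
    # unnormalized betweenness score of every vertex.
    bc = {v: 0.0 for v in adj}
    for s in adj:
        sigma = {v: 0 for v in adj}    # number of shortest s->v paths
        dist = {v: -1 for v in adj}
        preds = {v: [] for v in adj}   # BFS predecessors on shortest paths
        sigma[s], dist[s] = 1, 0
        order, queue = [], deque([s])
        while queue:
            v = queue.popleft()
            order.append(v)
            for w in adj[v]:
                if dist[w] < 0:
                    dist[w] = dist[v] + 1
                    queue.append(w)
                if dist[w] == dist[v] + 1:
                    sigma[w] += sigma[v]
                    preds[w].append(v)
        # Accumulate pair dependencies in reverse BFS order.
        delta = {v: 0.0 for v in adj}
        while order:
            w = order.pop()
            for v in preds[w]:
                delta[v] += sigma[v] / sigma[w] * (1.0 + delta[w])
            if w != s:
                bc[w] += delta[w]
    return bc

# On the path 0-1-2 every shortest path between the endpoints crosses 1.
scores = betweenness({0: [1], 1: [0, 2], 2: [1]})
```

Both traversal directions are counted, so the middle vertex of the path scores 2.0 while the endpoints score 0.0.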
| 0 |
rapidsai_public_repos/cugraph/python/pylibcugraph | rapidsai_public_repos/cugraph/python/pylibcugraph/pylibcugraph/graph_properties.pyx | # Copyright (c) 2022, NVIDIA CORPORATION.
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# Have cython use python 3 syntax
# cython: language_level = 3
cdef class GraphProperties:
"""
Class wrapper around C cugraph_graph_properties_t struct
"""
def __cinit__(self, is_symmetric=False, is_multigraph=False):
self.c_graph_properties.is_symmetric = is_symmetric
self.c_graph_properties.is_multigraph = is_multigraph
# Pickle support methods: get args for __new__ (__cinit__), get/set state
def __getnewargs_ex__(self):
is_symmetric = self.c_graph_properties.is_symmetric
is_multigraph = self.c_graph_properties.is_multigraph
return ((),{"is_symmetric":is_symmetric, "is_multigraph":is_multigraph})
def __getstate__(self):
return ()
def __setstate__(self, state):
pass
@property
def is_symmetric(self):
return bool(self.c_graph_properties.is_symmetric)
@is_symmetric.setter
def is_symmetric(self, value):
self.c_graph_properties.is_symmetric = value
@property
def is_multigraph(self):
return bool(self.c_graph_properties.is_multigraph)
@is_multigraph.setter
def is_multigraph(self, value):
self.c_graph_properties.is_multigraph = value
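The pickle hooks above (`__getnewargs_ex__` returning the constructor arguments, with empty state) follow a standard pattern for extension types whose state lives entirely in `__cinit__` arguments. A pure-Python sketch of the same mechanism, exercised via `copy.deepcopy` (which drives the same reduce protocol as pickle); the `Props` class is hypothetical, not part of pylibcugraph:

```python
import copy

class Props:
    # State is carried entirely by the construction arguments, so
    # copying/pickling works through __getnewargs_ex__ while the state
    # hooks intentionally carry nothing.
    def __new__(cls, is_symmetric=False, is_multigraph=False):
        self = super().__new__(cls)
        self.is_symmetric = is_symmetric
        self.is_multigraph = is_multigraph
        return self

    def __getnewargs_ex__(self):
        # Re-supply the __new__ arguments on reconstruction.
        return ((), {"is_symmetric": self.is_symmetric,
                     "is_multigraph": self.is_multigraph})

    def __getstate__(self):
        return ()

    def __setstate__(self, state):
        pass

clone = copy.deepcopy(Props(is_symmetric=True))
```

The reconstruction path calls `Props.__new__` with the keyword arguments from `__getnewargs_ex__`, so the clone comes back with `is_symmetric=True` without any separate state dictionary.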
| 0 |
rapidsai_public_repos/cugraph/python/pylibcugraph | rapidsai_public_repos/cugraph/python/pylibcugraph/pylibcugraph/utils.pxd | # Copyright (c) 2022, NVIDIA CORPORATION.
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# Have cython use python 3 syntax
# cython: language_level = 3
from pylibcugraph._cugraph_c.resource_handle cimport (
data_type_id_t,
cugraph_resource_handle_t,
)
from pylibcugraph._cugraph_c.array cimport (
cugraph_type_erased_device_array_view_t,
)
from pylibcugraph._cugraph_c.error cimport (
cugraph_error_code_t,
cugraph_error_t,
)
cdef assert_success(cugraph_error_code_t code,
cugraph_error_t* err,
api_name)
cdef assert_CAI_type(obj, var_name, allow_None=*)
cdef assert_AI_type(obj, var_name, allow_None=*)
cdef get_numpy_type_from_c_type(data_type_id_t c_type)
cdef get_c_type_from_numpy_type(numpy_type)
cdef get_c_weight_type_from_numpy_edge_ids_type(numpy_type)
cdef get_numpy_edge_ids_type_from_c_weight_type(data_type_id_t c_type)
cdef copy_to_cupy_array(
cugraph_resource_handle_t* c_resource_handle_ptr,
cugraph_type_erased_device_array_view_t* device_array_view_ptr)
cdef copy_to_cupy_array_ids(
cugraph_resource_handle_t* c_resource_handle_ptr,
cugraph_type_erased_device_array_view_t* device_array_view_ptr)
cdef cugraph_type_erased_device_array_view_t* \
create_cugraph_type_erased_device_array_view_from_py_obj(python_obj)
cdef create_cupy_array_view_for_device_ptr(
cugraph_type_erased_device_array_view_t* device_array_view_ptr,
owning_py_object)
| 0 |
rapidsai_public_repos/cugraph/python/pylibcugraph | rapidsai_public_repos/cugraph/python/pylibcugraph/pylibcugraph/node2vec.pyx | # Copyright (c) 2022-2023, NVIDIA CORPORATION.
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# Have cython use python 3 syntax
# cython: language_level = 3
from libc.stdint cimport uintptr_t
from pylibcugraph._cugraph_c.resource_handle cimport (
bool_t,
cugraph_resource_handle_t,
)
from pylibcugraph._cugraph_c.error cimport (
cugraph_error_code_t,
cugraph_error_t,
)
from pylibcugraph._cugraph_c.array cimport (
cugraph_type_erased_device_array_view_t,
cugraph_type_erased_device_array_view_create,
cugraph_type_erased_device_array_view_free,
)
from pylibcugraph._cugraph_c.graph cimport (
cugraph_graph_t,
)
from pylibcugraph._cugraph_c.algorithms cimport (
cugraph_node2vec,
cugraph_random_walk_result_t,
cugraph_random_walk_result_get_paths,
cugraph_random_walk_result_get_weights,
cugraph_random_walk_result_get_path_sizes,
cugraph_random_walk_result_free,
)
from pylibcugraph.resource_handle cimport (
ResourceHandle,
)
from pylibcugraph.graphs cimport (
_GPUGraph,
)
from pylibcugraph.utils cimport (
assert_success,
copy_to_cupy_array,
assert_CAI_type,
get_c_type_from_numpy_type,
)
def node2vec(ResourceHandle resource_handle,
_GPUGraph graph,
seed_array,
size_t max_depth,
bool_t compress_result,
double p,
double q):
"""
Computes random walks under node2vec sampling procedure.
Parameters
----------
resource_handle : ResourceHandle
Handle to the underlying device resources needed for referencing data
and running algorithms.
graph : SGGraph
The input graph.
seed_array: device array type
Device array containing the seed vertices from which the random walks start.
max_depth : size_t
Maximum number of vertices in generated path
compress_result : bool_t
If true, the paths are unpadded and a third return device array contains
the sizes for each path, otherwise the paths are padded and the third
return device array is empty.
p : double
The return factor p represents the likelihood of backtracking to a node
in the walk. A higher value (> max(q, 1)) makes it less likely to sample
a previously visited node, while a lower value (< min(q, 1)) would make it
more likely to backtrack, making the walk more "local".
q : double
The in-out factor q represents the likelihood of visiting nodes closer or
further from the outgoing node. If q > 1, the random walk is likelier to
visit nodes closer to the outgoing node. If q < 1, the random walk is
likelier to visit nodes further from the outgoing node.
Returns
-------
A tuple of device arrays, where the first item in the tuple is a device
array containing the compressed paths, the second item is a device
array containing the corresponding weights for each edge traversed in
each path, and the third item is a device array containing the sizes
for each of the compressed paths, if compress_result is True.
Examples
--------
>>> import pylibcugraph, cupy, numpy
>>> srcs = cupy.asarray([0, 1, 2], dtype=numpy.int32)
>>> dsts = cupy.asarray([1, 2, 3], dtype=numpy.int32)
>>> seeds = cupy.asarray([0, 0, 1], dtype=numpy.int32)
>>> weights = cupy.asarray([1.0, 1.0, 1.0], dtype=numpy.float32)
>>> resource_handle = pylibcugraph.ResourceHandle()
>>> graph_props = pylibcugraph.GraphProperties(
... is_symmetric=False, is_multigraph=False)
>>> G = pylibcugraph.SGGraph(
... resource_handle, graph_props, srcs, dsts, weights,
... store_transposed=False, renumber=False, do_expensive_check=False)
>>> (paths, weights, sizes) = pylibcugraph.node2vec(
... resource_handle, G, seeds, 3, True, 1.0, 1.0)
"""
# FIXME: import these modules here for now until a better pattern can be
# used for optional imports (perhaps 'import_optional()' from cugraph), or
# these are made hard dependencies.
try:
import cupy
except ModuleNotFoundError:
raise RuntimeError("node2vec requires the cupy package, which could not "
"be imported")
assert_CAI_type(seed_array, "seed_array")
cdef cugraph_resource_handle_t* c_resource_handle_ptr = \
resource_handle.c_resource_handle_ptr
cdef cugraph_graph_t* c_graph_ptr = graph.c_graph_ptr
cdef cugraph_random_walk_result_t* result_ptr
cdef cugraph_error_code_t error_code
cdef cugraph_error_t* error_ptr
cdef uintptr_t cai_seed_ptr = \
seed_array.__cuda_array_interface__["data"][0]
cdef cugraph_type_erased_device_array_view_t* seed_view_ptr = \
cugraph_type_erased_device_array_view_create(
<void*>cai_seed_ptr,
len(seed_array),
get_c_type_from_numpy_type(seed_array.dtype))
error_code = cugraph_node2vec(c_resource_handle_ptr,
c_graph_ptr,
seed_view_ptr,
max_depth,
compress_result,
p,
q,
&result_ptr,
&error_ptr)
assert_success(error_code, error_ptr, "cugraph_node2vec")
# Extract individual device array pointers from result and copy to cupy
# arrays for returning.
cdef cugraph_type_erased_device_array_view_t* paths_ptr = \
cugraph_random_walk_result_get_paths(result_ptr)
cdef cugraph_type_erased_device_array_view_t* weights_ptr = \
cugraph_random_walk_result_get_weights(result_ptr)
cdef cugraph_type_erased_device_array_view_t* path_sizes_ptr = \
cugraph_random_walk_result_get_path_sizes(result_ptr)
cupy_paths = copy_to_cupy_array(c_resource_handle_ptr, paths_ptr)
cupy_weights = copy_to_cupy_array(c_resource_handle_ptr, weights_ptr)
cupy_path_sizes = copy_to_cupy_array(c_resource_handle_ptr,
path_sizes_ptr)
cugraph_random_walk_result_free(result_ptr)
cugraph_type_erased_device_array_view_free(seed_view_ptr)
return (cupy_paths, cupy_weights, cupy_path_sizes)
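The p/q biasing described in the docstring (return factor p, in-out factor q) can be illustrated with a single toy walk step in plain Python. The `node2vec_step` helper is hypothetical and only sketches the sampling rule, not the CUDA sampler:

```python
import random

def node2vec_step(adj, prev, cur, p, q):
    # Unnormalized node2vec weights for each candidate next vertex:
    # 1/p to return to `prev`, 1 for neighbors of `prev` (distance 1),
    # and 1/q for vertices two hops away from `prev`.
    weights = []
    for nxt in adj[cur]:
        if nxt == prev:
            weights.append(1.0 / p)
        elif nxt in adj[prev]:
            weights.append(1.0)
        else:
            weights.append(1.0 / q)
    return random.choices(adj[cur], weights=weights, k=1)[0]

# Path graph 0-1-2-3; the walk arrived at 1 from 0. A very large p makes
# backtracking to 0 (weight 1/p) vanish, so the walk moves on to 2; a
# very large q instead suppresses moving further out, forcing a backtrack.
adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
```

With `p=float("inf")` the step from (0 → 1) always continues to 2, and with `q=float("inf")` it always returns to 0, matching the "local" versus "exploratory" behavior the parameter docs describe.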
| 0 |
rapidsai_public_repos/cugraph/python/pylibcugraph | rapidsai_public_repos/cugraph/python/pylibcugraph/pylibcugraph/weakly_connected_components.pyx | # Copyright (c) 2022-2023, NVIDIA CORPORATION.
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# Have cython use python 3 syntax
# cython: language_level = 3
from pylibcugraph import GraphProperties, SGGraph
from pylibcugraph._cugraph_c.resource_handle cimport (
bool_t,
cugraph_resource_handle_t,
)
from pylibcugraph._cugraph_c.error cimport (
cugraph_error_code_t,
cugraph_error_t,
)
from pylibcugraph._cugraph_c.array cimport (
cugraph_type_erased_device_array_view_t,
cugraph_type_erased_device_array_view_copy,
)
from pylibcugraph._cugraph_c.graph cimport (
cugraph_graph_t,
)
from pylibcugraph._cugraph_c.labeling_algorithms cimport (
cugraph_labeling_result_t,
cugraph_weakly_connected_components,
cugraph_labeling_result_get_vertices,
cugraph_labeling_result_get_labels,
cugraph_labeling_result_free,
)
from pylibcugraph.resource_handle cimport (
ResourceHandle,
)
from pylibcugraph.graphs cimport (
_GPUGraph,
)
from pylibcugraph.utils cimport (
assert_success,
assert_CAI_type,
copy_to_cupy_array,
create_cugraph_type_erased_device_array_view_from_py_obj,
)
def _ensure_args(graph, offsets, indices, weights):
    if graph is not None:
        # ensure the remaining parameters are None
        invalid_input = [p for p in [offsets, indices, weights] if p is not None]
        input_type = "graph"
    else:
        invalid_input = [p for p in [offsets, indices] if p is None]
        input_type = "csr_arrays"
    if len(invalid_input) != 0:
        raise TypeError("Invalid input combination: must set either 'graph' or "
                        "the CSR arrays 'offsets' and 'indices' (with optional "
                        "'weights'), not both")
else:
if input_type == "csr_arrays":
assert_CAI_type(offsets, "offsets")
assert_CAI_type(indices, "indices")
assert_CAI_type(weights, "weights", True)
return input_type
def weakly_connected_components(ResourceHandle resource_handle,
_GPUGraph graph,
offsets,
indices,
weights,
labels,
bool_t do_expensive_check):
"""
    Generate the weakly connected components from either an input graph or
    CSR arrays ('offsets', 'indices', 'weights') and attach a component label
    to each vertex.
Parameters
----------
resource_handle : ResourceHandle
Handle to the underlying device resources needed for referencing data
and running algorithms.
graph : SGGraph or MGGraph
The input graph.
offsets : object supporting a __cuda_array_interface__ interface
Array containing the offsets values of a Compressed Sparse Row matrix
that represents the graph.
indices : object supporting a __cuda_array_interface__ interface
Array containing the indices values of a Compressed Sparse Row matrix
that represents the graph.
    weights : object supporting a __cuda_array_interface__ interface
        Array containing the weights values of a Compressed Sparse Row matrix
        that represents the graph.
    labels : object supporting a __cuda_array_interface__ interface, optional
        Optional preallocated device array into which the resulting component
        labels are copied. If None, a new device array of labels is returned
        instead.
do_expensive_check : bool_t
If True, performs more extensive tests on the inputs to ensure
        validity, at the expense of increased run time.
Returns
-------
    A tuple containing two device arrays, which are respectively the
    vertices and their corresponding labels
Examples
--------
>>> import pylibcugraph, cupy, numpy
>>> from pylibcugraph import weakly_connected_components
>>> srcs = cupy.asarray([0, 1, 1, 2, 2, 0], dtype=numpy.int32)
>>> dsts = cupy.asarray([1, 0, 2, 1, 0, 2], dtype=numpy.int32)
>>> weights = cupy.asarray(
... [1.0, 1.0, 1.0, 1.0, 1.0, 1.0], dtype=numpy.float32)
>>> resource_handle = pylibcugraph.ResourceHandle()
>>> graph_props = pylibcugraph.GraphProperties(
... is_symmetric=True, is_multigraph=False)
>>> G = pylibcugraph.SGGraph(
... resource_handle, graph_props, srcs, dsts, weights,
... store_transposed=False, renumber=True, do_expensive_check=False)
>>> (vertices, labels) = weakly_connected_components(
... resource_handle, G, None, None, None, None, False)
>>> vertices
[0, 1, 2]
>>> labels
[2, 2, 2]
>>> import cupy as cp
>>> import numpy as np
>>> from scipy.sparse import csr_matrix
>>>
>>> graph = [
... [0, 1, 1, 0, 0],
... [0, 0, 1, 0, 0],
... [0, 0, 0, 0, 0],
... [0, 0, 0, 0, 1],
... [0, 0, 0, 0, 0],
... ]
>>> scipy_csr = csr_matrix(graph)
>>> rows, cols = scipy_csr.nonzero()
>>> scipy_csr[cols, rows] = scipy_csr[rows, cols]
>>>
>>> cp_offsets = cp.asarray(scipy_csr.indptr)
>>> cp_indices = cp.asarray(scipy_csr.indices, dtype=np.int32)
>>>
>>> resource_handle = pylibcugraph.ResourceHandle()
    >>> cp_labels = cp.empty(scipy_csr.shape[0], dtype=np.int32)
    >>> weakly_connected_components(resource_handle=resource_handle,
    ...                             graph=None,
    ...                             offsets=cp_offsets,
    ...                             indices=cp_indices,
    ...                             weights=None,
    ...                             labels=cp_labels,
    ...                             do_expensive_check=False)
>>> print(f"{len(set(cp_labels.tolist()))} - {cp_labels}")
2 - [2 2 2 4 4]
"""
# FIXME: Remove this function once the deprecation is completed
input_type = _ensure_args(graph, offsets, indices, weights)
if input_type == "csr_arrays":
if resource_handle is None:
# Get a default handle
resource_handle = ResourceHandle()
graph_props = GraphProperties(
is_symmetric=True, is_multigraph=False)
graph = SGGraph(
resource_handle,
graph_props,
offsets,
indices,
weights,
store_transposed=False,
renumber=False,
do_expensive_check=True,
input_array_format="CSR"
)
cdef cugraph_resource_handle_t* c_resource_handle_ptr = \
resource_handle.c_resource_handle_ptr
cdef cugraph_graph_t* c_graph_ptr = graph.c_graph_ptr
cdef cugraph_labeling_result_t* result_ptr
cdef cugraph_error_code_t error_code
cdef cugraph_error_t* error_ptr
error_code = cugraph_weakly_connected_components(c_resource_handle_ptr,
c_graph_ptr,
do_expensive_check,
&result_ptr,
&error_ptr)
assert_success(error_code, error_ptr, "cugraph_weakly_connected_components")
# Extract individual device array pointers from result and copy to cupy
# arrays for returning.
cdef cugraph_type_erased_device_array_view_t* vertices_ptr = \
cugraph_labeling_result_get_vertices(result_ptr)
cdef cugraph_type_erased_device_array_view_t* labels_ptr = \
cugraph_labeling_result_get_labels(result_ptr)
cdef cugraph_type_erased_device_array_view_t* labels_view_ptr
if labels is not None:
labels_view_ptr = create_cugraph_type_erased_device_array_view_from_py_obj(
labels)
cugraph_type_erased_device_array_view_copy(
c_resource_handle_ptr,
labels_view_ptr,
labels_ptr,
&error_ptr
)
assert_success(
error_code, error_ptr, "cugraph_type_erased_device_array_view_copy")
cugraph_labeling_result_free(result_ptr)
else:
cupy_vertices = copy_to_cupy_array(c_resource_handle_ptr, vertices_ptr)
cupy_labels = copy_to_cupy_array(c_resource_handle_ptr, labels_ptr)
cugraph_labeling_result_free(result_ptr)
return (cupy_vertices, cupy_labels)
| 0 |
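The `weakly_connected_components` wrapper above runs the actual labeling on the GPU. As a pure-Python point of reference (illustrative only, not part of pylibcugraph), an equivalent labeling for an undirected edge list can be computed with a small union-find pass:

```python
def weakly_connected_components_ref(num_vertices, edges):
    """Union-find reference for weakly-connected-component labeling.

    Edge direction is ignored, matching the "weakly" connected
    semantics of cugraph_weakly_connected_components.
    """
    parent = list(range(num_vertices))

    def find(v):
        # Path-halving find: flatten the tree as we walk up it.
        while parent[v] != v:
            parent[v] = parent[parent[v]]
            v = parent[v]
        return v

    for u, v in edges:
        # Union the two endpoints' components.
        parent[find(u)] = find(v)

    # Each vertex's label is its component representative.
    return [find(v) for v in range(num_vertices)]
```

Vertices sharing a label belong to the same component; the label values themselves are arbitrary representatives, just as in the GPU result.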
rapidsai_public_repos/cugraph/python/pylibcugraph | rapidsai_public_repos/cugraph/python/pylibcugraph/pylibcugraph/jaccard_coefficients.pyx | # Copyright (c) 2022-2023, NVIDIA CORPORATION.
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# Have cython use python 3 syntax
# cython: language_level = 3
from libc.stdint cimport uintptr_t
from libc.stdio cimport printf
from cython.operator cimport dereference
from pylibcugraph._cugraph_c.resource_handle cimport (
bool_t,
cugraph_resource_handle_t,
)
from pylibcugraph._cugraph_c.error cimport (
cugraph_error_code_t,
cugraph_error_t,
)
from pylibcugraph._cugraph_c.array cimport (
cugraph_type_erased_device_array_view_t,
cugraph_type_erased_device_array_view_free
)
from pylibcugraph._cugraph_c.graph_functions cimport (
cugraph_vertex_pairs_t,
cugraph_vertex_pairs_get_first,
cugraph_vertex_pairs_get_second,
cugraph_vertex_pairs_free,
cugraph_create_vertex_pairs
)
from pylibcugraph._cugraph_c.graph cimport (
cugraph_graph_t,
)
from pylibcugraph._cugraph_c.similarity_algorithms cimport (
cugraph_jaccard_coefficients,
cugraph_similarity_result_t,
cugraph_similarity_result_get_similarity,
cugraph_similarity_result_free
)
from pylibcugraph.resource_handle cimport (
ResourceHandle,
)
from pylibcugraph.graphs cimport (
_GPUGraph,
)
from pylibcugraph.utils cimport (
assert_success,
copy_to_cupy_array,
create_cugraph_type_erased_device_array_view_from_py_obj
)
def jaccard_coefficients(ResourceHandle resource_handle,
_GPUGraph graph,
first,
second,
bool_t use_weight,
bool_t do_expensive_check):
"""
Compute the Jaccard coefficients for the specified vertex_pairs.
Note that Jaccard similarity must run on a symmetric graph.
Parameters
----------
resource_handle : ResourceHandle
Handle to the underlying device resources needed for referencing data
and running algorithms.
graph : SGGraph or MGGraph
The input graph, for either Single or Multi-GPU operations.
    first : device array type
        Source vertices of the vertex pairs.
    second : device array type
        Destination vertices of the vertex pairs.
    use_weight : bool, optional
        If set to True, compute weighted Jaccard coefficients (the input
        graph must be weighted in that case). Otherwise, compute
        unweighted Jaccard coefficients.
do_expensive_check : bool
If True, performs more extensive tests on the inputs to ensure
        validity, at the expense of increased run time.
Returns
-------
A tuple of device arrays containing the vertex pairs with
their corresponding Jaccard coefficient scores.
Examples
--------
# FIXME: No example yet
"""
cdef cugraph_vertex_pairs_t* vertex_pairs_ptr
cdef cugraph_resource_handle_t* c_resource_handle_ptr = \
resource_handle.c_resource_handle_ptr
cdef cugraph_graph_t* c_graph_ptr = graph.c_graph_ptr
cdef cugraph_similarity_result_t* result_ptr
cdef cugraph_error_code_t error_code
cdef cugraph_error_t* error_ptr
# 'first' is a required parameter
cdef cugraph_type_erased_device_array_view_t* \
first_view_ptr = \
create_cugraph_type_erased_device_array_view_from_py_obj(
first)
# 'second' is a required parameter
cdef cugraph_type_erased_device_array_view_t* \
second_view_ptr = \
create_cugraph_type_erased_device_array_view_from_py_obj(
second)
error_code = cugraph_create_vertex_pairs(c_resource_handle_ptr,
c_graph_ptr,
first_view_ptr,
second_view_ptr,
&vertex_pairs_ptr,
&error_ptr)
assert_success(error_code, error_ptr, "vertex_pairs")
error_code = cugraph_jaccard_coefficients(c_resource_handle_ptr,
c_graph_ptr,
vertex_pairs_ptr,
use_weight,
do_expensive_check,
&result_ptr,
&error_ptr)
assert_success(error_code, error_ptr, "cugraph_jaccard_coefficients")
# Extract individual device array pointers from result and copy to cupy
# arrays for returning.
cdef cugraph_type_erased_device_array_view_t* similarity_ptr = \
cugraph_similarity_result_get_similarity(result_ptr)
cupy_similarity = copy_to_cupy_array(c_resource_handle_ptr, similarity_ptr)
cdef cugraph_type_erased_device_array_view_t* first_ptr = \
cugraph_vertex_pairs_get_first(vertex_pairs_ptr)
cupy_first = copy_to_cupy_array(c_resource_handle_ptr, first_ptr)
cdef cugraph_type_erased_device_array_view_t* second_ptr = \
cugraph_vertex_pairs_get_second(vertex_pairs_ptr)
cupy_second = copy_to_cupy_array(c_resource_handle_ptr, second_ptr)
# Free all pointers
cugraph_similarity_result_free(result_ptr)
cugraph_vertex_pairs_free(vertex_pairs_ptr)
cugraph_type_erased_device_array_view_free(first_view_ptr)
cugraph_type_erased_device_array_view_free(second_view_ptr)
return cupy_first, cupy_second, cupy_similarity
| 0 |
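The FIXME above notes that no example exists yet. Independently of the GPU API, the quantity `cugraph_jaccard_coefficients` computes for a vertex pair is the classic neighborhood-overlap ratio |N(a) ∩ N(b)| / |N(a) ∪ N(b)|; a pure-Python reference (illustrative only, not part of the library) is:

```python
def jaccard(neighbors_a, neighbors_b):
    """Jaccard similarity of two neighbor lists.

    Returns |A ∩ B| / |A ∪ B|, or 0.0 when both sets are empty.
    """
    a, b = set(neighbors_a), set(neighbors_b)
    union = a | b
    return len(a & b) / len(union) if union else 0.0
```

For example, vertices whose neighbor sets are {1, 2, 3} and {2, 3, 4} share two of four distinct neighbors, giving a coefficient of 0.5.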
rapidsai_public_repos/cugraph/python/pylibcugraph | rapidsai_public_repos/cugraph/python/pylibcugraph/pylibcugraph/random.pyx | # Copyright (c) 2023, NVIDIA CORPORATION.
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# Have cython use python 3 syntax
# cython: language_level = 3
from pylibcugraph._cugraph_c.random cimport (
cugraph_rng_state_create,
cugraph_rng_state_free,
cugraph_rng_state_t,
)
from pylibcugraph._cugraph_c.resource_handle cimport (
cugraph_resource_handle_t,
)
from pylibcugraph._cugraph_c.error cimport (
cugraph_error_code_t,
cugraph_error_t,
)
from pylibcugraph.utils cimport (
assert_success,
)
from pylibcugraph.resource_handle cimport (
ResourceHandle
)
import time
import os
import socket
def generate_default_seed():
h = hash(
(
socket.gethostname(),
os.getpid(),
time.perf_counter_ns()
)
)
return h
cdef class CuGraphRandomState:
"""
This class wraps a cugraph_rng_state_t instance, which represents a
random state.
"""
def __cinit__(self, ResourceHandle resource_handle, seed=None):
"""
Constructs a new CuGraphRandomState instance.
Parameters
----------
resource_handle: pylibcugraph.ResourceHandle (Required)
The cugraph resource handle for this process.
seed: int (Optional)
The random seed of this random state object.
Defaults to the hash of the hostname, pid, and time.
"""
cdef cugraph_error_code_t error_code
cdef cugraph_error_t* error_ptr
cdef cugraph_resource_handle_t* c_resource_handle_ptr = \
resource_handle.c_resource_handle_ptr
cdef cugraph_rng_state_t* new_rng_state_ptr
if seed is None:
seed = generate_default_seed()
# reinterpret as unsigned
seed &= (2**64 - 1)
error_code = cugraph_rng_state_create(
c_resource_handle_ptr,
<size_t>seed,
&new_rng_state_ptr,
&error_ptr
)
assert_success(error_code, error_ptr, "cugraph_rng_state_create")
self.rng_state_ptr = new_rng_state_ptr
def __dealloc__(self):
"""
Destroys this CuGraphRandomState instance. Properly calls
free to destroy the underlying C++ object.
"""
cugraph_rng_state_free(self.rng_state_ptr)
| 0 |
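The seed masking in `__cinit__` above (`seed &= (2**64 - 1)`) exists because Python's `hash()` can return negative integers, while `cugraph_rng_state_create` takes an unsigned `size_t`. A minimal sketch of that reinterpretation (helper name is illustrative):

```python
def to_uint64(seed: int) -> int:
    """Reinterpret an arbitrary Python int as an unsigned 64-bit value.

    Masking with 2**64 - 1 maps negative numbers to their
    two's-complement bit pattern, e.g. -1 -> 2**64 - 1.
    """
    return seed & (2**64 - 1)
```

This keeps the full 64 bits of entropy from the hostname/pid/time hash while guaranteeing the value fits the C API's parameter type.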
rapidsai_public_repos/cugraph/python/pylibcugraph | rapidsai_public_repos/cugraph/python/pylibcugraph/pylibcugraph/analyze_clustering_modularity.pyx | # Copyright (c) 2023, NVIDIA CORPORATION.
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# Have cython use python 3 syntax
# cython: language_level = 3
from pylibcugraph._cugraph_c.resource_handle cimport (
cugraph_resource_handle_t,
)
from pylibcugraph._cugraph_c.error cimport (
cugraph_error_code_t,
cugraph_error_t,
)
from pylibcugraph._cugraph_c.array cimport (
cugraph_type_erased_device_array_view_t,
cugraph_type_erased_device_array_view_free,
)
from pylibcugraph._cugraph_c.graph cimport (
cugraph_graph_t,
)
from pylibcugraph._cugraph_c.community_algorithms cimport (
cugraph_clustering_result_t,
cugraph_analyze_clustering_modularity,
)
from pylibcugraph.resource_handle cimport (
ResourceHandle,
)
from pylibcugraph.graphs cimport (
_GPUGraph,
)
from pylibcugraph.utils cimport (
assert_success,
create_cugraph_type_erased_device_array_view_from_py_obj
)
def analyze_clustering_modularity(ResourceHandle resource_handle,
_GPUGraph graph,
size_t num_clusters,
vertex,
cluster,
):
"""
Compute modularity score of the specified clustering.
Parameters
----------
resource_handle : ResourceHandle
Handle to the underlying device resources needed for referencing data
and running algorithms.
graph : SGGraph
The input graph.
num_clusters : size_t
Specifies the number of clusters to find, must be greater than 1.
vertex : device array type
Vertex ids from the clustering to analyze.
cluster : device array type
Cluster ids from the clustering to analyze.
Returns
-------
The modularity score of the specified clustering.
Examples
--------
>>> import pylibcugraph, cupy, numpy
>>> srcs = cupy.asarray([0, 1, 2], dtype=numpy.int32)
>>> dsts = cupy.asarray([1, 2, 0], dtype=numpy.int32)
>>> weights = cupy.asarray([1.0, 1.0, 1.0], dtype=numpy.float32)
>>> resource_handle = pylibcugraph.ResourceHandle()
>>> graph_props = pylibcugraph.GraphProperties(
... is_symmetric=True, is_multigraph=False)
>>> G = pylibcugraph.SGGraph(
... resource_handle, graph_props, srcs, dsts, weights,
... store_transposed=True, renumber=False, do_expensive_check=False)
>>> (vertex, cluster) = pylibcugraph.spectral_modularity_maximization(
    ... resource_handle, G, num_clusters=5, num_eigen_vects=2,
    ... evs_tolerance=0.00001, evs_max_iter=100, kmean_tolerance=0.00001,
    ... kmean_max_iter=100)
# FIXME: Fix docstring result.
>>> vertices
############
>>> clusters
############
>>> score = pylibcugraph.analyze_clustering_modularity(
... resource_handle, G, num_clusters=5, vertex=vertex, cluster=cluster)
>>> score
############
"""
cdef double score = 0
cdef cugraph_resource_handle_t* c_resource_handle_ptr = \
resource_handle.c_resource_handle_ptr
cdef cugraph_graph_t* c_graph_ptr = graph.c_graph_ptr
cdef cugraph_clustering_result_t* result_ptr
cdef cugraph_error_code_t error_code
cdef cugraph_error_t* error_ptr
cdef cugraph_type_erased_device_array_view_t* \
vertex_view_ptr = \
create_cugraph_type_erased_device_array_view_from_py_obj(
vertex)
cdef cugraph_type_erased_device_array_view_t* \
cluster_view_ptr = \
create_cugraph_type_erased_device_array_view_from_py_obj(
cluster)
error_code = cugraph_analyze_clustering_modularity(c_resource_handle_ptr,
c_graph_ptr,
num_clusters,
vertex_view_ptr,
cluster_view_ptr,
&score,
&error_ptr)
assert_success(error_code, error_ptr, "cugraph_analyze_clustering_modularity")
if vertex is not None:
cugraph_type_erased_device_array_view_free(vertex_view_ptr)
if cluster is not None:
cugraph_type_erased_device_array_view_free(cluster_view_ptr)
return score
| 0 |
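For intuition about the score returned by `cugraph_analyze_clustering_modularity`, here is a pure-Python sketch of the standard modularity formula Q = Σ_c (e_c/m − (d_c/2m)²) over an undirected edge list; the names and the edge-list representation are illustrative, not the library's API:

```python
from collections import defaultdict

def modularity(edges, cluster):
    """Modularity of a clustering of an undirected graph.

    edges   -- list of (u, v) pairs, one entry per undirected edge
    cluster -- mapping from vertex id to cluster id
    """
    m = len(edges)
    intra = defaultdict(int)   # edges fully inside each community (e_c)
    deg = defaultdict(int)     # total degree of each community (d_c)
    for u, v in edges:
        deg[cluster[u]] += 1
        deg[cluster[v]] += 1
        if cluster[u] == cluster[v]:
            intra[cluster[u]] += 1
    return sum(intra[c] / m - (deg[c] / (2 * m)) ** 2 for c in deg)
```

Two disjoint triangles labeled as two clusters score Q = 0.5, the maximum for that graph, while random labelings score near zero.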
rapidsai_public_repos/cugraph/python/pylibcugraph | rapidsai_public_repos/cugraph/python/pylibcugraph/pylibcugraph/CMakeLists.txt | # =============================================================================
# Copyright (c) 2022-2023, NVIDIA CORPORATION.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except
# in compliance with the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software distributed under the License
# is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express
# or implied. See the License for the specific language governing permissions and limitations under
# the License.
# =============================================================================
add_subdirectory(components)
add_subdirectory(internal_types)
add_subdirectory(testing)
set(cython_sources
analyze_clustering_edge_cut.pyx
analyze_clustering_modularity.pyx
analyze_clustering_ratio_cut.pyx
balanced_cut_clustering.pyx
betweenness_centrality.pyx
bfs.pyx
core_number.pyx
ecg.pyx
edge_betweenness_centrality.pyx
egonet.pyx
eigenvector_centrality.pyx
generate_rmat_edgelist.pyx
generate_rmat_edgelists.pyx
graph_properties.pyx
graphs.pyx
hits.pyx
induced_subgraph.pyx
k_core.pyx
k_truss_subgraph.pyx
jaccard_coefficients.pyx
sorensen_coefficients.pyx
overlap_coefficients.pyx
katz_centrality.pyx
leiden.pyx
louvain.pyx
node2vec.pyx
pagerank.pyx
personalized_pagerank.pyx
random.pyx
resource_handle.pyx
spectral_modularity_maximization.pyx
select_random_vertices.pyx
sssp.pyx
triangle_count.pyx
two_hop_neighbors.pyx
uniform_neighbor_sample.pyx
uniform_random_walks.pyx
utils.pyx
weakly_connected_components.pyx
replicate_edgelist.pyx
)
set(linked_libraries cugraph::cugraph;cugraph::cugraph_c)
rapids_cython_create_modules(
CXX
SOURCE_FILES "${cython_sources}"
LINKED_LIBRARIES ${linked_libraries}
ASSOCIATED_TARGETS cugraph
)
| 0 |
rapidsai_public_repos/cugraph/python/pylibcugraph | rapidsai_public_repos/cugraph/python/pylibcugraph/pylibcugraph/generate_rmat_edgelist.pyx | # Copyright (c) 2023, NVIDIA CORPORATION.
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# Have cython use python 3 syntax
# cython: language_level = 3
from pylibcugraph._cugraph_c.resource_handle cimport (
cugraph_resource_handle_t,
bool_t,
)
from pylibcugraph._cugraph_c.error cimport (
cugraph_error_code_t,
cugraph_error_t,
)
from pylibcugraph._cugraph_c.array cimport (
cugraph_type_erased_device_array_view_t,
)
from pylibcugraph._cugraph_c.graph_generators cimport (
cugraph_generate_rmat_edgelist,
cugraph_generate_edge_weights,
cugraph_generate_edge_ids,
cugraph_generate_edge_types,
cugraph_coo_t,
cugraph_coo_get_sources,
cugraph_coo_get_destinations,
cugraph_coo_get_edge_weights,
cugraph_coo_get_edge_id,
cugraph_coo_get_edge_type,
cugraph_coo_free,
)
from pylibcugraph.resource_handle cimport (
ResourceHandle,
)
from pylibcugraph.utils cimport (
assert_success,
copy_to_cupy_array,
get_c_type_from_numpy_type,
)
from pylibcugraph._cugraph_c.random cimport (
cugraph_rng_state_t
)
from pylibcugraph.random cimport (
CuGraphRandomState
)
def generate_rmat_edgelist(ResourceHandle resource_handle,
random_state,
size_t scale,
size_t num_edges,
double a,
double b,
double c,
bool_t clip_and_flip,
bool_t scramble_vertex_ids,
bool_t include_edge_weights,
minimum_weight,
maximum_weight,
dtype,
bool_t include_edge_ids,
bool_t include_edge_types,
min_edge_type_value,
max_edge_type_value,
bool_t multi_gpu,
):
"""
Generate RMAT edge list
Parameters
----------
resource_handle : ResourceHandle
Handle to the underlying device resources needed for referencing data
and running algorithms.
    random_state : int, optional
Random state to use when generating samples. Optional argument,
defaults to a hash of process id, time, and hostname.
(See pylibcugraph.random.CuGraphRandomState)
scale : size_t
        Scale factor to set the number of vertices in the graph. Vertex IDs
        have values in [0, V), where V = 1 << 'scale'
num_edges : size_t
Number of edges to generate
a : double
Probability of the edge being in the first partition
The Graph 500 spec sets this value to 0.57
b : double
Probability of the edge being in the second partition
The Graph 500 spec sets this value to 0.19
c : double
Probability of the edge being in the third partition
The Graph 500 spec sets this value to 0.19
clip_and_flip : bool
Flag controlling whether to generate edges only in the lower triangular
part (including the diagonal) of the graph adjacency matrix
        (if set to 'true') or not (if set to 'false').
scramble_vertex_ids : bool
Flag controlling whether to scramble vertex ID bits (if set to `true`)
or not (if set to `false`); scrambling vertex ID bits breaks
correlation between vertex ID values and vertex degrees.
include_edge_weights : bool
Flag controlling whether to generate edges with weights
(if set to 'true') or not (if set to 'false').
minimum_weight : double
Minimum weight value to generate (if 'include_edge_weights' is 'true')
maximum_weight : double
Maximum weight value to generate (if 'include_edge_weights' is 'true')
dtype : string
        The type of weight to generate ("FLOAT32" or "FLOAT64"), ignored
        unless 'include_edge_weights' is true
include_edge_ids : bool
Flag controlling whether to generate edges with ids
(if set to 'true') or not (if set to 'false').
include_edge_types : bool
Flag controlling whether to generate edges with types
(if set to 'true') or not (if set to 'false').
min_edge_type_value : int
Minimum edge type to generate if 'include_edge_types' is 'true'
otherwise, this parameter is ignored.
max_edge_type_value : int
Maximum edge type to generate if 'include_edge_types' is 'true'
        otherwise, this parameter is ignored.
multi_gpu : bool
Flag if the COO is being created on multiple GPUs
Returns
-------
    A tuple containing the sources and destinations, with their corresponding
    weights, ids and types if the flags 'include_edge_weights', 'include_edge_ids'
    and 'include_edge_types' are respectively set to 'true'
"""
cdef cugraph_resource_handle_t* c_resource_handle_ptr = \
resource_handle.c_resource_handle_ptr
cdef cugraph_coo_t* result_coo_ptr
cdef cugraph_error_code_t error_code
cdef cugraph_error_t* error_ptr
cg_rng_state = CuGraphRandomState(resource_handle, random_state)
cdef cugraph_rng_state_t* rng_state_ptr = \
cg_rng_state.rng_state_ptr
error_code = cugraph_generate_rmat_edgelist(c_resource_handle_ptr,
rng_state_ptr,
scale,
num_edges,
a,
b,
c,
clip_and_flip,
scramble_vertex_ids,
&result_coo_ptr,
&error_ptr)
assert_success(error_code, error_ptr, "generate_rmat_edgelist")
cdef cugraph_type_erased_device_array_view_t* \
sources_view_ptr = cugraph_coo_get_sources(result_coo_ptr)
cdef cugraph_type_erased_device_array_view_t* \
destinations_view_ptr = cugraph_coo_get_destinations(result_coo_ptr)
cdef cugraph_type_erased_device_array_view_t* edge_weights_view_ptr
cupy_edge_weights = None
cupy_edge_ids = None
cupy_edge_types = None
if include_edge_weights:
dtype = get_c_type_from_numpy_type(dtype)
error_code = cugraph_generate_edge_weights(c_resource_handle_ptr,
rng_state_ptr,
result_coo_ptr,
dtype,
minimum_weight,
maximum_weight,
&error_ptr)
assert_success(error_code, error_ptr, "generate_edge_weights")
edge_weights_view_ptr = cugraph_coo_get_edge_weights(result_coo_ptr)
cupy_edge_weights = copy_to_cupy_array(c_resource_handle_ptr, edge_weights_view_ptr)
if include_edge_ids:
error_code = cugraph_generate_edge_ids(c_resource_handle_ptr,
result_coo_ptr,
multi_gpu,
&error_ptr)
assert_success(error_code, error_ptr, "generate_edge_ids")
edge_ids_view_ptr = cugraph_coo_get_edge_id(result_coo_ptr)
cupy_edge_ids = copy_to_cupy_array(c_resource_handle_ptr, edge_ids_view_ptr)
if include_edge_types:
error_code = cugraph_generate_edge_types(c_resource_handle_ptr,
rng_state_ptr,
result_coo_ptr,
min_edge_type_value,
max_edge_type_value,
&error_ptr)
assert_success(error_code, error_ptr, "generate_edge_types")
edge_type_view_ptr = cugraph_coo_get_edge_type(result_coo_ptr)
cupy_edge_types = copy_to_cupy_array(c_resource_handle_ptr, edge_type_view_ptr)
cupy_sources = copy_to_cupy_array(c_resource_handle_ptr, sources_view_ptr)
cupy_destinations = copy_to_cupy_array(c_resource_handle_ptr, destinations_view_ptr)
cugraph_coo_free(result_coo_ptr)
return cupy_sources, cupy_destinations, cupy_edge_weights, cupy_edge_ids, cupy_edge_types
| 0 |
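The a/b/c quadrant probabilities documented above (with d = 1 − a − b − c) drive the R-MAT recursion: each of the `scale` levels picks one quadrant of the adjacency matrix and shifts one more bit into the source and destination IDs. A pure-Python sketch of generating a single edge this way (illustrative only; the real generator runs on the GPU):

```python
import random

def rmat_edge(scale, a, b, c, rng=random):
    """Generate one R-MAT edge for a graph with 2**scale vertices.

    At each recursion level, choose a quadrant with probabilities
    a, b, c and d = 1 - a - b - c (Graph 500 uses a=0.57, b=0.19,
    c=0.19), appending one bit to each endpoint's vertex ID.
    """
    src = dst = 0
    for _ in range(scale):
        r = rng.random()
        src <<= 1
        dst <<= 1
        if r < a:            # top-left quadrant: both bits stay 0
            pass
        elif r < a + b:      # top-right: destination bit set
            dst |= 1
        elif r < a + b + c:  # bottom-left: source bit set
            src |= 1
        else:                # bottom-right: both bits set
            src |= 1
            dst |= 1
    return src, dst
```

Skewing a above the other probabilities concentrates edges in low vertex IDs, which is what produces the power-law degree distribution R-MAT is known for.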
rapidsai_public_repos/cugraph/python/pylibcugraph | rapidsai_public_repos/cugraph/python/pylibcugraph/pylibcugraph/hits.pyx | # Copyright (c) 2022-2023, NVIDIA CORPORATION.
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# Have cython use python 3 syntax
# cython: language_level = 3
from libc.stdint cimport uintptr_t
from pylibcugraph._cugraph_c.resource_handle cimport (
bool_t,
data_type_id_t,
cugraph_resource_handle_t,
)
from pylibcugraph._cugraph_c.error cimport (
cugraph_error_code_t,
cugraph_error_t,
)
from pylibcugraph._cugraph_c.array cimport (
cugraph_type_erased_device_array_view_t,
cugraph_type_erased_device_array_view_create,
cugraph_type_erased_device_array_view_free,
)
from pylibcugraph._cugraph_c.graph cimport (
cugraph_graph_t,
)
from pylibcugraph._cugraph_c.centrality_algorithms cimport (
cugraph_hits,
cugraph_hits_result_t,
cugraph_hits_result_get_vertices,
cugraph_hits_result_get_hubs,
cugraph_hits_result_get_authorities,
cugraph_hits_result_free,
)
from pylibcugraph.resource_handle cimport (
ResourceHandle,
)
from pylibcugraph.graphs cimport (
_GPUGraph,
)
from pylibcugraph.utils cimport (
assert_success,
assert_CAI_type,
copy_to_cupy_array,
get_c_type_from_numpy_type
)
def hits(ResourceHandle resource_handle,
_GPUGraph graph,
double tol,
size_t max_iter,
initial_hubs_guess_vertices,
initial_hubs_guess_values,
bool_t normalized,
bool_t do_expensive_check):
"""
Compute HITS hubs and authorities values for each vertex
    The HITS algorithm computes two scores for each node: the authorities
    score estimates the node's value based on incoming links, and the hubs
    score estimates it based on outgoing links.
Parameters
----------
resource_handle : ResourceHandle
Handle to the underlying device resources needed for referencing data
and running algorithms.
graph : SGGraph or MGGraph
The input graph, for either Single or Multi-GPU operations.
tol : float, optional (default=1.0e-5)
        Set the tolerance of the approximation; this parameter should be a
        small-magnitude value. This parameter is not currently supported.
max_iter : int, optional (default=100)
The maximum number of iterations before an answer is returned.
initial_hubs_guess_vertices : device array type, optional (default=None)
Device array containing the pointer to the array of initial hub guess vertices
initial_hubs_guess_values : device array type, optional (default=None)
Device array containing the pointer to the array of initial hub guess values
normalized : bool, optional (default=True)
do_expensive_check : bool
If True, performs more extensive tests on the inputs to ensure
        validity, at the expense of increased run time.
Returns
-------
    A tuple of device arrays, where the first item is a device array
    containing the vertex identifiers, and the second and third items are
    device arrays containing respectively the hubs and authorities values
    for the corresponding vertices
Examples
--------
# FIXME: No example yet
"""
cdef uintptr_t cai_initial_hubs_guess_vertices_ptr = <uintptr_t>NULL
cdef uintptr_t cai_initial_hubs_guess_values_ptr = <uintptr_t>NULL
cdef cugraph_type_erased_device_array_view_t* initial_hubs_guess_vertices_view_ptr = NULL
cdef cugraph_type_erased_device_array_view_t* initial_hubs_guess_values_view_ptr = NULL
# FIXME: Add check ensuring that both initial_hubs_guess_vertices
# and initial_hubs_guess_values are passed when calling only pylibcugraph HITS.
# This is already True for cugraph HITS
if initial_hubs_guess_vertices is not None:
assert_CAI_type(initial_hubs_guess_vertices, "initial_hubs_guess_vertices")
cai_initial_hubs_guess_vertices_ptr = \
initial_hubs_guess_vertices.__cuda_array_interface__["data"][0]
initial_hubs_guess_vertices_view_ptr = \
cugraph_type_erased_device_array_view_create(
<void*>cai_initial_hubs_guess_vertices_ptr,
len(initial_hubs_guess_vertices),
get_c_type_from_numpy_type(initial_hubs_guess_vertices.dtype))
if initial_hubs_guess_values is not None:
assert_CAI_type(initial_hubs_guess_values, "initial_hubs_guess_values")
cai_initial_hubs_guess_values_ptr = \
initial_hubs_guess_values.__cuda_array_interface__["data"][0]
initial_hubs_guess_values_view_ptr = \
cugraph_type_erased_device_array_view_create(
<void*>cai_initial_hubs_guess_values_ptr,
len(initial_hubs_guess_values),
get_c_type_from_numpy_type(initial_hubs_guess_values.dtype))
cdef cugraph_resource_handle_t* c_resource_handle_ptr = \
resource_handle.c_resource_handle_ptr
cdef cugraph_graph_t* c_graph_ptr = graph.c_graph_ptr
cdef cugraph_hits_result_t* result_ptr
cdef cugraph_error_code_t error_code
cdef cugraph_error_t* error_ptr
error_code = cugraph_hits(c_resource_handle_ptr,
c_graph_ptr,
tol,
max_iter,
initial_hubs_guess_vertices_view_ptr,
initial_hubs_guess_values_view_ptr,
normalized,
do_expensive_check,
&result_ptr,
&error_ptr)
assert_success(error_code, error_ptr, "cugraph_mg_hits")
# Extract individual device array pointers from result and copy to cupy
# arrays for returning.
cdef cugraph_type_erased_device_array_view_t* vertices_ptr = \
cugraph_hits_result_get_vertices(result_ptr)
cdef cugraph_type_erased_device_array_view_t* hubs_ptr = \
cugraph_hits_result_get_hubs(result_ptr)
cdef cugraph_type_erased_device_array_view_t* authorities_ptr = \
cugraph_hits_result_get_authorities(result_ptr)
cupy_vertices = copy_to_cupy_array(c_resource_handle_ptr, vertices_ptr)
cupy_hubs = copy_to_cupy_array(c_resource_handle_ptr, hubs_ptr)
cupy_authorities = copy_to_cupy_array(c_resource_handle_ptr,
authorities_ptr)
cugraph_hits_result_free(result_ptr)
if initial_hubs_guess_vertices is not None:
cugraph_type_erased_device_array_view_free(
initial_hubs_guess_vertices_view_ptr)
if initial_hubs_guess_values is not None:
cugraph_type_erased_device_array_view_free(
initial_hubs_guess_values_view_ptr)
return (cupy_vertices, cupy_hubs, cupy_authorities)
| 0 |
rapidsai_public_repos/cugraph/python/pylibcugraph | rapidsai_public_repos/cugraph/python/pylibcugraph/pylibcugraph/random.pxd | # Copyright (c) 2023, NVIDIA CORPORATION.
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# Have cython use python 3 syntax
# cython: language_level = 3
from pylibcugraph._cugraph_c.random cimport (
cugraph_rng_state_t,
)
cdef class CuGraphRandomState:
cdef cugraph_rng_state_t* rng_state_ptr
| 0 |
rapidsai_public_repos/cugraph/python/pylibcugraph | rapidsai_public_repos/cugraph/python/pylibcugraph/pylibcugraph/graphs.pxd | # Copyright (c) 2022-2023, NVIDIA CORPORATION.
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# Have cython use python 3 syntax
# cython: language_level = 3
from pylibcugraph._cugraph_c.graph cimport (
cugraph_graph_t,
cugraph_type_erased_device_array_view_t,
)
# Base class allowing functions to accept either SGGraph or MGGraph
# This is not visible in python
cdef class _GPUGraph:
cdef cugraph_graph_t* c_graph_ptr
cdef cugraph_type_erased_device_array_view_t* edge_id_view_ptr
cdef cugraph_type_erased_device_array_view_t* weights_view_ptr
cdef class SGGraph(_GPUGraph):
pass
cdef class MGGraph(_GPUGraph):
pass
| 0 |
rapidsai_public_repos/cugraph/python/pylibcugraph | rapidsai_public_repos/cugraph/python/pylibcugraph/pylibcugraph/analyze_clustering_edge_cut.pyx | # Copyright (c) 2023, NVIDIA CORPORATION.
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# Have cython use python 3 syntax
# cython: language_level = 3
from pylibcugraph._cugraph_c.resource_handle cimport (
cugraph_resource_handle_t,
)
from pylibcugraph._cugraph_c.error cimport (
cugraph_error_code_t,
cugraph_error_t,
)
from pylibcugraph._cugraph_c.array cimport (
cugraph_type_erased_device_array_view_t,
cugraph_type_erased_device_array_view_free,
)
from pylibcugraph._cugraph_c.graph cimport (
cugraph_graph_t,
)
from pylibcugraph._cugraph_c.community_algorithms cimport (
cugraph_analyze_clustering_edge_cut,
)
from pylibcugraph.resource_handle cimport (
ResourceHandle,
)
from pylibcugraph.graphs cimport (
_GPUGraph,
)
from pylibcugraph.utils cimport (
assert_success,
create_cugraph_type_erased_device_array_view_from_py_obj
)
def analyze_clustering_edge_cut(ResourceHandle resource_handle,
_GPUGraph graph,
size_t num_clusters,
vertex,
cluster,
):
"""
Compute the edge cut score of the specified clustering.
Parameters
----------
resource_handle : ResourceHandle
Handle to the underlying device resources needed for referencing data
and running algorithms.
graph : SGGraph
The input graph.
num_clusters : size_t
Specifies the number of clusters to find, must be greater than 1.
vertex : device array type
Vertex ids from the clustering to analyze.
cluster : device array type
Cluster ids from the clustering to analyze.
Returns
-------
The edge cut score of the specified clustering.
Examples
--------
>>> import pylibcugraph, cupy, numpy
>>> srcs = cupy.asarray([0, 1, 2], dtype=numpy.int32)
>>> dsts = cupy.asarray([1, 2, 0], dtype=numpy.int32)
>>> weights = cupy.asarray([1.0, 1.0, 1.0], dtype=numpy.float32)
>>> resource_handle = pylibcugraph.ResourceHandle()
>>> graph_props = pylibcugraph.GraphProperties(
... is_symmetric=True, is_multigraph=False)
>>> G = pylibcugraph.SGGraph(
... resource_handle, graph_props, srcs, dsts, weights,
... store_transposed=True, renumber=False, do_expensive_check=False)
>>> (vertex, cluster) = pylibcugraph.spectral_modularity_maximization(
... resource_handle, G, num_clusters=5, num_eigen_vects=2, evs_tolerance=0.00001,
... evs_max_iter=100, kmean_tolerance=0.00001, kmean_max_iter=100)
# FIXME: Fix docstring result.
>>> vertices
############
>>> clusters
############
>>> score = pylibcugraph.analyze_clustering_edge_cut(
... resource_handle, G, num_clusters=5, vertex=vertex, cluster=cluster)
>>> score
############
"""
cdef double score = 0
cdef cugraph_resource_handle_t* c_resource_handle_ptr = \
resource_handle.c_resource_handle_ptr
cdef cugraph_graph_t* c_graph_ptr = graph.c_graph_ptr
cdef cugraph_error_code_t error_code
cdef cugraph_error_t* error_ptr
cdef cugraph_type_erased_device_array_view_t* \
vertex_view_ptr = \
create_cugraph_type_erased_device_array_view_from_py_obj(
vertex)
cdef cugraph_type_erased_device_array_view_t* \
cluster_view_ptr = \
create_cugraph_type_erased_device_array_view_from_py_obj(
cluster)
error_code = cugraph_analyze_clustering_edge_cut(c_resource_handle_ptr,
c_graph_ptr,
num_clusters,
vertex_view_ptr,
cluster_view_ptr,
&score,
&error_ptr)
assert_success(error_code, error_ptr, "cugraph_analyze_clustering_edge_cut")
if vertex is not None:
cugraph_type_erased_device_array_view_free(vertex_view_ptr)
if cluster is not None:
cugraph_type_erased_device_array_view_free(cluster_view_ptr)
return score
| 0 |
rapidsai_public_repos/cugraph/python/pylibcugraph | rapidsai_public_repos/cugraph/python/pylibcugraph/pylibcugraph/README.md | # `pylibcugraph`
This directory contains the sources to the `pylibcugraph` package. The sources
are primarily cython files which are built using the `setup.py` file in the
parent directory and depend on the `libcugraph_c` and `libcugraph` libraries and
headers.
## components
The `connected_components` APIs.
## structure
Internal utilities and types for use with the libcugraph C++ library.
## utilities
Utility functions.
## experimental
This subpackage defines the "experimental" APIs. Many of these APIs are defined
elsewhere and simply imported into the `experimental/__init__.py` file.
## tests
pytest tests for `pylibcugraph`.
| 0 |
rapidsai_public_repos/cugraph/python/pylibcugraph | rapidsai_public_repos/cugraph/python/pylibcugraph/pylibcugraph/personalized_pagerank.pyx | # Copyright (c) 2022-2023, NVIDIA CORPORATION.
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# Have cython use python 3 syntax
# cython: language_level = 3
from pylibcugraph._cugraph_c.resource_handle cimport (
bool_t,
cugraph_resource_handle_t,
)
from pylibcugraph._cugraph_c.error cimport (
cugraph_error_code_t,
cugraph_error_t,
)
from pylibcugraph._cugraph_c.array cimport (
cugraph_type_erased_device_array_view_t,
cugraph_type_erased_device_array_view_free
)
from pylibcugraph._cugraph_c.graph cimport (
cugraph_graph_t,
)
from pylibcugraph._cugraph_c.centrality_algorithms cimport (
cugraph_centrality_result_t,
cugraph_personalized_pagerank_allow_nonconvergence,
cugraph_centrality_result_converged,
cugraph_centrality_result_get_vertices,
cugraph_centrality_result_get_values,
cugraph_centrality_result_free,
)
from pylibcugraph.resource_handle cimport (
ResourceHandle,
)
from pylibcugraph.graphs cimport (
_GPUGraph,
)
from pylibcugraph.utils cimport (
assert_success,
copy_to_cupy_array,
create_cugraph_type_erased_device_array_view_from_py_obj,
)
from pylibcugraph.exceptions import FailedToConvergeError
def personalized_pagerank(ResourceHandle resource_handle,
_GPUGraph graph,
precomputed_vertex_out_weight_vertices,
precomputed_vertex_out_weight_sums,
initial_guess_vertices,
initial_guess_values,
personalization_vertices,
personalization_values,
double alpha,
double epsilon,
size_t max_iterations,
bool_t do_expensive_check,
fail_on_nonconvergence=True):
"""
Find the PageRank score for every vertex in a graph by computing an
approximation of the PageRank eigenvector using the power method. The
number of iterations depends on the properties of the network itself; it
increases when the tolerance decreases and/or alpha increases toward the
limiting value of 1.
Parameters
----------
resource_handle : ResourceHandle
Handle to the underlying device resources needed for referencing data
and running algorithms.
graph : SGGraph or MGGraph
The input graph, for either Single or Multi-GPU operations.
precomputed_vertex_out_weight_vertices: device array type
Subset of vertices of graph for precomputed_vertex_out_weight
precomputed_vertex_out_weight_sums : device array type
Corresponding precomputed sum of outgoing vertices weight
initial_guess_vertices : device array type
Subset of vertices of graph for initial guess for pagerank values
initial_guess_values : device array type
Pagerank values for vertices
personalization_vertices : device array type
Subset of vertices of graph for personalization
personalization_values : device array type
Personalization values for vertices
alpha : double
The damping factor alpha represents the probability to follow an
outgoing edge, standard value is 0.85.
Thus, 1.0-alpha is the probability to “teleport” to a random vertex.
Alpha should be greater than 0.0 and strictly lower than 1.0.
epsilon : double
Set the tolerance of the approximation; this parameter should be a small
magnitude value.
The lower the tolerance the better the approximation. If this value is
0.0f, cuGraph will use the default value which is 1.0E-5.
Setting too small a tolerance can lead to non-convergence due to
numerical roundoff. Usually values between 0.01 and 0.00001 are
acceptable.
max_iterations : size_t
The maximum number of iterations before an answer is returned. This can
be used to limit the execution time and do an early exit before the
solver reaches the convergence tolerance.
If this value is lower or equal to 0 cuGraph will use the default
value, which is 100.
do_expensive_check : bool_t
If True, performs more extensive tests on the inputs to ensure
validity, at the expense of increased run time.
fail_on_nonconvergence : bool (default=True)
If the solver does not reach convergence, raise an exception if
fail_on_nonconvergence is True. If fail_on_nonconvergence is False,
the return value is a three-tuple where the first two items are the
device arrays described below, and the third is a boolean
Returns
-------
The return value varies based on the value of the fail_on_nonconvergence
parameter. If fail_on_nonconvergence is True:
A tuple of device arrays, where the first item in the tuple is a device
array containing the vertex identifiers, and the second item is a device
array containing the pagerank values for the corresponding vertices. For
example, the vertex identifier at the ith element of the vertex array has
the pagerank value of the ith element in the pagerank array.
If fail_on_nonconvergence is False:
A three-tuple where the first two items are the device arrays described
above, and the third is a bool indicating if the solver converged (True)
or not (False).
Examples
--------
>>> import pylibcugraph, cupy, numpy
>>> srcs = cupy.asarray([0, 1, 2], dtype=numpy.int32)
>>> dsts = cupy.asarray([1, 2, 3], dtype=numpy.int32)
>>> weights = cupy.asarray([1.0, 1.0, 1.0], dtype=numpy.float32)
>>> personalization_vertices = cupy.asarray([0, 2], dtype=numpy.int32)
>>> personalization_values = cupy.asarray(
... [0.008309, 0.991691], dtype=numpy.float32)
>>> resource_handle = pylibcugraph.ResourceHandle()
>>> graph_props = pylibcugraph.GraphProperties(
... is_symmetric=False, is_multigraph=False)
>>> G = pylibcugraph.SGGraph(
... resource_handle, graph_props, srcs, dsts, weights,
... store_transposed=True, renumber=False, do_expensive_check=False)
>>> (vertices, pageranks) = pylibcugraph.personalized_pagerank(
... resource_handle, G, None, None, None, None, alpha=0.85,
... personalization_vertices=personalization_vertices,
... personalization_values=personalization_values, epsilon=1.0e-6,
... max_iterations=500,
... do_expensive_check=False)
>>> vertices
array([0, 1, 2, 3], dtype=int32)
>>> pageranks
array([0.00446455, 0.00379487, 0.53607565, 0.45566472], dtype=float32)
"""
# FIXME: import these modules here for now until a better pattern can be
# used for optional imports (perhaps 'import_optional()' from cugraph), or
# these are made hard dependencies.
try:
import cupy
except ModuleNotFoundError:
raise RuntimeError("pagerank requires the cupy package, which could "
"not be imported")
try:
import numpy
except ModuleNotFoundError:
raise RuntimeError("pagerank requires the numpy package, which could "
"not be imported")
cdef cugraph_type_erased_device_array_view_t* \
initial_guess_vertices_view_ptr = \
create_cugraph_type_erased_device_array_view_from_py_obj(
initial_guess_vertices)
cdef cugraph_type_erased_device_array_view_t* \
initial_guess_values_view_ptr = \
create_cugraph_type_erased_device_array_view_from_py_obj(
initial_guess_values)
cdef cugraph_resource_handle_t* c_resource_handle_ptr = \
resource_handle.c_resource_handle_ptr
cdef cugraph_graph_t* c_graph_ptr = graph.c_graph_ptr
cdef cugraph_type_erased_device_array_view_t* \
precomputed_vertex_out_weight_vertices_view_ptr = \
create_cugraph_type_erased_device_array_view_from_py_obj(
precomputed_vertex_out_weight_vertices)
# FIXME: assert that precomputed_vertex_out_weight_sums
# type == weight type
cdef cugraph_type_erased_device_array_view_t* \
precomputed_vertex_out_weight_sums_view_ptr = \
create_cugraph_type_erased_device_array_view_from_py_obj(
precomputed_vertex_out_weight_sums)
cdef cugraph_type_erased_device_array_view_t* \
personalization_vertices_view_ptr = \
create_cugraph_type_erased_device_array_view_from_py_obj(
personalization_vertices)
cdef cugraph_type_erased_device_array_view_t* \
personalization_values_view_ptr = \
create_cugraph_type_erased_device_array_view_from_py_obj(
personalization_values)
cdef cugraph_centrality_result_t* result_ptr
cdef cugraph_error_code_t error_code
cdef cugraph_error_t* error_ptr
cdef bool_t converged
cdef cugraph_type_erased_device_array_view_t* vertices_ptr
cdef cugraph_type_erased_device_array_view_t* pageranks_ptr
error_code = cugraph_personalized_pagerank_allow_nonconvergence(
c_resource_handle_ptr,
c_graph_ptr,
precomputed_vertex_out_weight_vertices_view_ptr,
precomputed_vertex_out_weight_sums_view_ptr,
initial_guess_vertices_view_ptr,
initial_guess_values_view_ptr,
personalization_vertices_view_ptr,
personalization_values_view_ptr,
alpha,
epsilon,
max_iterations,
do_expensive_check,
&result_ptr,
&error_ptr)
assert_success(
error_code, error_ptr, "cugraph_personalized_pagerank_allow_nonconvergence")
converged = cugraph_centrality_result_converged(result_ptr)
# Only extract results if necessary
if (fail_on_nonconvergence is False) or (converged is True):
# Extract individual device array pointers from result and copy to cupy
# arrays for returning.
vertices_ptr = cugraph_centrality_result_get_vertices(result_ptr)
pageranks_ptr = cugraph_centrality_result_get_values(result_ptr)
cupy_vertices = copy_to_cupy_array(c_resource_handle_ptr, vertices_ptr)
cupy_pageranks = copy_to_cupy_array(c_resource_handle_ptr, pageranks_ptr)
# Free all pointers
cugraph_centrality_result_free(result_ptr)
if initial_guess_vertices is not None:
cugraph_type_erased_device_array_view_free(initial_guess_vertices_view_ptr)
if initial_guess_values is not None:
cugraph_type_erased_device_array_view_free(initial_guess_values_view_ptr)
if precomputed_vertex_out_weight_vertices is not None:
cugraph_type_erased_device_array_view_free(precomputed_vertex_out_weight_vertices_view_ptr)
if precomputed_vertex_out_weight_sums is not None:
cugraph_type_erased_device_array_view_free(precomputed_vertex_out_weight_sums_view_ptr)
if personalization_vertices is not None:
cugraph_type_erased_device_array_view_free(personalization_vertices_view_ptr)
if personalization_values is not None:
cugraph_type_erased_device_array_view_free(personalization_values_view_ptr)
if fail_on_nonconvergence is False:
return (cupy_vertices, cupy_pageranks, bool(converged))
else:
if converged is True:
return (cupy_vertices, cupy_pageranks)
else:
raise FailedToConvergeError
| 0 |
rapidsai_public_repos/cugraph/python/pylibcugraph | rapidsai_public_repos/cugraph/python/pylibcugraph/pylibcugraph/uniform_neighbor_sample.pyx | # Copyright (c) 2022-2023, NVIDIA CORPORATION.
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# Have cython use python 3 syntax
# cython: language_level = 3
from libc.stdint cimport uintptr_t
from pylibcugraph._cugraph_c.resource_handle cimport (
bool_t,
cugraph_resource_handle_t,
)
from pylibcugraph._cugraph_c.error cimport (
cugraph_error_code_t,
cugraph_error_t,
)
from pylibcugraph._cugraph_c.array cimport (
cugraph_type_erased_device_array_view_t,
cugraph_type_erased_device_array_view_create,
cugraph_type_erased_device_array_view_free,
cugraph_type_erased_host_array_view_t,
cugraph_type_erased_host_array_view_create,
cugraph_type_erased_host_array_view_free,
)
from pylibcugraph._cugraph_c.graph cimport (
cugraph_graph_t,
)
from pylibcugraph._cugraph_c.algorithms cimport (
cugraph_sample_result_t,
cugraph_prior_sources_behavior_t,
cugraph_compression_type_t,
cugraph_sampling_options_t,
cugraph_sampling_options_create,
cugraph_sampling_options_free,
cugraph_sampling_set_with_replacement,
cugraph_sampling_set_return_hops,
cugraph_sampling_set_prior_sources_behavior,
cugraph_sampling_set_dedupe_sources,
cugraph_sampling_set_renumber_results,
cugraph_sampling_set_compress_per_hop,
cugraph_sampling_set_compression_type,
)
from pylibcugraph._cugraph_c.sampling_algorithms cimport (
cugraph_uniform_neighbor_sample,
)
from pylibcugraph.resource_handle cimport (
ResourceHandle,
)
from pylibcugraph.graphs cimport (
_GPUGraph,
)
from pylibcugraph.utils cimport (
assert_success,
assert_CAI_type,
assert_AI_type,
get_c_type_from_numpy_type,
)
from pylibcugraph.internal_types.sampling_result cimport (
SamplingResult,
)
from pylibcugraph._cugraph_c.random cimport (
cugraph_rng_state_t
)
from pylibcugraph.random cimport (
CuGraphRandomState
)
import warnings
# TODO accept cupy/numpy random state in addition to raw seed.
def uniform_neighbor_sample(ResourceHandle resource_handle,
_GPUGraph input_graph,
start_list,
h_fan_out,
*,
bool_t with_replacement,
bool_t do_expensive_check,
with_edge_properties=False,
batch_id_list=None,
label_list=None,
label_to_output_comm_rank=None,
prior_sources_behavior=None,
deduplicate_sources=False,
return_hops=False,
renumber=False,
compression='COO',
compress_per_hop=False,
random_state=None,
return_dict=False,):
"""
Performs uniform neighborhood sampling, which samples a subset of each
starting vertex's neighbors, with a corresponding fanout value at each hop.
Parameters
----------
resource_handle: ResourceHandle
Handle to the underlying device and host resources needed for
referencing data and running algorithms.
input_graph : SGGraph or MGGraph
The input graph, for either Single or Multi-GPU operations.
start_list: device array type
Device array containing the list of starting vertices for sampling.
h_fan_out: numpy array type
Host array containing the branching-out (fan-out) degrees per
starting vertex for each hop level.
with_replacement: bool
If true, sampling procedure is done with replacement (the same vertex
can be selected multiple times in the same step).
do_expensive_check: bool
If True, performs more extensive tests on the inputs to ensure
validity, at the expense of increased run time.
with_edge_properties: bool
If True, returns the edge properties of each edges along with the
edges themselves. Will result in an error if the provided graph
does not have edge properties.
batch_id_list: list[int32] (Optional)
List of int32 batch ids that is returned with each edge. Optional
argument, defaults to NULL, returning nothing.
label_list: list[int32] (Optional)
List of unique int32 batch ids. Required if also passing the
label_to_output_comm_rank flag. Defaults to NULL (does nothing)
label_to_output_comm_rank: list[int32] (Optional)
Maps the unique batch ids in label_list to the rank of the
worker that should hold results for that batch id.
Defaults to NULL (does nothing)
prior_sources_behavior: str (Optional)
Options are "carryover", and "exclude".
Default will leave the source list as-is.
Carryover will carry over sources from previous hops to the
current hop.
Exclude will exclude sources from previous hops from reappearing
as sources in future hops.
deduplicate_sources: bool (Optional)
If True, will deduplicate the source list before sampling.
Defaults to False.
renumber: bool (Optional)
If True, will renumber the sources and destinations on a
per-batch basis and return the renumber map and batch offsets
in addition to the standard returns.
compression: str (Optional)
Options: COO (default), CSR, CSC, DCSR, DCSC
Sets the compression format for the returned samples.
compress_per_hop: bool (Optional)
If False (default), will create a compressed edgelist for the
entire batch.
If True, will create a separate compressed edgelist per hop within
a batch.
random_state: int (Optional)
Random state to use when generating samples. Optional argument,
defaults to a hash of process id, time, and hostname.
(See pylibcugraph.random.CuGraphRandomState)
return_dict: bool (Optional)
Whether to return a dictionary instead of a tuple.
Optional argument, defaults to False, returning a tuple.
This argument will eventually be deprecated in favor
of always returning a dictionary.
Returns
-------
A tuple of device arrays. By default, the first and second items in the
tuple are device arrays containing the sampled source and destination
vertices respectively, and the third item is a device array containing
the corresponding edge indices.
If with_edge_properties is True, the returned tuple (or dictionary, if
return_dict is True) instead contains the sampled edges along with their
edge properties (weights, edge ids, edge types), batch ids, and
label/hop offsets.
If renumber was set to True, then the returned values also include a
device array containing the renumber map, and a device array containing
the renumber map offsets (which delineate where the renumber map for
each batch starts).
"""
cdef cugraph_resource_handle_t* c_resource_handle_ptr = (
resource_handle.c_resource_handle_ptr
)
cdef cugraph_graph_t* c_graph_ptr = input_graph.c_graph_ptr
cdef bool_t c_deduplicate_sources = deduplicate_sources
cdef bool_t c_return_hops = return_hops
cdef bool_t c_renumber = renumber
cdef bool_t c_compress_per_hop = compress_per_hop
assert_CAI_type(start_list, "start_list")
assert_CAI_type(batch_id_list, "batch_id_list", True)
assert_CAI_type(label_list, "label_list", True)
assert_CAI_type(label_to_output_comm_rank, "label_to_output_comm_rank", True)
assert_AI_type(h_fan_out, "h_fan_out")
cdef cugraph_sample_result_t* result_ptr
cdef cugraph_error_code_t error_code
cdef cugraph_error_t* error_ptr
cdef uintptr_t cai_start_ptr = \
start_list.__cuda_array_interface__["data"][0]
cdef uintptr_t cai_batch_id_ptr
if batch_id_list is not None:
cai_batch_id_ptr = \
batch_id_list.__cuda_array_interface__['data'][0]
cdef uintptr_t cai_label_list_ptr
if label_list is not None:
cai_label_list_ptr = \
label_list.__cuda_array_interface__['data'][0]
cdef uintptr_t cai_label_to_output_comm_rank_ptr
if label_to_output_comm_rank is not None:
cai_label_to_output_comm_rank_ptr = \
label_to_output_comm_rank.__cuda_array_interface__['data'][0]
cdef uintptr_t ai_fan_out_ptr = \
h_fan_out.__array_interface__["data"][0]
cdef cugraph_type_erased_device_array_view_t* start_ptr = \
cugraph_type_erased_device_array_view_create(
<void*>cai_start_ptr,
len(start_list),
get_c_type_from_numpy_type(start_list.dtype))
cdef cugraph_type_erased_device_array_view_t* batch_id_ptr = <cugraph_type_erased_device_array_view_t*>NULL
if batch_id_list is not None:
batch_id_ptr = \
cugraph_type_erased_device_array_view_create(
<void*>cai_batch_id_ptr,
len(batch_id_list),
get_c_type_from_numpy_type(batch_id_list.dtype)
)
cdef cugraph_type_erased_device_array_view_t* label_list_ptr = <cugraph_type_erased_device_array_view_t*>NULL
if label_list is not None:
label_list_ptr = \
cugraph_type_erased_device_array_view_create(
<void*>cai_label_list_ptr,
len(label_list),
get_c_type_from_numpy_type(label_list.dtype)
)
cdef cugraph_type_erased_device_array_view_t* label_to_output_comm_rank_ptr = <cugraph_type_erased_device_array_view_t*>NULL
if label_to_output_comm_rank is not None:
label_to_output_comm_rank_ptr = \
cugraph_type_erased_device_array_view_create(
<void*>cai_label_to_output_comm_rank_ptr,
len(label_to_output_comm_rank),
get_c_type_from_numpy_type(label_to_output_comm_rank.dtype)
)
cdef cugraph_type_erased_host_array_view_t* fan_out_ptr = \
cugraph_type_erased_host_array_view_create(
<void*>ai_fan_out_ptr,
len(h_fan_out),
get_c_type_from_numpy_type(h_fan_out.dtype))
cg_rng_state = CuGraphRandomState(resource_handle, random_state)
cdef cugraph_rng_state_t* rng_state_ptr = \
cg_rng_state.rng_state_ptr
cdef cugraph_prior_sources_behavior_t prior_sources_behavior_e
if prior_sources_behavior is None:
prior_sources_behavior_e = cugraph_prior_sources_behavior_t.DEFAULT
elif prior_sources_behavior == 'carryover':
prior_sources_behavior_e = cugraph_prior_sources_behavior_t.CARRY_OVER
elif prior_sources_behavior == 'exclude':
prior_sources_behavior_e = cugraph_prior_sources_behavior_t.EXCLUDE
else:
raise ValueError(
f'Invalid option {prior_sources_behavior}'
' for prior sources behavior'
)
cdef cugraph_compression_type_t compression_behavior_e
if compression is None or compression == 'COO':
compression_behavior_e = cugraph_compression_type_t.COO
elif compression == 'CSR':
compression_behavior_e = cugraph_compression_type_t.CSR
elif compression == 'CSC':
compression_behavior_e = cugraph_compression_type_t.CSC
elif compression == 'DCSR':
compression_behavior_e = cugraph_compression_type_t.DCSR
elif compression == 'DCSC':
compression_behavior_e = cugraph_compression_type_t.DCSC
else:
raise ValueError(
f'Invalid option {compression}'
' for compression type'
)
cdef cugraph_sampling_options_t* sampling_options
error_code = cugraph_sampling_options_create(&sampling_options, &error_ptr)
assert_success(error_code, error_ptr, "cugraph_sampling_options_create")
cugraph_sampling_set_with_replacement(sampling_options, with_replacement)
cugraph_sampling_set_return_hops(sampling_options, c_return_hops)
cugraph_sampling_set_dedupe_sources(sampling_options, c_deduplicate_sources)
cugraph_sampling_set_prior_sources_behavior(sampling_options, prior_sources_behavior_e)
cugraph_sampling_set_renumber_results(sampling_options, c_renumber)
cugraph_sampling_set_compression_type(sampling_options, compression_behavior_e)
cugraph_sampling_set_compress_per_hop(sampling_options, c_compress_per_hop)
error_code = cugraph_uniform_neighbor_sample(
c_resource_handle_ptr,
c_graph_ptr,
start_ptr,
batch_id_ptr,
label_list_ptr,
label_to_output_comm_rank_ptr,
fan_out_ptr,
rng_state_ptr,
sampling_options,
do_expensive_check,
&result_ptr,
&error_ptr)
assert_success(error_code, error_ptr, "cugraph_uniform_neighbor_sample")
# Free the sampling options
cugraph_sampling_options_free(sampling_options)
# Free the two input arrays that are no longer needed.
cugraph_type_erased_device_array_view_free(start_ptr)
cugraph_type_erased_host_array_view_free(fan_out_ptr)
if batch_id_list is not None:
cugraph_type_erased_device_array_view_free(batch_id_ptr)
# Have the SamplingResult instance assume ownership of the result data.
result = SamplingResult()
result.set_ptr(result_ptr)
# Get cupy "views" of the individual arrays to return. These each increment
# the refcount on the SamplingResult instance which will keep the data alive
# until all references are removed and the GC runs.
# TODO Return everything that isn't null in release 23.12
if with_edge_properties:
cupy_majors = result.get_majors()
cupy_major_offsets = result.get_major_offsets()
cupy_minors = result.get_minors()
cupy_edge_weights = result.get_edge_weights()
cupy_edge_ids = result.get_edge_ids()
cupy_edge_types = result.get_edge_types()
cupy_batch_ids = result.get_batch_ids()
cupy_label_hop_offsets = result.get_label_hop_offsets()
if renumber:
cupy_renumber_map = result.get_renumber_map()
cupy_renumber_map_offsets = result.get_renumber_map_offsets()
# TODO drop the placeholder for hop ids in release 23.12
if return_dict:
return {
'major_offsets': cupy_major_offsets,
'majors': cupy_majors,
'minors': cupy_minors,
'weight': cupy_edge_weights,
'edge_id': cupy_edge_ids,
'edge_type': cupy_edge_types,
'batch_id': cupy_batch_ids,
'label_hop_offsets': cupy_label_hop_offsets,
'hop_id': None,
'renumber_map': cupy_renumber_map,
'renumber_map_offsets': cupy_renumber_map_offsets
}
else:
cupy_majors = cupy_major_offsets if cupy_majors is None else cupy_majors
return (cupy_majors, cupy_minors, cupy_edge_weights, cupy_edge_ids, cupy_edge_types, cupy_batch_ids, cupy_label_hop_offsets, None, cupy_renumber_map, cupy_renumber_map_offsets)
else:
cupy_hop_ids = result.get_hop_ids() # FIXME remove this
if return_dict:
return {
'major_offsets': cupy_major_offsets,
'majors': cupy_majors,
'minors': cupy_minors,
'weight': cupy_edge_weights,
'edge_id': cupy_edge_ids,
'edge_type': cupy_edge_types,
'batch_id': cupy_batch_ids,
'label_hop_offsets': cupy_label_hop_offsets,
'hop_id': cupy_hop_ids,
}
else:
cupy_majors = cupy_major_offsets if cupy_majors is None else cupy_majors
return (cupy_majors, cupy_minors, cupy_edge_weights, cupy_edge_ids, cupy_edge_types, cupy_batch_ids, cupy_label_hop_offsets, cupy_hop_ids)
else:
# TODO this is deprecated, remove it in release 23.12
warnings.warn(
"Calling uniform_neighbor_sample with the 'with_edge_properties' argument is deprecated."
" Starting in release 23.12, this argument will be removed in favor of behaving like the "
"with_edge_properties=True option, returning whatever properties are in the graph.",
FutureWarning,
)
cupy_sources = result.get_sources()
cupy_destinations = result.get_destinations()
cupy_indices = result.get_indices()
if return_dict:
return {
'sources': cupy_sources,
'destinations': cupy_destinations,
'indices': cupy_indices
}
else:
return (cupy_sources, cupy_destinations, cupy_indices)
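The branches above all hand back the same sampling fields, either keyed by name in a dict or positionally in a tuple depending on `return_dict`. A minimal pure-Python sketch of that convention follows; plain lists stand in for the cupy device arrays, and `unpack_sample_result` is a hypothetical helper, not part of pylibcugraph.

```python
# Pure-Python sketch of the dict-vs-tuple return convention above;
# plain lists stand in for cupy device arrays and unpack_sample_result
# is a hypothetical helper, not part of pylibcugraph.

def unpack_sample_result(result, return_dict):
    fields = ("majors", "minors", "weight", "edge_id")
    if return_dict:
        return {name: result[name] for name in fields}
    return tuple(result[name] for name in fields)

mock = {"majors": [0, 1], "minors": [1, 2],
        "weight": [0.5, 0.25], "edge_id": [10, 11]}
as_dict = unpack_sample_result(mock, return_dict=True)
as_tuple = unpack_sample_result(mock, return_dict=False)
```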

# File: rapidsai_public_repos/cugraph/python/pylibcugraph/pylibcugraph/k_core.pyx
# Copyright (c) 2022-2023, NVIDIA CORPORATION.
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# Have cython use python 3 syntax
# cython: language_level = 3
from libc.stdint cimport uintptr_t
import warnings
from pylibcugraph._cugraph_c.resource_handle cimport (
bool_t,
cugraph_resource_handle_t,
)
from pylibcugraph._cugraph_c.error cimport (
cugraph_error_code_t,
cugraph_error_t,
)
from pylibcugraph._cugraph_c.array cimport (
cugraph_type_erased_device_array_view_t,
)
from pylibcugraph._cugraph_c.graph cimport (
cugraph_graph_t,
)
from pylibcugraph._cugraph_c.core_algorithms cimport (
cugraph_core_result_t,
cugraph_k_core_result_t,
cugraph_core_result_create,
cugraph_k_core,
cugraph_k_core_degree_type_t,
cugraph_k_core_result_get_src_vertices,
cugraph_k_core_result_get_dst_vertices,
cugraph_k_core_result_get_weights,
cugraph_k_core_result_free,
cugraph_core_result_free,
)
from pylibcugraph.resource_handle cimport (
ResourceHandle,
)
from pylibcugraph.graphs cimport (
_GPUGraph,
)
from pylibcugraph.utils cimport (
assert_success,
copy_to_cupy_array,
create_cugraph_type_erased_device_array_view_from_py_obj,
)
def k_core(ResourceHandle resource_handle,
_GPUGraph graph,
size_t k,
degree_type,
core_result,
bool_t do_expensive_check):
"""
Compute the k-core of the graph G
A k-core of a graph is a maximal subgraph that
contains nodes of degree k or more. This call does not support a graph
with self-loops and parallel edges.
Parameters
----------
resource_handle: ResourceHandle
Handle to the underlying device and host resource needed for
referencing data and running algorithms.
graph : SGGraph or MGGraph
The input graph, for either Single or Multi-GPU operations.
k : size_t (default=None)
Order of the core. This value must not be negative. If set to None
the main core is returned.
degree_type: str
This option determines if the core number computation should be based
on input, output, or both directed edges, with valid values being
"incoming", "outgoing", and "bidirectional" respectively.
This option is currently ignored in this release, and setting it will
result in a warning.
core_result : device array type
Precomputed core number of the nodes of the graph G
If set to None, the core numbers of the nodes are calculated
internally.
do_expensive_check: bool
If True, performs more extensive tests on the inputs to ensure
validity, at the expense of increased run time.
Returns
-------
    A tuple of device arrays containing the source vertices, the destination
    vertices and the edge weights.
Examples
--------
# FIXME: No example yet
"""
cdef cugraph_resource_handle_t* c_resource_handle_ptr = \
resource_handle.c_resource_handle_ptr
cdef cugraph_graph_t* c_graph_ptr = graph.c_graph_ptr
cdef cugraph_core_result_t* core_result_ptr
cdef cugraph_k_core_result_t* k_core_result_ptr
cdef cugraph_error_code_t error_code
cdef cugraph_error_t* error_ptr
degree_type_map = {
"incoming": cugraph_k_core_degree_type_t.K_CORE_DEGREE_TYPE_IN,
"outgoing": cugraph_k_core_degree_type_t.K_CORE_DEGREE_TYPE_OUT,
"bidirectional": cugraph_k_core_degree_type_t.K_CORE_DEGREE_TYPE_INOUT}
cdef cugraph_type_erased_device_array_view_t* \
vertices_view_ptr = \
create_cugraph_type_erased_device_array_view_from_py_obj(
core_result["vertex"])
cdef cugraph_type_erased_device_array_view_t* \
core_numbers_view_ptr = \
create_cugraph_type_erased_device_array_view_from_py_obj(
core_result["values"])
# Create a core_number result
error_code = cugraph_core_result_create(c_resource_handle_ptr,
vertices_view_ptr,
core_numbers_view_ptr,
&core_result_ptr,
&error_ptr)
assert_success(error_code, error_ptr, "cugraph_core_result_create")
# compute k_core
error_code = cugraph_k_core(c_resource_handle_ptr,
c_graph_ptr,
k,
degree_type_map[degree_type],
core_result_ptr,
do_expensive_check,
&k_core_result_ptr,
&error_ptr)
    assert_success(error_code, error_ptr, "cugraph_k_core")
cdef cugraph_type_erased_device_array_view_t* src_vertices_ptr = \
cugraph_k_core_result_get_src_vertices(k_core_result_ptr)
cdef cugraph_type_erased_device_array_view_t* dst_vertices_ptr = \
cugraph_k_core_result_get_dst_vertices(k_core_result_ptr)
cdef cugraph_type_erased_device_array_view_t* weights_ptr = \
cugraph_k_core_result_get_weights(k_core_result_ptr)
cupy_src_vertices = copy_to_cupy_array(c_resource_handle_ptr, src_vertices_ptr)
cupy_dst_vertices = copy_to_cupy_array(c_resource_handle_ptr, dst_vertices_ptr)
if weights_ptr is not NULL:
cupy_weights = copy_to_cupy_array(c_resource_handle_ptr, weights_ptr)
else:
cupy_weights = None
cugraph_k_core_result_free(k_core_result_ptr)
cugraph_core_result_free(core_result_ptr)
return (cupy_src_vertices, cupy_dst_vertices, cupy_weights)
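`k_core` indexes `degree_type_map` directly, so an unrecognized `degree_type` string surfaces as a bare `KeyError`. A small CPU-only sketch of the same mapping with an explicit validation step is shown below; the integer values are stand-ins for the real `cugraph_k_core_degree_type_t` constants.

```python
# CPU-only sketch of validating degree_type before indexing the enum map;
# the integers are stand-ins for cugraph_k_core_degree_type_t constants.

K_CORE_DEGREE_TYPE_IN, K_CORE_DEGREE_TYPE_OUT, K_CORE_DEGREE_TYPE_INOUT = 0, 1, 2

degree_type_map = {
    "incoming": K_CORE_DEGREE_TYPE_IN,
    "outgoing": K_CORE_DEGREE_TYPE_OUT,
    "bidirectional": K_CORE_DEGREE_TYPE_INOUT,
}

def resolve_degree_type(degree_type):
    try:
        return degree_type_map[degree_type]
    except KeyError:
        raise ValueError(
            f"invalid degree_type {degree_type!r}; expected one of "
            f"{sorted(degree_type_map)}"
        ) from None
```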

# File: rapidsai_public_repos/cugraph/python/pylibcugraph/pylibcugraph/exceptions.py
# Copyright (c) 2023, NVIDIA CORPORATION.
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""
Exception classes for pylibcugraph.
"""
class FailedToConvergeError(Exception):
"""
Raised when an algorithm fails to converge within a predetermined set of
constraints which vary based on the algorithm, and may or may not be
user-configurable.
"""
pass
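A hedged usage sketch of how a caller might raise and catch this exception around an iterative routine. `iterate_until` is a hypothetical example function, and the exception class is re-declared so the sketch runs standalone.

```python
# Hypothetical usage sketch (not part of pylibcugraph): an iterative
# routine that raises FailedToConvergeError when it exhausts its budget.
# The exception class is re-declared here so the sketch runs standalone.

class FailedToConvergeError(Exception):
    """Raised when an algorithm fails to converge within its limits."""

def iterate_until(step_fn, tol, max_iter):
    """Apply step_fn until the step size drops below tol."""
    value = 1.0
    for _ in range(max_iter):
        step = step_fn(value)
        value -= step
        if abs(step) < tol:
            return value
    raise FailedToConvergeError(f"no convergence after {max_iter} iterations")
```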

# File: rapidsai_public_repos/cugraph/python/pylibcugraph/pylibcugraph/induced_subgraph.pyx
# Copyright (c) 2023, NVIDIA CORPORATION.
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# Have cython use python 3 syntax
# cython: language_level = 3
from pylibcugraph._cugraph_c.resource_handle cimport (
bool_t,
cugraph_resource_handle_t,
)
from pylibcugraph._cugraph_c.error cimport (
cugraph_error_code_t,
cugraph_error_t,
)
from pylibcugraph._cugraph_c.array cimport (
cugraph_type_erased_device_array_view_t,
)
from pylibcugraph._cugraph_c.graph cimport (
cugraph_graph_t,
)
from pylibcugraph._cugraph_c.graph_functions cimport (
cugraph_induced_subgraph_result_t,
cugraph_extract_induced_subgraph,
cugraph_induced_subgraph_get_sources,
cugraph_induced_subgraph_get_destinations,
cugraph_induced_subgraph_get_edge_weights,
cugraph_induced_subgraph_get_subgraph_offsets,
cugraph_induced_subgraph_result_free,
)
from pylibcugraph.resource_handle cimport (
ResourceHandle,
)
from pylibcugraph.graphs cimport (
_GPUGraph,
)
from pylibcugraph.utils cimport (
assert_success,
copy_to_cupy_array,
create_cugraph_type_erased_device_array_view_from_py_obj,
)
def induced_subgraph(ResourceHandle resource_handle,
_GPUGraph graph,
subgraph_vertices,
subgraph_offsets,
bool_t do_expensive_check):
"""
    Extract the list of edges that represents the subgraph
    containing only the specified vertex ids.
Parameters
----------
resource_handle : ResourceHandle
Handle to the underlying device resources needed for referencing data
and running algorithms.
graph : SGGraph or MGGraph
The input graph.
subgraph_vertices : cupy array
array of vertices to include in extracted subgraph.
subgraph_offsets : cupy array
array of subgraph offsets into subgraph_vertices.
do_expensive_check : bool_t
If True, performs more extensive tests on the inputs to ensure
        validity, at the expense of increased run time.
Returns
-------
A tuple of device arrays containing the sources, destinations, edge_weights
    and the subgraph_offsets (if there is more than one seed list)
Examples
--------
>>> import pylibcugraph, cupy, numpy
>>> srcs = cupy.asarray([0, 1, 1, 2, 2, 2, 3, 4], dtype=numpy.int32)
>>> dsts = cupy.asarray([1, 3, 4, 0, 1, 3, 5, 5], dtype=numpy.int32)
>>> weights = cupy.asarray(
... [0.1, 2.1, 1.1, 5.1, 3.1, 4.1, 7.2, 3.2], dtype=numpy.float32)
>>> subgraph_vertices = cupy.asarray([0, 1, 2, 3], dtype=numpy.int32)
>>> subgraph_offsets = cupy.asarray([0, 4], dtype=numpy.int32)
>>> resource_handle = pylibcugraph.ResourceHandle()
>>> graph_props = pylibcugraph.GraphProperties(
... is_symmetric=False, is_multigraph=False)
>>> G = pylibcugraph.SGGraph(
... resource_handle, graph_props, srcs, dsts, weights,
... store_transposed=False, renumber=False, do_expensive_check=False)
>>> (sources, destinations, edge_weights, subgraph_offsets) =
... pylibcugraph.induced_subgraph(
... resource_handle, G, subgraph_vertices, subgraph_offsets, False)
>>> sources
[0, 1, 2, 2, 2]
>>> destinations
[1, 3, 0, 1, 3]
>>> edge_weights
[0.1, 2.1, 5.1, 3.1, 4.1]
>>> subgraph_offsets
[0, 5]
"""
cdef cugraph_resource_handle_t* c_resource_handle_ptr = \
resource_handle.c_resource_handle_ptr
cdef cugraph_graph_t* c_graph_ptr = graph.c_graph_ptr
cdef cugraph_induced_subgraph_result_t* result_ptr
cdef cugraph_error_code_t error_code
cdef cugraph_error_t* error_ptr
cdef cugraph_type_erased_device_array_view_t* \
subgraph_offsets_view_ptr = \
create_cugraph_type_erased_device_array_view_from_py_obj(
subgraph_offsets)
cdef cugraph_type_erased_device_array_view_t* \
subgraph_vertices_view_ptr = \
create_cugraph_type_erased_device_array_view_from_py_obj(
subgraph_vertices)
error_code = cugraph_extract_induced_subgraph(c_resource_handle_ptr,
c_graph_ptr,
subgraph_offsets_view_ptr,
subgraph_vertices_view_ptr,
do_expensive_check,
&result_ptr,
&error_ptr)
assert_success(error_code, error_ptr, "cugraph_extract_induced_subgraph")
# Extract individual device array pointers from result and copy to cupy
# arrays for returning.
cdef cugraph_type_erased_device_array_view_t* sources_ptr = \
cugraph_induced_subgraph_get_sources(result_ptr)
cdef cugraph_type_erased_device_array_view_t* destinations_ptr = \
cugraph_induced_subgraph_get_destinations(result_ptr)
cdef cugraph_type_erased_device_array_view_t* edge_weights_ptr = \
cugraph_induced_subgraph_get_edge_weights(result_ptr)
cdef cugraph_type_erased_device_array_view_t* subgraph_offsets_ptr = \
cugraph_induced_subgraph_get_subgraph_offsets(result_ptr)
# FIXME: Get ownership of the result data instead of performing a copy
    # for performance improvement
cupy_sources = copy_to_cupy_array(
c_resource_handle_ptr, sources_ptr)
cupy_destinations = copy_to_cupy_array(
c_resource_handle_ptr, destinations_ptr)
cupy_edge_weights = copy_to_cupy_array(
c_resource_handle_ptr, edge_weights_ptr)
cupy_subgraph_offsets = copy_to_cupy_array(
c_resource_handle_ptr, subgraph_offsets_ptr)
# Free pointer
cugraph_induced_subgraph_result_free(result_ptr)
return (cupy_sources, cupy_destinations,
cupy_edge_weights, cupy_subgraph_offsets)
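The `subgraph_offsets` array follows a CSR-style convention: `offsets[i]` and `offsets[i + 1]` delimit seed list `i`'s slice of the flat vertex (or edge) array, as in the docstring example above. A pure-Python sketch of that slicing, with made-up data:

```python
# Pure-Python sketch of the CSR-style subgraph_offsets convention:
# offsets[i] and offsets[i + 1] delimit seed list i's slice of the
# flat array. Data below is made up for illustration.

def split_by_offsets(flat, offsets):
    return [flat[offsets[i]:offsets[i + 1]] for i in range(len(offsets) - 1)]

flat_vertices = [0, 1, 2, 3, 5, 7]
offsets = [0, 4, 6]   # two seed lists: indices [0, 4) and [4, 6)
groups = split_by_offsets(flat_vertices, offsets)
```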

# File: rapidsai_public_repos/cugraph/python/pylibcugraph/pylibcugraph/core_number.pyx
# Copyright (c) 2022, NVIDIA CORPORATION.
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# Have cython use python 3 syntax
# cython: language_level = 3
from libc.stdint cimport uintptr_t
from pylibcugraph._cugraph_c.resource_handle cimport (
bool_t,
data_type_id_t,
cugraph_resource_handle_t,
)
from pylibcugraph._cugraph_c.error cimport (
cugraph_error_code_t,
cugraph_error_t,
)
from pylibcugraph._cugraph_c.array cimport (
cugraph_type_erased_device_array_view_t,
)
from pylibcugraph._cugraph_c.graph cimport (
cugraph_graph_t,
)
from pylibcugraph._cugraph_c.core_algorithms cimport (
cugraph_core_result_t,
cugraph_core_number,
cugraph_k_core_degree_type_t,
cugraph_core_result_get_vertices,
cugraph_core_result_get_core_numbers,
cugraph_core_result_free,
)
from pylibcugraph.resource_handle cimport (
ResourceHandle,
)
from pylibcugraph.graphs cimport (
_GPUGraph,
)
from pylibcugraph.utils cimport (
assert_success,
copy_to_cupy_array,
get_c_type_from_numpy_type,
)
def core_number(ResourceHandle resource_handle,
_GPUGraph graph,
degree_type,
bool_t do_expensive_check):
"""
Computes core number.
Parameters
----------
resource_handle: ResourceHandle
Handle to the underlying device and host resource needed for
referencing data and running algorithms.
graph : SGGraph or MGGraph
The input graph, for either Single or Multi-GPU operations.
degree_type: str
This option determines if the core number computation should be based
on input, output, or both directed edges, with valid values being
"incoming", "outgoing", and "bidirectional" respectively.
This option is currently ignored in this release, and setting it will
result in a warning.
do_expensive_check: bool
If True, performs more extensive tests on the inputs to ensure
validity, at the expense of increased run time.
Returns
-------
A tuple of device arrays, where the first item in the tuple is a device
array containing the vertices and the second item in the tuple is a device
array containing the core numbers for the corresponding vertices.
Examples
--------
# FIXME: No example yet
"""
cdef cugraph_resource_handle_t* c_resource_handle_ptr = \
resource_handle.c_resource_handle_ptr
cdef cugraph_graph_t* c_graph_ptr = graph.c_graph_ptr
cdef cugraph_core_result_t* result_ptr
cdef cugraph_error_code_t error_code
cdef cugraph_error_t* error_ptr
degree_type_map = {
"incoming": cugraph_k_core_degree_type_t.K_CORE_DEGREE_TYPE_IN,
"outgoing": cugraph_k_core_degree_type_t.K_CORE_DEGREE_TYPE_OUT,
"bidirectional": cugraph_k_core_degree_type_t.K_CORE_DEGREE_TYPE_INOUT}
error_code = cugraph_core_number(c_resource_handle_ptr,
c_graph_ptr,
degree_type_map[degree_type],
do_expensive_check,
&result_ptr,
&error_ptr)
assert_success(error_code, error_ptr, "cugraph_core_number")
cdef cugraph_type_erased_device_array_view_t* vertices_ptr = \
cugraph_core_result_get_vertices(result_ptr)
cdef cugraph_type_erased_device_array_view_t* values_ptr = \
cugraph_core_result_get_core_numbers(result_ptr)
cupy_vertices = copy_to_cupy_array(c_resource_handle_ptr, vertices_ptr)
cupy_values = copy_to_cupy_array(c_resource_handle_ptr, values_ptr)
cugraph_core_result_free(result_ptr)
return (cupy_vertices, cupy_values)
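For reference, the quantity this wrapper returns can be computed on the CPU by repeatedly peeling the vertex of minimum remaining degree. The sketch below is an illustrative reference implementation of core numbers, not the libcugraph algorithm.

```python
# CPU reference sketch of core numbers on a small undirected graph:
# repeatedly peel the vertex of minimum remaining degree (the classic
# k-core decomposition, written for clarity rather than speed).

def core_numbers(adj):
    degree = {v: len(nbrs) for v, nbrs in adj.items()}
    core = {}
    remaining = set(adj)
    k = 0
    while remaining:
        v = min(remaining, key=degree.get)   # lowest remaining degree
        k = max(k, degree[v])                # core level never decreases
        core[v] = k
        remaining.remove(v)
        for u in adj[v]:
            if u in remaining:
                degree[u] -= 1
    return core

# Triangle 0-1-2 with a pendant vertex 3 hanging off vertex 2.
adj = {0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3}, 3: {2}}
cores = core_numbers(adj)
```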

# File: rapidsai_public_repos/cugraph/python/pylibcugraph/pylibcugraph/analyze_clustering_ratio_cut.pyx
# Copyright (c) 2023, NVIDIA CORPORATION.
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# Have cython use python 3 syntax
# cython: language_level = 3
from pylibcugraph._cugraph_c.resource_handle cimport (
cugraph_resource_handle_t,
)
from pylibcugraph._cugraph_c.error cimport (
cugraph_error_code_t,
cugraph_error_t,
)
from pylibcugraph._cugraph_c.array cimport (
cugraph_type_erased_device_array_view_t,
cugraph_type_erased_device_array_view_free,
)
from pylibcugraph._cugraph_c.graph cimport (
cugraph_graph_t,
)
from pylibcugraph._cugraph_c.community_algorithms cimport (
cugraph_analyze_clustering_ratio_cut,
)
from pylibcugraph.resource_handle cimport (
ResourceHandle,
)
from pylibcugraph.graphs cimport (
_GPUGraph,
)
from pylibcugraph.utils cimport (
assert_success,
create_cugraph_type_erased_device_array_view_from_py_obj
)
def analyze_clustering_ratio_cut(ResourceHandle resource_handle,
_GPUGraph graph,
size_t num_clusters,
vertex,
cluster,
):
"""
Compute ratio cut score of the specified clustering.
Parameters
----------
resource_handle : ResourceHandle
Handle to the underlying device resources needed for referencing data
and running algorithms.
graph : SGGraph
The input graph.
num_clusters : size_t
Specifies the number of clusters to find, must be greater than 1.
vertex : device array type
Vertex ids from the clustering to analyze.
cluster : device array type
Cluster ids from the clustering to analyze.
Returns
-------
The ratio cut score of the specified clustering.
Examples
--------
>>> import pylibcugraph, cupy, numpy
>>> srcs = cupy.asarray([0, 1, 2], dtype=numpy.int32)
>>> dsts = cupy.asarray([1, 2, 0], dtype=numpy.int32)
>>> weights = cupy.asarray([1.0, 1.0, 1.0], dtype=numpy.float32)
>>> resource_handle = pylibcugraph.ResourceHandle()
>>> graph_props = pylibcugraph.GraphProperties(
... is_symmetric=True, is_multigraph=False)
>>> G = pylibcugraph.SGGraph(
... resource_handle, graph_props, srcs, dsts, weights,
... store_transposed=True, renumber=False, do_expensive_check=False)
>>> (vertex, cluster) = pylibcugraph.spectral_modularity_maximization(
    ... resource_handle, G, num_clusters=5, num_eigen_vects=2, evs_tolerance=0.00001,
... evs_max_iter=100, kmean_tolerance=0.00001, kmean_max_iter=100)
# FIXME: Fix docstring result.
>>> vertices
############
>>> clusters
############
>>> score = pylibcugraph.analyze_clustering_ratio_cut(
... resource_handle, G, num_clusters=5, vertex=vertex, cluster=cluster)
>>> score
############
"""
cdef double score = 0
cdef cugraph_resource_handle_t* c_resource_handle_ptr = \
resource_handle.c_resource_handle_ptr
cdef cugraph_graph_t* c_graph_ptr = graph.c_graph_ptr
cdef cugraph_error_code_t error_code
cdef cugraph_error_t* error_ptr
cdef cugraph_type_erased_device_array_view_t* \
vertex_view_ptr = \
create_cugraph_type_erased_device_array_view_from_py_obj(
vertex)
cdef cugraph_type_erased_device_array_view_t* \
cluster_view_ptr = \
create_cugraph_type_erased_device_array_view_from_py_obj(
cluster)
error_code = cugraph_analyze_clustering_ratio_cut(c_resource_handle_ptr,
c_graph_ptr,
num_clusters,
vertex_view_ptr,
cluster_view_ptr,
&score,
&error_ptr)
assert_success(error_code, error_ptr, "cugraph_analyze_clustering_ratio_cut")
if vertex is not None:
cugraph_type_erased_device_array_view_free(vertex_view_ptr)
if cluster is not None:
cugraph_type_erased_device_array_view_free(cluster_view_ptr)
return score
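The score returned above is the ratio-cut metric: for each cluster, the edge weight crossing the cluster boundary divided by the cluster size, summed over clusters. A simplified CPU sketch under that definition follows; whether libcugraph weights the cut in exactly this way is an assumption of the sketch.

```python
# Simplified CPU sketch of the ratio-cut score: per cluster, the edge
# weight crossing the cluster boundary divided by the cluster size,
# summed over clusters. Whether libcugraph weights the cut in exactly
# this way is an assumption of this sketch.
from collections import defaultdict

def ratio_cut(edges, assignment):
    """edges: (u, v, w) triples; assignment: vertex -> cluster id."""
    cut = defaultdict(float)
    size = defaultdict(int)
    for v, c in assignment.items():
        size[c] += 1
    for u, v, w in edges:
        cu, cv = assignment[u], assignment[v]
        if cu != cv:
            cut[cu] += w
            cut[cv] += w
    return sum(cut[c] / size[c] for c in size)

edges = [(0, 1, 1.0), (1, 2, 1.0), (2, 3, 1.0)]
assignment = {0: 0, 1: 0, 2: 1, 3: 1}
score = ratio_cut(edges, assignment)
```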

# File: rapidsai_public_repos/cugraph/python/pylibcugraph/pylibcugraph/katz_centrality.pyx
# Copyright (c) 2022-2023, NVIDIA CORPORATION.
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# Have cython use python 3 syntax
# cython: language_level = 3
from libc.stdint cimport uintptr_t
from pylibcugraph._cugraph_c.resource_handle cimport (
bool_t,
cugraph_resource_handle_t,
)
from pylibcugraph._cugraph_c.error cimport (
cugraph_error_code_t,
cugraph_error_t,
)
from pylibcugraph._cugraph_c.array cimport (
cugraph_type_erased_device_array_view_t,
cugraph_type_erased_device_array_view_create,
cugraph_type_erased_device_array_view_free,
)
from pylibcugraph._cugraph_c.graph cimport (
cugraph_graph_t,
)
from pylibcugraph._cugraph_c.centrality_algorithms cimport (
cugraph_centrality_result_t,
cugraph_katz_centrality,
cugraph_centrality_result_get_vertices,
cugraph_centrality_result_get_values,
cugraph_centrality_result_free,
)
from pylibcugraph.resource_handle cimport (
ResourceHandle,
)
from pylibcugraph.graphs cimport (
_GPUGraph,
)
from pylibcugraph.utils cimport (
assert_success,
copy_to_cupy_array,
get_c_type_from_numpy_type,
)
def katz_centrality(ResourceHandle resource_handle,
_GPUGraph graph,
betas,
double alpha,
double beta,
double epsilon,
size_t max_iterations,
bool_t do_expensive_check):
"""
Compute the Katz centrality for the nodes of the graph. This implementation
is based on a relaxed version of Katz defined by Foster with a reduced
computational complexity of O(n+m)
Parameters
----------
resource_handle : ResourceHandle
Handle to the underlying device resources needed for referencing data
and running algorithms.
graph : SGGraph or MGGraph
The input graph, for either Single or Multi-GPU operations.
betas : device array type
Device array containing the values to be added to each vertex's new
Katz Centrality score in every iteration. If set to None then beta is
used for all vertices.
alpha : double
The attenuation factor, should be smaller than the inverse of the
maximum eigenvalue of the graph
beta : double
Constant value to be added to each vertex's new Katz Centrality score
in every iteration. Relevant only when betas is None
epsilon : double
Error tolerance to check convergence
max_iterations: size_t
Maximum number of Katz Centrality iterations
do_expensive_check : bool_t
A flag to run expensive checks for input arguments if True.
    Returns
    -------
    A tuple of device arrays, where the first item in the tuple is a device
    array containing the vertices and the second item in the tuple is a
    device array containing the Katz centrality scores for the corresponding
    vertices.
    Examples
    --------
    # FIXME: No example yet
    """
cdef cugraph_resource_handle_t* c_resource_handle_ptr = \
resource_handle.c_resource_handle_ptr
cdef cugraph_graph_t* c_graph_ptr = graph.c_graph_ptr
cdef cugraph_centrality_result_t* result_ptr
cdef cugraph_error_code_t error_code
cdef cugraph_error_t* error_ptr
cdef uintptr_t cai_betas_ptr
cdef cugraph_type_erased_device_array_view_t* betas_ptr
if betas is not None:
cai_betas_ptr = betas.__cuda_array_interface__["data"][0]
betas_ptr = \
cugraph_type_erased_device_array_view_create(
<void*>cai_betas_ptr,
len(betas),
get_c_type_from_numpy_type(betas.dtype))
else:
betas_ptr = NULL
error_code = cugraph_katz_centrality(c_resource_handle_ptr,
c_graph_ptr,
betas_ptr,
alpha,
beta,
epsilon,
max_iterations,
do_expensive_check,
&result_ptr,
&error_ptr)
assert_success(error_code, error_ptr, "cugraph_katz_centrality")
# Extract individual device array pointers from result and copy to cupy
# arrays for returning.
cdef cugraph_type_erased_device_array_view_t* vertices_ptr = \
cugraph_centrality_result_get_vertices(result_ptr)
cdef cugraph_type_erased_device_array_view_t* values_ptr = \
cugraph_centrality_result_get_values(result_ptr)
cupy_vertices = copy_to_cupy_array(c_resource_handle_ptr, vertices_ptr)
cupy_values = copy_to_cupy_array(c_resource_handle_ptr, values_ptr)
cugraph_centrality_result_free(result_ptr)
cugraph_type_erased_device_array_view_free(betas_ptr)
return (cupy_vertices, cupy_values)
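The routine wraps the fixed-point iteration x ← alpha * A^T x + beta, run until successive iterates differ by less than `epsilon`. A dense pure-Python sketch of that iteration is below, with a scalar `beta` standing in for the optional per-vertex `betas` array.

```python
# Dense pure-Python sketch of the Katz fixed-point iteration the wrapper
# exposes: x <- alpha * A^T x + beta, repeated until successive iterates
# differ by less than epsilon. A scalar beta stands in for the optional
# per-vertex betas array.

def katz(adj, alpha, beta, epsilon, max_iterations):
    n = len(adj)
    x = [0.0] * n
    for _ in range(max_iterations):
        new = [
            beta + alpha * sum(adj[u][v] * x[u] for u in range(n))
            for v in range(n)
        ]
        if max(abs(new[v] - x[v]) for v in range(n)) < epsilon:
            return new
        x = new
    raise RuntimeError(f"no convergence after {max_iterations} iterations")

# Two vertices joined by a single undirected edge; the fixed point is
# beta / (1 - alpha) for both vertices by symmetry.
adj = [[0.0, 1.0], [1.0, 0.0]]
scores = katz(adj, alpha=0.1, beta=1.0, epsilon=1e-9, max_iterations=1000)
```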