Schema (field: type):
instance_id: string
repo_id: string
repo_url: string
base_commit: string
language: string
setup_commands: list
test_command: string
test_timeout: int64
refactor_type: string
description: string
files: list
task_type: string
categories: list
instance_id: bobbyyyan__scorch-refactor_dedupe_ops
repo_id: bobbyyyan__scorch
repo_url: https://github.com/bobbyyyan/scorch.git
base_commit: 2071e9b
language: python
setup_commands: []
test_command: /testbed/run_tests.sh
test_timeout: 3000
description: There is a lot of duplicated code in src/scorch/ops.py. Please fix that (e.g. make things more modular and maintainable). Note: Do not run the test suite to verify your implementation. You may run build commands (e.g., `pip install -e .`) to check that the code compiles, but do not execute any pytest commands. The ful...
files: []
task_type: refactor
categories: [ "API/Linear Algebra/Matmul variants", "API/Element-wise/Binary arithmetic" ]

instance_id: bobbyyyan__scorch-refactor_cost_estimator
repo_id: bobbyyyan__scorch
repo_url: https://github.com/bobbyyyan/scorch.git
base_commit: 2071e9b
language: python
setup_commands: []
test_command: /testbed/run_tests.sh
test_timeout: 3000
refactor_type: extract
description: Extract the cost-model logic from `src/scorch/compiler/scheduler.py` into a separate `CostEstimator` class or module. The three private static methods `Scheduler._compute_comp_cost`, `Scheduler._compute_workspace_cost`, and `Scheduler._compute_transposition_cost` each independently recompute the same derived quantities...
files: []
task_type: refactor
categories: [ "Scheduler/IR analyses & scalar opts/Dataflow analyses" ]

instance_id: bobbyyyan__scorch-refactor_cin_collector
repo_id: bobbyyyan__scorch
repo_url: https://github.com/bobbyyyan/scorch.git
base_commit: b061b53
language: python
setup_commands: []
test_command: /testbed/run_tests.sh
test_timeout: 3000
refactor_type: consolidate
description: Consolidate the multiple nearly identical CINVisitor-based classes defined inline within methods of cin.py. The classes TensorAccessGetter, ResultTensorAccessCollector, RHSAccessCollector, WorkspaceGetter, IndexVarCollector, and LoopOrderGetter all follow the same pattern of visiting CIN nodes and collecting specific e...
files: []
task_type: refactor
categories: [ "IR/CIN nodes" ]

instance_id: bobbyyyan__scorch-refactor_format_converter
repo_id: bobbyyyan__scorch
repo_url: https://github.com/bobbyyyan/scorch.git
base_commit: 2071e9b
language: python
setup_commands: []
test_command: /testbed/run_tests.sh
test_timeout: 3000
refactor_type: extract
description: Extract the format conversion methods to_dense and to_sparse from stensor.py into a separate FormatConverter class or module. Both methods contain extensive duplicate code for: generating index variables, creating TensorVar definitions, building assignment expressions using exec/eval, lowering CIN to LLIR, generating C...
files: []
task_type: refactor
categories: [ "Format/Semantic extensions" ]

instance_id: bobbyyyan__scorch-refactor_lattice_loop_generator
repo_id: bobbyyyan__scorch
repo_url: https://github.com/bobbyyyan/scorch.git
base_commit: 2071e9b
language: python
setup_commands: []
test_command: /testbed/run_tests.sh
test_timeout: 3000
refactor_type: extract
description: Extract the gen_single_lattice_loop inner function from the get_lattice_loops method in iter_lattice.py into a separate LatticeLoopGenerator class. This function handles multiple cases based on iterator count and result tensor access patterns. The class should encapsulate loop generation state and provide cleaner metho...
files: []
task_type: refactor
categories: [ "Scheduler/Loop transformations/Reorder & restructure" ]
instance_id: bobbyyyan__scorch-feature_block_sparse_format
repo_id: bobbyyyan__scorch
repo_url: https://github.com/bobbyyyan/scorch.git
base_commit: 2105686711292cd1d1eb2438035b555fa644bf2a
language: python
setup_commands: []
test_command: /testbed/run_tests.sh
test_timeout: 3000
description: Add block sparse format support to scorch's compiler, enabling the full CIN->LLIR->C++ pipeline to generate code for block-structured sparse tensors. In scorch's format system, CSR is `[DENSE, COMPRESSED]` and COO is `[COORDINATE, COORDINATE]` - the level types describe per-dimension storage. Block sparse formats like ...
files: []
task_type: feature
categories: [ "Format/Block & ELL family" ]

instance_id: bobbyyyan__scorch-feature_elementwise_mul
repo_id: bobbyyyan__scorch
repo_url: https://github.com/bobbyyyan/scorch.git
base_commit: 2105686711292cd1d1eb2438035b555fa644bf2a
language: python
setup_commands: []
test_command: /testbed/run_tests.sh
test_timeout: 3000
description: Implement element-wise multiplication (`__mul__`) on `STensor` through the full CIN compilation pipeline, following the same pattern as the existing `__add__` implementation in `stensor.py`. The key difference from addition is **intersection semantics**: when multiplying two sparse tensors, the result is non-zero only ...
files: []
task_type: feature
categories: [ "API/Element-wise/Binary arithmetic" ]

instance_id: bobbyyyan__scorch-feature_transpose
repo_id: bobbyyyan__scorch
repo_url: https://github.com/bobbyyyan/scorch.git
base_commit: 2105686711292cd1d1eb2438035b555fa644bf2a
language: python
setup_commands: []
test_command: /testbed/run_tests.sh
test_timeout: 3000
description: Add a `transpose()` method and `.T` property to `STensor` that physically transposes a 2D sparse tensor, reorganizing its storage layout. This is not a lazy/view transpose - it must produce a new `STensor` with properly restructured index arrays. For CSR format (level types `[DENSE, COMPRESSED]`): transposing produces ...
files: []
task_type: feature
categories: [ "API/Shape & Layout/Transpose & permute" ]
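The physical CSR transpose described in this task can be sketched with plain NumPy arrays (hypothetical names `crow`/`col`/`vals`; scorch's actual `STensor` storage API may differ). The classic counting-sort pass builds the transposed row pointers from per-column counts, then scatters each entry into its slot:

```python
import numpy as np

def csr_transpose(crow, col, vals, n_rows, n_cols):
    """Physically transpose a CSR matrix, returning CSR arrays of the
    transpose (equivalently, the CSC arrays of the original matrix)."""
    nnz = len(vals)
    # New row pointers: count entries per column of the original matrix.
    t_crow = np.zeros(n_cols + 1, dtype=np.int64)
    np.add.at(t_crow, col + 1, 1)
    np.cumsum(t_crow, out=t_crow)
    t_col = np.empty(nnz, dtype=np.int64)
    t_vals = np.empty(nnz, dtype=vals.dtype)
    next_slot = t_crow[:-1].copy()          # next free slot per output row
    for r in range(n_rows):                 # scatter each entry into its column's run
        for p in range(crow[r], crow[r + 1]):
            dst = next_slot[col[p]]
            t_col[dst] = r
            t_vals[dst] = vals[p]
            next_slot[col[p]] += 1
    return t_crow, t_col, t_vals
```

The scatter pass runs in O(nnz + n_cols) and leaves each output row's column indices already sorted.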
instance_id: bobbyyyan__scorch-feature_sum_reduction
repo_id: bobbyyyan__scorch
repo_url: https://github.com/bobbyyyan/scorch.git
base_commit: 2105686711292cd1d1eb2438035b555fa644bf2a
language: python
setup_commands: []
test_command: /testbed/run_tests.sh
test_timeout: 3000
description: Implement `sum(axis=None)` on `STensor` and as a standalone function in `ops.py` that reduces a sparse tensor along specified dimensions. `axis=None` reduces all dimensions to a scalar. `axis=0` sums along rows (producing a row vector). `axis=1` sums along columns (producing a column vector). For a 2D CSR matrix with `...
files: []
task_type: feature
categories: [ "API/Reductions & Scans/Aggregate" ]

instance_id: bobbyyyan__scorch-feature_sddmm
repo_id: bobbyyyan__scorch
repo_url: https://github.com/bobbyyyan/scorch.git
base_commit: 2105686711292cd1d1eb2438035b555fa644bf2a
language: python
setup_commands: []
test_command: /testbed/run_tests.sh
test_timeout: 3000
description: Implement `sddmm(S, A, B)` as a first-class operation in `ops.py`. SDDMM computes the **sampled dense-dense matrix multiply**: `C = S * (A @ B)` where S is a sparse matrix (the "sample" mask), A and B are dense matrices, `*` is element-wise multiply, and @ is matrix multiply. This is a critical primitive in sparse ML -...
files: []
task_type: feature
categories: [ "API/Linear Algebra/Matmul variants" ]
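As a plain-NumPy reference for the SDDMM semantics above (not scorch's generated kernel), assuming S is given as COO triples `(s_rows, s_cols, s_vals)`: the point of the operation is that only one dot product per stored entry of S is computed, so the dense product `A @ B` is never materialized.

```python
import numpy as np

def sddmm(s_rows, s_cols, s_vals, A, B):
    """Sampled dense-dense matmul: C = S * (A @ B), evaluated only at the
    stored positions (s_rows[k], s_cols[k]) of the sparse sample matrix S."""
    out = np.empty(len(s_vals), dtype=np.result_type(s_vals, A, B))
    for k, (i, j) in enumerate(zip(s_rows, s_cols)):
        # One dot product per nonzero of S -- never form A @ B densely.
        out[k] = s_vals[k] * np.dot(A[i, :], B[:, j])
    return out
```

The result shares S's sparsity pattern, so it can be wrapped back into the same index arrays as S.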
instance_id: bobbyyyan__scorch-feature_autograd
repo_id: bobbyyyan__scorch
repo_url: https://github.com/bobbyyyan/scorch.git
base_commit: 2105686711292cd1d1eb2438035b555fa644bf2a
language: python
setup_commands: []
test_command: /testbed/run_tests.sh
test_timeout: 3000
description: Add automatic differentiation support for scorch's core operations by implementing custom `torch.autograd.Function` subclasses. Currently, `STensor` extends `torch.nn.Module` and has a `requires_grad` flag, but no gradient computation is implemented. Implement: (1) `SparseAddFunction` - for `C = A + B`, the backward pa...
files: []
task_type: feature
categories: [ "API/ML Primitives/Autograd" ]

instance_id: bobbyyyan__scorch-feature_unary_ops
repo_id: bobbyyyan__scorch
repo_url: https://github.com/bobbyyyan/scorch.git
base_commit: 2105686711292cd1d1eb2438035b555fa644bf2a
language: python
setup_commands: []
test_command: /testbed/run_tests.sh
test_timeout: 3000
description: Add compiler-level support for unary operations on sparse tensors. Currently, CIN only supports `BinaryOp` (add, mul, sub, div). Extend the IR and compilation pipeline to support unary operations: (1) Add a `UnaryOp` CIN node with an `Operation` enum including `ABS`, `NEG`, `RELU`, `SQRT`, `EXP`, `LOG`, `TANH`, `SIGMOI...
files: []
task_type: feature
categories: [ "API/Element-wise/Unary math" ]

instance_id: bobbyyyan__scorch-feature_conv1d
repo_id: bobbyyyan__scorch
repo_url: https://github.com/bobbyyyan/scorch.git
base_commit: 2105686711292cd1d1eb2438035b555fa644bf2a
language: python
setup_commands: []
test_command: /testbed/run_tests.sh
test_timeout: 3000
description: Implement `conv1d(input, kernel, padding=0)` in `ops.py` for sparse 1D signals. The mathematical definition is `y[i] = sum_k x[i + k] * w[k]` where `x` is a sparse 1D input signal and `w` is a (typically dense) convolution kernel. This requires **computed index** support in the CIN pipeline. Scorch already has partial ...
files: []
task_type: feature
categories: [ "API/Convolution & Pooling" ]
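The definition `y[i] = sum_k x[i + k] * w[k]` can be sketched in pure Python for a sparse input given as (index, value) pairs, a hypothetical interface chosen here for illustration. Iterating over stored entries and scattering their contributions (entry `x[j]` contributes `v * w[k]` to `y[j + padding - k]`) gives work proportional to nnz times the kernel length:

```python
def sparse_conv1d(x_idx, x_val, n, w, padding=0):
    """Correlation-style conv1d, y[i] = sum_k x[i + k] * w[k], over a sparse
    1D signal of logical length n given as (index, value) pairs.
    Output length follows the usual n + 2*padding - len(w) + 1."""
    out_len = n + 2 * padding - len(w) + 1
    y = [0.0] * out_len
    for i, v in zip(x_idx, x_val):
        ip = i + padding                      # index within the padded signal
        # x[ip] contributes v * w[k] to y[ip - k] for each kernel tap k.
        for k, wk in enumerate(w):
            o = ip - k
            if 0 <= o < out_len:
                y[o] += v * wk
    return y
```

The scatter formulation visits only stored entries, which is the payoff of sparsity here.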
instance_id: bobbyyyan__scorch-feature_ell_format
repo_id: bobbyyyan__scorch
repo_url: https://github.com/bobbyyyan/scorch.git
base_commit: 2105686711292cd1d1eb2438035b555fa644bf2a
language: python
setup_commands: []
test_command: /testbed/run_tests.sh
test_timeout: 3000
description: Add ELL (ELLPACK) sparse format support to scorch's format system. ELL format stores a fixed number of entries per row (`max_nnz_per_row`), using two 2D arrays: `indices[num_rows][max_nnz]` for column indices and `values[num_rows][max_nnz]` for values. Rows with fewer non-zeros are padded with a sentinel value (e.g., -...
files: []
task_type: feature
categories: [ "Format/Block & ELL family" ]

instance_id: bobbyyyan__scorch-feature_getitem_2d
repo_id: bobbyyyan__scorch
repo_url: https://github.com/bobbyyyan/scorch.git
base_commit: 2105686711292cd1d1eb2438035b555fa644bf2a
language: python
setup_commands: []
test_command: /testbed/run_tests.sh
test_timeout: 3000
description: Add `__getitem__` support to `STensor` for extracting sub-tensors. Implement three indexing modes: (1) **Integer indexing**: `A[i]` extracts row i from a 2D tensor, returning a 1D STensor. For CSR, this is O(1) - just extract the slice `values[crow[i]:crow[i+1]]` and `col_indices[crow[i]:crow[i+1]]`. For COO, filter en...
files: []
task_type: feature
categories: [ "API/Indexing & Mutation/Read indexing" ]
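The O(1) CSR row extraction mentioned in this task is just a pointer-delimited slice. A minimal sketch over plain lists (hypothetical `crow`/`col`/`vals` names, not scorch's real storage attributes):

```python
def csr_row(crow, col, vals, i):
    """O(1) row extraction from CSR: row i's entries occupy the half-open
    slice [crow[i], crow[i+1]) of the col/vals arrays."""
    lo, hi = crow[i], crow[i + 1]
    return col[lo:hi], vals[lo:hi]
```

COO, by contrast, has no such row delimiters, which is why the task describes a filtering pass for that format.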
instance_id: bobbyyyan__scorch-feature_batched_matmul
repo_id: bobbyyyan__scorch
repo_url: https://github.com/bobbyyyan/scorch.git
base_commit: 2105686711292cd1d1eb2438035b555fa644bf2a
language: python
setup_commands: []
test_command: /testbed/run_tests.sh
test_timeout: 3000
description: Add support for batched sparse matrix multiplication. A batched sparse tensor is a 3D STensor with shape `(batch_size, M, N)` where the first dimension is dense (the batch dimension) and the remaining dimensions can be sparse. Implement: (1) Extend `STensor` to support a batch dimension - conceptually this is a list of...
files: []
task_type: feature
categories: [ "API/Linear Algebra/Matmul variants" ]

instance_id: bobbyyyan__scorch-feature_sparse_softmax
repo_id: bobbyyyan__scorch
repo_url: https://github.com/bobbyyyan/scorch.git
base_commit: 2105686711292cd1d1eb2438035b555fa644bf2a
language: python
setup_commands: []
test_command: /testbed/run_tests.sh
test_timeout: 3000
description: Implement `sparse_softmax(A, dim=-1)` that computes softmax over rows (or columns) of a sparse matrix, operating only on the non-zero entries. This is critical for sparse attention in transformers and GNNs. The operation is: for each row i, compute `softmax(A[i,:])` over the non-zero entries only, setting zero entries ...
files: []
task_type: feature
categories: [ "API/ML Primitives/Activations & losses" ]
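The per-row softmax over stored entries described here can be sketched for CSR in a few lines of NumPy (hypothetical `crow`/`vals` arrays; the real method would also carry the index structure through unchanged). Each row's slice is normalized independently, with the usual max-subtraction for numerical stability:

```python
import numpy as np

def sparse_row_softmax(crow, vals):
    """Row-wise softmax over the stored entries of a CSR matrix only;
    implicit zeros stay implicit and receive no probability mass."""
    out = np.empty(len(vals), dtype=np.float64)
    for i in range(len(crow) - 1):
        lo, hi = crow[i], crow[i + 1]
        if lo == hi:
            continue                      # empty row: nothing stored
        row = np.asarray(vals[lo:hi], dtype=np.float64)
        e = np.exp(row - row.max())       # max-subtraction for stability
        out[lo:hi] = e / e.sum()
    return out
```

Because the sparsity pattern is unchanged, the output reuses the input's `crow` and column indices verbatim.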
instance_id: bobbyyyan__scorch-feature_dia_format
repo_id: bobbyyyan__scorch
repo_url: https://github.com/bobbyyyan/scorch.git
base_commit: 2105686711292cd1d1eb2438035b555fa644bf2a
language: python
setup_commands: []
test_command: /testbed/run_tests.sh
test_timeout: 3000
description: Add DIA (diagonal) sparse format support. DIA format is designed for banded/diagonal matrices - it stores the matrix as a set of diagonals, each represented by a dense array. Storage consists of: `offsets` (1D array of diagonal offsets, where 0 is the main diagonal, positive is above, negative is below) and `data` (2D ...
files: []
task_type: feature
categories: [ "Format/Block & ELL family" ]

instance_id: bobbyyyan__scorch-feature_elementwise_sub
repo_id: bobbyyyan__scorch
repo_url: https://github.com/bobbyyyan/scorch.git
base_commit: 33532a3
language: python
setup_commands: []
test_command: /testbed/run_tests.sh
test_timeout: 3000
description: Implement element-wise subtraction (`__sub__`) on `STensor` through the full CIN compilation pipeline, following the same pattern as the existing `__add__` implementation. The key difference from addition is asymmetry: subtraction uses union semantics (like addition, the result is non-zero where *either* operand is non...
files: []
task_type: feature
categories: [ "API/Element-wise/Binary arithmetic" ]

instance_id: bobbyyyan__scorch-feature_fill_value
repo_id: bobbyyyan__scorch
repo_url: https://github.com/bobbyyyan/scorch.git
base_commit: 33532a3
language: python
setup_commands: []
test_command: /testbed/run_tests.sh
test_timeout: 3000
description: Extend the format system to support configurable non-zero fill values (currently hardcoded to 0.0). Changes required: (1) `format.py` - add a configurable `_fill_value` attribute to the `Format` class with a default of 0.0. (2) `stensor.py` - propagate the fill value through format conversions (`to_dense`, `to_sparse`,...
files: []
task_type: feature
categories: [ "Format/Semantic extensions" ]

instance_id: bobbyyyan__scorch-feature_outer_product
repo_id: bobbyyyan__scorch
repo_url: https://github.com/bobbyyyan/scorch.git
base_commit: 33532a3
language: python
setup_commands: []
test_command: /testbed/run_tests.sh
test_timeout: 3000
description: Implement a sparse outer product operation `outer(a, b)` in `ops.py` that takes two 1D sparse vectors and produces a 2D sparse matrix. The CIN expression is `C[i,j] = A[i] * B[j]` (equivalent to einsum `"i,j->ij"`). This is a dimension-increasing operation - verify that the iteration lattice correctly handles it with n...
files: []
task_type: feature
categories: [ "API/Linear Algebra/Tensor products" ]

instance_id: bobbyyyan__scorch-feature_dtype_int_float64
repo_id: bobbyyyan__scorch
repo_url: https://github.com/bobbyyyan/scorch.git
base_commit: 33532a3
language: python
setup_commands: []
test_command: /testbed/run_tests.sh
test_timeout: 3000
description: Extend the compilation pipeline to support multiple data types beyond float32, specifically int32, int64, and float64. The dtype maps in `utils.py` already define these types but they aren't exercised through the pipeline. Changes required: (1) `stensor.py` - preserve dtype through all operations and format conversions...
files: []
task_type: feature
categories: [ "API/Type System/Value dtypes" ]

instance_id: bobbyyyan__scorch-feature_kron
repo_id: bobbyyyan__scorch
repo_url: https://github.com/bobbyyyan/scorch.git
base_commit: 33532a3
language: python
setup_commands: []
test_command: /testbed/run_tests.sh
test_timeout: 3000
description: Implement a sparse Kronecker product operation `kron(A, B)` in `ops.py`. The Kronecker product produces output `C[i*P+p, j*Q+q] = A[i,j] * B[p,q]` where A is MxN, B is PxQ, and C is (M*P)x(N*Q). This requires new CIN IR nodes for computed indices with multiplication - currently only `IndexVarAdd` exists for additive in...
files: []
task_type: feature
categories: [ "API/Linear Algebra/Tensor products" ]
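The Kronecker index formula `C[i*P+p, j*Q+q] = A[i,j] * B[p,q]` is easy to check against a toy COO representation, here a `{(i, j): value}` dict chosen purely for illustration (scorch's actual storage differs):

```python
def coo_kron(a, b, bshape):
    """Kronecker product of two COO matrices given as {(i, j): value} dicts,
    with B of shape (P, Q): C[i*P + p, j*Q + q] = A[i, j] * B[p, q]."""
    P, Q = bshape
    return {(i * P + p, j * Q + q): av * bv
            for (i, j), av in a.items()
            for (p, q), bv in b.items()}
```

Every output entry is a product of one stored entry from each operand, so the output nnz is exactly nnz(A) * nnz(B) and the index computation is the multiplicative affine form the task says the IR currently lacks.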
instance_id: bobbyyyan__scorch-feature_trace_diagonal
repo_id: bobbyyyan__scorch
repo_url: https://github.com/bobbyyyan/scorch.git
base_commit: 33532a3
language: python
setup_commands: []
test_command: /testbed/run_tests.sh
test_timeout: 3000
description: Implement `trace(A)` and `diagonal(A, offset=0)` operations for sparse tensors. The CIN expression for diagonal extraction is `d[i] = A[i,i]` - the same index variable appears in both positions. This requires 'locate' capability in the iteration lattice: when iterating over the first dimension, the second dimension mus...
files: []
task_type: feature
categories: [ "API/Linear Algebra/Tensor products" ]

instance_id: bobbyyyan__scorch-feature_norm
repo_id: bobbyyyan__scorch
repo_url: https://github.com/bobbyyyan/scorch.git
base_commit: 33532a3
language: python
setup_commands: []
test_command: /testbed/run_tests.sh
test_timeout: 3000
description: Implement `norm(A, dim, ord=2)` for computing per-row or per-column norms of sparse tensors. This requires: (1) UnaryOp lowering - `abs` and `sqrt` are defined in the CIN IR but not lowered through the compilation pipeline. Implement their lowering in `cin_lowerer.py` and `codegen.py` to emit the corresponding C++ math...
files: []
task_type: feature
categories: [ "API/Reductions & Scans/Aggregate" ]

instance_id: bobbyyyan__scorch-feature_kernel_fusion
repo_id: bobbyyyan__scorch
repo_url: https://github.com/bobbyyyan/scorch.git
base_commit: 33532a3
language: python
setup_commands: []
test_command: /testbed/run_tests.sh
test_timeout: 3000
description: Implement kernel fusion for chained sparse operations. When multiple operations are composed (e.g., `C = (A @ B) + D`), currently each operation compiles and executes a separate C++ kernel with intermediate materialization. Implement a fusion system that compiles chained operations into a single C++ kernel. Design: (1)...
files: []
task_type: feature
categories: [ "Scheduler/Loop transformations/Fusion" ]

instance_id: bobbyyyan__scorch-feature_openmp_parallel
repo_id: bobbyyyan__scorch
repo_url: https://github.com/bobbyyyan/scorch.git
base_commit: 33532a3
language: python
setup_commands: []
test_command: /testbed/run_tests.sh
test_timeout: 3000
description: Extend the code generation pipeline to emit OpenMP parallel directives for auto-generated sparse kernels. Changes required: (1) Add a `ParallelFor` node to the LLIR (Low-Level IR) to represent parallelizable loops. (2) In `cin_lowerer.py`, analyze loop nests for parallelizability - the outermost loop over a dense dimen...
files: []
task_type: feature
categories: [ "Codegen/Parallelism" ]

instance_id: bobbyyyan__scorch-feature_cat_stack
repo_id: bobbyyyan__scorch
repo_url: https://github.com/bobbyyyan/scorch.git
base_commit: 33532a3
language: python
setup_commands: []
test_command: /testbed/run_tests.sh
test_timeout: 3000
description: Implement sparse tensor concatenation `cat(tensors, dim)` and stacking `stack(tensors, dim)` in `ops.py`. These are structural operations that operate on storage arrays directly (not through the CIN pipeline). For CSR concatenation along dim=0: concatenate `crow_indices` arrays (with appropriate offset adjustments), co...
files: []
task_type: feature
categories: [ "API/Shape & Layout/Concat & pad" ]
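The "appropriate offset adjustments" for CSR concatenation along dim=0 amount to shifting each block's row pointers by the nnz accumulated so far. A minimal sketch over plain lists (a hypothetical `(crow, col, vals)` triple per block):

```python
def csr_cat_rows(blocks):
    """Concatenate CSR matrices along dim=0. col/vals concatenate directly;
    each block's crow (minus its leading 0) is shifted by the running nnz."""
    crow, col, vals = [0], [], []
    for b_crow, b_col, b_vals in blocks:
        offset = crow[-1]                 # nnz accumulated so far
        crow.extend(p + offset for p in b_crow[1:])
        col.extend(b_col)
        vals.extend(b_vals)
    return crow, col, vals
```

No values move relative to each other, which is why the task can classify this as a structural operation outside the CIN pipeline.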
instance_id: bobbyyyan__scorch-feature_tensor_factories
repo_id: bobbyyyan__scorch
repo_url: https://github.com/bobbyyyan/scorch.git
base_commit: 33532a3
language: python
setup_commands: []
test_command: /testbed/run_tests.sh
test_timeout: 3000
description: Implement sparse tensor creation utility factory class methods on `STensor`: (1) `STensor.eye(n, format)` - create an nxn identity matrix in the specified sparse format. (2) `STensor.diag(values, format)` - create a diagonal matrix from a 1D tensor of values. (3) `STensor.rand_sparse(shape, density, format)` - create a...
files: []
task_type: feature
categories: [ "API/Constructors & I/O/Factories" ]

instance_id: bobbyyyan__scorch-feature_scipy_interop
repo_id: bobbyyyan__scorch
repo_url: https://github.com/bobbyyyan/scorch.git
base_commit: 33532a3
language: python
setup_commands: []
test_command: /testbed/run_tests.sh
test_timeout: 3000
description: Implement SciPy sparse matrix interoperability for `STensor`. Add two methods: (1) `STensor.from_scipy(scipy_matrix)` - construct an STensor from a SciPy sparse matrix, auto-detecting the format (CSR, CSC, or COO). Handle format mapping: scipy's `indptr`/`indices`/`data` arrays map to scorch's `crow_indices`/`col_indic...
files: []
task_type: feature
categories: [ "API/Constructors & I/O/External I/O" ]

instance_id: bobbyyyan__scorch-feature_coalesce_coo
repo_id: bobbyyyan__scorch
repo_url: https://github.com/bobbyyyan/scorch.git
base_commit: 33532a3
language: python
setup_commands: []
test_command: /testbed/run_tests.sh
test_timeout: 3000
description: Implement a `coalesce()` method for COO-format sparse tensors that sorts coordinates lexicographically and sums duplicate entries. Add an `is_coalesced` property that returns whether the tensor's coordinates are already sorted and free of duplicates. Support N-dimensional COO tensors (not just 2D). Add an optional `rem...
files: []
task_type: feature
categories: [ "API/Indexing & Mutation/Canonicalization" ]
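The coalesce semantics (lexicographic sort plus duplicate summation) can be sketched for N-dimensional COO with coordinates as index tuples, a hypothetical representation chosen for brevity:

```python
def coalesce_coo(coords, vals):
    """Sort N-D COO coordinates lexicographically and sum duplicates.
    coords is a list of index tuples, vals the matching values."""
    acc = {}
    for c, v in zip(coords, vals):
        acc[c] = acc.get(c, 0) + v        # sum duplicate entries
    items = sorted(acc.items())           # lexicographic coordinate order
    return [c for c, _ in items], [v for _, v in items]
```

After this pass, `is_coalesced` is trivially true, since the output is sorted and duplicate-free by construction.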
instance_id: bobbyyyan__scorch-feature_setitem
repo_id: bobbyyyan__scorch
repo_url: https://github.com/bobbyyyan/scorch.git
base_commit: 33532a3
language: python
setup_commands: []
test_command: /testbed/run_tests.sh
test_timeout: 3000
description: Implement `__setitem__` on `STensor` for mutating sparse tensor entries in-place. Support four indexing modes: (1) Scalar assignment `A[i, j] = value` - for COO, search for existing entry and update or append; for CSR, search within the row's column range and update or insert with `crow_indices` adjustment; for dense, ...
files: []
task_type: feature
categories: [ "API/Indexing & Mutation/Write & mutation" ]

instance_id: bobbyyyan__scorch-feature_bsr_block_matmul
repo_id: bobbyyyan__scorch
repo_url: https://github.com/bobbyyyan/scorch.git
base_commit: 33532a3
language: python
setup_commands: []
test_command: /testbed/run_tests.sh
test_timeout: 3000
description: Implement a hybrid block-sparse computation path that delegates dense block arithmetic to PyTorch dense operators. Add 2D block-structured constructors/converters on `STensor`: `from_bsr(crow_indices, col_indices, values, block_size, shape)` and `to_bsr(block_size)` where values has shape `(nnz_blocks, block_h, block_w...
files: []
task_type: feature
categories: [ "API/Linear Algebra/Matmul variants" ]

instance_id: bobbyyyan__scorch-feature_csc_format
repo_id: bobbyyyan__scorch
repo_url: https://github.com/bobbyyyan/scorch.git
base_commit: 33532a3
language: python
setup_commands: []
test_command: /testbed/run_tests.sh
test_timeout: 3000
description: Add native CSC (Compressed Sparse Column) support across `STensor` and ops. Implement `STensor.from_csc(ccol_indices, row_indices, values, shape)` and `STensor.to_csc()`, and extend `STensor.from_torch` to ingest `torch.sparse_csc` tensors. Update format parsing/validation so CSC is representable explicitly (for exampl...
files: []
task_type: feature
categories: [ "Format/Block & ELL family" ]

instance_id: bobbyyyan__scorch-feature_pytorch_broadcasting
repo_id: bobbyyyan__scorch
repo_url: https://github.com/bobbyyyan/scorch.git
base_commit: 33532a3
language: python
setup_commands: []
test_command: /testbed/run_tests.sh
test_timeout: 3000
description: Implement full PyTorch-style broadcasting for sparse elementwise binary operations. Today `STensor.__add__` explicitly says broadcasting is TODO, and `__mul__`/`__sub__` paths do not provide broadcast semantics. Add shared broadcast shape inference and index mapping utilities (including implicit leading dimensions, sin...
files: []
task_type: feature
categories: [ "API/Constructors & I/O/Broadcasting" ]

instance_id: bobbyyyan__scorch-feature_conv2d
repo_id: bobbyyyan__scorch
repo_url: https://github.com/bobbyyyan/scorch.git
base_commit: 33532a3
language: python
setup_commands: []
test_command: /testbed/run_tests.sh
test_timeout: 3000
description: Implement sparse `conv2d` in `ops.py` for 2D inputs and kernels, analogous to the existing `conv1d` task but with full 2D indexing. Support arguments `stride`, `padding`, and `dilation` (start with `groups=1`), and allow sparse input with dense kernel as the primary path. Lower through CIN/LLIR when feasible, with clea...
files: []
task_type: feature
categories: [ "API/Convolution & Pooling" ]

instance_id: bobbyyyan__scorch-feature_singleton_level
repo_id: bobbyyyan__scorch
repo_url: https://github.com/bobbyyyan/scorch.git
base_commit: 33532a3
language: python
setup_commands: []
test_command: /testbed/run_tests.sh
test_timeout: 3000
description: Add full `LevelType.SINGLETON` support throughout the sparse compiler pipeline. `LevelType.SINGLETON` already exists in `format.py`, but parser/iterator/lowering paths do not currently implement execution semantics. Define singleton semantics as one coordinate per parent position, then implement: (1) format parsing in ...
files: []
task_type: feature
categories: [ "Format/Compressed-style levels" ]

instance_id: bobbyyyan__scorch-feature_addmm_linear
repo_id: bobbyyyan__scorch
repo_url: https://github.com/bobbyyyan/scorch.git
base_commit: 33532a3
language: python
setup_commands: []
test_command: /testbed/run_tests.sh
test_timeout: 3000
description: Implement fused sparse linear algebra front-end ops `addmm` and `linear`. Add `ops.addmm(input, mat1, mat2, beta=1.0, alpha=1.0)` with semantics matching `torch.addmm`: `beta * input + alpha * (mat1 @ mat2)`, supporting sparse `mat1` with dense or sparse `mat2` where valid. Add `ops.linear(input, weight, bias=None)` as...
files: []
task_type: feature
categories: [ "API/Linear Algebra/Matmul variants" ]
instance_id: bobbyyyan__scorch-feature_incremental_insert
repo_id: bobbyyyan__scorch
repo_url: https://github.com/bobbyyyan/scorch.git
base_commit: 33532a3
language: python
setup_commands: []
test_command: /testbed/run_tests.sh
test_timeout: 3000
description: Implement incremental sparse tensor construction APIs centered on the currently unimplemented `STensor.insert`. Add `insert(indices, values, accumulate=True)` that can append or update entries in COO and CSR tensors without forcing full dense materialization, plus a convenience constructor `STensor.from_entries(indices...
files: []
task_type: feature
categories: [ "API/Indexing & Mutation/Write & mutation" ]

instance_id: bobbyyyan__scorch-feature_lifecycle_device
repo_id: bobbyyyan__scorch
repo_url: https://github.com/bobbyyyan/scorch.git
base_commit: 33532a3
language: python
setup_commands: []
test_command: /testbed/run_tests.sh
test_timeout: 3000
description: Add full tensor lifecycle/device support by implementing `STensor.validate()`, `STensor.clone()`, and `STensor.to(device)` (with `cpu()`/`cuda()` convenience behavior). `validate()` should check shape/index/value invariants per format (e.g., CSR `crow_indices` length and monotonicity, coordinate bounds, mode-order cons...
files: []
task_type: feature
categories: [ "API/Constructors & I/O/Introspection" ]

instance_id: bobbyyyan__scorch-feature_einsum_ellipsis
repo_id: bobbyyyan__scorch
repo_url: https://github.com/bobbyyyan/scorch.git
base_commit: 33532a3
language: python
setup_commands: []
test_command: /testbed/run_tests.sh
test_timeout: 3000
description: Generalize `ops.einsum` parsing and shape handling to support PyTorch-style ellipsis and implicit output inference. The current implementation assumes single-character explicit index lists without ellipsis. Extend it to parse expressions like `...ij,...jk->...ik`, `bij,bjk->bik`, and implicit-output forms (no `->`) whi...
files: []
task_type: feature
categories: [ "API/Linear Algebra/Einsum" ]

instance_id: bobbyyyan__scorch-feature_triangular_solve
repo_id: bobbyyyan__scorch
repo_url: https://github.com/bobbyyyan/scorch.git
base_commit: 33532a3
language: python
setup_commands: []
test_command: /testbed/run_tests.sh
test_timeout: 3000
description: Implement sparse triangular solve support: `ops.triangular_solve(A, B, upper=False, unit_diagonal=False, left=True)` for sparse `A` (CSR primary path, COO via conversion) and dense/sparse right-hand side `B`. Use forward substitution for lower-triangular and backward substitution for upper-triangular systems, including...
files: []
task_type: feature
categories: [ "API/Linear Algebra/Solvers" ]

instance_id: bobbyyyan__scorch-feature_semiring_matmul
repo_id: bobbyyyan__scorch
repo_url: https://github.com/bobbyyyan/scorch.git
base_commit: 33532a3
language: python
setup_commands: []
test_command: /testbed/run_tests.sh
test_timeout: 3000
description: Add semiring-based sparse matrix multiplication as a first-class feature. Extend `ops.matmul` (and `einsum` where appropriate) with a `semiring` argument supporting at least: `plus_times` (default arithmetic), `min_plus`, `max_plus`, and `logical_or_and`. This requires extending operation/reduction plumbing in CIN/LLIR...
files: []
task_type: feature
categories: [ "API/Linear Algebra/Matmul variants" ]
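A semiring matmul swaps the reduction operator, the combine operator, and the additive identity; everything else is ordinary matrix multiplication. A dense pure-Python reference (not scorch's compiled path), with defaults giving the `min_plus` (tropical) semiring used for shortest-path relaxation:

```python
import math

def semiring_matmul(A, B, add=min, mul=lambda x, y: x + y, zero=math.inf):
    """Dense reference for semiring matmul over nested lists. The default
    arguments give the min-plus semiring; plus_times would pass
    add=lambda x, y: x + y, mul=lambda x, y: x * y, zero=0."""
    m, k, n = len(A), len(B), len(B[0])
    C = [[zero] * n for _ in range(m)]
    for i in range(m):
        for kk in range(k):
            for j in range(n):
                # Accumulate with the semiring's 'add' over 'mul' products.
                C[i][j] = add(C[i][j], mul(A[i][kk], B[kk][j]))
    return C
```

Note that the additive identity (`inf` for min_plus, `0` for plus_times, `False` for logical_or_and) must replace the hardcoded zero-initialization, which is the plumbing change the task calls out.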
instance_id: bobbyyyan__scorch-feature_int64_indices
repo_id: bobbyyyan__scorch
repo_url: https://github.com/bobbyyyan/scorch.git
base_commit: 33532a3
language: python
setup_commands: []
test_command: /testbed/run_tests.sh
test_timeout: 3000
description: Implement end-to-end 64-bit sparse index support for large tensors. Today `TensorIndex` coerces indices to `torch.int` (int32), which limits representable coordinates and can overflow for very large shapes. Add index-dtype awareness so mode indices can remain int32 or int64, propagate that through storage, lowering, an...
files: []
task_type: feature
categories: [ "API/Type System/Index dtypes" ]

instance_id: bobbyyyan__scorch-feature_multi_output_kernels
repo_id: bobbyyyan__scorch
repo_url: https://github.com/bobbyyyan/scorch.git
base_commit: 33532a3
language: python
setup_commands: []
test_command: /testbed/run_tests.sh
test_timeout: 3000
description: Implement first-class multi-output CIN kernels and use them to add sparse `max`/`min` reductions with optional index returns. `cin_lowerer.py` currently assumes a single result tensor (`TODO: need to handle multiple result tensors` / `TODO: deal with multiple outputs`), which blocks APIs like `torch.max(..., dim=...)` ...
files: []
task_type: feature
categories: [ "API/Reductions & Scans/Argmax-style", "IR/CIN nodes" ]

instance_id: bobbyyyan__scorch-feature_zero_copy_views
repo_id: bobbyyyan__scorch
repo_url: https://github.com/bobbyyyan/scorch.git
base_commit: 33532a3
language: python
setup_commands: []
test_command: /testbed/run_tests.sh
test_timeout: 3000
description: Implement zero-copy tensor window views using the existing `Window` and `TensorStorageView` scaffolding. Add `STensor.narrow(dim, start, length)` and `STensor.slice_view(offset, shape, step)` that return lightweight views without copying storage whenever possible. Dense views should alias the base value buffer; CSR row...
files: []
task_type: feature
categories: [ "API/Shape & Layout/Views" ]

instance_id: bobbyyyan__scorch-feature_torch_sparse_roundtrip
repo_id: bobbyyyan__scorch
repo_url: https://github.com/bobbyyyan/scorch.git
base_commit: 33532a3
language: python
setup_commands: []
test_command: /testbed/run_tests.sh
test_timeout: 3000
description: Add sparse-native PyTorch round-trip APIs without forcing dense conversion. Keep existing `to_torch()` behavior for dense export, and add `STensor.to_torch_sparse(layout='coo'|'csr')` to emit PyTorch sparse tensors directly from stored indices/values. Extend `STensor.from_torch` to robustly ingest both 2D and batched (...
files: []
task_type: feature
categories: [ "API/Constructors & I/O/External I/O" ]

instance_id: bobbyyyan__scorch-feature_comparisons_where
repo_id: bobbyyyan__scorch
repo_url: https://github.com/bobbyyyan/scorch.git
base_commit: 33532a3
language: python
setup_commands: []
test_command: /testbed/run_tests.sh
test_timeout: 3000
description: Implement comparison and masking operations for sparse tensors. Add elementwise comparisons (`==`, `!=`, `<`, `<=`, `>`, `>=`) for STensor-STensor and STensor-scalar inputs, returning boolean STensors, plus `ops.where(mask, x, y)` and `STensor.masked_fill(mask, value)`. Extend CIN/LLIR lowering to emit comparison expre...
files: []
task_type: feature
categories: [ "API/Element-wise/Comparison & predicate" ]

instance_id: bobbyyyan__scorch-feature_dropout
repo_id: bobbyyyan__scorch
repo_url: https://github.com/bobbyyyan/scorch.git
base_commit: 33532a3
language: python
setup_commands: []
test_command: /testbed/run_tests.sh
test_timeout: 3000
description: Implement sparse dropout for training workflows. Add `ops.dropout(input, p=0.5, training=True, inplace=False, generator=None)` and `STensor.dropout(...)`. For sparse inputs, sample Bernoulli masks over explicitly stored values only, scale retained values by `1/(1-p)` in training mode, and preserve index structure (with...
files: []
task_type: feature
categories: [ "API/ML Primitives/Regularization" ]

instance_id: bobbyyyan__scorch-feature_kernel_cache
repo_id: bobbyyyan__scorch
repo_url: https://github.com/bobbyyyan/scorch.git
base_commit: 33532a3
language: python
setup_commands: []
test_command: /testbed/run_tests.sh
test_timeout: 3000
description: Introduce a robust content-addressed kernel compilation cache for generated C++ kernels. Current compilation paths repeatedly call `load_inline(name='kernel', ...)` and cache mostly by CIN string, which can cause redundant compilations and potential collisions across dtype/format/mode-order variants. Add a `KernelCache...
files: []
task_type: feature
categories: [ "Runtime/Caching & dispatch" ]
instance_id: bobbyyyan__scorch-feature_affine_gather_scatter
repo_id: bobbyyyan__scorch
repo_url: https://github.com/bobbyyyan/scorch.git
base_commit: 33532a3
language: python
setup_commands: []
test_command: /testbed/run_tests.sh
test_timeout: 3000
description: Generalize computed indexing in CIN/LLIR and expose it through sparse gather/scatter APIs. In `cin.py`, `IndexVarExpr` currently only has `IndexVarAdd(lhs: IndexVar, rhs: IndexVar)`, which blocks common affine forms like `i + 1`, `i - k`, and `i * stride + offset`. Extend the IR with literal constants and affine index ...
files: []
task_type: feature
categories: [ "API/Indexing & Mutation/Computed indexing" ]

instance_id: bobbyyyan__scorch-feature_spgemm_symbolic
repo_id: bobbyyyan__scorch
repo_url: https://github.com/bobbyyyan/scorch.git
base_commit: 33532a3
language: python
setup_commands: []
test_command: /testbed/run_tests.sh
test_timeout: 3000
description: Implement symbolic+numeric SpGEMM planning with reusable sparsity structure for repeated sparse matmul. Today `ops.matmul` computes structure and values together each call. Add a symbolic planning path for CSR/COO multiplication that computes only output index structure and metadata once (for example row pointers and c...
files: []
task_type: feature
categories: [ "API/Linear Algebra/Matmul variants" ]

instance_id: bobbyyyan__scorch-feature_torch_function_dispatch
repo_id: bobbyyyan__scorch
repo_url: https://github.com/bobbyyyan/scorch.git
base_commit: 33532a3
language: python
setup_commands: []
test_command: /testbed/run_tests.sh
test_timeout: 3000
description: Add PyTorch operator dispatch integration for `STensor` so `torch.*` calls route to scorch implementations automatically. Implement `STensor.__torch_function__` (or an equivalent dispatch layer) for core ops already supported by scorch, including `torch.matmul`, `torch.einsum`, `torch.add`, `torch.sub`, and `torch.mul`...
files: []
task_type: feature
categories: [ "API/Constructors & I/O/Torch dispatch" ]

instance_id: bobbyyyan__scorch-feature_serialization
repo_id: bobbyyyan__scorch
repo_url: https://github.com/bobbyyyan/scorch.git
base_commit: 33532a3
language: python
setup_commands: []
test_command: /testbed/run_tests.sh
test_timeout: 3000
description: Implement robust serialization and checkpoint round-tripping for sparse tensors. Add `STensor.to_dict()` / `STensor.from_dict()` plus convenience `save(path)` / `load(path)` helpers that preserve shape, dtype, index dtype, format, mode_order, values, and all mode indices without dense conversion. Ensure compatibility w...
files: []
task_type: feature
categories: [ "API/Constructors & I/O/Serialization" ]

instance_id: bobbyyyan__scorch-feature_schedule_autotuner
repo_id: bobbyyyan__scorch
repo_url: https://github.com/bobbyyyan/scorch.git
base_commit: 33532a3
language: python
setup_commands: []
test_command: /testbed/run_tests.sh
test_timeout: 3000
description: Add runtime schedule autotuning for generated sparse kernels. Scorch already has scheduling hooks (`Scheduler.auto_schedule`, tiling support, mode-order changes), but kernel choice is mostly heuristic and static. Implement an autotuner that explores candidate schedules (loop order, tiling factors, workspace choices, an...
files: []
task_type: feature
categories: [ "Runtime/Tuning & user control" ]

instance_id: bobbyyyan__scorch-feature_reshape_flatten
repo_id: bobbyyyan__scorch
repo_url: https://github.com/bobbyyyan/scorch.git
base_commit: 33532a3
language: python
setup_commands: []
test_command: /testbed/run_tests.sh
test_timeout: 3000
description: Implement sparse shape-transformation APIs without dense materialization: `STensor.reshape(*shape)`, `STensor.flatten(start_dim=0, end_dim=-1)`, and `STensor.unflatten(dim, sizes)`, plus `ops.reshape` convenience wrappers. The transformation must preserve values and remap indices exactly (including negative dimensions ...
files: []
task_type: feature
categories: [ "API/Shape & Layout/Reshape" ]

instance_id: bobbyyyan__scorch-feature_pooling_2d
repo_id: bobbyyyan__scorch
repo_url: https://github.com/bobbyyyan/scorch.git
base_commit: 33532a3
language: python
setup_commands: []
test_command: /testbed/run_tests.sh
test_timeout: 3000
description: Add sparse pooling operators for vision workloads: `ops.max_pool2d` and `ops.avg_pool2d` with parameters `kernel_size`, `stride=None`, `padding=0`, `dilation=1`, and `ceil_mode=False`, plus corresponding `STensor` method wrappers. Support NCHW inputs where activations are sparse and missing entries represent fill value...
files: []
task_type: feature
categories: [ "API/Convolution & Pooling" ]

instance_id: bobbyyyan__scorch-feature_fp16_bf16_dtype
repo_id: bobbyyyan__scorch
repo_url: https://github.com/bobbyyyan/scorch.git
base_commit: 33532a3
language: python
setup_commands: []
test_command: /testbed/run_tests.sh
test_timeout: 3000
description: Extend dtype support to include `torch.float16` and `torch.bfloat16` end-to-end in CIN->LLIR->C++ execution, including cached kernels in `csrc`. Add dtype mappings in `utils.py`/`llir.py`, ensure generated C++ uses correct scalar and torch dtype constants, and implement mixed-precision accumulation controls (e.g., `acc...
files: []
task_type: feature
categories: [ "API/Type System/Value dtypes" ]
instance_id: bobbyyyan__scorch-feature_user_scheduling_api
repo_id: bobbyyyan__scorch
repo_url: https://github.com/bobbyyyan/scorch.git
base_commit: 33532a3
language: python
setup_commands: []
test_command: /testbed/run_tests.sh
test_timeout: 3000
description: Expose a user-controlled scheduling API for CIN execution instead of relying solely on `Scheduler.auto_schedule`. Add a schedule object or kwargs surface (for `ops.einsum`, `ops.matmul`, and `lower_and_exec_cin`) that can explicitly set loop order, tile sizes per index variable, workspace insertion policy (dense/coo/no...
files: []
task_type: feature
categories: [ "Runtime/Tuning & user control" ]

instance_id: bobbyyyan__scorch-feature_matrix_market_io
repo_id: bobbyyyan__scorch
repo_url: https://github.com/bobbyyyan/scorch.git
base_commit: 33532a3
language: python
setup_commands: []
test_command: /testbed/run_tests.sh
test_timeout: 3000
description: Add scientific sparse I/O interoperability for external datasets: Matrix Market (`.mtx`) and SciPy sparse NPZ (`.npz`) round-trips. Implement `STensor.from_matrix_market(path)`, `STensor.to_matrix_market(path)`, `STensor.from_scipy_npz(path)`, and `STensor.to_scipy_npz(path)` with optional dtype/index dtype controls an...
files: []
task_type: feature
categories: [ "API/Constructors & I/O/External I/O" ]

instance_id: bobbyyyan__scorch-feature_n_m_sparsity
repo_id: bobbyyyan__scorch
repo_url: https://github.com/bobbyyyan/scorch.git
base_commit: 33532a3
language: python
setup_commands: []
test_command: /testbed/run_tests.sh
test_timeout: 3000
description: Implement semi-structured N:M sparsity support with an initial optimized path for 2:4 sparsity. Add constructors/converters on `STensor` (e.g., `from_semi_structured(values, metadata, pattern=(2,4), dim=-1)` and `to_semi_structured(pattern=(2,4), dim=-1)`), plus validation that each 4-element group has exactly 2 stored...
files: []
task_type: feature
categories: [ "Format/Block & ELL family" ]

instance_id: bobbyyyan__scorch-feature_topk_kthvalue
repo_id: bobbyyyan__scorch
repo_url: https://github.com/bobbyyyan/scorch.git
base_commit: 33532a3
language: python
setup_commands: []
test_command: /testbed/run_tests.sh
test_timeout: 3000
description: Implement sparse `topk`/`kthvalue` along a specified dimension: `ops.topk(input, k, dim=-1, largest=True, sorted=True)` and `ops.kthvalue(input, k, dim=-1, keepdim=False)` plus `STensor` method wrappers. Semantics should align with PyTorch, including how implicit fill values (zeros) compete with explicit non-zero entri...
files: []
task_type: feature
categories: [ "API/Reductions & Scans/Argmax-style" ]

instance_id: bobbyyyan__scorch-feature_simd_vectorization
repo_id: bobbyyyan__scorch
repo_url: https://github.com/bobbyyyan/scorch.git
base_commit: 33532a3
language: python
setup_commands: []
test_command: /testbed/run_tests.sh
test_timeout: 3000
description: Add SIMD-aware vectorization to generated C++ kernels for dense innermost loops. Extend LLIR/codegen so when an inner loop is contiguous and arithmetic-only, emitted code uses vectorization-friendly constructs (`#pragma omp simd` and/or architecture-gated intrinsics) with safe scalar fallbacks. Ensure correctness for r...
files: []
task_type: feature
categories: [ "Codegen/Vectorization" ]

instance_id: bobbyyyan__scorch-feature_complex_dtype
repo_id: bobbyyyan__scorch
repo_url: https://github.com/bobbyyyan/scorch.git
base_commit: 33532a3
language: python
setup_commands: []
test_command: /testbed/run_tests.sh
test_timeout: 3000
description: Extend scorch with end-to-end complex dtype support (`torch.complex64` and `torch.complex128`) across `STensor`, CIN->LLIR lowering, and generated C++ kernels. The current dtype plumbing in `llir.py`/codegen and cached native kernels is real-valued; add complex scalar mappings, kernel argument marshalling, and correct ...
files: []
task_type: feature
categories: [ "API/Type System/Value dtypes" ]
bobbyyyan__scorch-feature_cuda_backend
bobbyyyan__scorch
https://github.com/bobbyyyan/scorch.git
33532a3
python
[]
/testbed/run_tests.sh
3,000
Add a true CUDA codegen backend for CIN-generated kernels so sparse workloads can execute on GPU without host fallback. Today execution is centered around CPU C++ `load_inline`; extend lowering/codegen to emit CUDA-compatible kernels (or CUDA-specialized C++ with `cuda_sources`) and runtime dispatch that chooses CPU or...
[]
feature
[ "Codegen/Backend targets" ]
bobbyyyan__scorch-feature_bitmap_level
bobbyyyan__scorch
https://github.com/bobbyyyan/scorch.git
33532a3
python
[]
/testbed/run_tests.sh
3,000
Add bitmap sparse level support to the format system (`LevelType.BITMAP`) and compiler pipeline. A bitmap level stores a dense occupancy bitset plus compacted values, which is efficient for near-dense regions and predictable iteration. Implement bitmap parsing/serialization in `format.py`, storage representation in `Te...
[]
feature
[ "Format/Compressed-style levels" ]
bobbyyyan__scorch-feature_hyb_format
bobbyyyan__scorch
https://github.com/bobbyyyan/scorch.git
33532a3
python
[]
/testbed/run_tests.sh
3,000
Implement HYB (ELL+COO) sparse format support for matrices with skewed row densities. HYB stores up to `ell_width` entries per row in an ELL component and spills overflow entries into a COO tail. Add format/storage support plus `STensor.from_hyb(ell_indices, ell_values, coo_indices, coo_values, shape)` and `STensor.to_...
[]
feature
[ "Format/Block & ELL family" ]
bobbyyyan__scorch-feature_csf_format
bobbyyyan__scorch
https://github.com/bobbyyyan/scorch.git
33532a3
python
[]
/testbed/run_tests.sh
3,000
Add CSF (Compressed Sparse Fiber) format support for higher-order sparse tensors (3D+), enabling efficient tensor contractions without flattening to COO. Implement hierarchical compressed storage where each sparse mode contributes position/coordinate arrays (generalizing CSR to multiple sparse levels). Extend `TensorFo...
[]
feature
[ "Format/Hierarchical & multi-d" ]
bobbyyyan__scorch-feature_sparse_attention
bobbyyyan__scorch
https://github.com/bobbyyyan/scorch.git
33532a3
python
[]
/testbed/run_tests.sh
3,000
Implement fused sparse scaled-dot-product attention for block/coordinate masks: `ops.scaled_dot_product_attention_sparse(Q, K, V, attn_mask_sparse=None, dropout_p=0.0, is_causal=False, training=False)`. The key requirement is to avoid dense `QK^T` materialization: use sparse mask structure to compute only sampled score...
[]
feature
[ "API/ML Primitives/Attention & embedding" ]
bobbyyyan__scorch-feature_int8_quantized
bobbyyyan__scorch
https://github.com/bobbyyyan/scorch.git
33532a3
python
[]
/testbed/run_tests.sh
3,000
Add int8 quantized sparse inference as a first-class path. Implement value-only quantization APIs on `STensor` (`quantize_per_tensor`, `quantize_per_channel`, `dequantize`, and `from_torch_quantized`) that preserve sparse index structures while quantizing stored values. Add `ops.matmul_quantized(A, B, bias=None, out_dt...
[]
feature
[ "API/ML Primitives/Quantization" ]
bobbyyyan__scorch-feature_iterative_solvers
bobbyyyan__scorch
https://github.com/bobbyyyan/scorch.git
33532a3
python
[]
/testbed/run_tests.sh
3,000
Implement iterative sparse linear solvers for scientific workloads. Add `ops.cg(A, b, x0=None, tol=1e-6, maxiter=None, M=None)` for symmetric positive definite systems and `ops.bicgstab(A, b, x0=None, tol=1e-6, maxiter=None, M=None)` for general non-symmetric systems, with optional Jacobi preconditioning. Reuse existin...
[]
feature
[ "API/Linear Algebra/Solvers" ]
bobbyyyan__scorch-feature_layer_rms_norm
bobbyyyan__scorch
https://github.com/bobbyyyan/scorch.git
33532a3
python
[]
/testbed/run_tests.sh
3,000
Add sparse normalization operators for transformer-style models: `ops.layer_norm_sparse` and `ops.rms_norm_sparse`, plus `STensor.layer_norm(...)` and `STensor.rms_norm(...)`. Support affine parameters (`weight`, `bias`) and epsilon controls with semantics matching PyTorch over full normalized dimensions, where implici...
[]
feature
[ "API/ML Primitives/Normalization" ]
bobbyyyan__scorch-feature_einsum_repeated_idx
bobbyyyan__scorch
https://github.com/bobbyyyan/scorch.git
33532a3
python
[]
/testbed/run_tests.sh
3,000
Generalize `ops.einsum` to fully support repeated-index semantics within a single operand (diagonal extraction/trace-style behavior) without dense fallback. The current parser/scheduling logic in `ops.py` assumes effectively unique per-operand indices and does not robustly handle expressions like `ii->i`, `bijj->bi`, o...
[]
feature
[ "API/Linear Algebra/Einsum" ]
bobbyyyan__scorch-feature_prune_eliminate_zeros
bobbyyyan__scorch
https://github.com/bobbyyyan/scorch.git
33532a3
python
[]
/testbed/run_tests.sh
3,000
Implement explicit-zero management and structured pruning APIs for sparse tensors. Add `STensor.eliminate_zeros(inplace=False, atol=0.0)` to remove stored zeros/near-zeros and rebuild indices correctly for COO/CSR, plus `STensor.prune(threshold=None, topk=None, dim=None, keep_structure=False)` for magnitude-based pruni...
[]
feature
[ "API/Indexing & Mutation/Canonicalization" ]
bobbyyyan__scorch-feature_transpose_matmul
bobbyyyan__scorch
https://github.com/bobbyyyan/scorch.git
33532a3
python
[]
/testbed/run_tests.sh
3,000
Add transpose-aware matmul APIs that avoid physical tensor transposition. Extend `ops.matmul` with flags `transpose_a=False` and `transpose_b=False` (and matching `STensor.matmul` kwargs) so callers can request `A^T @ B`, `A @ B^T`, or `A^T @ B^T` directly. Implement this via index remapping in CIN/einsum lowering rath...
[]
feature
[ "API/Linear Algebra/Matmul variants" ]
bobbyyyan__scorch-feature_log_softmax_nll
bobbyyyan__scorch
https://github.com/bobbyyyan/scorch.git
33532a3
python
[]
/testbed/run_tests.sh
3,000
Add sparse log-probability training primitives: `ops.sparse_log_softmax(input, dim=-1)` and `ops.sparse_nll_loss(log_probs, target, reduction='mean', ignore_index=-100)`, with `STensor` method wrappers. Build on sparse softmax infrastructure but compute and return log probabilities directly for numerical stability, and...
[]
feature
[ "API/ML Primitives/Activations & losses" ]
bobbyyyan__scorch-feature_block_diag
bobbyyyan__scorch
https://github.com/bobbyyyan/scorch.git
33532a3
python
[]
/testbed/run_tests.sh
3,000
Implement block-diagonal sparse packing utilities for variable-size mini-batch workloads. Add `STensor.from_block_diag(tensors)` to pack a list of 2D sparse tensors into a single block-diagonal sparse matrix and `STensor.to_block_diag(block_sizes)` to unpack. Add `ops.block_diag_matmul(A_blockdiag, X, block_sizes)` to ...
[]
feature
[ "API/Shape & Layout/Concat & pad" ]
bobbyyyan__scorch-feature_elementwise_div
bobbyyyan__scorch
https://github.com/bobbyyyan/scorch.git
33532a3
python
[]
/testbed/run_tests.sh
3,000
Implement element-wise division (`__truediv__` and `__rtruediv__`) on `STensor` through the full CIN compilation pipeline. `Operation.DIV` already exists in `src/scorch/compiler/cin.py` (line 897: `DIV = "/"`), `AssignOp.DIV_ASSIGN` exists in `src/scorch/compiler/llir.py` (line 81), and `IndexExpr.__sub__` at line 143 ...
[]
feature
[ "API/Element-wise/Binary arithmetic" ]
bobbyyyan__scorch-feature_elementwise_pow
bobbyyyan__scorch
https://github.com/bobbyyyan/scorch.git
33532a3
python
[]
/testbed/run_tests.sh
3,000
Implement element-wise power (`__pow__`) on `STensor` that raises each stored value to a given exponent. Unlike add/sub/mul/div, power is not a binary operation between two equally-shaped sparse tensors in the typical case -- the primary use case is `A ** n` where `n` is a scalar (integer or float). This requires a new...
[]
feature
[ "API/Element-wise/Binary arithmetic" ]
bobbyyyan__scorch-feature_matrix_power
bobbyyyan__scorch
https://github.com/bobbyyyan/scorch.git
33532a3
python
[]
/testbed/run_tests.sh
3,000
Implement `ops.matrix_power(A, n)` that computes the n-th matrix power of a square sparse matrix `A` by repeated matrix multiplication, reusing the existing `ops.matmul` infrastructure. This is a higher-level operation that does not require new CIN primitives but does require careful handling of sparse format propagati...
[]
feature
[ "API/Linear Algebra/Tensor products" ]
bobbyyyan__scorch-feature_cholesky
bobbyyyan__scorch
https://github.com/bobbyyyan/scorch.git
33532a3
python
[]
/testbed/run_tests.sh
3,000
Implement sparse Cholesky factorization for symmetric positive definite (SPD) sparse matrices: `ops.cholesky(A, upper=False)` that returns a sparse lower-triangular `L` such that `A = L @ L^T` (or upper-triangular `U` with `A = U^T @ U` when `upper=True`). This is a two-phase algorithm: symbolic factorization (determin...
[]
feature
[ "API/Linear Algebra/Decompositions" ]
bobbyyyan__scorch-feature_eigenvalue_solvers
bobbyyyan__scorch
https://github.com/bobbyyyan/scorch.git
33532a3
python
[]
/testbed/run_tests.sh
3,000
Implement sparse eigenvalue computation for finding dominant eigenvalues and eigenvectors without dense materialization. Add two methods: (1) `ops.power_iteration(A, num_iters=100, tol=1e-6)` for finding the largest-magnitude eigenvalue and its eigenvector, and (2) `ops.lanczos(A, k=6, tol=1e-8, maxiter=None)` for comp...
[]
feature
[ "API/Linear Algebra/Decompositions" ]
bobbyyyan__scorch-feature_cumsum_cumprod
bobbyyyan__scorch
https://github.com/bobbyyyan/scorch.git
33532a3
python
[]
/testbed/run_tests.sh
3,000
Implement cumulative reduction operations `ops.cumsum(A, dim)` and `ops.cumprod(A, dim)` for sparse tensors along a specified dimension. These are prefix-scan operations where `cumsum(A, dim=1)[i,j] = sum(A[i, 0:j+1])` and `cumprod(A, dim=1)[i,j] = prod(A[i, 0:j+1])`. The key semantic design decision for sparse tensors...
[]
feature
[ "API/Reductions & Scans/Scans & segment" ]
bobbyyyan__scorch-feature_graph_adjacency
bobbyyyan__scorch
https://github.com/bobbyyyan/scorch.git
33532a3
python
[]
/testbed/run_tests.sh
3,000
Implement graph adjacency matrix utility functions essential for graph neural network workloads, directly supporting the existing GCN example in `examples/gcn/scorch_gcn.py` which currently requires manual adjacency construction. Add the following functions to `src/scorch/ops.py`: (1) `ops.degree(A, dim=1)` -- compute ...
[]
feature
[ "API/Constructors & I/O/Factories" ]
bobbyyyan__scorch-feature_hash_level
bobbyyyan__scorch
https://github.com/bobbyyyan/scorch.git
33532a3
python
[]
/testbed/run_tests.sh
3,000
Add hash-map based sparse level support (`LevelType.HASH`) to the format system and compiler pipeline. Unlike COMPRESSED (CSR-like sorted arrays with O(log n) or O(nnz) lookup) and COORDINATE (COO unsorted coordinate lists), a HASH level provides O(1) amortized random access to individual entries via a hash table mappi...
[]
feature
[ "Format/Compressed-style levels" ]
bobbyyyan__scorch-feature_sparse_dim_tiling
bobbyyyan__scorch
https://github.com/bobbyyyan/scorch.git
33532a3
python
[]
/testbed/run_tests.sh
3,000
Implement true sparse-dimension tiling in the CIN scheduler and lowering pipeline. Today `Scheduler.auto_schedule` explicitly removes sparse index vars from tiling and `IndexVar.size_llir_var` assumes a dense access. Extend `Scheduler.add_tile` and `auto_schedule` so COMPRESSED/COORDINATE dimensions can be strip-mined ...
[]
feature
[ "Scheduler/Loop transformations/Tiling" ]
bobbyyyan__scorch-feature_tile_remainder_predication
bobbyyyan__scorch
https://github.com/bobbyyyan/scorch.git
33532a3
python
[]
/testbed/run_tests.sh
3,000
Add remainder-safe predication for tiled loops across dense and sparse domains. Current tiling assumes fixed tile-size loop bounds, which can overrun when dimension sizes (or row-local sparse fiber lengths) are smaller than or not divisible by tile size. Introduce per-tile end bounds: dense loops should use `tile_end =...
[]
feature
[ "Scheduler/Loop transformations/Tiling" ]
bobbyyyan__scorch-feature_segmented_sparse_tiling
bobbyyyan__scorch
https://github.com/bobbyyyan/scorch.git
33532a3
python
[]
/testbed/run_tests.sh
3,000
Implement segmented sparse tiling for CSR/COO reductions to improve cache locality in sparse matmul kernels. Add a scheduling transformation that tiles sparse reduction dimensions by nonzero-count segments (position-space tiles) inside each parent fiber (for CSR: per-row `p` segments; for COO: per-leading-coordinate bu...
[]
feature
[ "Scheduler/Loop transformations/Tiling" ]
bobbyyyan__scorch-feature_dual_axis_tiling
bobbyyyan__scorch
https://github.com/bobbyyyan/scorch.git
33532a3
python
[]
/testbed/run_tests.sh
3,000
Add dual-axis tiling for mixed sparse-dense kernels (for example SpMM `C[i,n] += A[i,k_sparse] * B[k_sparse,n_dense]`). Support applying two independent tile transforms in one schedule: sparse reduction tiling in position space and dense output-column tiling in coordinate space. Extend scheduler validation so multiple ...
[]
feature
[ "Scheduler/Loop transformations/Tiling" ]
bobbyyyan__scorch-feature_nnz_balanced_partition
bobbyyyan__scorch
https://github.com/bobbyyyan/scorch.git
33532a3
python
[]
/testbed/run_tests.sh
3,000
Implement nnz-balanced sparse tile partitioning for parallel execution. Row-wise parallelization is often imbalanced on skewed sparse matrices; add an inspector step that partitions work into tiles/blocks with roughly equal nonzero counts instead of equal row counts. For CSR, build block boundaries from cumulative `cro...
[]
feature
[ "Scheduler/Loop transformations/Tiling" ]
bobbyyyan__scorch-feature_workspace_touched_tracking
bobbyyyan__scorch
https://github.com/bobbyyyan/scorch.git
33532a3
python
[]
/testbed/run_tests.sh
3,000
Add sparse-tile workspace optimization with touched-entry tracking to avoid full-tile clears each iteration. For tiled sparse reductions that use dense workspaces, replace unconditional tile-wide initialization/flush/clear with a touched-index list (and optional small bitmap) that records only entries updated in the cu...
[]
feature
[ "Scheduler/Sparse-specific passes/Workspace transforms" ]
bobbyyyan__scorch-feature_repr_str
bobbyyyan__scorch
https://github.com/bobbyyyan/scorch.git
92fb190
python
[]
/testbed/run_tests.sh
3,000
Implement informative `__repr__` and `__str__` methods on `STensor` that replace the current placeholder returning `"Tensor"`. The output must display the tensor's shape, per-mode format annotations (e.g. `[d, s]` for a dense-then-sparse 2D tensor), number of stored non-zeros (`nnz`), density as a percentage, dtype, an...
[]
feature
[ "API/Constructors & I/O/Introspection" ]
bobbyyyan__scorch-feature_metadata_introspection
bobbyyyan__scorch
https://github.com/bobbyyyan/scorch.git
92fb190
python
[]
/testbed/run_tests.sh
3,000
Add a sparse metadata and introspection API to `STensor`. Implement the following: (1) `nnz` property returning the count of explicitly stored non-zero entries. (2) `density` property returning nnz divided by the total number of elements. (3) `sparsity` property returning 1 minus density. (4) `nonzero()` method returni...
[]
feature
[ "API/Constructors & I/O/Introspection" ]
bobbyyyan__scorch-feature_mode_n_product
bobbyyyan__scorch
https://github.com/bobbyyyan/scorch.git
92fb190
python
[]
/testbed/run_tests.sh
3,000
Implement mode-n tensor-matrix product operations in `ops.py`. Add `ops.mode_n_product(X, M, n)` that multiplies an N-dimensional sparse tensor X by a dense matrix M along mode n. The implementation should dynamically generate einsum subscript strings for arbitrary dimensionality rather than hard-coding cases for speci...
[]
feature
[ "API/Linear Algebra/Tensor products" ]
bobbyyyan__scorch-feature_squeeze_unsqueeze
bobbyyyan__scorch
https://github.com/bobbyyyan/scorch.git
92fb190
python
[]
/testbed/run_tests.sh
3,000
Implement `STensor.squeeze(dim=None)` and `STensor.unsqueeze(dim)` for sparse tensors. `squeeze(dim)` removes a dimension of size 1 at the specified position, updating shape, format (removing the corresponding level type), mode_indices, and mode_order. When `dim=None`, squeeze all dimensions of size 1. `unsqueeze(dim)`...
[]
feature
[ "API/Shape & Layout/Reshape" ]
bobbyyyan__scorch-feature_unfold_refold
bobbyyyan__scorch
https://github.com/bobbyyyan/scorch.git
92fb190
python
[]
/testbed/run_tests.sh
3,000
Implement sparse tensor unfolding (matricization) and refolding. Add `STensor.unfold(mode)` that converts an N-dimensional sparse tensor into a 2D matrix by unfolding along the specified mode: the given mode becomes the row dimension and the remaining modes are combined (in order) into the column dimension. For COO for...
[]
feature
[ "API/Shape & Layout/Reshape" ]
bobbyyyan__scorch-feature_lu_decomposition
bobbyyyan__scorch
https://github.com/bobbyyyan/scorch.git
92fb190
python
[]
/testbed/run_tests.sh
3,000
Implement sparse LU decomposition. Add `ops.lu(A, pivoting=True)` that decomposes a 2D sparse matrix A into lower triangular L, upper triangular U, and permutation matrix P such that P @ A = L @ U. Use a left-looking sparse algorithm that processes columns left to right, computing each column of L and U by solving a sp...
[]
feature
[ "API/Linear Algebra/Decompositions" ]
bobbyyyan__scorch-feature_index_select
bobbyyyan__scorch
https://github.com/bobbyyyan/scorch.git
92fb190
python
[]
/testbed/run_tests.sh
3,000
Implement sparse `index_select` for N-dimensional tensors. Add `ops.index_select(input, dim, index)` that selects slices from the sparse tensor `input` along dimension `dim` according to the entries in `index` (a 1D integer tensor). This is distinct from `__getitem__` (which uses int/slice indexing) and from `gather`/`...
[]
feature
[ "API/Indexing & Mutation/Read indexing" ]
bobbyyyan__scorch-feature_expand_repeat
bobbyyyan__scorch
https://github.com/bobbyyyan/scorch.git
92fb190
python
[]
/testbed/run_tests.sh
3,000
Implement sparse `expand` and `repeat` operations on `STensor`. Add `STensor.expand(*sizes)` for broadcasting-style logical expansion of singleton dimensions to larger sizes without physically duplicating stored values (the expanded dimension's entries are logically shared). Size -1 means keep the current size. Also su...
[]
feature
[ "API/Shape & Layout/Reshape" ]
bobbyyyan__scorch-feature_symmetric_matrix
bobbyyyan__scorch
https://github.com/bobbyyyan/scorch.git
92fb190
python
[]
/testbed/run_tests.sh
3,000
Add sparse symmetric matrix support. Extend `TensorFormat` to include an optional `symmetric` flag indicating that only the lower (or upper) triangle is stored. Add `STensor.from_symmetric(indices, values, shape)` class method that constructs a symmetric sparse matrix storing only one triangle. Add `STensor.to_symmetri...
[]
feature
[ "Format/Semantic extensions" ]