tensorbench-1.0 / tensorbench.json
tensorbench's picture
tensorbench dataset
0e74d35
[
{
"instance_id": "bobbyyyan__scorch-refactor_dedupe_ops",
"repo_id": "bobbyyyan__scorch",
"repo_url": "https://github.com/bobbyyyan/scorch.git",
"base_commit": "2071e9b",
"language": "python",
"setup_commands": [],
"test_command": "/testbed/run_tests.sh",
"test_timeout": 3000,
"refactor_type": "",
"description": "There is a lot of duplicated code in src/scorch/ops.py. Please fix that (e.g. make things more modular and maintainable).\n\nNote: Do not run the test suite to verify your implementation. You may run build commands (e.g., `pip install -e .`) to check that the code compiles, but do not execute any pytest commands. The full test suite will be run separately during evaluation.",
"files": [],
"task_type": "refactor",
"categories": [
"API/Linear Algebra/Matmul variants",
"API/Element-wise/Binary arithmetic"
]
},
{
"instance_id": "bobbyyyan__scorch-refactor_cost_estimator",
"repo_id": "bobbyyyan__scorch",
"repo_url": "https://github.com/bobbyyyan/scorch.git",
"base_commit": "2071e9b",
"language": "python",
"setup_commands": [],
"test_command": "/testbed/run_tests.sh",
"test_timeout": 3000,
"refactor_type": "extract",
"description": "Extract the cost-model logic from `src/scorch/compiler/scheduler.py` into a separate `CostEstimator` class or module. The three private static methods `Scheduler._compute_comp_cost`, `Scheduler._compute_workspace_cost`, and `Scheduler._compute_transposition_cost` each independently recompute the same derived quantities - RHS tensor accesses, per-ivar extents, per-ivar selectivities, and sparse-filter scores - by calling `_get_rhs_tensor_accesses`, `_estimate_index_extent`, `_estimate_index_selectivity`, and `_sparse_filter_score` on every invocation. Extract the shared per-CIN state into a `CostEstimator` that computes these derived quantities once in its constructor and exposes `comp_cost(loop_order)`, `workspace_cost(loop_order)`, and `transposition_cost(loop_order)` methods. `Scheduler.cost_to_push` and `Scheduler.optimize_loop_order` should construct a single `CostEstimator` per CIN and reuse it across candidate loop orders.\n\nNote: Do not run the test suite to verify your implementation. You may run build commands (e.g., `pip install -e .`) to check that the code compiles, but do not execute any pytest commands. The full test suite will be run separately during evaluation.",
"files": [],
"task_type": "refactor",
"categories": [
"Scheduler/IR analyses & scalar opts/Dataflow analyses"
]
},
{
"instance_id": "bobbyyyan__scorch-refactor_cin_collector",
"repo_id": "bobbyyyan__scorch",
"repo_url": "https://github.com/bobbyyyan/scorch.git",
"base_commit": "b061b53",
"language": "python",
"setup_commands": [],
"test_command": "/testbed/run_tests.sh",
"test_timeout": 3000,
"refactor_type": "consolidate",
"description": "Consolidate the multiple nearly identical CINVisitor-based classes defined inline within methods of cin.py. The classes TensorAccessGetter, ResultTensorAccessCollector, RHSAccessCollector, WorkspaceGetter, IndexVarCollector, and LoopOrderGetter all follow the same pattern of visiting CIN nodes and collecting specific elements. Consolidate into a single parameterized CINCollector class that accepts a predicate function for what to collect.\n\nNote: Do not run the test suite to verify your implementation. You may run build commands (e.g., `pip install -e .`) to check that the code compiles, but do not execute any pytest commands. The full test suite will be run separately during evaluation.",
"files": [],
"task_type": "refactor",
"categories": [
"IR/CIN nodes"
]
},
{
"instance_id": "bobbyyyan__scorch-refactor_format_converter",
"repo_id": "bobbyyyan__scorch",
"repo_url": "https://github.com/bobbyyyan/scorch.git",
"base_commit": "2071e9b",
"language": "python",
"setup_commands": [],
"test_command": "/testbed/run_tests.sh",
"test_timeout": 3000,
"refactor_type": "extract",
"description": "Extract the format conversion methods to_dense and to_sparse from stensor.py into a separate FormatConverter class or module. Both methods contain extensive duplicate code for: generating index variables, creating TensorVar definitions, building assignment expressions using exec/eval, lowering CIN to LLIR, generating C++ code, and executing the kernel. Extract this shared logic into a common conversion pipeline.\n\nNote: Do not run the test suite to verify your implementation. You may run build commands (e.g., `pip install -e .`) to check that the code compiles, but do not execute any pytest commands. The full test suite will be run separately during evaluation.",
"files": [],
"task_type": "refactor",
"categories": [
"Format/Semantic extensions"
]
},
{
"instance_id": "bobbyyyan__scorch-refactor_lattice_loop_generator",
"repo_id": "bobbyyyan__scorch",
"repo_url": "https://github.com/bobbyyyan/scorch.git",
"base_commit": "2071e9b",
"language": "python",
"setup_commands": [],
"test_command": "/testbed/run_tests.sh",
"test_timeout": 3000,
"refactor_type": "extract",
"description": "Extract the gen_single_lattice_loop inner function from the get_lattice_loops method in iter_lattice.py into a separate LatticeLoopGenerator class. This function handles multiple cases based on iterator count and result tensor access patterns. The class should encapsulate loop generation state and provide cleaner method decomposition for: zero iterators (dense domain), single iterator, and multiple iterators cases.\n\nNote: Do not run the test suite to verify your implementation. You may run build commands (e.g., `pip install -e .`) to check that the code compiles, but do not execute any pytest commands. The full test suite will be run separately during evaluation.",
"files": [],
"task_type": "refactor",
"categories": [
"Scheduler/Loop transformations/Reorder & restructure"
]
},
{
"instance_id": "bobbyyyan__scorch-feature_block_sparse_format",
"repo_id": "bobbyyyan__scorch",
"repo_url": "https://github.com/bobbyyyan/scorch.git",
"base_commit": "2105686711292cd1d1eb2438035b555fa644bf2a",
"language": "python",
"setup_commands": [],
"test_command": "/testbed/run_tests.sh",
"test_timeout": 3000,
"refactor_type": "",
"description": "Add block sparse format support to scorch's compiler, enabling the full CIN->LLIR->C++ pipeline to generate code for block-structured sparse tensors. In scorch's format system, CSR is `[DENSE, COMPRESSED]` and COO is `[COORDINATE, COORDINATE]` - the level types describe per-dimension storage. Block sparse formats like BSR use the *same* level types to describe how blocks are organized, but each \"entry\" is a dense sub-tile instead of a scalar. The design: extend the `Format` class in `format.py` with an optional `block_sizes` tuple (one int per dimension). When `block_sizes` is set, the level types describe the block-level structure, and within each non-zero block, elements are stored densely. For example: BSR with 4x4 blocks = `Format(level_types=[DENSE, COMPRESSED], block_sizes=(4, 4))` - the block-level iteration is identical to CSR (dense rows, compressed columns), but each entry is a 4x4 dense tile. Block-COO with 4x4 blocks = `Format(level_types=[COORDINATE, COORDINATE], block_sizes=(4, 4))`. Regular CSR = `Format(level_types=[DENSE, COMPRESSED])` (no block_sizes, equivalent to block_sizes=(1,1)). Implementation: (1) Extend `Format` in `format.py` to accept an optional `block_sizes` tuple. Validate that `shape[d] % block_sizes[d] == 0` for each dimension d. The block-level shape is `(shape[d] // block_sizes[d])` per dimension. (2) Update `ModeIterator` in `iterator.py`: when a level has block_size > 1, the existing iterator (DENSE for-loop, COMPRESSED while-loop, COORDINATE loop, etc.) iterates over *block indices*. After the block-level iterator, add an inner dense for-loop of size block_size for intra-block offsets. The original CIN index variable is reconstructed as `j = j_block * block_size + j_local`. (3) Update `CINLowerer` in `cin_lowerer.py` to handle blocked formats: when lowering a tensor access with block_sizes, generate the index decomposition (block index + local offset), create both the block-level iterator (using the existing level type's iteration logic) and the inner dense loop, and wire the CIN index variable to the composed index. (4) Update `codegen.py` to emit C++ for blocked iteration: the outer loop is the same code as the non-blocked case for that level type, the inner loop is a fixed-count dense for-loop. Array accesses compute flat indices into the values array: `block_idx * product(block_sizes) + local_i * block_w + local_j` (generalized for N-d). (5) Add `from_bsr(crow_indices, col_indices, values, block_size, shape)` to `STensor` where values is a 3D tensor of shape `(nnz_blocks, block_h, block_w)`, and `to_bsr(block_size)` for conversion. (6) Implement `block_spmm(A, B)` in `ops.py` through the CIN pipeline - the compiler generates block-aware code where outer iteration follows the block-level format and inner loops handle within-block dense computation. Generated C++ should use OpenMP parallelism over block rows. (7) Support arbitrary dimensionality: 3D tensors with 2x2x2 blocks, tensors blocked along only some dimensions (e.g., block_sizes=(4, 1) blocks only the first dimension), etc. The block_sizes tuple length must match the number of dimensions. Add comprehensive tests: construct BSR matrices with known block patterns, multiply against dense matrices, verify results match `torch.sparse_bsr_tensor(...).to_dense() @ B` for block sizes 1x1, 4x4, 8x8. Test block-COO format, 3D block sparse tensors, mixed block sizes across dimensions, format conversions (block<->CSR, block<->dense), and verify generated C++ has correct loop structures.\n\nNote: Do not run the test suite to verify your implementation. You may run build commands (e.g., `pip install -e .`) to check that the code compiles, but do not execute any pytest commands. The full test suite will be run separately during evaluation.",
"files": [],
"task_type": "feature",
"categories": [
"Format/Block & ELL family"
]
},
{
"instance_id": "bobbyyyan__scorch-feature_elementwise_mul",
"repo_id": "bobbyyyan__scorch",
"repo_url": "https://github.com/bobbyyyan/scorch.git",
"base_commit": "2105686711292cd1d1eb2438035b555fa644bf2a",
"language": "python",
"setup_commands": [],
"test_command": "/testbed/run_tests.sh",
"test_timeout": 3000,
"refactor_type": "",
"description": "Implement element-wise multiplication (`__mul__`) on `STensor` through the full CIN compilation pipeline, following the same pattern as the existing `__add__` implementation in `stensor.py`. The key difference from addition is **intersection semantics**: when multiplying two sparse tensors, the result is non-zero only where *both* operands are non-zero, whereas addition produces non-zeros where *either* is non-zero. This means the iteration lattice must use intersection (AND) rather than union (OR) when merging sparse iterators. The CIN statement is `C[i,j] = A[i,j] * B[i,j]` with `Operation.MUL`. The implementation should: (1) Create index variables and TensorVars for both inputs and the output. (2) Build the CIN assignment with multiply. (3) Infer the output format - for multiplication, if either input level is sparse, the output level should be sparse (intersection makes the result at least as sparse as the sparsest input). (4) Lower through CIN->LLIR->C++ and compile/execute. Handle all format combinations: densexdense, sparsexsparse (CSRxCSR, COOxCOO), and mixed sparsexdense. Also implement scalar multiplication (`STensor * float` and `float * STensor`) as a special case. Write comprehensive tests covering: sparsexsparse with overlapping and non-overlapping sparsity patterns, sparsexdense, densexdense, scalar multiplication, and verify all results against `A.to_dense().to_torch() * B.to_dense().to_torch()`.\n\nNote: Do not run the test suite to verify your implementation. You may run build commands (e.g., `pip install -e .`) to check that the code compiles, but do not execute any pytest commands. The full test suite will be run separately during evaluation.",
"files": [],
"task_type": "feature",
"categories": [
"API/Element-wise/Binary arithmetic"
]
},
{
"instance_id": "bobbyyyan__scorch-feature_transpose",
"repo_id": "bobbyyyan__scorch",
"repo_url": "https://github.com/bobbyyyan/scorch.git",
"base_commit": "2105686711292cd1d1eb2438035b555fa644bf2a",
"language": "python",
"setup_commands": [],
"test_command": "/testbed/run_tests.sh",
"test_timeout": 3000,
"refactor_type": "",
"description": "Add a `transpose()` method and `.T` property to `STensor` that physically transposes a 2D sparse tensor, reorganizing its storage layout. This is not a lazy/view transpose - it must produce a new `STensor` with properly restructured index arrays. For CSR format (level types `[DENSE, COMPRESSED]`): transposing produces CSC-like storage, which in scorch's format system is `[COMPRESSED, DENSE]` with swapped dimensions - but to keep it as valid CSR of the transposed matrix, you need to rebuild the `crow_indices` and `col_indices` arrays for the transposed matrix. For COO format: swap the row and column coordinate arrays and sort by the new row-major order. Implement this through the CIN compilation pipeline: express transpose as `B[j,i] = A[i,j]` where B has an appropriate output format, then lower and execute. This requires the codegen to correctly handle the index variable reordering - the iteration follows A's storage order but writes to B in transposed order. The format inference for the output should produce the transposed format (if A is `ds`, the transposed output should be `ds` of the transposed shape). Add a `permute(dims)` method for general dimension reordering that works for tensors with more than 2 dimensions. Write tests that: transpose CSR matrices and verify against `torch.sparse_csr_tensor(...).to_dense().T`, transpose COO matrices, transpose dense STensors, verify that `A.T.T` equals `A`, and test that `(A @ B).T` equals `B.T @ A.T`.\n\nNote: Do not run the test suite to verify your implementation. You may run build commands (e.g., `pip install -e .`) to check that the code compiles, but do not execute any pytest commands. The full test suite will be run separately during evaluation.",
"files": [],
"task_type": "feature",
"categories": [
"API/Shape & Layout/Transpose & permute"
]
},
{
"instance_id": "bobbyyyan__scorch-feature_sum_reduction",
"repo_id": "bobbyyyan__scorch",
"repo_url": "https://github.com/bobbyyyan/scorch.git",
"base_commit": "2105686711292cd1d1eb2438035b555fa644bf2a",
"language": "python",
"setup_commands": [],
"test_command": "/testbed/run_tests.sh",
"test_timeout": 3000,
"refactor_type": "",
"description": "Implement `sum(axis=None)` on `STensor` and as a standalone function in `ops.py` that reduces a sparse tensor along specified dimensions. `axis=None` reduces all dimensions to a scalar. `axis=0` sums along rows (producing a row vector). `axis=1` sums along columns (producing a column vector). For a 2D CSR matrix with `axis=1`, this corresponds to the CIN statement `y[i] = A[i,j]` - iterating over j and accumulating into y[i]. This is a reduction operation that requires the scheduler's workspace insertion: the reduction variable (j) needs a workspace/accumulator. For `axis=0`, the CIN is `y[j] = A[i,j]` - reducing over i. The format of the output depends on the reduction: summing a sparse matrix along its compressed dimension may produce a dense vector, while summing along the dense dimension preserves structure. Implement through the full CIN pipeline with auto-scheduling. The workspace should be a dense accumulator since reductions typically produce dense results. Also implement `mean(axis=...)` as `sum(axis) / count` where count is either the dimension size (for dense dims) or the number of non-zeros (for sparse dims, depending on semantics - clarify: mean should divide by the full dimension size, not nnz). Write tests comparing against `A.to_dense().to_torch().sum(dim=axis)` for various sparse formats and axis values, including higher-dimensional tensors.\n\nNote: Do not run the test suite to verify your implementation. You may run build commands (e.g., `pip install -e .`) to check that the code compiles, but do not execute any pytest commands. The full test suite will be run separately during evaluation.",
"files": [],
"task_type": "feature",
"categories": [
"API/Reductions & Scans/Aggregate"
]
},
{
"instance_id": "bobbyyyan__scorch-feature_sddmm",
"repo_id": "bobbyyyan__scorch",
"repo_url": "https://github.com/bobbyyyan/scorch.git",
"base_commit": "2105686711292cd1d1eb2438035b555fa644bf2a",
"language": "python",
"setup_commands": [],
"test_command": "/testbed/run_tests.sh",
"test_timeout": 3000,
"refactor_type": "",
"description": "Implement `sddmm(S, A, B)` as a first-class operation in `ops.py`. SDDMM computes the **sampled dense-dense matrix multiply**: `C = S * (A @ B)` where S is a sparse matrix (the \"sample\" mask), A and B are dense matrices, `*` is element-wise multiply, and @ is matrix multiply. This is a critical primitive in sparse ML - it appears in the backward pass of sparse attention and GNN message passing. The CIN expression is `C[i,j] = S[i,j] * A[i,k] * B[k,j]` where C's output format matches S's sparsity pattern (since C is zero wherever S is zero). The key optimization is that iteration should follow S's sparsity structure: for each non-zero `(i,j)` in S, compute the dot product `sum_k A[i,k] * B[k,j]` and multiply by `S[i,j]`. This avoids computing the full dense product `A @ B`. The reduction over k requires a workspace. Implement this through the CIN pipeline with proper format inference - the output should have the same format as S. Also write an optimized C++ kernel in `csrc/` for the CSR case: iterate over S's non-zero entries, and for each `(i,j)` compute the k-reduction as a dot product of row i of A and column j of B. Write tests verifying against `S.to_dense().to_torch() * (A @ B)` for CSR and COO sparsity patterns, various matrix sizes, and different sparsity levels.\n\nNote: Do not run the test suite to verify your implementation. You may run build commands (e.g., `pip install -e .`) to check that the code compiles, but do not execute any pytest commands. The full test suite will be run separately during evaluation.",
"files": [],
"task_type": "feature",
"categories": [
"API/Linear Algebra/Matmul variants"
]
},
{
"instance_id": "bobbyyyan__scorch-feature_autograd",
"repo_id": "bobbyyyan__scorch",
"repo_url": "https://github.com/bobbyyyan/scorch.git",
"base_commit": "2105686711292cd1d1eb2438035b555fa644bf2a",
"language": "python",
"setup_commands": [],
"test_command": "/testbed/run_tests.sh",
"test_timeout": 3000,
"refactor_type": "",
"description": "Add automatic differentiation support for scorch's core operations by implementing custom `torch.autograd.Function` subclasses. Currently, `STensor` extends `torch.nn.Module` and has a `requires_grad` flag, but no gradient computation is implemented. Implement: (1) `SparseAddFunction` - for `C = A + B`, the backward pass passes gradients through: `dA = dC` and `dB = dC` (with appropriate format conversion if needed). (2) `SparseMatMulFunction` - for `C = A @ B`, the backward pass is `dA = dC @ B^T` and `dB = A^T @ dC`. This requires the transpose operation to work. (3) `SparseHadamardFunction` - for `C = A * B` (element-wise), `dA = dC * B` and `dB = dC * A`. Wrap each operation to participate in PyTorch's autograd graph: the forward pass calls the existing scorch operation and saves tensors needed for backward, the backward pass computes gradients using scorch operations. The gradient tensors should be returned as dense `torch.Tensor` objects (by calling `to_dense().to_torch()`) so they integrate with standard PyTorch optimizers. Modify `STensor` to store an underlying `torch.Tensor` that participates in the autograd graph. Write tests that: create sparse tensors with `requires_grad=True`, perform operations, compute a scalar loss (e.g., `sum()`), call `backward()`, and verify gradients match `torch.autograd.gradcheck` on dense equivalents.\n\nNote: Do not run the test suite to verify your implementation. You may run build commands (e.g., `pip install -e .`) to check that the code compiles, but do not execute any pytest commands. The full test suite will be run separately during evaluation.",
"files": [],
"task_type": "feature",
"categories": [
"API/ML Primitives/Autograd"
]
},
{
"instance_id": "bobbyyyan__scorch-feature_unary_ops",
"repo_id": "bobbyyyan__scorch",
"repo_url": "https://github.com/bobbyyyan/scorch.git",
"base_commit": "2105686711292cd1d1eb2438035b555fa644bf2a",
"language": "python",
"setup_commands": [],
"test_command": "/testbed/run_tests.sh",
"test_timeout": 3000,
"refactor_type": "",
"description": "Add compiler-level support for unary operations on sparse tensors. Currently, CIN only supports `BinaryOp` (add, mul, sub, div). Extend the IR and compilation pipeline to support unary operations: (1) Add a `UnaryOp` CIN node with an `Operation` enum including `ABS`, `NEG`, `RELU`, `SQRT`, `EXP`, `LOG`, `TANH`, `SIGMOID`. (2) Add LLIR lowering for `UnaryOp` - generate the corresponding C++ math function call (e.g., `std::abs()`, `std::sqrt()`, `std::exp()`). For `RELU`, generate `std::max(0.0f, x)`. For `SIGMOID`, generate `1.0f / (1.0f + std::exp(-x))`. (3) Update `LLIRLowerer.lower_llir()` in `codegen.py` to emit the C++ code. (4) Add convenience methods to `STensor`: `abs()`, `neg()`, `relu()`, `sqrt()`, `exp()`, `log()`. Each should build a CIN statement like `B[i,j] = abs(A[i,j])`, lower it, and execute. An important consideration: for sparse tensors, unary ops should only iterate over non-zero values (preserving the sparsity pattern), except for operations where `f(0) != 0` (like `exp` where `exp(0) = 1`) - in those cases, the output must be dense. The format inference should detect this: if `f(0) == 0` (abs, neg, relu, sqrt), output format matches input; if `f(0) != 0` (exp, sigmoid, tanh), output must be dense. Write tests for each unary op on CSR, COO, and dense STensors, verifying against the equivalent `torch` function on dense tensors.\n\nNote: Do not run the test suite to verify your implementation. You may run build commands (e.g., `pip install -e .`) to check that the code compiles, but do not execute any pytest commands. The full test suite will be run separately during evaluation.",
"files": [],
"task_type": "feature",
"categories": [
"API/Element-wise/Unary math"
]
},
{
"instance_id": "bobbyyyan__scorch-feature_conv1d",
"repo_id": "bobbyyyan__scorch",
"repo_url": "https://github.com/bobbyyyan/scorch.git",
"base_commit": "2105686711292cd1d1eb2438035b555fa644bf2a",
"language": "python",
"setup_commands": [],
"test_command": "/testbed/run_tests.sh",
"test_timeout": 3000,
"refactor_type": "",
"description": "Implement `conv1d(input, kernel, padding=0)` in `ops.py` for sparse 1D signals. The mathematical definition is `y[i] = sum_k x[i + k] * w[k]` where `x` is a sparse 1D input signal and `w` is a (typically dense) convolution kernel. This requires **computed index** support in the CIN pipeline. Scorch already has partial support via `IndexVarAdd` (CIN node for `i + k`), but it's not fully wired through the lowering and codegen stages. Complete the implementation: (1) Make `IndexVarAdd` (and more generally `IndexVarExpr`) work end-to-end in `cin_lowerer.py` - when a `TensorAccess` uses a computed index like `x[i + k]`, the generated C++ should compute the offset `i + k` and use it for the array access. (2) Handle bounds checking: `i + k` must be within `[0, len(x))`. (3) The iteration strategy: iterate over the kernel indices k (dense, small), and for each k, iterate over the input's non-zeros for the shifted positions. (4) Format inference: the output is at least as dense as the input. Implement through the CIN pipeline. For the 2D extension, `conv2d` would use `y[i,j] = x[i+ki, j+kj] * w[ki, kj]`. Write tests: convolve sparse 1D vectors with small kernels (size 3, 5), compare against `torch.nn.functional.conv1d` on the dense equivalents, test various padding values, and verify with different input sparsity patterns.\n\nNote: Do not run the test suite to verify your implementation. You may run build commands (e.g., `pip install -e .`) to check that the code compiles, but do not execute any pytest commands. The full test suite will be run separately during evaluation.",
"files": [],
"task_type": "feature",
"categories": [
"API/Convolution & Pooling"
]
},
{
"instance_id": "bobbyyyan__scorch-feature_ell_format",
"repo_id": "bobbyyyan__scorch",
"repo_url": "https://github.com/bobbyyyan/scorch.git",
"base_commit": "2105686711292cd1d1eb2438035b555fa644bf2a",
"language": "python",
"setup_commands": [],
"test_command": "/testbed/run_tests.sh",
"test_timeout": 3000,
"refactor_type": "",
"description": "Add ELL (ELLPACK) sparse format support to scorch's format system. ELL format stores a fixed number of entries per row (`max_nnz_per_row`), using two 2D arrays: `indices[num_rows][max_nnz]` for column indices and `values[num_rows][max_nnz]` for values. Rows with fewer non-zeros are padded with a sentinel value (e.g., -1 for indices, 0.0 for values). ELL is efficient for matrices where rows have similar numbers of non-zeros, and it's very GPU-friendly due to regular memory access patterns. Implement: (1) Add `ELL` level type to `LevelType` enum in `format.py`. An ELL tensor has format `[DENSE, ELL]` - the first level (rows) is dense, the second level (columns within a row) uses ELL storage. (2) Add `from_ell(indices, values, shape)` and `to_ell(max_nnz_per_row=None)` to `STensor`. If `max_nnz_per_row` is not specified, compute it from the data. (3) Add a new `ModeIterator` variant in `iterator.py` for ELL levels - iteration over an ELL level is a fixed-count loop with sentinel checking: `for (k = 0; k < max_nnz; k++) { j = indices[i][k]; if (j < 0) break; ... }`. (4) Update `CINLowerer` to handle ELL level arrays and lowering. (5) Update `codegen.py` if needed. Write tests: construct ELL tensors from known data, convert between ELL and other formats (CSR, COO, dense), perform SpMV and element-wise operations using ELL tensors, and verify against dense equivalents.\n\nNote: Do not run the test suite to verify your implementation. You may run build commands (e.g., `pip install -e .`) to check that the code compiles, but do not execute any pytest commands. The full test suite will be run separately during evaluation.",
"files": [],
"task_type": "feature",
"categories": [
"Format/Block & ELL family"
]
},
{
"instance_id": "bobbyyyan__scorch-feature_getitem_2d",
"repo_id": "bobbyyyan__scorch",
"repo_url": "https://github.com/bobbyyyan/scorch.git",
"base_commit": "2105686711292cd1d1eb2438035b555fa644bf2a",
"language": "python",
"setup_commands": [],
"test_command": "/testbed/run_tests.sh",
"test_timeout": 3000,
"refactor_type": "",
"description": "Add `__getitem__` support to `STensor` for extracting sub-tensors. Implement three indexing modes: (1) **Integer indexing**: `A[i]` extracts row i from a 2D tensor, returning a 1D STensor. For CSR, this is O(1) - just extract the slice `values[crow[i]:crow[i+1]]` and `col_indices[crow[i]:crow[i+1]]`. For COO, filter entries where row index equals i. (2) **Slice indexing**: `A[start:stop]` extracts a contiguous range of rows, returning a 2D STensor with the same format. For CSR, extract the sub-range of the position array and adjust indices. For COO, filter and shift coordinates. (3) **Boolean mask indexing**: `A[mask]` where mask is a 1D boolean tensor, extracts rows where mask is True. This is useful for graph operations (selecting nodes). Also implement column indexing for the second dimension: `A[:, j]` extracts column j (returns 1D), `A[:, start:stop]` extracts column range (returns 2D). Column slicing on CSR requires scanning all rows, while on COO it's a coordinate filter. For each indexing mode, determine the output format: row slicing CSR -> CSR with adjusted indices, row slicing COO -> COO with shifted coordinates, column slicing -> compressed on the remaining dimension. Write tests covering all indexing modes on CSR, COO, and dense tensors, verifying against equivalent PyTorch dense tensor indexing.\n\nNote: Do not run the test suite to verify your implementation. You may run build commands (e.g., `pip install -e .`) to check that the code compiles, but do not execute any pytest commands. The full test suite will be run separately during evaluation.",
"files": [],
"task_type": "feature",
"categories": [
"API/Indexing & Mutation/Read indexing"
]
},
{
"instance_id": "bobbyyyan__scorch-feature_batched_matmul",
"repo_id": "bobbyyyan__scorch",
"repo_url": "https://github.com/bobbyyyan/scorch.git",
"base_commit": "2105686711292cd1d1eb2438035b555fa644bf2a",
"language": "python",
"setup_commands": [],
"test_command": "/testbed/run_tests.sh",
"test_timeout": 3000,
"refactor_type": "",
"description": "Add support for batched sparse matrix multiplication. A batched sparse tensor is a 3D STensor with shape `(batch_size, M, N)` where the first dimension is dense (the batch dimension) and the remaining dimensions can be sparse. Implement: (1) Extend `STensor` to support a batch dimension - conceptually this is a list of independent sparse matrices stored contiguously. The format should be `[DENSE, ...]` where the first level is always dense (batch), and remaining levels can be any format. For example, batched CSR is format `\"dds\"` - dense batch, dense rows, compressed columns. (2) Add `from_batch(tensors: List[STensor])` that stacks multiple STensors into a batched tensor. The batch dimension's storage simply concatenates the inner tensors' storage with appropriate offsets. (3) Implement `batched_matmul(A, B)` in `ops.py` where A is batched sparse and B is either batched dense or a single dense matrix broadcast across the batch. The CIN expression is `C[b,i,j] = A[b,i,k] * B[b,k,j]` (or `B[k,j]` for broadcast). (4) The C++ kernel should parallelize across the batch dimension with `#pragma omp parallel for` - each batch element is independent. (5) Extend the codegen to handle the 3-level nesting with the dense batch level. Write tests: create batches of sparse matrices, multiply against batched dense matrices, verify against `torch.bmm` on dense equivalents, test broadcasting B across the batch, and verify with different batch sizes and sparsity patterns.\n\nNote: Do not run the test suite to verify your implementation. You may run build commands (e.g., `pip install -e .`) to check that the code compiles, but do not execute any pytest commands. The full test suite will be run separately during evaluation.",
"files": [],
"task_type": "feature",
"categories": [
"API/Linear Algebra/Matmul variants"
]
},
{
"instance_id": "bobbyyyan__scorch-feature_sparse_softmax",
"repo_id": "bobbyyyan__scorch",
"repo_url": "https://github.com/bobbyyyan/scorch.git",
"base_commit": "2105686711292cd1d1eb2438035b555fa644bf2a",
"language": "python",
"setup_commands": [],
"test_command": "/testbed/run_tests.sh",
"test_timeout": 3000,
"refactor_type": "",
"description": "Implement `sparse_softmax(A, dim=-1)` that computes softmax over rows (or columns) of a sparse matrix, operating only on the non-zero entries. This is critical for sparse attention in transformers and GNNs. The operation is: for each row i, compute `softmax(A[i,:])` over the non-zero entries only, setting zero entries to zero in the output (rather than `exp(0)/sum`). The algorithm requires three passes per row: (1) find the maximum value in the row (for numerical stability), (2) compute `exp(val - max)` for each non-zero entry, (3) sum the exponentials, (4) divide each entry by the sum. Implement this as a multi-stage compiled kernel: define the operation through CIN with workspace patterns - the row-max and row-sum are reductions over the compressed dimension that require workspace accumulators. The implementation should generate a single C++ kernel that fuses all passes for cache efficiency. For CSR format, each row's entries are contiguous, making this efficient. For COO format, entries must first be grouped by row. The output has the same sparsity pattern as the input. Write tests: construct sparse matrices with known values, compute sparse softmax, verify that (a) each row's non-zero entries sum to 1.0, (b) results match `torch.softmax(A_dense, dim=-1)` at the non-zero positions, (c) test numerical stability with large values, and (d) test with various sparsity patterns.\n\nNote: Do not run the test suite to verify your implementation. You may run build commands (e.g., `pip install -e .`) to check that the code compiles, but do not execute any pytest commands. The full test suite will be run separately during evaluation.",
"files": [],
"task_type": "feature",
"categories": [
"API/ML Primitives/Activations & losses"
]
},
{
"instance_id": "bobbyyyan__scorch-feature_dia_format",
"repo_id": "bobbyyyan__scorch",
"repo_url": "https://github.com/bobbyyyan/scorch.git",
"base_commit": "2105686711292cd1d1eb2438035b555fa644bf2a",
"language": "python",
"setup_commands": [],
"test_command": "/testbed/run_tests.sh",
"test_timeout": 3000,
"refactor_type": "",
"description": "Add DIA (diagonal) sparse format support. DIA format is designed for banded/diagonal matrices - it stores the matrix as a set of diagonals, each represented by a dense array. Storage consists of: `offsets` (1D array of diagonal offsets, where 0 is the main diagonal, positive is above, negative is below) and `data` (2D array of shape `[num_diags, max_dim]` containing the diagonal values). This format is optimal for stencil computations, finite difference/element matrices, and tridiagonal systems. Implement: (1) Add `DIA` level type to `format.py` and represent DIA tensors with a new storage scheme - store `offsets` and `data` tensors. (2) Add `from_dia(offsets, data, shape)` and `to_dia()` to `STensor`. The `to_dia()` method should auto-detect the non-zero diagonals. (3) Implement DIA-specific SpMV: `y[i] = A[i,j] * x[j]` where iterating over DIA format means for each diagonal d with offset o, iterate over elements where `j = i + o`. This is a specialized iteration pattern - the loop nest is `for d in diags: for i in range: y[i] += data[d][i] * x[i + offsets[d]]`. (4) Write a C++ kernel for DIA SpMV that is vectorization-friendly - each diagonal is a contiguous array, so the inner loop over i is trivially vectorizable. (5) Implement conversion between DIA and other formats (CSR, dense). Write tests: construct tridiagonal and pentadiagonal matrices, perform SpMV, verify against dense equivalents, test conversion round-trips, and benchmark against CSR SpMV to verify the format advantage for banded matrices.\n\nNote: Do not run the test suite to verify your implementation. You may run build commands (e.g., `pip install -e .`) to check that the code compiles, but do not execute any pytest commands. The full test suite will be run separately during evaluation.",
"files": [],
"task_type": "feature",
"categories": [
"Format/Block & ELL family"
]
},
{
"instance_id": "bobbyyyan__scorch-feature_elementwise_sub",
"repo_id": "bobbyyyan__scorch",
"repo_url": "https://github.com/bobbyyyan/scorch.git",
"base_commit": "33532a3",
"language": "python",
"setup_commands": [],
"test_command": "/testbed/run_tests.sh",
"test_timeout": 3000,
"refactor_type": "",
"description": "Implement element-wise subtraction (`__sub__`) on `STensor` through the full CIN compilation pipeline, following the same pattern as the existing `__add__` implementation. The key difference from addition is asymmetry: subtraction uses union semantics (like addition, the result is non-zero where *either* operand is non-zero), but the iteration lattice must handle SUB asymmetry - when only B is present at a position, the result should be `-B[i,j]`, not `0`. The CIN statement is `C[i,j] = A[i,j] - B[i,j]` with `Operation.SUB`. Also implement `__rsub__` for reverse subtraction (`scalar - STensor`). Handle all format combinations: sparse-sparse (CSR-CSR, COO-COO, mixed), sparse-dense, and dense-dense. Write comprehensive tests covering: sparse-sparse with overlapping and non-overlapping sparsity patterns, sparse-dense, verify that `A - A` produces a zero tensor, verify that `A - B != B - A` (non-commutativity), various format combinations (CSR, COO, mixed), scalar subtraction, and verify all results against `A.to_dense().to_torch() - B.to_dense().to_torch()`.\n\nNote: Do not run the test suite to verify your implementation. You may run build commands (e.g., `pip install -e .`) to check that the code compiles, but do not execute any pytest commands. The full test suite will be run separately during evaluation.",
"files": [],
"task_type": "feature",
"categories": [
"API/Element-wise/Binary arithmetic"
]
},
{
"instance_id": "bobbyyyan__scorch-feature_fill_value",
"repo_id": "bobbyyyan__scorch",
"repo_url": "https://github.com/bobbyyyan/scorch.git",
"base_commit": "33532a3",
"language": "python",
"setup_commands": [],
"test_command": "/testbed/run_tests.sh",
"test_timeout": 3000,
"refactor_type": "",
"description": "Extend the format system to support configurable non-zero fill values (currently hardcoded to 0.0). Changes required: (1) `format.py` - add a configurable `_fill_value` attribute to the `Format` class with a default of 0.0. (2) `stensor.py` - propagate the fill value through format conversions (`to_dense`, `to_sparse`, etc.) so that 'missing' entries use the fill value instead of 0.0. (3) `cin_lowerer.py` - replace the `memset(0)` initialization of output arrays with a fill loop using the configured fill value. (4) `cin.py` - add a `fill_value` attribute to `TensorVar` so the compilation pipeline is aware of each tensor's fill value. (5) `csrc/header.cpp` - add a `fill_value` field to the `Tensor` struct so the C++ runtime knows the fill value. Write comprehensive tests covering: fill_value=inf and fill_value=-inf round-trip through format conversions, verify that kernel initialization uses the configured fill value, backward compatibility with default 0.0 fill value (existing tests should still pass), arithmetic operations with non-zero fill values, and format conversion correctness.\n\nNote: Do not run the test suite to verify your implementation. You may run build commands (e.g., `pip install -e .`) to check that the code compiles, but do not execute any pytest commands. The full test suite will be run separately during evaluation.",
"files": [],
"task_type": "feature",
"categories": [
"Format/Semantic extensions"
]
},
{
"instance_id": "bobbyyyan__scorch-feature_outer_product",
"repo_id": "bobbyyyan__scorch",
"repo_url": "https://github.com/bobbyyyan/scorch.git",
"base_commit": "33532a3",
"language": "python",
"setup_commands": [],
"test_command": "/testbed/run_tests.sh",
"test_timeout": 3000,
"refactor_type": "",
"description": "Implement a sparse outer product operation `outer(a, b)` in `ops.py` that takes two 1D sparse vectors and produces a 2D sparse matrix. The CIN expression is `C[i,j] = A[i] * B[j]` (equivalent to einsum `\"i,j->ij\"`). This is a dimension-increasing operation - verify that the iteration lattice correctly handles it with no reduction variable. The format inference should produce an appropriate output format based on the inputs. The result has `nnz_a * nnz_b` non-zeros when both inputs are sparse. Write comprehensive tests covering: sparse-sparse outer product, sparse-dense, dense-dense, verify the nnz count equals `nnz_a * nnz_b`, test with vectors of different lengths, verify results match `torch.outer(a.to_dense().to_torch(), b.to_dense().to_torch())`, test with various sparse formats (CSR, COO), and edge cases like zero vectors and single-element vectors.\n\nNote: Do not run the test suite to verify your implementation. You may run build commands (e.g., `pip install -e .`) to check that the code compiles, but do not execute any pytest commands. The full test suite will be run separately during evaluation.",
"files": [],
"task_type": "feature",
"categories": [
"API/Linear Algebra/Tensor products"
]
},
{
"instance_id": "bobbyyyan__scorch-feature_dtype_int_float64",
"repo_id": "bobbyyyan__scorch",
"repo_url": "https://github.com/bobbyyyan/scorch.git",
"base_commit": "33532a3",
"language": "python",
"setup_commands": [],
"test_command": "/testbed/run_tests.sh",
"test_timeout": 3000,
"refactor_type": "",
"description": "Extend the compilation pipeline to support multiple data types beyond float32, specifically int32, int64, and float64. The dtype maps in `utils.py` already define these types but they aren't exercised through the pipeline. Changes required: (1) `stensor.py` - preserve dtype through all operations and format conversions. (2) `cin_lowerer.py` - make `ResultTensorAssembler` dtype-generic so it allocates and initializes arrays with the correct type. (3) `codegen.py` - emit correct C++ pointer types and casts based on tensor dtype (e.g., `int32_t*` instead of `float*`). (4) `ops.py` - implement dtype propagation rules for binary operations. Handle type promotion: int32 + float32 -> float32, int32 + int64 -> int64, int64 + float32 -> float64, etc. Write comprehensive tests covering: int32 SpMV (sparse matrix-vector multiply), float64 matmul with precision verification (results should be more precise than float32), dtype preservation through operations, type promotion rules for mixed-dtype operations, format conversions preserving dtype, and edge cases with integer overflow.\n\nNote: Do not run the test suite to verify your implementation. You may run build commands (e.g., `pip install -e .`) to check that the code compiles, but do not execute any pytest commands. The full test suite will be run separately during evaluation.",
"files": [],
"task_type": "feature",
"categories": [
"API/Type System/Value dtypes"
]
},
{
"instance_id": "bobbyyyan__scorch-feature_kron",
"repo_id": "bobbyyyan__scorch",
"repo_url": "https://github.com/bobbyyyan/scorch.git",
"base_commit": "33532a3",
"language": "python",
"setup_commands": [],
"test_command": "/testbed/run_tests.sh",
"test_timeout": 3000,
"refactor_type": "",
"description": "Implement a sparse Kronecker product operation `kron(A, B)` in `ops.py`. The Kronecker product produces output `C[i*P+p, j*Q+q] = A[i,j] * B[p,q]` where A is MxN, B is PxQ, and C is (M*P)x(N*Q). This requires new CIN IR nodes for computed indices with multiplication - currently only `IndexVarAdd` exists for additive index expressions. Add `IndexVarMul` and/or `IndexVarAffine` (for general `a*i + b` expressions) to `cin.py`. Update `cin_lowerer.py` to lower these new index expression nodes into C++ arithmetic. Update `iterator.py` to handle the composed index iteration. Update `ops.py` with the `kron` function that builds the CIN tree using these new index nodes. Write comprehensive tests covering: sparse-sparse Kronecker product, verify that `kron(I, B)` equals a block-diagonal matrix, shape verification (output is (M*P)x(N*Q)), nnz verification, results match `torch.kron(A.to_dense().to_torch(), B.to_dense().to_torch())`, test with various sparse formats, and edge cases like identity matrices and single-element matrices.\n\nNote: Do not run the test suite to verify your implementation. You may run build commands (e.g., `pip install -e .`) to check that the code compiles, but do not execute any pytest commands. The full test suite will be run separately during evaluation.",
"files": [],
"task_type": "feature",
"categories": [
"API/Linear Algebra/Tensor products"
]
},
{
"instance_id": "bobbyyyan__scorch-feature_trace_diagonal",
"repo_id": "bobbyyyan__scorch",
"repo_url": "https://github.com/bobbyyyan/scorch.git",
"base_commit": "33532a3",
"language": "python",
"setup_commands": [],
"test_command": "/testbed/run_tests.sh",
"test_timeout": 3000,
"refactor_type": "",
"description": "Implement `trace(A)` and `diagonal(A, offset=0)` operations for sparse tensors. The CIN expression for diagonal extraction is `d[i] = A[i,i]` - the same index variable appears in both positions. This requires 'locate' capability in the iteration lattice: when iterating over the first dimension, the second dimension must perform random access (binary search in the compressed level) rather than sequential iteration. Changes required: (1) `iter_lattice.py` - add support for 'locate' mode when the same index variable is used in multiple positions. (2) `cin_lowerer.py` - generate `std::lower_bound` for binary search in compressed arrays when locate mode is needed. (3) `iterator.py` - add a `locate` method to `ModeIterator` for random-access lookup. (4) `ops.py` - implement `trace(A)` (sum of diagonal) and `diagonal(A, offset=0)` (extract diagonal as 1D tensor). Support non-zero offsets for super/sub-diagonals. Write comprehensive tests covering: trace of identity matrix equals n, trace and diagonal on CSR/COO/dense formats, various diagonal offsets (positive and negative), non-square matrices, verify results match torch equivalents, and verify that trace(A) equals sum(diagonal(A)).\n\nNote: Do not run the test suite to verify your implementation. You may run build commands (e.g., `pip install -e .`) to check that the code compiles, but do not execute any pytest commands. The full test suite will be run separately during evaluation.",
"files": [],
"task_type": "feature",
"categories": [
"API/Linear Algebra/Tensor products"
]
},
{
"instance_id": "bobbyyyan__scorch-feature_norm",
"repo_id": "bobbyyyan__scorch",
"repo_url": "https://github.com/bobbyyyan/scorch.git",
"base_commit": "33532a3",
"language": "python",
"setup_commands": [],
"test_command": "/testbed/run_tests.sh",
"test_timeout": 3000,
"refactor_type": "",
"description": "Implement `norm(A, dim, ord=2)` for computing per-row or per-column norms of sparse tensors. This requires: (1) UnaryOp lowering - `abs` and `sqrt` are defined in the CIN IR but not lowered through the compilation pipeline. Implement their lowering in `cin_lowerer.py` and `codegen.py` to emit the corresponding C++ math calls (`std::abs`, `std::sqrt`). (2) MAX reduction support - extend the `Operation` enum to include MAX, and implement its lowering for computing infinity norms. (3) Two-stage computation for L2 norm - square elements, reduce (sum), then take square root. Support three norm orders: L1 (`ord=1`, sum of absolute values), L2 (`ord=2`, square root of sum of squares), and infinity (`ord=inf`, maximum absolute value). Write comprehensive tests covering: L1/L2/infinity norms on CSR/COO/dense tensors, row norms (dim=1) and column norms (dim=0), identity matrix norms, zero-row handling, results match `torch.linalg.norm(A.to_dense().to_torch(), ord=ord, dim=dim)`, and various sparsity patterns.\n\nNote: Do not run the test suite to verify your implementation. You may run build commands (e.g., `pip install -e .`) to check that the code compiles, but do not execute any pytest commands. The full test suite will be run separately during evaluation.",
"files": [],
"task_type": "feature",
"categories": [
"API/Reductions & Scans/Aggregate"
]
},
{
"instance_id": "bobbyyyan__scorch-feature_kernel_fusion",
"repo_id": "bobbyyyan__scorch",
"repo_url": "https://github.com/bobbyyyan/scorch.git",
"base_commit": "33532a3",
"language": "python",
"setup_commands": [],
"test_command": "/testbed/run_tests.sh",
"test_timeout": 3000,
"refactor_type": "",
"description": "Implement kernel fusion for chained sparse operations. When multiple operations are composed (e.g., `C = (A @ B) + D`), currently each operation compiles and executes a separate C++ kernel with intermediate materialization. Implement a fusion system that compiles chained operations into a single C++ kernel. Design: (1) Create a `LazyGraph` that records operations instead of executing them immediately. (2) Implement a `Fuser` class that merges CIN trees by substituting output->input connections - the output TensorVar of one operation becomes an intermediate in the fused CIN tree. (3) Provide a context manager `scorch.fuse()` that enables lazy evaluation and triggers fusion+compilation on exit. (4) Handle fusion constraints: operations with different iteration orders may not be fusible, multi-consumer intermediates require fallback to separate kernels. Write comprehensive tests covering: fused matmul+add produces correct results, verify single kernel compilation (only one C++ file generated), mixed format fusion, multi-consumer fallback to separate kernels, nested fuse contexts, and results match unfused computation.\n\nNote: Do not run the test suite to verify your implementation. You may run build commands (e.g., `pip install -e .`) to check that the code compiles, but do not execute any pytest commands. The full test suite will be run separately during evaluation.",
"files": [],
"task_type": "feature",
"categories": [
"Scheduler/Loop transformations/Fusion"
]
},
{
"instance_id": "bobbyyyan__scorch-feature_openmp_parallel",
"repo_id": "bobbyyyan__scorch",
"repo_url": "https://github.com/bobbyyyan/scorch.git",
"base_commit": "33532a3",
"language": "python",
"setup_commands": [],
"test_command": "/testbed/run_tests.sh",
"test_timeout": 3000,
"refactor_type": "",
"description": "Extend the code generation pipeline to emit OpenMP parallel directives for auto-generated sparse kernels. Changes required: (1) Add a `ParallelFor` node to the LLIR (Low-Level IR) to represent parallelizable loops. (2) In `cin_lowerer.py`, analyze loop nests for parallelizability - the outermost loop over a dense dimension with no cross-iteration dependencies is typically parallelizable. (3) In `codegen.py`, emit `#pragma omp parallel for` with appropriate clauses: `reduction(+:...)` for accumulator variables, `schedule(dynamic)` for load balancing on sparse iteration, and private clauses for workspace arrays. (4) Ensure workspace arrays are privatized per-thread to avoid data races. (5) Handle cases where parallelization is NOT safe: coordinate-format (COO) outer loops with potential write conflicts, loops with cross-iteration dependencies. Write comprehensive tests covering: parallel SpMV correctness matches sequential, parallel SpMM correctness, verify pragma string appears in generated C++ code, workspace privatization works correctly, coordinate loops are NOT parallelized, and results are deterministic across runs.\n\nNote: Do not run the test suite to verify your implementation. You may run build commands (e.g., `pip install -e .`) to check that the code compiles, but do not execute any pytest commands. The full test suite will be run separately during evaluation.",
"files": [],
"task_type": "feature",
"categories": [
"Codegen/Parallelism"
]
},
{
"instance_id": "bobbyyyan__scorch-feature_cat_stack",
"repo_id": "bobbyyyan__scorch",
"repo_url": "https://github.com/bobbyyyan/scorch.git",
"base_commit": "33532a3",
"language": "python",
"setup_commands": [],
"test_command": "/testbed/run_tests.sh",
"test_timeout": 3000,
"refactor_type": "",
"description": "Implement sparse tensor concatenation `cat(tensors, dim)` and stacking `stack(tensors, dim)` in `ops.py`. These are structural operations that operate on storage arrays directly (not through the CIN pipeline). For CSR concatenation along dim=0: concatenate `crow_indices` arrays (with appropriate offset adjustments), concatenate `col_indices` arrays, and concatenate `values` arrays. For CSR concatenation along dim=1: merge per-row column indices from each tensor, maintaining sorted order within each row. For COO: concatenate coordinate arrays with shifted indices to account for the offset along the concatenation dimension. For dense: standard tensor concatenation. `stack(tensors, dim)` adds a new dimension and stacks tensors along it. Write comprehensive tests covering: CSR concatenation along dim=0 and dim=1, COO concatenation along both dims, dense concatenation, stacking more than 2 tensors, mixed shapes (same dim along concatenation axis), results match `torch.cat` / `torch.stack` on dense equivalents, edge cases with empty tensors, and structural integrity of output storage arrays.\n\nNote: Do not run the test suite to verify your implementation. You may run build commands (e.g., `pip install -e .`) to check that the code compiles, but do not execute any pytest commands. The full test suite will be run separately during evaluation.",
"files": [],
"task_type": "feature",
"categories": [
"API/Shape & Layout/Concat & pad"
]
},
{
"instance_id": "bobbyyyan__scorch-feature_tensor_factories",
"repo_id": "bobbyyyan__scorch",
"repo_url": "https://github.com/bobbyyyan/scorch.git",
"base_commit": "33532a3",
"language": "python",
"setup_commands": [],
"test_command": "/testbed/run_tests.sh",
"test_timeout": 3000,
"refactor_type": "",
"description": "Implement sparse tensor creation utility factory class methods on `STensor`: (1) `STensor.eye(n, format)` - create an nxn identity matrix in the specified sparse format. (2) `STensor.diag(values, format)` - create a diagonal matrix from a 1D tensor of values. (3) `STensor.rand_sparse(shape, density, format)` - create a random sparse tensor with the specified density (fraction of non-zeros). (4) `STensor.zeros(shape, format)` - create a zero tensor in the specified format. (5) `STensor.ones_like(other)` - create a tensor of ones with the same shape and format as another tensor. (6) `STensor.full_like(other, fill_value)` - create a tensor filled with a given value with the same shape and format. Each method must construct the correct `TensorStorage` and `TensorIndex` for the requested format. Write comprehensive tests covering: `eye` matches `torch.eye`, `diag` matches `torch.diag`, density verification for `rand_sparse`, edge cases (size 0, size 1, density 0.0, density 1.0), format correctness for CSR/COO/dense outputs, `ones_like` and `full_like` preserve shape and format, and round-trip to dense verification.\n\nNote: Do not run the test suite to verify your implementation. You may run build commands (e.g., `pip install -e .`) to check that the code compiles, but do not execute any pytest commands. The full test suite will be run separately during evaluation.",
"files": [],
"task_type": "feature",
"categories": [
"API/Constructors & I/O/Factories"
]
},
{
"instance_id": "bobbyyyan__scorch-feature_scipy_interop",
"repo_id": "bobbyyyan__scorch",
"repo_url": "https://github.com/bobbyyyan/scorch.git",
"base_commit": "33532a3",
"language": "python",
"setup_commands": [],
"test_command": "/testbed/run_tests.sh",
"test_timeout": 3000,
"refactor_type": "",
"description": "Implement SciPy sparse matrix interoperability for `STensor`. Add two methods: (1) `STensor.from_scipy(scipy_matrix)` - construct an STensor from a SciPy sparse matrix, auto-detecting the format (CSR, CSC, or COO). Handle format mapping: scipy's `indptr`/`indices`/`data` arrays map to scorch's `crow_indices`/`col_indices`/`values`. CSC input should be converted to CSR (transpose the structure). (2) `STensor.to_scipy(format='csr')` - convert an STensor to a SciPy sparse matrix in the requested format. Support output formats: `'csr'` (scipy.sparse.csr_matrix), `'coo'` (scipy.sparse.coo_matrix), `'csc'` (scipy.sparse.csc_matrix). Guard the scipy import with a try/except and raise an informative error if scipy is not installed. Write comprehensive tests covering: round-trip CSR (scorch->scipy->scorch), round-trip COO, CSC input conversion, dtype handling (float32, float64, int32), large sparse matrix conversion, empty matrix handling, and value/index correctness verification.\n\nNote: Do not run the test suite to verify your implementation. You may run build commands (e.g., `pip install -e .`) to check that the code compiles, but do not execute any pytest commands. The full test suite will be run separately during evaluation.",
"files": [],
"task_type": "feature",
"categories": [
"API/Constructors & I/O/External I/O"
]
},
{
"instance_id": "bobbyyyan__scorch-feature_coalesce_coo",
"repo_id": "bobbyyyan__scorch",
"repo_url": "https://github.com/bobbyyyan/scorch.git",
"base_commit": "33532a3",
"language": "python",
"setup_commands": [],
"test_command": "/testbed/run_tests.sh",
"test_timeout": 3000,
"refactor_type": "",
"description": "Implement a `coalesce()` method for COO-format sparse tensors that sorts coordinates lexicographically and sums duplicate entries. Add an `is_coalesced` property that returns whether the tensor's coordinates are already sorted and free of duplicates. Support N-dimensional COO tensors (not just 2D). Add an optional `remove_zeros` parameter that, when True, also removes entries whose values are zero after summing duplicates. The implementation can use Python-level operations (torch.sort + scan) or leverage the CIN pipeline (the `coo_workspace` mechanism already sums duplicates during compilation). Calling `coalesce()` on a non-COO tensor should raise a `ValueError`. Write comprehensive tests covering: known duplicate entries are correctly summed, already-coalesced tensors are a no-op, 3D COO tensors coalesce correctly, round-trip `to_dense` matches after coalescing, `is_coalesced` returns correct boolean, `remove_zeros` eliminates zero-valued entries, non-COO tensors raise ValueError, and edge cases with all-duplicate coordinates.\n\nNote: Do not run the test suite to verify your implementation. You may run build commands (e.g., `pip install -e .`) to check that the code compiles, but do not execute any pytest commands. The full test suite will be run separately during evaluation.",
"files": [],
"task_type": "feature",
"categories": [
"API/Indexing & Mutation/Canonicalization"
]
},
{
"instance_id": "bobbyyyan__scorch-feature_setitem",
"repo_id": "bobbyyyan__scorch",
"repo_url": "https://github.com/bobbyyyan/scorch.git",
"base_commit": "33532a3",
"language": "python",
"setup_commands": [],
"test_command": "/testbed/run_tests.sh",
"test_timeout": 3000,
"refactor_type": "",
"description": "Implement `__setitem__` on `STensor` for mutating sparse tensor entries in-place. Support four indexing modes: (1) Scalar assignment `A[i, j] = value` - for COO, search for existing entry and update or append; for CSR, search within the row's column range and update or insert with `crow_indices` adjustment; for dense, direct flat index assignment. (2) Row assignment `A[i, :] = values` - replace all entries in row i with the given values (1D tensor). (3) Column assignment `A[:, j] = values` - replace all entries in column j. (4) Block assignment `A[i1:i2, j1:j2] = submatrix` - replace a rectangular sub-region. For CSR format, insertions require shifting subsequent entries in `col_indices` and `values` arrays and updating `crow_indices` for all subsequent rows. Write comprehensive tests covering: update an existing non-zero entry, insert a new non-zero entry (structural modification), row assignment, column assignment, block assignment, CSR structural integrity after mutation (crow_indices consistency), negative indexing, out-of-bounds errors (should raise IndexError), COO and dense format support, and verify results by converting to dense and comparing.\n\nNote: Do not run the test suite to verify your implementation. You may run build commands (e.g., `pip install -e .`) to check that the code compiles, but do not execute any pytest commands. The full test suite will be run separately during evaluation.",
"files": [],
"task_type": "feature",
"categories": [
"API/Indexing & Mutation/Write & mutation"
]
},
{
"instance_id": "bobbyyyan__scorch-feature_bsr_block_matmul",
"repo_id": "bobbyyyan__scorch",
"repo_url": "https://github.com/bobbyyyan/scorch.git",
"base_commit": "33532a3",
"language": "python",
"setup_commands": [],
"test_command": "/testbed/run_tests.sh",
"test_timeout": 3000,
"refactor_type": "",
"description": "Implement a hybrid block-sparse computation path that delegates dense block arithmetic to PyTorch dense operators. Add 2D block-structured constructors/converters on `STensor`: `from_bsr(crow_indices, col_indices, values, block_size, shape)` and `to_bsr(block_size)` where values has shape `(nnz_blocks, block_h, block_w)`. Then add `ops.block_matmul(A, B, block_size, output_format='ds')`: sparse iteration is over block rows/columns, but each nonzero block product is computed with `torch.matmul` / `torch.addmm` on dense subtensors (rather than scalar inner loops). Keep output assembly in scorch storage structures, and support both block-sparse x dense and block-sparse x block-sparse. Include robust validation: block sizes must divide matrix dimensions, and values tensor shape must match metadata. Write comprehensive tests covering: correctness against dense baseline (`A.to_dense().to_torch() @ B.to_dense().to_torch()`), block sizes 1x1, 2x2, 4x4, rectangular blocks (e.g. 2x4), random sparsity patterns, conversion round-trips (`to_bsr` -> `from_bsr`), invalid metadata errors, and autograd behavior when block values require gradients.\n\nNote: Do not run the test suite to verify your implementation. You may run build commands (e.g., `pip install -e .`) to check that the code compiles, but do not execute any pytest commands. The full test suite will be run separately during evaluation.",
"files": [],
"task_type": "feature",
"categories": [
"API/Linear Algebra/Matmul variants"
]
},
{
"instance_id": "bobbyyyan__scorch-feature_csc_format",
"repo_id": "bobbyyyan__scorch",
"repo_url": "https://github.com/bobbyyyan/scorch.git",
"base_commit": "33532a3",
"language": "python",
"setup_commands": [],
"test_command": "/testbed/run_tests.sh",
"test_timeout": 3000,
"refactor_type": "",
"description": "Add native CSC (Compressed Sparse Column) support across `STensor` and ops. Implement `STensor.from_csc(ccol_indices, row_indices, values, shape)` and `STensor.to_csc()`, and extend `STensor.from_torch` to ingest `torch.sparse_csc` tensors. Update format parsing/validation so CSC is representable explicitly (for example via format + mode_order conventions) without relying on ad-hoc transposes. In `ops.matmul`, add a fast path for CSC x dense and CSC x CSC (either dedicated kernels or a structured transpose-to-CSR strategy that preserves expected output semantics). Ensure conversions preserve dtype and mode order metadata. Write comprehensive tests covering: `torch.sparse_csc_tensor` import/export, CSC round-trip correctness, CSC x dense matmul vs torch dense baseline, CSC x CSC matmul vs baseline, empty-column edge cases, unsorted column indices rejection/normalization behavior, dtype coverage (float32/float64/int32), and parity with existing CSR/COO behavior.\n\nNote: Do not run the test suite to verify your implementation. You may run build commands (e.g., `pip install -e .`) to check that the code compiles, but do not execute any pytest commands. The full test suite will be run separately during evaluation.",
"files": [],
"task_type": "feature",
"categories": [
"Format/Block & ELL family"
]
},
{
"instance_id": "bobbyyyan__scorch-feature_pytorch_broadcasting",
"repo_id": "bobbyyyan__scorch",
"repo_url": "https://github.com/bobbyyyan/scorch.git",
"base_commit": "33532a3",
"language": "python",
"setup_commands": [],
"test_command": "/testbed/run_tests.sh",
"test_timeout": 3000,
"refactor_type": "",
"description": "Implement full PyTorch-style broadcasting for sparse elementwise binary operations. Today `STensor.__add__` explicitly says broadcasting is TODO, and `__mul__`/`__sub__` paths do not provide broadcast semantics. Add shared broadcast shape inference and index mapping utilities (including implicit leading dimensions, singleton expansion, and scalar operands), then apply them consistently to `__add__`, `__sub__`, and `__mul__` (and any helper paths in `ops.py`). Broadcasting must be logical (no eager dense materialization) and should preserve sparse structure when possible. Ensure invalid broadcast pairs raise informative errors matching torch semantics. Write comprehensive tests covering: vector + matrix broadcasting, row/column vector broadcasting in 2D, scalar-tensor ops, higher-rank cases (e.g. `[B,1,N]` with `[1,M,N]`), mixed sparse+dense operands, format stability expectations, and exact equality vs PyTorch dense reference results.\n\nNote: Do not run the test suite to verify your implementation. You may run build commands (e.g., `pip install -e .`) to check that the code compiles, but do not execute any pytest commands. The full test suite will be run separately during evaluation.",
"files": [],
"task_type": "feature",
"categories": [
"API/Constructors & I/O/Broadcasting"
]
},
{
"instance_id": "bobbyyyan__scorch-feature_conv2d",
"repo_id": "bobbyyyan__scorch",
"repo_url": "https://github.com/bobbyyyan/scorch.git",
"base_commit": "33532a3",
"language": "python",
"setup_commands": [],
"test_command": "/testbed/run_tests.sh",
"test_timeout": 3000,
"refactor_type": "",
"description": "Implement sparse `conv2d` in `ops.py` for 2D inputs and kernels, analogous to the existing `conv1d` task but with full 2D indexing. Support arguments `stride`, `padding`, and `dilation` (start with `groups=1`), and allow sparse input with dense kernel as the primary path. Lower through CIN/LLIR when feasible, with clear fallback behavior when an unsupported format combination is requested. The output should be an `STensor` with correct shape inference and format handling. Write comprehensive tests covering: no-padding baseline, non-trivial padding/stride/dilation combinations, random sparse inputs at multiple densities, correctness against `torch.nn.functional.conv2d` on dense equivalents, edge cases with empty outputs, and reproducibility across COO/CSR-like input formats after conversion.\n\nNote: Do not run the test suite to verify your implementation. You may run build commands (e.g., `pip install -e .`) to check that the code compiles, but do not execute any pytest commands. The full test suite will be run separately during evaluation.",
"files": [],
"task_type": "feature",
"categories": [
"API/Convolution & Pooling"
]
},
{
"instance_id": "bobbyyyan__scorch-feature_singleton_level",
"repo_id": "bobbyyyan__scorch",
"repo_url": "https://github.com/bobbyyyan/scorch.git",
"base_commit": "33532a3",
"language": "python",
"setup_commands": [],
"test_command": "/testbed/run_tests.sh",
"test_timeout": 3000,
"refactor_type": "",
"description": "Add full `LevelType.SINGLETON` support throughout the sparse compiler pipeline. `LevelType.SINGLETON` already exists in `format.py`, but parser/iterator/lowering paths do not currently implement execution semantics. Define singleton semantics as one coordinate per parent position, then implement: (1) format parsing in `utils.parse_format` for singleton aliases, (2) iterator construction in `compiler/iterator.py`, (3) level-specific lowering in `cin_lowerer.py`, and (4) correct codegen/index array handling in generated C++. Also update conversion utilities so tensors can be converted into and out of singleton-containing formats. Write comprehensive tests covering: singleton format parsing, dense->singleton conversion and back, matrix/vector ops with singleton levels, mixed singleton+compressed level combinations, invalid singleton index structures raising errors, and value/index correctness vs dense reference computations.\n\nNote: Do not run the test suite to verify your implementation. You may run build commands (e.g., `pip install -e .`) to check that the code compiles, but do not execute any pytest commands. The full test suite will be run separately during evaluation.",
"files": [],
"task_type": "feature",
"categories": [
"Format/Compressed-style levels"
]
},
{
"instance_id": "bobbyyyan__scorch-feature_addmm_linear",
"repo_id": "bobbyyyan__scorch",
"repo_url": "https://github.com/bobbyyyan/scorch.git",
"base_commit": "33532a3",
"language": "python",
"setup_commands": [],
"test_command": "/testbed/run_tests.sh",
"test_timeout": 3000,
"refactor_type": "",
"description": "Implement fused sparse linear algebra front-end ops `addmm` and `linear`. Add `ops.addmm(input, mat1, mat2, beta=1.0, alpha=1.0)` with semantics matching `torch.addmm`: `beta * input + alpha * (mat1 @ mat2)`, supporting sparse `mat1` with dense or sparse `mat2` where valid. Add `ops.linear(input, weight, bias=None)` as a convenience wrapper built on `addmm`/`matmul`, supporting sparse weight matrices and preserving sparse-aware execution paths. Avoid materializing unnecessary intermediates by fusing scale/add where possible in CIN or kernel dispatch. Write comprehensive tests covering: correctness vs `torch.addmm` and `torch.nn.functional.linear` on dense references, different `alpha`/`beta` values (including zero and negative), with/without bias, batched inputs for `linear`, sparse-dense and sparse-sparse combinations, shape mismatch error handling, and dtype parity with existing matmul behavior.\n\nNote: Do not run the test suite to verify your implementation. You may run build commands (e.g., `pip install -e .`) to check that the code compiles, but do not execute any pytest commands. The full test suite will be run separately during evaluation.",
"files": [],
"task_type": "feature",
"categories": [
"API/Linear Algebra/Matmul variants"
]
},
{
"instance_id": "bobbyyyan__scorch-feature_incremental_insert",
"repo_id": "bobbyyyan__scorch",
"repo_url": "https://github.com/bobbyyyan/scorch.git",
"base_commit": "33532a3",
"language": "python",
"setup_commands": [],
"test_command": "/testbed/run_tests.sh",
"test_timeout": 3000,
"refactor_type": "",
"description": "Implement incremental sparse tensor construction APIs centered on the currently unimplemented `STensor.insert`. Add `insert(indices, values, accumulate=True)` that can append or update entries in COO and CSR tensors without forcing full dense materialization, plus a convenience constructor `STensor.from_entries(indices, values, shape, format='oo', coalesce=True)`. For COO, insertion should append coordinates and optionally coalesce duplicates; for CSR, insertion should maintain per-row sorted columns and update `crow_indices`; for dense tensors, insertion should route to direct value updates. Add validation for index/value shape agreement, bounds checks, duplicate-handling policy, and dtype consistency. Implement a fast bulk path so `insert` with batched coordinates does not devolve into O(nnz^2) behavior. Write comprehensive tests covering: single-entry insert, batched inserts, duplicate accumulation vs overwrite behavior, CSR structural integrity after insertion (`crow_indices` monotonic and length `rows+1`), COO coalescing correctness, out-of-bounds errors, dtype/device preservation, and equality vs dense reference updates.\n\nNote: Do not run the test suite to verify your implementation. You may run build commands (e.g., `pip install -e .`) to check that the code compiles, but do not execute any pytest commands. The full test suite will be run separately during evaluation.",
"files": [],
"task_type": "feature",
"categories": [
"API/Indexing & Mutation/Write & mutation"
]
},
{
"instance_id": "bobbyyyan__scorch-feature_lifecycle_device",
"repo_id": "bobbyyyan__scorch",
"repo_url": "https://github.com/bobbyyyan/scorch.git",
"base_commit": "33532a3",
"language": "python",
"setup_commands": [],
"test_command": "/testbed/run_tests.sh",
"test_timeout": 3000,
"refactor_type": "",
"description": "Add full tensor lifecycle/device support by implementing `STensor.validate()`, `STensor.clone()`, and `STensor.to(device)` (with `cpu()`/`cuda()` convenience behavior). `validate()` should check shape/index/value invariants per format (e.g., CSR `crow_indices` length and monotonicity, coordinate bounds, mode-order consistency, and value/nnz alignment). `clone()` should deep-copy values and index arrays while preserving format, mode_order, and dtype. `to(device)` should move values and all index tensors to the target device, preserving semantics across format conversions. Extend execution paths in `ops.matmul`/`einsum` so CUDA tensors behave correctly: prefer native torch sparse/dense kernels when available, and otherwise use a clear fallback strategy (temporary host execution with explicit copy-back). Write comprehensive tests covering: validate pass/fail cases, clone isolation (mutating clone does not mutate source), CPU<->CUDA round-trips (when CUDA is available), matmul/einsum correctness on both devices, and preservation of dtype/format/mode_order after transfers.\n\nNote: Do not run the test suite to verify your implementation. You may run build commands (e.g., `pip install -e .`) to check that the code compiles, but do not execute any pytest commands. The full test suite will be run separately during evaluation.",
"files": [],
"task_type": "feature",
"categories": [
"API/Constructors & I/O/Introspection"
]
},
{
"instance_id": "bobbyyyan__scorch-feature_einsum_ellipsis",
"repo_id": "bobbyyyan__scorch",
"repo_url": "https://github.com/bobbyyyan/scorch.git",
"base_commit": "33532a3",
"language": "python",
"setup_commands": [],
"test_command": "/testbed/run_tests.sh",
"test_timeout": 3000,
"refactor_type": "",
"description": "Generalize `ops.einsum` parsing and shape handling to support PyTorch-style ellipsis and implicit output inference. The current implementation assumes single-character explicit index lists without ellipsis. Extend it to parse expressions like `...ij,...jk->...ik`, `bij,bjk->bik`, and implicit-output forms (no `->`) while preserving mode-order aware scheduling. Implement robust dimension alignment and broadcast checks for ellipsis-expanded axes, with informative errors for invalid expressions. Ensure format inference still works on expanded index sets and does not regress existing sparse behavior. Where sparse scheduling cannot yet handle a specific valid einsum form, implement an explicit fallback path that computes via dense torch and converts back (rather than silently producing wrong results). Write comprehensive tests covering: ellipsis matmul-style expressions, batched contractions, implicit output inference, invalid-expression diagnostics, parity with `torch.einsum` on dense references, and sparse+dense mixed inputs across COO/CSR formats.\n\nNote: Do not run the test suite to verify your implementation. You may run build commands (e.g., `pip install -e .`) to check that the code compiles, but do not execute any pytest commands. The full test suite will be run separately during evaluation.",
"files": [],
"task_type": "feature",
"categories": [
"API/Linear Algebra/Einsum"
]
},
{
"instance_id": "bobbyyyan__scorch-feature_triangular_solve",
"repo_id": "bobbyyyan__scorch",
"repo_url": "https://github.com/bobbyyyan/scorch.git",
"base_commit": "33532a3",
"language": "python",
"setup_commands": [],
"test_command": "/testbed/run_tests.sh",
"test_timeout": 3000,
"refactor_type": "",
"description": "Implement sparse triangular solve support: `ops.triangular_solve(A, B, upper=False, unit_diagonal=False, left=True)` for sparse `A` (CSR primary path, COO via conversion) and dense/sparse right-hand side `B`. Use forward substitution for lower-triangular and backward substitution for upper-triangular systems, including support for multiple RHS columns. Add structural checks that `A` is square and triangular under the selected flags, with clear handling of missing/zero diagonal entries when `unit_diagonal=False`. Keep sparse-aware iteration over row ranges (`crow_indices`) and avoid dense materialization on the hot path. Provide integration helpers on `STensor` as needed (e.g., `A.triangular_solve(B, ...)`). Write comprehensive tests covering: lower and upper solves, unit vs non-unit diagonal, single and multiple RHS, COO input conversion path, singular/invalid input errors, and numerical parity against `torch.linalg.solve_triangular(A.to_dense().to_torch(), B_dense, ...)`.\n\nNote: Do not run the test suite to verify your implementation. You may run build commands (e.g., `pip install -e .`) to check that the code compiles, but do not execute any pytest commands. The full test suite will be run separately during evaluation.",
"files": [],
"task_type": "feature",
"categories": [
"API/Linear Algebra/Solvers"
]
},
{
"instance_id": "bobbyyyan__scorch-feature_semiring_matmul",
"repo_id": "bobbyyyan__scorch",
"repo_url": "https://github.com/bobbyyyan/scorch.git",
"base_commit": "33532a3",
"language": "python",
"setup_commands": [],
"test_command": "/testbed/run_tests.sh",
"test_timeout": 3000,
"refactor_type": "",
"description": "Add semiring-based sparse matrix multiplication as a first-class feature. Extend `ops.matmul` (and `einsum` where appropriate) with a `semiring` argument supporting at least: `plus_times` (default arithmetic), `min_plus`, `max_plus`, and `logical_or_and`. This requires extending operation/reduction plumbing in CIN/LLIR so non-standard accumulation and multiplication operators are representable, with correct identity initialization for each semiring (e.g., `+inf` for min-plus). Preserve sparse iteration optimizations and format inference for CSR/COO inputs. Where a semiring is unsupported in a specialized cached kernel, dispatch through generated CIN kernels rather than failing. Write comprehensive tests covering: correctness against dense reference implementations for all supported semirings, CSR and COO input combinations, behavior on negative values and infinities, empty-row/empty-matrix edge cases, and equivalence to standard matmul when `semiring='plus_times'`.\n\nNote: Do not run the test suite to verify your implementation. You may run build commands (e.g., `pip install -e .`) to check that the code compiles, but do not execute any pytest commands. The full test suite will be run separately during evaluation.",
"files": [],
"task_type": "feature",
"categories": [
"API/Linear Algebra/Matmul variants"
]
},
{
"instance_id": "bobbyyyan__scorch-feature_int64_indices",
"repo_id": "bobbyyyan__scorch",
"repo_url": "https://github.com/bobbyyyan/scorch.git",
"base_commit": "33532a3",
"language": "python",
"setup_commands": [],
"test_command": "/testbed/run_tests.sh",
"test_timeout": 3000,
"refactor_type": "",
"description": "Implement end-to-end 64-bit sparse index support for large tensors. Today `TensorIndex` coerces indices to `torch.int` (int32), which limits representable coordinates and can overflow for very large shapes. Add index-dtype awareness so mode indices can remain int32 or int64, propagate that through storage, lowering, and C++ codegen, and emit the correct pointer/index types (`int32_t*` vs `int64_t*`) in generated kernels. Update constructors/converters (`from_torch`, `from_coo`, `from_csr`, format conversions) to preserve index dtype instead of downcasting. Ensure mixed index dtypes are normalized or rejected with clear errors. Write comprehensive tests covering: int64 COO/CSR import and round-trip, matmul/einsum correctness with int64 indices, preservation of index dtype across conversions and mode-order changes, and large-shape sparse tensors with few nonzeros (to validate coordinate range handling without huge allocations).\n\nNote: Do not run the test suite to verify your implementation. You may run build commands (e.g., `pip install -e .`) to check that the code compiles, but do not execute any pytest commands. The full test suite will be run separately during evaluation.",
"files": [],
"task_type": "feature",
"categories": [
"API/Type System/Index dtypes"
]
},
{
"instance_id": "bobbyyyan__scorch-feature_multi_output_kernels",
"repo_id": "bobbyyyan__scorch",
"repo_url": "https://github.com/bobbyyyan/scorch.git",
"base_commit": "33532a3",
"language": "python",
"setup_commands": [],
"test_command": "/testbed/run_tests.sh",
"test_timeout": 3000,
"refactor_type": "",
"description": "Implement first-class multi-output CIN kernels and use them to add sparse `max`/`min` reductions with optional index returns. `cin_lowerer.py` currently assumes a single result tensor (`TODO: need to handle multiple result tensors` / `TODO: deal with multiple outputs`), which blocks APIs like `torch.max(..., dim=...)` that return both values and indices. Extend CIN->LLIR lowering and codegen to materialize multiple outputs from one kernel, including mixed output dtypes (value tensor dtype + index tensor int64). Add `ops.max`, `ops.min`, `STensor.max`, and `STensor.min` with `dim`, `keepdim`, and `return_indices` arguments; when `return_indices=True`, return `(values, indices)` with deterministic tie-breaking matching PyTorch (first index). Support `dim=None` scalar reduction and optional flat index output. Preserve sparse iteration efficiency for CSR/COO paths without forcing dense materialization when avoidable. Write comprehensive tests covering: dense/CSR/COO inputs, all `dim`/`keepdim` combinations, parity with `torch.max`/`torch.min` values and indices, tie behavior, empty-row handling, negative values, and index dtype correctness (`torch.int64`).\n\nNote: Do not run the test suite to verify your implementation. You may run build commands (e.g., `pip install -e .`) to check that the code compiles, but do not execute any pytest commands. The full test suite will be run separately during evaluation.",
"files": [],
"task_type": "feature",
"categories": [
"API/Reductions & Scans/Argmax-style",
"IR/CIN nodes"
]
},
{
"instance_id": "bobbyyyan__scorch-feature_zero_copy_views",
"repo_id": "bobbyyyan__scorch",
"repo_url": "https://github.com/bobbyyyan/scorch.git",
"base_commit": "33532a3",
"language": "python",
"setup_commands": [],
"test_command": "/testbed/run_tests.sh",
"test_timeout": 3000,
"refactor_type": "",
"description": "Implement zero-copy tensor window views using the existing `Window` and `TensorStorageView` scaffolding. Add `STensor.narrow(dim, start, length)` and `STensor.slice_view(offset, shape, step)` that return lightweight views without copying storage whenever possible. Dense views should alias the base value buffer; CSR row-range views should alias `values`/`col_indices` and expose adjusted row pointers; COO views may fall back to deferred materialization when true aliasing is not representable. Add `STensor.is_view`, `STensor.base`, and `STensor.materialize()` helpers so callers can introspect/force copy semantics. Ensure core ops read through views correctly, and for supported writable view cases mutations propagate to the base tensor. Write comprehensive tests covering: aliasing behavior (view writes reflected in base), dense and CSR zero-copy row slicing, COO fallback correctness, chained views, bounds/step validation errors, and parity with equivalent PyTorch slicing on dense references.\n\nNote: Do not run the test suite to verify your implementation. You may run build commands (e.g., `pip install -e .`) to check that the code compiles, but do not execute any pytest commands. The full test suite will be run separately during evaluation.",
"files": [],
"task_type": "feature",
"categories": [
"API/Shape & Layout/Views"
]
},
{
"instance_id": "bobbyyyan__scorch-feature_torch_sparse_roundtrip",
"repo_id": "bobbyyyan__scorch",
"repo_url": "https://github.com/bobbyyyan/scorch.git",
"base_commit": "33532a3",
"language": "python",
"setup_commands": [],
"test_command": "/testbed/run_tests.sh",
"test_timeout": 3000,
"refactor_type": "",
"description": "Add sparse-native PyTorch round-trip APIs without forcing dense conversion. Keep existing `to_torch()` behavior for dense export, and add `STensor.to_torch_sparse(layout='coo'|'csr')` to emit PyTorch sparse tensors directly from stored indices/values. Extend `STensor.from_torch` to robustly ingest both 2D and batched (3D+) `torch.sparse_coo` and `torch.sparse_csr` tensors while preserving dtype and mode-order metadata. For unsupported layouts or malformed indices, raise clear validation errors instead of silently densifying. Add a `preserve_layout` option so round-trips keep COO/CSR structure when possible. Write comprehensive tests covering: COO/CSR round-trips (`from_torch -> to_torch_sparse`), batched sparse inputs, dtype and index dtype preservation, mode-order preservation, empty tensors, and exact parity of dense materializations with original PyTorch sparse tensors.\n\nNote: Do not run the test suite to verify your implementation. You may run build commands (e.g., `pip install -e .`) to check that the code compiles, but do not execute any pytest commands. The full test suite will be run separately during evaluation.",
"files": [],
"task_type": "feature",
"categories": [
"API/Constructors & I/O/External I/O"
]
},
{
"instance_id": "bobbyyyan__scorch-feature_comparisons_where",
"repo_id": "bobbyyyan__scorch",
"repo_url": "https://github.com/bobbyyyan/scorch.git",
"base_commit": "33532a3",
"language": "python",
"setup_commands": [],
"test_command": "/testbed/run_tests.sh",
"test_timeout": 3000,
"refactor_type": "",
"description": "Implement comparison and masking operations for sparse tensors. Add elementwise comparisons (`==`, `!=`, `<`, `<=`, `>`, `>=`) for STensor-STensor and STensor-scalar inputs, returning boolean STensors, plus `ops.where(mask, x, y)` and `STensor.masked_fill(mask, value)`. Extend CIN/LLIR lowering to emit comparison expressions and bool outputs with correct format inference based on implicit zeros: if predicate(0)==False, preserve sparse structure when possible; if predicate(0)==True (for example `A == 0`), densify output to preserve semantics. Ensure broadcasting follows PyTorch rules for mask/x/y shapes. Write comprehensive tests covering: sparse-sparse and sparse-scalar comparisons, bool dtype correctness, `where`/`masked_fill` parity with torch dense references, sparse-vs-dense output behavior, COO/CSR combinations, and broadcast/shape mismatch error handling.\n\nNote: Do not run the test suite to verify your implementation. You may run build commands (e.g., `pip install -e .`) to check that the code compiles, but do not execute any pytest commands. The full test suite will be run separately during evaluation.",
"files": [],
"task_type": "feature",
"categories": [
"API/Element-wise/Comparison & predicate"
]
},
{
"instance_id": "bobbyyyan__scorch-feature_dropout",
"repo_id": "bobbyyyan__scorch",
"repo_url": "https://github.com/bobbyyyan/scorch.git",
"base_commit": "33532a3",
"language": "python",
"setup_commands": [],
"test_command": "/testbed/run_tests.sh",
"test_timeout": 3000,
"refactor_type": "",
"description": "Implement sparse dropout for training workflows. Add `ops.dropout(input, p=0.5, training=True, inplace=False, generator=None)` and `STensor.dropout(...)`. For sparse inputs, sample Bernoulli masks over explicitly stored values only, scale retained values by `1/(1-p)` in training mode, and preserve index structure (with optional zero-pruning after dropout). For `training=False`, return an exact no-op (or copy if requested) matching torch semantics. Preserve format, dtype, and mode_order metadata in all cases. Write comprehensive tests covering: statistical keep-rate validation, deterministic behavior with seeded generators, training vs eval semantics, inplace vs out-of-place behavior, COO/CSR inputs, zero-valued entries, gradient propagation through retained values, and parity against dense torch dropout at corresponding nonzero positions.\n\nNote: Do not run the test suite to verify your implementation. You may run build commands (e.g., `pip install -e .`) to check that the code compiles, but do not execute any pytest commands. The full test suite will be run separately during evaluation.",
"files": [],
"task_type": "feature",
"categories": [
"API/ML Primitives/Regularization"
]
},
{
"instance_id": "bobbyyyan__scorch-feature_kernel_cache",
"repo_id": "bobbyyyan__scorch",
"repo_url": "https://github.com/bobbyyyan/scorch.git",
"base_commit": "33532a3",
"language": "python",
"setup_commands": [],
"test_command": "/testbed/run_tests.sh",
"test_timeout": 3000,
"refactor_type": "",
"description": "Introduce a robust content-addressed kernel compilation cache for generated C++ kernels. Current compilation paths repeatedly call `load_inline(name='kernel', ...)` and cache mostly by CIN string, which can cause redundant compilations and potential collisions across dtype/format/mode-order variants. Add a `KernelCache` manager keyed by a stable hash of CIN/LLIR + dtypes + formats + mode orders + compiler flags/OpenMP config, and generate unique module names from that hash. Support optional disk persistence (for example under `~/.cache/scorch/kernels`) and add APIs like `scorch.clear_kernel_cache(memory_only=False)` and cache stats/introspection helpers. Ensure thread-safe cache access and graceful fallback when cache artifacts are missing/corrupt. Write comprehensive tests covering: cache hit/miss behavior, collision avoidance across dtype/format variants, correctness parity cached vs uncached execution, persistence across interpreter restarts (when enabled), and fallback behavior on corrupt cache entries.\n\nNote: Do not run the test suite to verify your implementation. You may run build commands (e.g., `pip install -e .`) to check that the code compiles, but do not execute any pytest commands. The full test suite will be run separately during evaluation.",
"files": [],
"task_type": "feature",
"categories": [
"Runtime/Caching & dispatch"
]
},
{
"instance_id": "bobbyyyan__scorch-feature_affine_gather_scatter",
"repo_id": "bobbyyyan__scorch",
"repo_url": "https://github.com/bobbyyyan/scorch.git",
"base_commit": "33532a3",
"language": "python",
"setup_commands": [],
"test_command": "/testbed/run_tests.sh",
"test_timeout": 3000,
"refactor_type": "",
"description": "Generalize computed indexing in CIN/LLIR and expose it through sparse gather/scatter APIs. In `cin.py`, `IndexVarExpr` currently only has `IndexVarAdd(lhs: IndexVar, rhs: IndexVar)`, which blocks common affine forms like `i + 1`, `i - k`, and `i * stride + offset`. Extend the IR with literal constants and affine index expressions (at least add/sub/mul by constant), then update `cin_lowerer.py`, `iterator.py`, and codegen so tensor accesses with computed indices lower correctly with explicit bounds guards. Build user-facing ops on top: `ops.gather(input, dim, index)` and `ops.scatter_add(input, dim, index, src)` for 1D and 2D tensors (dense and sparse). For duplicate indices in scatter-add, accumulation must match PyTorch semantics. Add clear error handling for invalid index dtype, rank mismatches, and out-of-bounds accesses. Write comprehensive tests covering: affine index lowering correctness, gather/scatter parity with `torch.gather` and `torch.scatter_add`, duplicate-index accumulation, negative and out-of-range index errors, COO/CSR inputs, and random fuzz tests against dense references.\n\nNote: Do not run the test suite to verify your implementation. You may run build commands (e.g., `pip install -e .`) to check that the code compiles, but do not execute any pytest commands. The full test suite will be run separately during evaluation.",
"files": [],
"task_type": "feature",
"categories": [
"API/Indexing & Mutation/Computed indexing"
]
},
{
"instance_id": "bobbyyyan__scorch-feature_spgemm_symbolic",
"repo_id": "bobbyyyan__scorch",
"repo_url": "https://github.com/bobbyyyan/scorch.git",
"base_commit": "33532a3",
"language": "python",
"setup_commands": [],
"test_command": "/testbed/run_tests.sh",
"test_timeout": 3000,
"refactor_type": "",
"description": "Implement symbolic+numeric SpGEMM planning with reusable sparsity structure for repeated sparse matmul. Today `ops.matmul` computes structure and values together each call. Add a symbolic planning path for CSR/COO multiplication that computes only output index structure and metadata once (for example row pointers and column index pattern), and a numeric path that reuses that plan for new values with the same operand sparsity pattern. Expose this as APIs such as `plan = ops.matmul_plan(A, B)` and `ops.matmul_with_plan(A, B, plan)`, and integrate optional auto-use into `ops.matmul(..., reuse_structure=True)`. Ensure plan validation checks shape, format, mode order, index dtype, and compatibility of sparsity patterns before reuse. Preserve exact numerical parity with the existing matmul path. Write comprehensive tests covering: correctness vs dense baseline, repeated-call speed-path behavior (plan reused, no structural recomputation), invalid/expired plan rejection, COO and CSR combinations, empty-row/empty-column edge cases, and parity between planned and unplanned execution.\n\nNote: Do not run the test suite to verify your implementation. You may run build commands (e.g., `pip install -e .`) to check that the code compiles, but do not execute any pytest commands. The full test suite will be run separately during evaluation.",
"files": [],
"task_type": "feature",
"categories": [
"API/Linear Algebra/Matmul variants"
]
},
{
"instance_id": "bobbyyyan__scorch-feature_torch_function_dispatch",
"repo_id": "bobbyyyan__scorch",
"repo_url": "https://github.com/bobbyyyan/scorch.git",
"base_commit": "33532a3",
"language": "python",
"setup_commands": [],
"test_command": "/testbed/run_tests.sh",
"test_timeout": 3000,
"refactor_type": "",
"description": "Add PyTorch operator dispatch integration for `STensor` so `torch.*` calls route to scorch implementations automatically. Implement `STensor.__torch_function__` (or an equivalent dispatch layer) for core ops already supported by scorch, including `torch.matmul`, `torch.einsum`, `torch.add`, `torch.sub`, and `torch.mul`, while preserving correct fallback behavior for unsupported ops. Add operator overload coverage for `__matmul__` / `__rmatmul__` and ensure mixed `torch.Tensor` + `STensor` inputs resolve predictably. Keep format/mode_order metadata stable across dispatched operations and avoid accidental densification unless required by semantics. Include a capability registry (or similar mechanism) so unsupported operator signatures fail clearly rather than silently returning incorrect results. Write comprehensive tests covering: direct `torch.*` invocation on `STensor`, mixed operand types, operator overload parity, fallback paths for unsupported ops, metadata preservation, and result parity with explicit scorch API calls plus dense torch references.\n\nNote: Do not run the test suite to verify your implementation. You may run build commands (e.g., `pip install -e .`) to check that the code compiles, but do not execute any pytest commands. The full test suite will be run separately during evaluation.",
"files": [],
"task_type": "feature",
"categories": [
"API/Constructors & I/O/Torch dispatch"
]
},
{
"instance_id": "bobbyyyan__scorch-feature_serialization",
"repo_id": "bobbyyyan__scorch",
"repo_url": "https://github.com/bobbyyyan/scorch.git",
"base_commit": "33532a3",
"language": "python",
"setup_commands": [],
"test_command": "/testbed/run_tests.sh",
"test_timeout": 3000,
"refactor_type": "",
"description": "Implement robust serialization and checkpoint round-tripping for sparse tensors. Add `STensor.to_dict()` / `STensor.from_dict()` plus convenience `save(path)` / `load(path)` helpers that preserve shape, dtype, index dtype, format, mode_order, values, and all mode indices without dense conversion. Ensure compatibility with `torch.save`/`torch.load` and module checkpoint workflows (`state_dict` integration for modules that contain `STensor` fields). Add schema versioning so future format changes can be handled without breaking old checkpoints, and validate payloads on load with informative errors for malformed data. Provide optional `map_location` support for CPU/GPU remapping during load. Write comprehensive tests covering: round-trip correctness for dense/COO/CSR tensors, mode-order and dtype preservation, cross-device save/load (when CUDA is available), backward-compatibility handling for older schema versions, malformed-checkpoint error paths, and equality vs original tensor after reload.\n\nNote: Do not run the test suite to verify your implementation. You may run build commands (e.g., `pip install -e .`) to check that the code compiles, but do not execute any pytest commands. The full test suite will be run separately during evaluation.",
"files": [],
"task_type": "feature",
"categories": [
"API/Constructors & I/O/Serialization"
]
},
{
"instance_id": "bobbyyyan__scorch-feature_schedule_autotuner",
"repo_id": "bobbyyyan__scorch",
"repo_url": "https://github.com/bobbyyyan/scorch.git",
"base_commit": "33532a3",
"language": "python",
"setup_commands": [],
"test_command": "/testbed/run_tests.sh",
"test_timeout": 3000,
"refactor_type": "",
"description": "Add runtime schedule autotuning for generated sparse kernels. Scorch already has scheduling hooks (`Scheduler.auto_schedule`, tiling support, mode-order changes), but kernel choice is mostly heuristic and static. Implement an autotuner that explores candidate schedules (loop order, tiling factors, workspace choices, and optional OpenMP knobs) for a given CIN/einsum signature, benchmarks them on representative inputs, and caches the best-performing schedule in a reusable plan. Expose this through flags like `ops.einsum(..., autotune=True)` / `ops.matmul(..., autotune=True)` with controls for tune budget and cache scope. Ensure autotune decisions are keyed by shape, format, dtype, and hardware-relevant settings, and provide deterministic behavior when autotune is disabled. Add observability APIs for chosen schedule and tuning stats. Write comprehensive tests covering: correctness parity tuned vs untuned, cache hit behavior across repeated calls, deterministic no-autotune path, invalidated cache on signature change, and lightweight performance sanity checks that verify the tuned plan is selected and reused.\n\nNote: Do not run the test suite to verify your implementation. You may run build commands (e.g., `pip install -e .`) to check that the code compiles, but do not execute any pytest commands. The full test suite will be run separately during evaluation.",
"files": [],
"task_type": "feature",
"categories": [
"Runtime/Tuning & user control"
]
},
{
"repo_id": "bobbyyyan__scorch",
"repo_url": "https://github.com/bobbyyyan/scorch.git",
"base_commit": "33532a3",
"language": "python",
"setup_commands": [],
"test_command": "/testbed/run_tests.sh",
"test_timeout": 3000,
"refactor_type": "",
"files": [],
"instance_id": "bobbyyyan__scorch-feature_reshape_flatten",
"description": "Implement sparse shape-transformation APIs without dense materialization: `STensor.reshape(*shape)`, `STensor.flatten(start_dim=0, end_dim=-1)`, and `STensor.unflatten(dim, sizes)`, plus `ops.reshape` convenience wrappers. The transformation must preserve values and remap indices exactly (including negative dimensions and one inferred `-1` dimension). Implement coordinate remapping directly for COO-like layouts and provide a well-defined conversion path for compressed layouts (CSR/CSC/ELL/DIA) that avoids full dense conversion. Preserve dtype, mode_order semantics, and format metadata where possible; when the target layout cannot preserve the original level structure, fail clearly or convert through an explicitly documented canonical sparse format. Add robust validation for incompatible shapes. Write comprehensive tests covering: 1D/2D/3D reshape chains, flatten/unflatten round-trips, sparse and dense-backed STensors, mode-order edge cases, invalid `-1` inference errors, and parity vs `torch.reshape` on dense references.\n\nNote: Do not run the test suite to verify your implementation. You may run build commands (e.g., `pip install -e .`) to check that the code compiles, but do not execute any pytest commands. The full test suite will be run separately during evaluation.",
"task_type": "feature",
"categories": [
"API/Shape & Layout/Reshape"
]
},
{
"repo_id": "bobbyyyan__scorch",
"repo_url": "https://github.com/bobbyyyan/scorch.git",
"base_commit": "33532a3",
"language": "python",
"setup_commands": [],
"test_command": "/testbed/run_tests.sh",
"test_timeout": 3000,
"refactor_type": "",
"files": [],
"instance_id": "bobbyyyan__scorch-feature_pooling_2d",
"description": "Add sparse pooling operators for vision workloads: `ops.max_pool2d` and `ops.avg_pool2d` with parameters `kernel_size`, `stride=None`, `padding=0`, `dilation=1`, and `ceil_mode=False`, plus corresponding `STensor` method wrappers. Support NCHW inputs where activations are sparse and missing entries represent fill value (zero by default, or configured fill value once available). For max pooling, ensure windows with all implicit zeros produce correct outputs and indices when requested (`return_indices=True`). For average pooling, ensure divisor semantics match PyTorch (`count_include_pad`). Implement through scorch's CIN/lowering pipeline where practical, with fallbacks only when needed for correctness. Add tests covering: random sparse feature maps, negative values, different kernel/stride/padding/dilation combinations, return-indices behavior, edge windows at boundaries, and numerical parity vs `torch.nn.functional.max_pool2d/avg_pool2d` on dense equivalents.\n\nNote: Do not run the test suite to verify your implementation. You may run build commands (e.g., `pip install -e .`) to check that the code compiles, but do not execute any pytest commands. The full test suite will be run separately during evaluation.",
"task_type": "feature",
"categories": [
"API/Convolution & Pooling"
]
},
{
"repo_id": "bobbyyyan__scorch",
"repo_url": "https://github.com/bobbyyyan/scorch.git",
"base_commit": "33532a3",
"language": "python",
"setup_commands": [],
"test_command": "/testbed/run_tests.sh",
"test_timeout": 3000,
"refactor_type": "",
"files": [],
"instance_id": "bobbyyyan__scorch-feature_fp16_bf16_dtype",
"description": "Extend dtype support to include `torch.float16` and `torch.bfloat16` end-to-end in CIN->LLIR->C++ execution, including cached kernels in `csrc`. Add dtype mappings in `utils.py`/`llir.py`, ensure generated C++ uses correct scalar and torch dtype constants, and implement mixed-precision accumulation controls (e.g., `accumulate_dtype=torch.float32` by default for fp16/bf16 reductions and matmul). Update `ops.einsum`/`ops.matmul`/format conversions so half-precision tensors execute without forced upcasting unless requested. Ensure deterministic casting rules for mixed input dtypes and explicit errors for unsupported combinations. Write tests covering: fp16 and bf16 correctness vs dense torch baselines (with tolerance), accumulation dtype effects on numerical error, mixed-dtype operand promotion rules, and kernel-cache keying by dtype to prevent incorrect module reuse.\n\nNote: Do not run the test suite to verify your implementation. You may run build commands (e.g., `pip install -e .`) to check that the code compiles, but do not execute any pytest commands. The full test suite will be run separately during evaluation.",
"task_type": "feature",
"categories": [
"API/Type System/Value dtypes"
]
},
{
"repo_id": "bobbyyyan__scorch",
"repo_url": "https://github.com/bobbyyyan/scorch.git",
"base_commit": "33532a3",
"language": "python",
"setup_commands": [],
"test_command": "/testbed/run_tests.sh",
"test_timeout": 3000,
"refactor_type": "",
"files": [],
"instance_id": "bobbyyyan__scorch-feature_user_scheduling_api",
"description": "Expose a user-controlled scheduling API for CIN execution instead of relying solely on `Scheduler.auto_schedule`. Add a schedule object or kwargs surface (for `ops.einsum`, `ops.matmul`, and `lower_and_exec_cin`) that can explicitly set loop order, tile sizes per index variable, workspace insertion policy (dense/coo/none when legal), and optional OpenMP knobs. Integrate with existing scheduler passes in `scheduler.py` so manual directives are validated against index dependencies and illegal schedules fail with actionable errors. Keep current behavior as the default when no manual schedule is supplied. Also expose lightweight introspection (`explain_schedule` or equivalent) that returns the final transformed CIN used for lowering. Write tests covering: deterministic application of manual schedules, fallback to auto schedule, validation failures for invalid loop orders/tiles, correctness parity manual vs auto on representative kernels, and that requested schedule directives are actually reflected in emitted CIN/LLIR.\n\nNote: Do not run the test suite to verify your implementation. You may run build commands (e.g., `pip install -e .`) to check that the code compiles, but do not execute any pytest commands. The full test suite will be run separately during evaluation.",
"task_type": "feature",
"categories": [
"Runtime/Tuning & user control"
]
},
{
"repo_id": "bobbyyyan__scorch",
"repo_url": "https://github.com/bobbyyyan/scorch.git",
"base_commit": "33532a3",
"language": "python",
"setup_commands": [],
"test_command": "/testbed/run_tests.sh",
"test_timeout": 3000,
"refactor_type": "",
"files": [],
"instance_id": "bobbyyyan__scorch-feature_matrix_market_io",
"description": "Add scientific sparse I/O interoperability for external datasets: Matrix Market (`.mtx`) and SciPy sparse NPZ (`.npz`) round-trips. Implement `STensor.from_matrix_market(path)`, `STensor.to_matrix_market(path)`, `STensor.from_scipy_npz(path)`, and `STensor.to_scipy_npz(path)` with optional dtype/index dtype controls and explicit format-selection behavior (COO/CSR/CSC where representable). This is distinct from internal checkpoint serialization: files must be interoperable with SciPy tooling and preserve shape, nnz, values, and structural indices exactly (up to canonical ordering). Handle malformed files and unsupported rank/layout combinations with informative errors. Write tests covering: round-trip integrity on random sparse matrices, cross-validation with SciPy loaders/savers, dtype/index-dtype preservation, large-index edge cases, and failure paths for malformed or unsupported files.\n\nNote: Do not run the test suite to verify your implementation. You may run build commands (e.g., `pip install -e .`) to check that the code compiles, but do not execute any pytest commands. The full test suite will be run separately during evaluation.",
"task_type": "feature",
"categories": [
"API/Constructors & I/O/External I/O"
]
},
{
"repo_id": "bobbyyyan__scorch",
"repo_url": "https://github.com/bobbyyyan/scorch.git",
"base_commit": "33532a3",
"language": "python",
"setup_commands": [],
"test_command": "/testbed/run_tests.sh",
"test_timeout": 3000,
"refactor_type": "",
"files": [],
"instance_id": "bobbyyyan__scorch-feature_n_m_sparsity",
"description": "Implement semi-structured N:M sparsity support with an initial optimized path for 2:4 sparsity. Add constructors/converters on `STensor` (e.g., `from_semi_structured(values, metadata, pattern=(2,4), dim=-1)` and `to_semi_structured(pattern=(2,4), dim=-1)`), plus validation that each 4-element group has exactly 2 stored entries in canonical form. Extend `ops.matmul` to detect compatible semi-structured operands and dispatch to a dedicated kernel path that exploits the compressed metadata layout while preserving scorch semantics. Support conversion to/from dense and COO/CSR for interoperability. Add tests covering: pattern validation errors, conversion round-trips, correctness parity vs dense matmul, mixed semi-structured + dense operands, and random stress tests that verify pattern invariants are preserved through supported transformations.\n\nNote: Do not run the test suite to verify your implementation. You may run build commands (e.g., `pip install -e .`) to check that the code compiles, but do not execute any pytest commands. The full test suite will be run separately during evaluation.",
"task_type": "feature",
"categories": [
"Format/Block & ELL family"
]
},
{
"repo_id": "bobbyyyan__scorch",
"repo_url": "https://github.com/bobbyyyan/scorch.git",
"base_commit": "33532a3",
"language": "python",
"setup_commands": [],
"test_command": "/testbed/run_tests.sh",
"test_timeout": 3000,
"refactor_type": "",
"files": [],
"instance_id": "bobbyyyan__scorch-feature_topk_kthvalue",
"description": "Implement sparse `topk`/`kthvalue` along a specified dimension: `ops.topk(input, k, dim=-1, largest=True, sorted=True)` and `ops.kthvalue(input, k, dim=-1, keepdim=False)` plus `STensor` method wrappers. Semantics should align with PyTorch, including how implicit fill values (zeros) compete with explicit non-zero entries and ties are resolved consistently. Return both values and indices; define and document output sparsity behavior (e.g., dense index tensor with sparse value tensor, or paired dense outputs) and keep it consistent across formats. Avoid densifying the full tensor for large sparse inputs when possible by using per-slice sparse selection algorithms. Write tests covering: largest/smallest modes, sorted/unsorted outputs, tie handling, implicit-zero competition, different dims/keepdim behavior, k boundary cases, and parity with `torch.topk`/`torch.kthvalue` on dense references.\n\nNote: Do not run the test suite to verify your implementation. You may run build commands (e.g., `pip install -e .`) to check that the code compiles, but do not execute any pytest commands. The full test suite will be run separately during evaluation.",
"task_type": "feature",
"categories": [
"API/Reductions & Scans/Argmax-style"
]
},
{
"repo_id": "bobbyyyan__scorch",
"repo_url": "https://github.com/bobbyyyan/scorch.git",
"base_commit": "33532a3",
"language": "python",
"setup_commands": [],
"test_command": "/testbed/run_tests.sh",
"test_timeout": 3000,
"refactor_type": "",
"files": [],
"instance_id": "bobbyyyan__scorch-feature_simd_vectorization",
"description": "Add SIMD-aware vectorization to generated C++ kernels for dense innermost loops. Extend LLIR/codegen so when an inner loop is contiguous and arithmetic-only, emitted code uses vectorization-friendly constructs (`#pragma omp simd` and/or architecture-gated intrinsics) with safe scalar fallbacks. Ensure correctness for remainder loops and non-multiple vector widths, and gate aggressive paths behind capability checks so compilation remains portable on both macOS and Linux CI environments. Integrate vectorization decisions with existing scheduling/caching so cache keys include vectorization-relevant settings. Write tests covering: numerical parity vectorized vs scalar paths, compilation on supported/unsupported SIMD targets, fallback correctness when vectorization is disabled, and lightweight performance sanity checks demonstrating that vectorized kernels are selected and produce non-regressive throughput on representative dense-inner-loop sparse workloads.\n\nNote: Do not run the test suite to verify your implementation. You may run build commands (e.g., `pip install -e .`) to check that the code compiles, but do not execute any pytest commands. The full test suite will be run separately during evaluation.",
"task_type": "feature",
"categories": [
"Codegen/Vectorization"
]
},
{
"repo_id": "bobbyyyan__scorch",
"repo_url": "https://github.com/bobbyyyan/scorch.git",
"base_commit": "33532a3",
"language": "python",
"setup_commands": [],
"test_command": "/testbed/run_tests.sh",
"test_timeout": 3000,
"refactor_type": "",
"files": [],
"instance_id": "bobbyyyan__scorch-feature_complex_dtype",
"description": "Extend scorch with end-to-end complex dtype support (`torch.complex64` and `torch.complex128`) across `STensor`, CIN->LLIR lowering, and generated C++ kernels. The current dtype plumbing in `llir.py`/codegen and cached native kernels is real-valued; add complex scalar mappings, kernel argument marshalling, and correct emission of complex literals/temporaries. Implement PyTorch-consistent type promotion for real+complex operands (for example float32 + complex64 -> complex64), preserve complex dtypes through `from_torch`, format conversions, and `to_torch_sparse`/`to_torch` paths, and ensure operations like `einsum`, `matmul`, and elementwise ops produce numerically correct complex outputs. Add optional conjugate-aware matmul semantics where required by API (`transpose` vs `conj().transpose`). Write comprehensive tests covering: COO/CSR complex tensor construction, dense/sparse mixed complex arithmetic, dtype promotion and casting rules, parity with PyTorch dense references for matmul/einsum/elementwise ops, and tolerance-based validation for both complex64 and complex128.\n\nNote: Do not run the test suite to verify your implementation. You may run build commands (e.g., `pip install -e .`) to check that the code compiles, but do not execute any pytest commands. The full test suite will be run separately during evaluation.",
"task_type": "feature",
"categories": [
"API/Type System/Value dtypes"
]
},
{
"repo_id": "bobbyyyan__scorch",
"repo_url": "https://github.com/bobbyyyan/scorch.git",
"base_commit": "33532a3",
"language": "python",
"setup_commands": [],
"test_command": "/testbed/run_tests.sh",
"test_timeout": 3000,
"refactor_type": "",
"files": [],
"instance_id": "bobbyyyan__scorch-feature_cuda_backend",
"description": "Add a true CUDA codegen backend for CIN-generated kernels so sparse workloads can execute on GPU without host fallback. Today execution is centered around CPU C++ `load_inline`; extend lowering/codegen to emit CUDA-compatible kernels (or CUDA-specialized C++ with `cuda_sources`) and runtime dispatch that chooses CPU or CUDA based on tensor devices. Support dense, COO, and CSR inputs on CUDA where representable, preserve mode-order and format semantics, and maintain compatibility with existing scheduler transformations (tiling/workspaces). Ensure kernel cache keys include backend/device capability details to avoid cross-backend reuse bugs. Provide explicit fallback/error behavior for unsupported patterns instead of silent incorrect execution. Write comprehensive tests covering: CPU vs CUDA parity for matmul/einsum representative cases, mixed-device input validation errors, cache correctness across CPU/CUDA variants, autograd compatibility on CUDA tensors where supported, and graceful skip paths when CUDA is unavailable.\n\nNote: Do not run the test suite to verify your implementation. You may run build commands (e.g., `pip install -e .`) to check that the code compiles, but do not execute any pytest commands. The full test suite will be run separately during evaluation.",
"task_type": "feature",
"categories": [
"Codegen/Backend targets"
]
},
{
"repo_id": "bobbyyyan__scorch",
"repo_url": "https://github.com/bobbyyyan/scorch.git",
"base_commit": "33532a3",
"language": "python",
"setup_commands": [],
"test_command": "/testbed/run_tests.sh",
"test_timeout": 3000,
"refactor_type": "",
"files": [],
"instance_id": "bobbyyyan__scorch-feature_bitmap_level",
"description": "Add bitmap sparse level support to the format system (`LevelType.BITMAP`) and compiler pipeline. A bitmap level stores a dense occupancy bitset plus compacted values, which is efficient for near-dense regions and predictable iteration. Implement bitmap parsing/serialization in `format.py`, storage representation in `TensorIndex`/`TensorStorage`, iterator lowering in `iterator.py`/`cin_lowerer.py`, and C++ codegen for scanning bitmap words and mapping set bits to coordinates. Add constructors/converters such as `STensor.from_bitmap(bitmap, values, shape)` and `to_bitmap(word_size=32|64)`, including robust validation of bit-count/value-count consistency. Integrate bitmap levels into format inference and mixed-format ops (bitmap+CSR/COO/dense) with correct semantics. Write comprehensive tests covering: conversion round-trips, correctness of bitmap iteration order, parity of matmul/einsum/elementwise operations vs dense references, edge cases with empty/full bitmaps, and malformed bitmap metadata error handling.\n\nNote: Do not run the test suite to verify your implementation. You may run build commands (e.g., `pip install -e .`) to check that the code compiles, but do not execute any pytest commands. The full test suite will be run separately during evaluation.",
"task_type": "feature",
"categories": [
"Format/Compressed-style levels"
]
},
{
"repo_id": "bobbyyyan__scorch",
"repo_url": "https://github.com/bobbyyyan/scorch.git",
"base_commit": "33532a3",
"language": "python",
"setup_commands": [],
"test_command": "/testbed/run_tests.sh",
"test_timeout": 3000,
"refactor_type": "",
"files": [],
"instance_id": "bobbyyyan__scorch-feature_hyb_format",
"description": "Implement HYB (ELL+COO) sparse format support for matrices with skewed row densities. HYB stores up to `ell_width` entries per row in an ELL component and spills overflow entries into a COO tail. Add format/storage support plus `STensor.from_hyb(ell_indices, ell_values, coo_indices, coo_values, shape)` and `STensor.to_hyb(ell_width=None)` APIs; if `ell_width` is omitted, choose it via a configurable percentile heuristic over row nnz. Extend iteration/lowering so ops consume both HYB components correctly and avoid double-counting/ordering bugs. Integrate HYB into `ops.matmul` and `einsum` dispatch with reasonable fast paths for HYBxdense and HYBxHYB. Write comprehensive tests covering: HYB construction validity, conversion round-trips with CSR/COO/dense, correctness on uniform and heavy-tail sparsity patterns, heuristic width selection behavior, and numerical parity against dense PyTorch baselines.\n\nNote: Do not run the test suite to verify your implementation. You may run build commands (e.g., `pip install -e .`) to check that the code compiles, but do not execute any pytest commands. The full test suite will be run separately during evaluation.",
"task_type": "feature",
"categories": [
"Format/Block & ELL family"
]
},
{
"repo_id": "bobbyyyan__scorch",
"repo_url": "https://github.com/bobbyyyan/scorch.git",
"base_commit": "33532a3",
"language": "python",
"setup_commands": [],
"test_command": "/testbed/run_tests.sh",
"test_timeout": 3000,
"refactor_type": "",
"files": [],
"instance_id": "bobbyyyan__scorch-feature_csf_format",
"description": "Add CSF (Compressed Sparse Fiber) format support for higher-order sparse tensors (3D+), enabling efficient tensor contractions without flattening to COO. Implement hierarchical compressed storage where each sparse mode contributes position/coordinate arrays (generalizing CSR to multiple sparse levels). Extend `TensorFormat`/`TensorIndex` to represent CSF levels, add `STensor.to_csf()` / `STensor.from_csf(...)`, and update iterator lattice + CIN lowering so nested sparse fibers are traversed correctly under arbitrary mode orders. Ensure compatibility with existing `einsum` scheduling for high-dimensional contractions (for example MTTKRP-style expressions) and preserve semantics under mode-order changes. Write comprehensive tests covering: 3D and 4D CSF construction, conversion parity with COO/dense, correctness for representative high-dimensional `einsum` expressions, empty-fiber edge cases, and invariants such as monotonic coordinates and valid position bounds at each compressed level.\n\nNote: Do not run the test suite to verify your implementation. You may run build commands (e.g., `pip install -e .`) to check that the code compiles, but do not execute any pytest commands. The full test suite will be run separately during evaluation.",
"task_type": "feature",
"categories": [
"Format/Hierarchical & multi-d"
]
},
{
"repo_id": "bobbyyyan__scorch",
"repo_url": "https://github.com/bobbyyyan/scorch.git",
"base_commit": "33532a3",
"language": "python",
"setup_commands": [],
"test_command": "/testbed/run_tests.sh",
"test_timeout": 3000,
"refactor_type": "",
"files": [],
"instance_id": "bobbyyyan__scorch-feature_sparse_attention",
"description": "Implement fused sparse scaled-dot-product attention for block/coordinate masks: `ops.scaled_dot_product_attention_sparse(Q, K, V, attn_mask_sparse=None, dropout_p=0.0, is_causal=False, training=False)`. The key requirement is to avoid dense `QK^T` materialization: use sparse mask structure to compute only sampled score entries (SDDMM-style), apply numerically stable softmax over valid keys per query row, and accumulate outputs via sparse-weighted matmul with `V`. Integrate with existing CIN scheduling/lowering where possible, and add optimized paths for CSR masks (row-wise neighborhoods) and optional block-sparse masks. Preserve PyTorch semantics for scaling, dropout behavior, and causal masking precedence. Write comprehensive tests covering: parity with dense `torch.nn.functional.scaled_dot_product_attention` under equivalent masks, different sparsity levels/patterns, causal and non-causal modes, training vs eval dropout behavior, gradient checks on Q/K/V, and memory-usage sanity checks demonstrating no full dense attention-score allocation on sparse paths.\n\nNote: Do not run the test suite to verify your implementation. You may run build commands (e.g., `pip install -e .`) to check that the code compiles, but do not execute any pytest commands. The full test suite will be run separately during evaluation.",
"task_type": "feature",
"categories": [
"API/ML Primitives/Attention & embedding"
]
},
{
"repo_id": "bobbyyyan__scorch",
"repo_url": "https://github.com/bobbyyyan/scorch.git",
"base_commit": "33532a3",
"language": "python",
"setup_commands": [],
"test_command": "/testbed/run_tests.sh",
"test_timeout": 3000,
"refactor_type": "",
"files": [],
"instance_id": "bobbyyyan__scorch-feature_int8_quantized",
"description": "Add int8 quantized sparse inference as a first-class path. Implement value-only quantization APIs on `STensor` (`quantize_per_tensor`, `quantize_per_channel`, `dequantize`, and `from_torch_quantized`) that preserve sparse index structures while quantizing stored values. Add `ops.matmul_quantized(A, B, bias=None, out_dtype=torch.float32)` for sparse-dense and sparse-sparse cases using int8 inputs with int32 accumulation and fused dequantization. Extend dtype/lowering plumbing in `utils.py`, `llir.py`, `cin_lowerer.py`, and `codegen.py` so generated kernels can emit int8 loads, int32 accumulators, and scale/zero-point application, with cache keys including quantization parameters to avoid invalid kernel reuse. Keep fallback behavior explicit when unsupported quantization layouts are requested. Write comprehensive tests covering: quantized round-trip correctness, sparse matmul parity versus dequantized float baselines (with tolerance), per-tensor and per-channel quantization, saturation/rounding edge cases, cache separation across quant params, and mixed quantized/unquantized input validation.\n\nNote: Do not run the test suite to verify your implementation. You may run build commands (e.g., `pip install -e .`) to check that the code compiles, but do not execute any pytest commands. The full test suite will be run separately during evaluation.",
"task_type": "feature",
"categories": [
"API/ML Primitives/Quantization"
]
},
{
"repo_id": "bobbyyyan__scorch",
"repo_url": "https://github.com/bobbyyyan/scorch.git",
"base_commit": "33532a3",
"language": "python",
"setup_commands": [],
"test_command": "/testbed/run_tests.sh",
"test_timeout": 3000,
"refactor_type": "",
"files": [],
"instance_id": "bobbyyyan__scorch-feature_iterative_solvers",
"description": "Implement iterative sparse linear solvers for scientific workloads. Add `ops.cg(A, b, x0=None, tol=1e-6, maxiter=None, M=None)` for symmetric positive definite systems and `ops.bicgstab(A, b, x0=None, tol=1e-6, maxiter=None, M=None)` for general non-symmetric systems, with optional Jacobi preconditioning. Reuse existing sparse matvec/matmul execution paths instead of dense materialization, and return solver metadata (iterations, converged flag, residual norm) in addition to the solution. Add `STensor` convenience methods where appropriate and ensure mode-order/index-dtype handling is preserved end-to-end. Include robust stopping criteria and breakdown detection (for example zero denominator cases in BiCGSTAB) with clear error/status reporting. Write comprehensive tests covering: convergence on known SPD and non-symmetric systems, residual accuracy against dense `torch.linalg.solve` references, preconditioned vs unpreconditioned behavior, non-convergence paths, deterministic iteration counts for fixed seeds, and sparse COO/CSR input coverage.\n\nNote: Do not run the test suite to verify your implementation. You may run build commands (e.g., `pip install -e .`) to check that the code compiles, but do not execute any pytest commands. The full test suite will be run separately during evaluation.",
"task_type": "feature",
"categories": [
"API/Linear Algebra/Solvers"
]
},
{
"repo_id": "bobbyyyan__scorch",
"repo_url": "https://github.com/bobbyyyan/scorch.git",
"base_commit": "33532a3",
"language": "python",
"setup_commands": [],
"test_command": "/testbed/run_tests.sh",
"test_timeout": 3000,
"refactor_type": "",
"files": [],
"instance_id": "bobbyyyan__scorch-feature_layer_rms_norm",
"description": "Add sparse normalization operators for transformer-style models: `ops.layer_norm_sparse` and `ops.rms_norm_sparse`, plus `STensor.layer_norm(...)` and `STensor.rms_norm(...)`. Support affine parameters (`weight`, `bias`) and epsilon controls with semantics matching PyTorch over full normalized dimensions, where implicit missing entries are treated as zeros. Integrate these as reduction-heavy CIN pipelines with workspace insertion for mean/variance (LayerNorm) and mean-square (RMSNorm), and define output format behavior explicitly: preserve sparse output only when mathematically valid, otherwise densify deterministically. Ensure dtype promotion and accumulation precision follow PyTorch expectations (for example fp16/bf16 accumulation in float32 when enabled). Write comprehensive tests covering: parity with `torch.nn.functional.layer_norm` and RMSNorm references on dense materializations, CSR/COO/dense inputs, affine/non-affine variants, numerical stability under large/small values, output format decisions (sparse vs dense), and gradient checks for trainable affine parameters.\n\nNote: Do not run the test suite to verify your implementation. You may run build commands (e.g., `pip install -e .`) to check that the code compiles, but do not execute any pytest commands. The full test suite will be run separately during evaluation.",
"task_type": "feature",
"categories": [
"API/ML Primitives/Normalization"
]
},
{
"repo_id": "bobbyyyan__scorch",
"repo_url": "https://github.com/bobbyyyan/scorch.git",
"base_commit": "33532a3",
"language": "python",
"setup_commands": [],
"test_command": "/testbed/run_tests.sh",
"test_timeout": 3000,
"refactor_type": "",
"files": [],
"instance_id": "bobbyyyan__scorch-feature_einsum_repeated_idx",
"description": "Generalize `ops.einsum` to fully support repeated-index semantics within a single operand (diagonal extraction/trace-style behavior) without dense fallback. The current parser/scheduling logic in `ops.py` assumes effectively unique per-operand indices and does not robustly handle expressions like `ii->i`, `bijj->bi`, or mixed repeated-index contractions across sparse layouts. Extend parsing, shape inference, and CIN construction so repeated labels map to the same index variable with correct constraints, and lower those constraints through iterator lattice/codegen while preserving sparse efficiency where possible. Keep compatibility with existing mode-order logic and output-format inference. Write comprehensive tests covering: repeated-index einsum parity with `torch.einsum` on dense references, COO/CSR mixed inputs, diagonal extraction and trace-style reductions, higher-rank repeated-index cases, invalid-expression error handling, and interaction with explicit output mode-order requests.\n\nNote: Do not run the test suite to verify your implementation. You may run build commands (e.g., `pip install -e .`) to check that the code compiles, but do not execute any pytest commands. The full test suite will be run separately during evaluation.",
"task_type": "feature",
"categories": [
"API/Linear Algebra/Einsum"
]
},
{
"repo_id": "bobbyyyan__scorch",
"repo_url": "https://github.com/bobbyyyan/scorch.git",
"base_commit": "33532a3",
"language": "python",
"setup_commands": [],
"test_command": "/testbed/run_tests.sh",
"test_timeout": 3000,
"refactor_type": "",
"files": [],
"instance_id": "bobbyyyan__scorch-feature_prune_eliminate_zeros",
"description": "Implement explicit-zero management and structured pruning APIs for sparse tensors. Add `STensor.eliminate_zeros(inplace=False, atol=0.0)` to remove stored zeros/near-zeros and rebuild indices correctly for COO/CSR, plus `STensor.prune(threshold=None, topk=None, dim=None, keep_structure=False)` for magnitude-based pruning. Expose `ops.prune` wrappers and ensure pruning semantics are well-defined for dense-backed and sparse-backed tensors, including stability of mode order, dtype, and index dtype metadata. Integrate with existing conversion/coalesce paths so pruned outputs remain canonical and avoid duplicate coordinates. Add optional deterministic tie-breaking for top-k pruning. Write comprehensive tests covering: elimination of exact and near-zero values, COO/CSR index rebuild correctness, threshold vs top-k pruning behavior, deterministic tie handling, parity with dense reference pruning logic, and edge cases such as empty tensors or all-pruned outputs.\n\nNote: Do not run the test suite to verify your implementation. You may run build commands (e.g., `pip install -e .`) to check that the code compiles, but do not execute any pytest commands. The full test suite will be run separately during evaluation.",
"task_type": "feature",
"categories": [
"API/Indexing & Mutation/Canonicalization"
]
},
{
"repo_id": "bobbyyyan__scorch",
"repo_url": "https://github.com/bobbyyyan/scorch.git",
"base_commit": "33532a3",
"language": "python",
"setup_commands": [],
"test_command": "/testbed/run_tests.sh",
"test_timeout": 3000,
"refactor_type": "",
"files": [],
"instance_id": "bobbyyyan__scorch-feature_transpose_matmul",
"description": "Add transpose-aware matmul APIs that avoid physical tensor transposition. Extend `ops.matmul` with flags `transpose_a=False` and `transpose_b=False` (and matching `STensor.matmul` kwargs) so callers can request `A^T @ B`, `A @ B^T`, or `A^T @ B^T` directly. Implement this via index remapping in CIN/einsum lowering rather than materializing temporary transposed tensors, and preserve sparse format/mode-order metadata in the result. Ensure fast-path kernel dispatch in `ops.py` respects transpose flags for CSR/COO combinations and falls back cleanly when no specialized kernel applies. Write comprehensive tests covering: all transpose-flag combinations for sparse-dense and sparse-sparse inputs, parity with dense `torch.matmul` references, mode-order correctness in outputs, shape/compatibility validation errors, and performance sanity checks confirming no extra transpose materialization.\n\nNote: Do not run the test suite to verify your implementation. You may run build commands (e.g., `pip install -e .`) to check that the code compiles, but do not execute any pytest commands. The full test suite will be run separately during evaluation.",
"task_type": "feature",
"categories": [
"API/Linear Algebra/Matmul variants"
]
},
{
"repo_id": "bobbyyyan__scorch",
"repo_url": "https://github.com/bobbyyyan/scorch.git",
"base_commit": "33532a3",
"language": "python",
"setup_commands": [],
"test_command": "/testbed/run_tests.sh",
"test_timeout": 3000,
"refactor_type": "",
"files": [],
"instance_id": "bobbyyyan__scorch-feature_log_softmax_nll",
"description": "Add sparse log-probability training primitives: `ops.sparse_log_softmax(input, dim=-1)` and `ops.sparse_nll_loss(log_probs, target, reduction='mean', ignore_index=-100)`, with `STensor` method wrappers. Build on sparse softmax infrastructure but compute and return log probabilities directly for numerical stability, and define semantics for implicit zeros explicitly (for example non-stored entries outside the sparse support should not be treated as learned logits unless densified by policy). Implement row/column sparse paths for CSR and COO, with clear fallback behavior when targets reference implicit entries. Preserve dtype/index metadata and support batched 2D/3D use cases where feasible. Write comprehensive tests covering: parity with dense masked-reference implementations, numerical stability on large-magnitude logits, reduction modes (`none`, `mean`, `sum`), `ignore_index` behavior, COO/CSR coverage, and validation errors for invalid target shapes or unsupported sparse semantics.\n\nNote: Do not run the test suite to verify your implementation. You may run build commands (e.g., `pip install -e .`) to check that the code compiles, but do not execute any pytest commands. The full test suite will be run separately during evaluation.",
"task_type": "feature",
"categories": [
"API/ML Primitives/Activations & losses"
]
},
{
"repo_id": "bobbyyyan__scorch",
"repo_url": "https://github.com/bobbyyyan/scorch.git",
"base_commit": "33532a3",
"language": "python",
"setup_commands": [],
"test_command": "/testbed/run_tests.sh",
"test_timeout": 3000,
"refactor_type": "",
"files": [],
"instance_id": "bobbyyyan__scorch-feature_block_diag",
"description": "Implement block-diagonal sparse packing utilities for variable-size mini-batch workloads. Add `STensor.from_block_diag(tensors)` to pack a list of 2D sparse tensors into a single block-diagonal sparse matrix and `STensor.to_block_diag(block_sizes)` to unpack. Add `ops.block_diag_matmul(A_blockdiag, X, block_sizes)` to execute per-block multiplications efficiently without scanning cross-block zeros, including optional parallelization across blocks. Preserve input per-block formats (COO/CSR where possible) and track block partition metadata through serialization/conversion APIs. This feature is distinct from fixed-size block-sparse formats: here blocks represent independent subproblems with variable shapes. Write comprehensive tests covering: round-trip pack/unpack correctness, parity versus `torch.block_diag` dense references, variable block-size edge cases, empty/singleton blocks, mixed COO/CSR block inputs, and correctness/performance sanity checks for blockwise matmul with many small graphs.\n\nNote: Do not run the test suite to verify your implementation. You may run build commands (e.g., `pip install -e .`) to check that the code compiles, but do not execute any pytest commands. The full test suite will be run separately during evaluation.",
"task_type": "feature",
"categories": [
"API/Shape & Layout/Concat & pad"
]
},
{
"repo_id": "bobbyyyan__scorch",
"repo_url": "https://github.com/bobbyyyan/scorch.git",
"base_commit": "33532a3",
"language": "python",
"setup_commands": [],
"test_command": "/testbed/run_tests.sh",
"test_timeout": 3000,
"refactor_type": "",
"files": [],
"instance_id": "bobbyyyan__scorch-feature_elementwise_div",
"description": "Implement element-wise division (`__truediv__` and `__rtruediv__`) on `STensor` through the full CIN compilation pipeline. `Operation.DIV` already exists in `src/scorch/compiler/cin.py` (line 897: `DIV = \"/\"`), `AssignOp.DIV_ASSIGN` exists in `src/scorch/compiler/llir.py` (line 81), and `IndexExpr.__sub__` at line 143 of `cin.py` shows the pattern for adding new operators to `BinaryOp` -- but there is no `__truediv__` method on `IndexExpr`, no `STensor.__truediv__`, and division is never invoked anywhere in the codebase. The key semantic challenge is that sparse-sparse division has **intersection semantics** (like multiplication): `C[i,j] = A[i,j] / B[i,j]` is only defined where both operands have stored entries, because dividing by an implicit zero is undefined. The CIN statement is `C[i,j] = A[i,j] / B[i,j]` using `Operation.DIV`. Implementation steps: (1) Add `__truediv__` to `IndexExpr` in `src/scorch/compiler/cin.py` that returns `BinaryOp(Operation.DIV, self, other)`, mirroring the existing `__mul__`/`__add__`/`__sub__` at lines 137-144. (2) Implement `STensor.__truediv__(self, other)` in `src/scorch/stensor.py`, following the same pattern as `__add__` (lines 165-256) but using `Operation.DIV` and intersection semantics for the iteration lattice. When `other` is a scalar (int/float), broadcast it as `A[i,j] / scalar` -- this can be done by wrapping the scalar in a dense tensor or by generating a CIN expression that divides values by a constant. (3) Implement `STensor.__rtruediv__(self, other)` for `scalar / STensor`, which computes the elementwise reciprocal scaled by the scalar on stored entries only. (4) The `lower_BinaryOp` method in `src/scorch/compiler/cin_lowerer.py` (line 615) already uses `bin_op.op.value` to emit the C++ operator string, so `/` will flow through automatically via `llir.BinOp`. However, verify that the iteration lattice in `src/scorch/compiler/iter_lattice.py` correctly produces intersection iteration for DIV (the lattice should merge the same way as MUL). (5) Handle the `TensorAssign.op` path in `lower_TensorAssign` (around line 735 of cin_lowerer.py) -- if `stmt.op == Operation.DIV`, emit `DIV_ASSIGN` (`/=`) for compound assignment. (6) Handle format inference in `ops.einsum`: division, like multiplication, should produce sparse output when either input is sparse (intersection yields subset). Write comprehensive tests in a new test file `tests/test_scorch/test_div.py` covering: sparse/sparse division with overlapping and non-overlapping patterns (non-overlapping positions should be absent from the result), sparse/dense, dense/dense, scalar division (`A / 2.0` and `2.0 / A`), division by zero detection or NaN/Inf propagation on stored entries, various format combinations (CSR via \"ds\", COO via \"oo\"), and verify all results against `A.to_dense().to_torch() / B.to_dense().to_torch()` using `torch.allclose` with appropriate NaN handling.\n\nNote: Do not run the test suite to verify your implementation. You may run build commands (e.g., `pip install -e .`) to check that the code compiles, but do not execute any pytest commands. The full test suite will be run separately during evaluation.",
"task_type": "feature",
"categories": [
"API/Element-wise/Binary arithmetic"
]
},
{
"repo_id": "bobbyyyan__scorch",
"repo_url": "https://github.com/bobbyyyan/scorch.git",
"base_commit": "33532a3",
"language": "python",
"setup_commands": [],
"test_command": "/testbed/run_tests.sh",
"test_timeout": 3000,
"refactor_type": "",
"files": [],
"instance_id": "bobbyyyan__scorch-feature_elementwise_pow",
"description": "Implement element-wise power (`__pow__`) on `STensor` that raises each stored value to a given exponent. Unlike add/sub/mul/div, power is not a binary operation between two equally-shaped sparse tensors in the typical case -- the primary use case is `A ** n` where `n` is a scalar (integer or float). This requires a new `Operation.POW` enum value and corresponding CIN handling. Implementation steps: (1) Add `Operation.POW = \"pow\"` to the `Operation` enum in `src/scorch/compiler/cin.py` (after line 897). Note that C++ does not have a `**` operator, so the codegen must emit `std::pow(x, n)` rather than an infix operator. (2) Add `__pow__` to `IndexExpr` in `cin.py` that returns an appropriate expression node. Since the existing `BinaryOp` structure uses infix operator strings and `pow` is a function call, the cleanest approach is to use `BinaryOp(Operation.POW, self, other)` and handle POW specially in the lowerer. (3) Update `lower_BinaryOp` in `src/scorch/compiler/cin_lowerer.py` (line 615) to emit `llir.FunctionCall(name=\"std::pow\", args=[left, right])` instead of `llir.BinOp` when the operation is POW. (4) The `LLIRLowerer` in `src/scorch/compiler/codegen.py` already handles `FunctionCall` lowering at line 128, so `std::pow(base, exp)` should render correctly. (5) Implement `STensor.__pow__(self, exponent)` in `src/scorch/stensor.py`. When `exponent` is a scalar, iterate only over stored entries and apply `pow`. The output has the same sparsity pattern as the input (non-zeros raised to a power remain non-zero, except for the edge case of `0**positive` which is 0 and already absent). Build a CIN assignment where a scalar literal TensorVar holds the exponent: `C[i,j] = pow(A[i,j], n)`. The format of the output should match the input format. (6) Also support `STensor ** STensor` for element-wise power between two sparse tensors (intersection semantics, like multiply -- only positions where both are stored). (7) Add `<cmath>` to the C++ header if not already included (check `csrc/header.cpp` -- it includes `<torch/extension.h>` which typically pulls in `<cmath>`, but verify). Write tests in `tests/test_scorch/test_pow.py` covering: integer exponents (squaring, cubing), fractional exponents (square root via `** 0.5`), negative exponents (reciprocal), `A ** 0` producing all ones at stored positions, `A ** 1` returning the original tensor, sparse-sparse element-wise power, various formats (CSR, COO, dense), and verify against `A.to_dense().to_torch() ** n` using `torch.allclose`.\n\nNote: Do not run the test suite to verify your implementation. You may run build commands (e.g., `pip install -e .`) to check that the code compiles, but do not execute any pytest commands. The full test suite will be run separately during evaluation.",
"task_type": "feature",
"categories": [
"API/Element-wise/Binary arithmetic"
]
},
{
"repo_id": "bobbyyyan__scorch",
"repo_url": "https://github.com/bobbyyyan/scorch.git",
"base_commit": "33532a3",
"language": "python",
"setup_commands": [],
"test_command": "/testbed/run_tests.sh",
"test_timeout": 3000,
"refactor_type": "",
"files": [],
"instance_id": "bobbyyyan__scorch-feature_matrix_power",
"description": "Implement `ops.matrix_power(A, n)` that computes the n-th matrix power of a square sparse matrix `A` by repeated matrix multiplication, reusing the existing `ops.matmul` infrastructure. This is a higher-level operation that does not require new CIN primitives but does require careful handling of sparse format propagation and efficiency. The algorithm is exponentiation by squaring: decompose `n` into binary and compute `A^n` via `O(log n)` matrix multiplications instead of `O(n)`. Implementation steps: (1) Add `matrix_power(A, n)` to `src/scorch/ops.py`. The function should accept an `STensor` (must be square, 2D) and a non-negative integer `n`. For `n=0`, return the sparse identity matrix of the same size and format (construct using `crow_indices = torch.arange(0, A.shape[0]+1)`, `col_indices = torch.arange(0, A.shape[0])`, `values = torch.ones(A.shape[0])` and `STensor.from_csr()`). For `n=1`, return a copy of `A`. For `n>=2`, use the binary exponentiation algorithm: initialize `result = identity`, `base = A`; while `n > 0`: if `n` is odd, `result = matmul(result, base)`; `base = matmul(base, base)`; `n = n >> 1`. (2) Each intermediate `matmul` call will go through the existing `ops.matmul` path (line 250 of ops.py), which dispatches to fast-path C++ kernels for CSR/COO formats or falls back to `einsum(\"ik,kj->ij\", ...)`. Ensure the intermediate results preserve a sparse format rather than densifying. (3) Add `STensor.matrix_power(self, n)` as a convenience method in `src/scorch/stensor.py` that calls `ops.matrix_power(self, n)`. (4) Validate that `A` is square (`A.shape[0] == A.shape[1]`) and `n >= 0`; raise `ValueError` for non-square or negative n. (5) Export `matrix_power` from `src/scorch/__init__.py`. Write tests in `tests/test_scorch/test_matrix_power.py` covering: `A^0` equals identity, `A^1` equals `A`, `A^2` equals `matmul(A, A)`, `A^5` via binary exponentiation matches naive sequential multiplication, diagonal matrix powers (easy to verify analytically), permutation matrix powers, both CSR (`\"ds\"`) and COO (`\"oo\"`) input formats, and numerical verification against `torch.linalg.matrix_power(A.to_dense().to_torch(), n)` using `torch.allclose`.\n\nNote: Do not run the test suite to verify your implementation. You may run build commands (e.g., `pip install -e .`) to check that the code compiles, but do not execute any pytest commands. The full test suite will be run separately during evaluation.",
"task_type": "feature",
"categories": [
"API/Linear Algebra/Tensor products"
]
},
{
"repo_id": "bobbyyyan__scorch",
"repo_url": "https://github.com/bobbyyyan/scorch.git",
"base_commit": "33532a3",
"language": "python",
"setup_commands": [],
"test_command": "/testbed/run_tests.sh",
"test_timeout": 3000,
"refactor_type": "",
"files": [],
"instance_id": "bobbyyyan__scorch-feature_cholesky",
"description": "Implement sparse Cholesky factorization for symmetric positive definite (SPD) sparse matrices: `ops.cholesky(A, upper=False)` that returns a sparse lower-triangular `L` such that `A = L @ L^T` (or upper-triangular `U` with `A = U^T @ U` when `upper=True`). This is a two-phase algorithm: symbolic factorization (determine the sparsity pattern of `L` including fill-in) and numeric factorization (compute the actual values). This does not go through the CIN compilation pipeline -- it is implemented as a direct sparse algorithm in Python operating on `STensor` storage. Implementation steps: (1) Add `ops.cholesky(A, upper=False)` to `src/scorch/ops.py`. Accept an `STensor` that is square and SPD. Validate squareness via `A.shape[0] == A.shape[1]`. (2) **Symbolic phase**: Determine the sparsity pattern of `L`. Convert `A` to CSR format internally (use `A.to_sparse(\"ds\")` if not already CSR). For each column `j` from 0 to n-1, the non-zero rows in column `j` of `L` include row `j` (the diagonal) and all rows `i > j` where `A[i,j] != 0`, plus any fill-in entries propagated from earlier columns. Use the left-looking algorithm: for each row `i`, compute `L[i,:]` by solving a lower-triangular system using the already-computed rows of `L`. Represent the pattern using lists of (row, col) pairs or a CSC-like structure. (3) **Numeric phase**: For each row `i` (left-looking variant), compute `L[i,j] = (A[i,j] - sum_{k<j} L[i,k]*L[j,k]) / L[j,j]` for off-diagonal entries, and `L[i,i] = sqrt(A[i,i] - sum_{k<i} L[i,k]^2)` for the diagonal. Access the CSR arrays (`crow_indices`, `col_indices`, `values`) from `A.index.mode_indices` and `A.values` as defined in `src/scorch/storage.py`. (4) Construct the result `L` as a new `STensor` in CSR format using `STensor.from_csr()` (lines 271-315 of stensor.py) or by directly building `TensorStorage` with `TensorIndex` containing `crow_indices` and `col_indices` for the DENSE+COMPRESSED format. (5) When `upper=True`, compute `L` first then transpose to get `U`. (6) Add `STensor.cholesky(self, upper=False)` as a convenience method. (7) Detect non-positive-definite input (negative diagonal during factorization) and raise `ValueError` with a clear message. Write tests in `tests/test_scorch/test_cholesky.py` covering: small known SPD matrices (e.g., `[[4,2],[2,3]]` with known L), diagonal SPD matrices, tridiagonal SPD matrices (significant fill-in test), upper=True path, non-SPD input error handling, sparse identity matrix (trivial case), round-trip verification `torch.allclose(L.to_torch() @ L.to_torch().T, A.to_torch())`, comparison against `torch.linalg.cholesky(A.to_dense().to_torch())`, and both CSR and COO input formats.\n\nNote: Do not run the test suite to verify your implementation. You may run build commands (e.g., `pip install -e .`) to check that the code compiles, but do not execute any pytest commands. The full test suite will be run separately during evaluation.",
"task_type": "feature",
"categories": [
"API/Linear Algebra/Decompositions"
]
},
{
"repo_id": "bobbyyyan__scorch",
"repo_url": "https://github.com/bobbyyyan/scorch.git",
"base_commit": "33532a3",
"language": "python",
"setup_commands": [],
"test_command": "/testbed/run_tests.sh",
"test_timeout": 3000,
"refactor_type": "",
"files": [],
"instance_id": "bobbyyyan__scorch-feature_eigenvalue_solvers",
"description": "Implement sparse eigenvalue computation for finding dominant eigenvalues and eigenvectors without dense materialization. Add two methods: (1) `ops.power_iteration(A, num_iters=100, tol=1e-6)` for finding the largest-magnitude eigenvalue and its eigenvector, and (2) `ops.lanczos(A, k=6, tol=1e-8, maxiter=None)` for computing the `k` largest eigenvalues and eigenvectors of a real symmetric sparse matrix via the Lanczos algorithm. These are iterative algorithms that only require sparse matrix-vector products, which scorch already supports via `ops.matmul(A, x)` dispatching to `spmv` when `A` is 2D and `x` is 1D (see line 276 of `src/scorch/ops.py`). Implementation steps: (1) Add `ops.power_iteration(A, num_iters=100, tol=1e-6)` to `src/scorch/ops.py`. The algorithm: start with a random unit vector `v` (as a dense `STensor` from `STensor.from_torch(torch.randn(n))`), iterate `w = matmul(A, v)`, `eigenvalue = dot(v, w)`, `v = w / norm(w)` until convergence (change in eigenvalue below `tol`). The dot product and norm can use `torch.dot` and `torch.norm` on the dense `.values` of the result vectors since SpMV output is dense format (`\"d\"`). Return a tuple `(eigenvalue: float, eigenvector: STensor, converged: bool, num_iters: int)`. (2) Add `ops.lanczos(A, k=6, tol=1e-8, maxiter=None)` to `src/scorch/ops.py`. The Lanczos algorithm builds a tridiagonal matrix `T` of size `m x m` (where `m >= k`) from the Krylov subspace. At each step `j`: compute `w = matmul(A, q_j) - beta_{j-1} * q_{j-1}`, `alpha_j = dot(q_j, w)`, `w = w - alpha_j * q_j`, `beta_j = norm(w)`, `q_{j+1} = w / beta_j`. After `m` steps, compute the eigendecomposition of the tridiagonal `T` using `torch.linalg.eigh(T_dense)` and return the top-`k` eigenvalues and corresponding Ritz vectors (projections back via the Q basis). Set `maxiter` default to `min(n, max(2*k, 20))`. Implement full reorthogonalization against all previous q vectors for numerical stability. Return `(eigenvalues: torch.Tensor, eigenvectors: torch.Tensor, info: dict)` where info contains convergence metadata. (3) Validate that `A` is square. For Lanczos, validate that `k <= A.shape[0]`. (4) Add `STensor.eigs(self, k=1)` convenience method that dispatches to `power_iteration` for `k=1` and `lanczos` for `k>1`. Write tests in `tests/test_scorch/test_eigs.py` covering: power iteration on a matrix with known dominant eigenvalue (e.g., diagonal matrix), power iteration convergence check, Lanczos on symmetric SPD matrices verified against `torch.linalg.eigh(A.to_dense().to_torch())`, Lanczos on sparse identity (eigenvalues all 1), Rayleigh quotient verification (`v^T A v / v^T v` should equal eigenvalue), reorthogonalization correctness, convergence on graph Laplacians (relevant for GCN use cases), and both CSR and COO input formats.\n\nNote: Do not run the test suite to verify your implementation. You may run build commands (e.g., `pip install -e .`) to check that the code compiles, but do not execute any pytest commands. The full test suite will be run separately during evaluation.",
"task_type": "feature",
"categories": [
"API/Linear Algebra/Decompositions"
]
},
{
"repo_id": "bobbyyyan__scorch",
"repo_url": "https://github.com/bobbyyyan/scorch.git",
"base_commit": "33532a3",
"language": "python",
"setup_commands": [],
"test_command": "/testbed/run_tests.sh",
"test_timeout": 3000,
"refactor_type": "",
"files": [],
"instance_id": "bobbyyyan__scorch-feature_cumsum_cumprod",
"description": "Implement cumulative reduction operations `ops.cumsum(A, dim)` and `ops.cumprod(A, dim)` for sparse tensors along a specified dimension. These are prefix-scan operations where `cumsum(A, dim=1)[i,j] = sum(A[i, 0:j+1])` and `cumprod(A, dim=1)[i,j] = prod(A[i, 0:j+1])`. The key semantic design decision for sparse tensors is that implicit zeros participate in the accumulation: cumsum treats missing entries as 0, so the running sum carries forward over gaps; cumprod treats missing entries as 0, meaning the product becomes and stays 0 after the first gap. This means the output of cumsum on a sparse tensor may have more non-zeros than the input (entries between the first and last non-zero in each slice become non-zero), and the output of cumprod becomes zero from the first gap onward (potentially allowing early termination). Implementation steps: (1) Add `ops.cumsum(input, dim)` and `ops.cumprod(input, dim)` to `src/scorch/ops.py`. These are **not** CIN-compilable operations (prefix scans are inherently sequential and do not fit the current ForAll/Where/TensorAssign model). Instead, implement them as direct sparse algorithms operating on the `STensor` storage arrays. (2) For `cumsum` along `dim=1` (row-wise) on a CSR matrix: iterate over each row using `crow_indices`, walk the `col_indices` and `values` for that row, maintaining a running sum. For columns between stored entries, the running sum stays constant (since 0 is added). Build the output as a CSR tensor where each row's entries span from the minimum column index to the maximum column index in the input row (filling in the constant-sum gaps as stored entries). For a COO-format input, convert to CSR first or sort by row then column and process. (3) For `cumprod` along `dim=1`: similar iteration, but the running product drops to 0 at the first missing column index and stays 0. Optimization: once the product hits 0, all subsequent entries in that row are 0 and can be omitted from the sparse output if using fill_value=0. (4) Support `dim=0` (column-wise) by transposing, applying row-wise cumsum/cumprod, then transposing back -- or implement directly via column iteration. (5) Support 1D tensors (`dim=0`). (6) Add `STensor.cumsum(self, dim)` and `STensor.cumprod(self, dim)` convenience methods in `src/scorch/stensor.py`. (7) Construct output `STensor` using appropriate format -- for cumsum the output is typically denser than the input, so a dense format or CSR with more entries; for cumprod the output is typically sparser. Write tests in `tests/test_scorch/test_cumops.py` covering: 1D vector cumsum/cumprod, 2D matrix cumsum along dim=0 and dim=1, cumprod with gaps (verify zero propagation), cumsum of all-zero rows (result should be all zeros), cumsum of dense tensors (should match `torch.cumsum`), identity cumsum (`cumsum(A, 0)` of single-row tensor equals A), both CSR and COO inputs, and verify all results against `torch.cumsum(A.to_dense().to_torch(), dim)` and `torch.cumprod(A.to_dense().to_torch(), dim)` using `torch.allclose`.\n\nNote: Do not run the test suite to verify your implementation. You may run build commands (e.g., `pip install -e .`) to check that the code compiles, but do not execute any pytest commands. The full test suite will be run separately during evaluation.",
"task_type": "feature",
"categories": [
"API/Reductions & Scans/Scans & segment"
]
},
{
"repo_id": "bobbyyyan__scorch",
"repo_url": "https://github.com/bobbyyyan/scorch.git",
"base_commit": "33532a3",
"language": "python",
"setup_commands": [],
"test_command": "/testbed/run_tests.sh",
"test_timeout": 3000,
"refactor_type": "",
"files": [],
"instance_id": "bobbyyyan__scorch-feature_graph_adjacency",
"description": "Implement graph adjacency matrix utility functions essential for graph neural network workloads, directly supporting the existing GCN example in `examples/gcn/scorch_gcn.py` which currently requires manual adjacency construction. Add the following functions to `src/scorch/ops.py`: (1) `ops.degree(A, dim=1)` -- compute the degree vector of a sparse adjacency matrix by summing along the specified dimension. For `dim=1` (out-degree/row-sum), iterate over each row of the CSR representation and sum the values. For `dim=0` (in-degree/column-sum), sum along columns. Return a 1D dense `torch.Tensor`. This can be implemented using the existing matmul infrastructure: `degree = matmul(A, ones_vector)` where `ones_vector = STensor.from_torch(torch.ones(A.shape[1]))`, leveraging the SpMV path at line 276 of ops.py. (2) `ops.add_self_loops(A, fill_value=1.0)` -- add `fill_value * I` to a sparse adjacency matrix, returning `A + fill_value * I`. Construct the identity in CSR format (`crow_indices = torch.arange(n+1)`, `col_indices = torch.arange(n)`, `values = torch.full((n,), fill_value)`) as an `STensor` via `STensor.from_csr()`, then use `STensor.__add__` (lines 165-256 of stensor.py) to add it to `A`. Handle the case where `A` already has self-loops (the addition will accumulate). (3) `ops.normalize_adjacency(A, mode='sym')` -- compute the symmetric normalized adjacency `D^{-1/2} (A + I) D^{-1/2}` commonly used in GCN (Kipf & Welling 2017). Steps: compute `A_hat = add_self_loops(A)`, compute `d = degree(A_hat, dim=1)`, compute `d_inv_sqrt = d ** (-0.5)` (handle zeros by setting `0^{-0.5} = 0`), then compute the normalized matrix by scaling each value `A_hat[i,j]` by `d_inv_sqrt[i] * d_inv_sqrt[j]` (direct value scaling without full matmul). For `mode='left'`, compute `D^{-1} A` (random walk normalization) by scaling each row by `1/degree[i]`. (4) `ops.to_undirected(A)` -- symmetrize a sparse adjacency matrix by computing `(A + A^T) / 2` or the boolean union. (5) Add corresponding `STensor` convenience methods: `STensor.degree()`, `STensor.add_self_loops()`, `STensor.normalize()`. Write tests in `tests/test_scorch/test_graph.py` covering: degree computation on known graphs (complete graph K4: all degrees = 3, star graph: center degree = n-1), self-loop addition idempotence properties, symmetric normalization eigenvalue bounds (normalized adjacency of connected graph has spectral radius <= 1), `to_undirected` on already-symmetric matrix (should be no-op), normalization of the Cora-like adjacency pattern used in `examples/gcn/`, verification against manual PyTorch computation `D_inv_sqrt @ (A + I) @ D_inv_sqrt` using dense tensors, both CSR and COO input formats, and handling of disconnected nodes (zero-degree rows).\n\nNote: Do not run the test suite to verify your implementation. You may run build commands (e.g., `pip install -e .`) to check that the code compiles, but do not execute any pytest commands. The full test suite will be run separately during evaluation.",
"task_type": "feature",
"categories": [
"API/Constructors & I/O/Factories"
]
},
{
"repo_id": "bobbyyyan__scorch",
"repo_url": "https://github.com/bobbyyyan/scorch.git",
"base_commit": "33532a3",
"language": "python",
"setup_commands": [],
"test_command": "/testbed/run_tests.sh",
"test_timeout": 3000,
"refactor_type": "",
"files": [],
"instance_id": "bobbyyyan__scorch-feature_hash_level",
"description": "Add hash-map based sparse level support (`LevelType.HASH`) to the format system and compiler pipeline. Unlike COMPRESSED (CSR-like sorted arrays with O(log n) or O(nnz) lookup) and COORDINATE (COO unsorted coordinate lists), a HASH level provides O(1) amortized random access to individual entries via a hash table mapping coordinates to positions in the value array. This is useful for random-access workloads like sparse tensor construction, incremental updates, and operations where the access pattern is unpredictable. Implementation steps: (1) Add `HASH = \"h\"` to `LevelType` in `src/scorch/format.py` (after line 11). Add corresponding entries to `_STR_TO_LEVEL_TYPE` (line 15): `\"hash\": LevelType.HASH`, `\"h\": LevelType.HASH`. (2) Update `parse_format` in `src/scorch/utils.py` (line 285) to handle `\"h\"` as a valid format character mapping to `LevelType.HASH`. (3) Define the storage layout for HASH levels. A HASH level at level `l` stores: (a) a flat value array (same as other formats, shared via `TensorStorage.value`), (b) a coordinate array `{tensor}{l}_crd` mapping position to coordinate (like COORDINATE), (c) a hash table mapping coordinate to position (stored as an additional index tensor -- a fixed-size tensor of int32 with open addressing, using sentinel values like -1 for empty slots). The hash table tensor should be stored in `mode_indices[l]` alongside the coordinate array: `mode_indices[l] = [hash_table_tensor, crd_tensor]`. (4) Add conversion methods: `STensor.to_hash()` converts any format to hash-based, and `STensor.from_hash(hash_table, crd, values, shape)` constructs from raw arrays. For conversion from COO/CSR, build the hash table by hashing each coordinate and inserting into the table with linear probing. (5) Add a new `ModeIterator` variant in `src/scorch/compiler/iterator.py` for HASH levels. Iteration over a HASH level for sequential scan follows the coordinate array (like COORDINATE). For random access (locate), generate C++ code that computes `hash(coord) % capacity`, probes linearly until finding the coord or an empty slot, and returns the position. The C++ hash function can be simple: `coord % capacity` with linear probing. (6) Update `CINLowerer` in `src/scorch/compiler/cin_lowerer.py` to handle `LevelType.HASH` in the iteration code generation. For ForAll loops over a HASH level, iterate over the coordinate array (size = nnz) and read coordinates from `{tensor}{l}_crd[p]`. For locating into a HASH level (when the hash tensor is on the RHS of a union/intersection), generate the hash-probe lookup code. (7) Add a C++ helper class `hash_level` in `csrc/header.h` that encapsulates the hash table with insert, lookup, and iteration methods, similar to the existing `coo_workspace` class (lines 245-461). (8) Update `codegen.py` if new LLIR node types are needed for hash operations. Write tests in `tests/test_scorch/test_hash_format.py` covering: construction of a 2D tensor with format `\"dh\"` (dense rows, hash columns) and `\"hh\"` (both levels hashed), conversion round-trips between hash and COO/CSR/dense, element-wise addition of hash-format tensors, SpMV with hash-format matrix, random access correctness (insert and lookup specific entries), handling of hash collisions (test with entries that hash to the same slot), load factor behavior (many entries relative to capacity), empty tensor handling, and numerical verification against dense equivalents using `torch.allclose`.\n\nNote: Do not run the test suite to verify your implementation. You may run build commands (e.g., `pip install -e .`) to check that the code compiles, but do not execute any pytest commands. The full test suite will be run separately during evaluation.",
"task_type": "feature",
"categories": [
"Format/Compressed-style levels"
]
},
{
"instance_id": "bobbyyyan__scorch-feature_sparse_dim_tiling",
"repo_id": "bobbyyyan__scorch",
"repo_url": "https://github.com/bobbyyyan/scorch.git",
"base_commit": "33532a3",
"language": "python",
"setup_commands": [],
"test_command": "/testbed/run_tests.sh",
"test_timeout": 3000,
"refactor_type": "",
"description": "Implement true sparse-dimension tiling in the CIN scheduler and lowering pipeline. Today `Scheduler.auto_schedule` explicitly removes sparse index vars from tiling and `IndexVar.size_llir_var` assumes a dense access. Extend `Scheduler.add_tile` and `auto_schedule` so COMPRESSED/COORDINATE dimensions can be strip-mined by iterator-position tiles rather than dense coordinate-range tiles. For CSR/COMPRESSED levels, tile the position interval `[pos[parent], pos[parent+1])` with outer and inner tile vars; for COO/COORDINATE levels, tile the coordinate-array position range. Add the required IR plumbing so a tiled sparse index can reconstruct the logical coordinate from `*_crd[p]` while still using tiled position variables for loop bounds. Update `iter_lattice.py`, `iterator.py`, and `cin_lowerer.py` to generate valid begin/end initialization, coordinate resolution, and iterator advancement for tiled sparse loops. Preserve existing dense tiling behavior. Write comprehensive tests covering: SpMV and SpMM with CSR and COO inputs, correctness parity versus dense PyTorch references, generated C++ containing sparse position-tile loops, and mixed dense+sparse index expressions where sparse dimensions are tiled.\n\nNote: Do not run the test suite to verify your implementation. You may run build commands (e.g., `pip install -e .`) to check that the code compiles, but do not execute any pytest commands. The full test suite will be run separately during evaluation.",
"files": [],
"task_type": "feature",
"categories": [
"Scheduler/Loop transformations/Tiling"
]
},
{
"instance_id": "bobbyyyan__scorch-feature_tile_remainder_predication",
"repo_id": "bobbyyyan__scorch",
"repo_url": "https://github.com/bobbyyyan/scorch.git",
"base_commit": "33532a3",
"language": "python",
"setup_commands": [],
"test_command": "/testbed/run_tests.sh",
"test_timeout": 3000,
"refactor_type": "",
"description": "Add remainder-safe predication for tiled loops across dense and sparse domains. Current tiling assumes fixed tile-size loop bounds, which can overrun when dimension sizes (or row-local sparse fiber lengths) are smaller than or not divisible by tile size. Introduce per-tile end bounds: dense loops should use `tile_end = min(global_end, tile_begin + tile_size)`, and sparse position tiles should use row/fiber-local `tile_end = min(parent_end, tile_begin + tile_size)`. Ensure all generated ForLoop/WhileLoop conditions and workspace consumer loops use these bounded ends, including nested tiled reductions. Update lowering so inner-index resolution (`k = k_out + k_in`) is guarded correctly for tail tiles. Write comprehensive tests for edge cases: dimension smaller than tile size, dimension not divisible by tile size, empty sparse rows/fibers, highly irregular row nnz, and regression checks that no out-of-bounds accesses occur while numerical results still match dense references.\n\nNote: Do not run the test suite to verify your implementation. You may run build commands (e.g., `pip install -e .`) to check that the code compiles, but do not execute any pytest commands. The full test suite will be run separately during evaluation.",
"files": [],
"task_type": "feature",
"categories": [
"Scheduler/Loop transformations/Tiling"
]
},
{
"instance_id": "bobbyyyan__scorch-feature_segmented_sparse_tiling",
"repo_id": "bobbyyyan__scorch",
"repo_url": "https://github.com/bobbyyyan/scorch.git",
"base_commit": "33532a3",
"language": "python",
"setup_commands": [],
"test_command": "/testbed/run_tests.sh",
"test_timeout": 3000,
"refactor_type": "",
"description": "Implement segmented sparse tiling for CSR/COO reductions to improve cache locality in sparse matmul kernels. Add a scheduling transformation that tiles sparse reduction dimensions by nonzero-count segments (position-space tiles) inside each parent fiber (for CSR: per-row `p` segments; for COO: per-leading-coordinate buckets). Extend CIN scheduling so these sparse segments can drive workspace accumulation and partial flushes without changing semantics. In `iter_lattice.py` and `cin_lowerer.py`, generate loop nests that iterate segment-by-segment and correctly merge multiple segments into one logical output row/coordinate. Add an API surface in `ops.matmul`/`einsum` (for example `sparse_segment_tile=<int>`) to enable the feature explicitly, with default off for backward compatibility. Write comprehensive tests covering: CSR and COO SpMM correctness, deterministic output with and without segmentation, cases with long and short rows, repeated coordinates/duplicate COO entries, and generated-loop structure checks showing segment loops in emitted LLIR/C++.\n\nNote: Do not run the test suite to verify your implementation. You may run build commands (e.g., `pip install -e .`) to check that the code compiles, but do not execute any pytest commands. The full test suite will be run separately during evaluation.",
"files": [],
"task_type": "feature",
"categories": [
"Scheduler/Loop transformations/Tiling"
]
},
{
"instance_id": "bobbyyyan__scorch-feature_dual_axis_tiling",
"repo_id": "bobbyyyan__scorch",
"repo_url": "https://github.com/bobbyyyan/scorch.git",
"base_commit": "33532a3",
"language": "python",
"setup_commands": [],
"test_command": "/testbed/run_tests.sh",
"test_timeout": 3000,
"refactor_type": "",
"description": "Add dual-axis tiling for mixed sparse-dense kernels (for example SpMM `C[i,n] += A[i,k_sparse] * B[k_sparse,n_dense]`). Support applying two independent tile transforms in one schedule: sparse reduction tiling in position space and dense output-column tiling in coordinate space. Extend scheduler validation so multiple `TileSizeVar`s can coexist safely, with legal loop ordering and workspace insertion rules. Update lowering/codegen to emit nested tile loops with correct index reconstruction for both dimensions, including tile-local workspace allocation and flush logic for the dense axis. Ensure this works for both CSR (`ds`) and COO (`oo`) sparse operands with dense RHS matrices. Write comprehensive tests covering: correctness parity vs untiled kernels, generated code containing both sparse and dense tile loops, non-divisible dense tile tails, irregular sparse fibers, and performance sanity checks on medium matrices showing fewer full-width workspace writes.\n\nNote: Do not run the test suite to verify your implementation. You may run build commands (e.g., `pip install -e .`) to check that the code compiles, but do not execute any pytest commands. The full test suite will be run separately during evaluation.",
"files": [],
"task_type": "feature",
"categories": [
"Scheduler/Loop transformations/Tiling"
]
},
{
"instance_id": "bobbyyyan__scorch-feature_nnz_balanced_partition",
"repo_id": "bobbyyyan__scorch",
"repo_url": "https://github.com/bobbyyyan/scorch.git",
"base_commit": "33532a3",
"language": "python",
"setup_commands": [],
"test_command": "/testbed/run_tests.sh",
"test_timeout": 3000,
"refactor_type": "",
"description": "Implement nnz-balanced sparse tile partitioning for parallel execution. Row-wise parallelization is often imbalanced on skewed sparse matrices; add an inspector step that partitions work into tiles/blocks with roughly equal nonzero counts instead of equal row counts. For CSR, build block boundaries from cumulative `crow_indices`; for COO, bucket by leading coordinate and cumulative bucket nnz. Integrate these partitions into generated kernels so OpenMP (or equivalent parallel loops) iterates over balanced blocks, then executes tile-local sparse loops within each block. Keep semantics identical to existing kernels and preserve deterministic output ordering where required. Write comprehensive tests covering: correctness on skewed and uniform sparsity, partition metadata validity, deterministic results across multiple runs, comparison against baseline row-partition behavior, and generated-code assertions that block-based outer loops are emitted when balancing is enabled.\n\nNote: Do not run the test suite to verify your implementation. You may run build commands (e.g., `pip install -e .`) to check that the code compiles, but do not execute any pytest commands. The full test suite will be run separately during evaluation.",
"files": [],
"task_type": "feature",
"categories": [
"Scheduler/Loop transformations/Tiling"
]
},
{
"instance_id": "bobbyyyan__scorch-feature_workspace_touched_tracking",
"repo_id": "bobbyyyan__scorch",
"repo_url": "https://github.com/bobbyyyan/scorch.git",
"base_commit": "33532a3",
"language": "python",
"setup_commands": [],
"test_command": "/testbed/run_tests.sh",
"test_timeout": 3000,
"refactor_type": "",
"description": "Add sparse-tile workspace optimization with touched-entry tracking to avoid full-tile clears each iteration. For tiled sparse reductions that use dense workspaces, replace unconditional tile-wide initialization/flush/clear with a touched-index list (and optional small bitmap) that records only entries updated in the current sparse tile. During consumer/flush, iterate the touched set, write results, and clear only touched workspace slots. Integrate this in `cin_lowerer.py` and C++ support code so it works with both dense-tiled and sparse-position-tiled schedules, including duplicate updates within a tile. Provide a safe fallback to existing behavior when touched tracking is disabled. Write comprehensive tests covering: numerical correctness vs baseline kernels, duplicate-hit correctness, empty-tile behavior, touched-set reset correctness across tiles, and performance sanity checks demonstrating reduced workspace-clear overhead on high-dimensional dense outputs with sparse reductions.\n\nNote: Do not run the test suite to verify your implementation. You may run build commands (e.g., `pip install -e .`) to check that the code compiles, but do not execute any pytest commands. The full test suite will be run separately during evaluation.",
"files": [],
"task_type": "feature",
"categories": [
"Scheduler/Sparse-specific passes/Workspace transforms"
]
},
{
"instance_id": "bobbyyyan__scorch-feature_repr_str",
"repo_id": "bobbyyyan__scorch",
"repo_url": "https://github.com/bobbyyyan/scorch.git",
"base_commit": "92fb190",
"language": "python",
"setup_commands": [],
"test_command": "/testbed/run_tests.sh",
"test_timeout": 3000,
"refactor_type": "",
"description": "Implement informative `__repr__` and `__str__` methods on `STensor` that replace the current placeholder returning `\"Tensor\"`. The output must display the tensor's shape, per-mode format annotations (e.g. `[d, s]` for a dense-then-sparse 2D tensor), number of stored non-zeros (`nnz`), density as a percentage, dtype, and mode_order. For tensors with a small number of non-zeros, include a truncated preview of the stored values; for larger tensors, show only the first and last few entries with an ellipsis. The implementation must generalize to N-dimensional tensors (not just 2D). Write comprehensive tests covering: repr of fully dense tensors, 2D CSR and COO sparse matrices, 3D and 4D sparse tensors with mixed level types, empty tensors (all zeros), tensors of different dtypes (float32, float64, int64), tensors with custom mode_order, and round-trip eval-ability of the repr string where feasible.\n\nNote: Do not run the test suite to verify your implementation. You may run build commands (e.g., `pip install -e .`) to check that the code compiles, but do not execute any pytest commands. The full test suite will be run separately during evaluation.",
"files": [],
"task_type": "feature",
"categories": [
"API/Constructors & I/O/Introspection"
]
},
{
"instance_id": "bobbyyyan__scorch-feature_metadata_introspection",
"repo_id": "bobbyyyan__scorch",
"repo_url": "https://github.com/bobbyyyan/scorch.git",
"base_commit": "92fb190",
"language": "python",
"setup_commands": [],
"test_command": "/testbed/run_tests.sh",
"test_timeout": 3000,
"refactor_type": "",
"description": "Add a sparse metadata and introspection API to `STensor`. Implement the following: (1) `nnz` property returning the count of explicitly stored non-zero entries. (2) `density` property returning nnz divided by the total number of elements. (3) `sparsity` property returning 1 minus density. (4) `nonzero()` method returning a tuple of coordinate tensors (one per dimension) for all stored non-zero entries, regardless of the underlying storage format. (5) `nnz_per_fiber(dim)` returning a 1D tensor whose i-th element is the number of non-zeros in the i-th fiber along the given dimension (e.g., nnz per row when dim=0 for a 2D CSR matrix). (6) `is_coalesced()` returning whether COO indices are sorted and deduplicated (always True for non-COO formats). (7) `storage_ratio()` returning the ratio of memory used by the sparse representation to the memory that a dense representation would require. All methods must work correctly for 1D through 5D tensors with any combination of level types. Write comprehensive tests covering: known nnz values for hand-constructed tensors, density and sparsity calculations for various fill ratios, nonzero coordinate extraction for CSR/COO/dense formats in 2D and 3D, nnz_per_fiber on CSR matrices and 3D tensors, is_coalesced on sorted vs unsorted COO tensors, storage_ratio for dense vs sparse tensors, and edge cases like fully dense and fully empty tensors.\n\nNote: Do not run the test suite to verify your implementation. You may run build commands (e.g., `pip install -e .`) to check that the code compiles, but do not execute any pytest commands. The full test suite will be run separately during evaluation.",
"files": [],
"task_type": "feature",
"categories": [
"API/Constructors & I/O/Introspection"
]
},
{
"instance_id": "bobbyyyan__scorch-feature_mode_n_product",
"repo_id": "bobbyyyan__scorch",
"repo_url": "https://github.com/bobbyyyan/scorch.git",
"base_commit": "92fb190",
"language": "python",
"setup_commands": [],
"test_command": "/testbed/run_tests.sh",
"test_timeout": 3000,
"refactor_type": "",
"description": "Implement mode-n tensor-matrix product operations in `ops.py`. Add `ops.mode_n_product(X, M, n)` that multiplies an N-dimensional sparse tensor X by a dense matrix M along mode n. The implementation should dynamically generate einsum subscript strings for arbitrary dimensionality rather than hard-coding cases for specific numbers of dimensions. Also add `ops.multi_mode_product(X, matrices, modes)` that applies multiple mode-n products sequentially (the Tucker product). Write comprehensive tests covering: mode-0, mode-1, and mode-2 products on 3D tensors, mode-n product on 4D tensors, identity matrix as M (result should equal input), non-square M that changes the dimension along the contracted mode, multi_mode_product applying matrices to all modes of a 3D tensor, verification of results against `torch.einsum` on dense equivalents for numerical correctness, error handling for incompatible shapes, and COO vs CSR input formats producing identical results.\n\nNote: Do not run the test suite to verify your implementation. You may run build commands (e.g., `pip install -e .`) to check that the code compiles, but do not execute any pytest commands. The full test suite will be run separately during evaluation.",
"files": [],
"task_type": "feature",
"categories": [
"API/Linear Algebra/Tensor products"
]
},
{
"instance_id": "bobbyyyan__scorch-feature_squeeze_unsqueeze",
"repo_id": "bobbyyyan__scorch",
"repo_url": "https://github.com/bobbyyyan/scorch.git",
"base_commit": "92fb190",
"language": "python",
"setup_commands": [],
"test_command": "/testbed/run_tests.sh",
"test_timeout": 3000,
"refactor_type": "",
"description": "Implement `STensor.squeeze(dim=None)` and `STensor.unsqueeze(dim)` for sparse tensors. `squeeze(dim)` removes a dimension of size 1 at the specified position, updating shape, format (removing the corresponding level type), mode_indices, and mode_order. When `dim=None`, squeeze all dimensions of size 1. `unsqueeze(dim)` inserts a dimension of size 1 at the specified position, adding a dense level type for the new dimension and updating all internal metadata accordingly. Both must generalize to N-dimensional tensors and support negative dimension indexing. Write comprehensive tests covering: squeeze a (1,N) matrix to a 1D vector, unsqueeze a 1D vector to (1,N) and (N,1), squeeze(None) on a shape (1,5,1,3,1) tensor removing all singleton dims, round-trip identity (unsqueeze then squeeze returns original), format preservation for COO and CSR inputs, 4D tensors with mixed level types, negative dim arguments, error on squeezing a non-singleton dimension, and verification against torch.squeeze/unsqueeze on dense equivalents.\n\nNote: Do not run the test suite to verify your implementation. You may run build commands (e.g., `pip install -e .`) to check that the code compiles, but do not execute any pytest commands. The full test suite will be run separately during evaluation.",
"files": [],
"task_type": "feature",
"categories": [
"API/Shape & Layout/Reshape"
]
},
{
"instance_id": "bobbyyyan__scorch-feature_unfold_refold",
"repo_id": "bobbyyyan__scorch",
"repo_url": "https://github.com/bobbyyyan/scorch.git",
"base_commit": "92fb190",
"language": "python",
"setup_commands": [],
"test_command": "/testbed/run_tests.sh",
"test_timeout": 3000,
"refactor_type": "",
"description": "Implement sparse tensor unfolding (matricization) and refolding. Add `STensor.unfold(mode)` that converts an N-dimensional sparse tensor into a 2D matrix by unfolding along the specified mode: the given mode becomes the row dimension and the remaining modes are combined (in order) into the column dimension. For COO format, this should be a pure index remapping without touching stored values. Add `STensor.refold(mode, shape)` that reverses the unfolding, reconstructing the original N-dimensional tensor from the unfolded 2D matrix given the original mode and shape. Write comprehensive tests covering: mode-0 and mode-1 unfolding of 3D tensors, correct output shapes (mode-k unfolding of shape (I,J,K) yields (I, J*K) for mode 0), round-trip unfold then refold for 3D and 4D tensors, COO and CSR input formats, 5D tensors, tensors with singleton dimensions, numerical correctness verified against manual numpy/torch unfolding of dense equivalents, and error handling for invalid mode values.\n\nNote: Do not run the test suite to verify your implementation. You may run build commands (e.g., `pip install -e .`) to check that the code compiles, but do not execute any pytest commands. The full test suite will be run separately during evaluation.",
"files": [],
"task_type": "feature",
"categories": [
"API/Shape & Layout/Reshape"
]
},
{
"instance_id": "bobbyyyan__scorch-feature_lu_decomposition",
"repo_id": "bobbyyyan__scorch",
"repo_url": "https://github.com/bobbyyyan/scorch.git",
"base_commit": "92fb190",
"language": "python",
"setup_commands": [],
"test_command": "/testbed/run_tests.sh",
"test_timeout": 3000,
"refactor_type": "",
"description": "Implement sparse LU decomposition. Add `ops.lu(A, pivoting=True)` that decomposes a 2D sparse matrix A into lower triangular L, upper triangular U, and permutation matrix P such that P @ A = L @ U. Use a left-looking sparse algorithm that processes columns left to right, computing each column of L and U by solving a sparse triangular system with previously computed columns. Handle fill-in by dynamically allocating new non-zero entries in L and U. When `pivoting=True`, apply partial pivoting by selecting the largest magnitude element in the current column of L as the pivot and swapping rows accordingly. Detect singular matrices (zero pivot with no available swap) and raise an appropriate error. Return L, U, and P as STensor objects in CSR format. Write comprehensive tests covering: small known matrices with hand-verified L/U factors, verification that P @ A equals L @ U numerically (within tolerance), diagonal matrices (no fill-in), tridiagonal matrices, the pivoting=False code path, singular matrix detection raising an error, comparison of results against torch.linalg.lu_factor for numerical accuracy, and matrices with varying sparsity levels.\n\nNote: Do not run the test suite to verify your implementation. You may run build commands (e.g., `pip install -e .`) to check that the code compiles, but do not execute any pytest commands. The full test suite will be run separately during evaluation.",
"files": [],
"task_type": "feature",
"categories": [
"API/Linear Algebra/Decompositions"
]
},
{
"instance_id": "bobbyyyan__scorch-feature_index_select",
"repo_id": "bobbyyyan__scorch",
"repo_url": "https://github.com/bobbyyyan/scorch.git",
"base_commit": "92fb190",
"language": "python",
"setup_commands": [],
"test_command": "/testbed/run_tests.sh",
"test_timeout": 3000,
"refactor_type": "",
"description": "Implement sparse `index_select` for N-dimensional tensors. Add `ops.index_select(input, dim, index)` that selects slices from the sparse tensor `input` along dimension `dim` according to the entries in `index` (a 1D integer tensor). This is distinct from `__getitem__` (which uses int/slice indexing) and from `gather`/`scatter` (which use CIN-level computed indexing). The result is a new STensor whose size along `dim` equals `len(index)`, with all other dimensions unchanged. The implementation must filter and remap coordinates for the selected indices along the given dimension while preserving the storage format. Write comprehensive tests covering: 2D row selection and column selection, 3D selection along each of the three dimensions, 4D tensors, duplicate indices in the index tensor (same slice selected multiple times), single-element index, reversed-order index, out-of-bounds index values raising IndexError, both COO and CSR input formats producing correct results, and numerical verification against `torch.index_select` on dense equivalents.\n\nNote: Do not run the test suite to verify your implementation. You may run build commands (e.g., `pip install -e .`) to check that the code compiles, but do not execute any pytest commands. The full test suite will be run separately during evaluation.",
"files": [],
"task_type": "feature",
"categories": [
"API/Indexing & Mutation/Read indexing"
]
},
{
"instance_id": "bobbyyyan__scorch-feature_expand_repeat",
"repo_id": "bobbyyyan__scorch",
"repo_url": "https://github.com/bobbyyyan/scorch.git",
"base_commit": "92fb190",
"language": "python",
"setup_commands": [],
"test_command": "/testbed/run_tests.sh",
"test_timeout": 3000,
"refactor_type": "",
"description": "Implement sparse `expand` and `repeat` operations on `STensor`. Add `STensor.expand(*sizes)` for broadcasting-style logical expansion of singleton dimensions to larger sizes without physically duplicating stored values (the expanded dimension's entries are logically shared). Size -1 means keep the current size. Also support expanding to more dimensions than the original by prepending dimensions. Add `STensor.repeat(*sizes)` for physical tiling that creates new copies of all stored entries, multiplying coordinates appropriately. The repeat count for each dimension specifies how many times to tile along that dimension. Both must work for N-dimensional tensors. Write comprehensive tests covering: expand (1,N) to (K,N), expand with -1 to keep a dimension, repeat a 2D matrix 2x3 times, 3D and 4D expand and repeat, nnz scaling verification (expand should not increase nnz, repeat should multiply it), prepending new dimensions via expand, verification against `torch.Tensor.expand` and `torch.Tensor.repeat` on dense equivalents, error on expanding a non-singleton dimension to a different size, and COO vs CSR format handling.\n\nNote: Do not run the test suite to verify your implementation. You may run build commands (e.g., `pip install -e .`) to check that the code compiles, but do not execute any pytest commands. The full test suite will be run separately during evaluation.",
"files": [],
"task_type": "feature",
"categories": [
"API/Shape & Layout/Reshape"
]
},
{
"instance_id": "bobbyyyan__scorch-feature_symmetric_matrix",
"repo_id": "bobbyyyan__scorch",
"repo_url": "https://github.com/bobbyyyan/scorch.git",
"base_commit": "92fb190",
"language": "python",
"setup_commands": [],
"test_command": "/testbed/run_tests.sh",
"test_timeout": 3000,
"refactor_type": "",
"description": "Add sparse symmetric matrix support. Extend `TensorFormat` to include an optional `symmetric` flag indicating that only the lower (or upper) triangle is stored. Add `STensor.from_symmetric(indices, values, shape)` class method that constructs a symmetric sparse matrix storing only one triangle. Add `STensor.to_symmetric()` instance method that converts a general sparse matrix to symmetric format by keeping only the lower triangle and verifying the matrix is actually symmetric (raising an error if not). Implement symmetric-aware sparse matrix-vector multiplication (SpMV) that reads from half-storage but produces the full result by implicitly mirroring entries across the diagonal. Support batched 3D symmetric matrices with shape (B, N, N) where each slice along the batch dimension is symmetric. Write comprehensive tests covering: construction of symmetric matrices from coordinate data, `to_dense()` correctly mirroring the stored triangle, symmetric SpMV producing results matching full-storage SpMV, nnz halving compared to full storage, symmetric + symmetric addition preserving the symmetric flag, round-trip from_symmetric then to_dense, 3D batched symmetric matrices, error on to_symmetric for non-symmetric input, and dtype preservation.\n\nNote: Do not run the test suite to verify your implementation. You may run build commands (e.g., `pip install -e .`) to check that the code compiles, but do not execute any pytest commands. The full test suite will be run separately during evaluation.",
"files": [],
"task_type": "feature",
"categories": [
"Format/Semantic extensions"
]
},
{
"instance_id": "bobbyyyan__scorch-feature_cp_decomposition",
"repo_id": "bobbyyyan__scorch",
"repo_url": "https://github.com/bobbyyyan/scorch.git",
"base_commit": "92fb190",
"language": "python",
"setup_commands": [],
"test_command": "/testbed/run_tests.sh",
"test_timeout": 3000,
"refactor_type": "",
"description": "Implement CP (CANDECOMP/PARAFAC) tensor decomposition via Alternating Least Squares (ALS). Add `ops.cp_decomposition(X, rank, max_iter=100, tol=1e-8)` that decomposes an N-dimensional sparse tensor X into a sum of rank-one components. The algorithm alternates over each mode, computing the Matricized Tensor Times Khatri-Rao Product (MTTKRP) using the existing einsum infrastructure, then solving the least-squares update for that mode's factor matrix. Return a `CPTensor` namedtuple (or dataclass) containing a 1D weights vector (lambdas) and a list of factor matrices, one per mode. Support two initialization strategies selectable via an `init` parameter: `'random'` (Gaussian random factors) and `'svd'` (initialize each factor from the leading singular vectors of the mode-n unfolding). Add a `cp_to_tensor(cp)` utility that reconstructs the full dense tensor from the CP representation. Write comprehensive tests covering: exact rank-1 decomposition recovery, 3D and 4D tensor decomposition, convergence within max_iter for known low-rank tensors, reconstruction accuracy (norm of difference between original and reconstructed tensor below tolerance), MTTKRP correctness verified against explicit dense computation, both COO and CSR inputs, the SVD initialization path, and error handling for invalid rank values.\n\nNote: Do not run the test suite to verify your implementation. You may run build commands (e.g., `pip install -e .`) to check that the code compiles, but do not execute any pytest commands. The full test suite will be run separately during evaluation.",
"files": [],
"task_type": "feature",
"categories": [
"API/Linear Algebra/Decompositions"
]
},
{
"instance_id": "bobbyyyan__scorch-feature_nd_advanced_indexing",
"repo_id": "bobbyyyan__scorch",
"repo_url": "https://github.com/bobbyyyan/scorch.git",
"base_commit": "92fb190",
"language": "python",
"setup_commands": [],
"test_command": "/testbed/run_tests.sh",
"test_timeout": 3000,
"refactor_type": "",
"description": "Implement N-dimensional advanced indexing for `STensor.__getitem__` that supports mixed indexing modes across arbitrary dimensions of 3D, 4D, and 5D sparse tensors. The existing `__getitem__` (feature_10) only handles 2D CSR/COO row and column slicing. Extend it to support the following indexing types applied to any dimension of an N-D sparse tensor: (1) integer indexing to select a single hyperplane along a dimension, (2) slice indexing with arbitrary start/stop/step including negative steps, (3) boolean mask indexing where a 1D boolean tensor selects entries along a given dimension, and (4) fancy (tensor) indexing where a 1D integer tensor specifies which indices to gather along a dimension. Support mixed tuples combining these types, e.g. `tensor[2, :, [0,3,5]]` on a 3D sparse tensor. The implementation must correctly filter and remap coordinates, handle negative indices, and preserve the COO storage format. Write comprehensive tests covering: 3D integer indexing along each of 3 dimensions, 4D slice indexing with step > 1 and negative step, 5D boolean mask indexing, fancy indexing with duplicate indices, mixed-mode indexing tuples (e.g. int + slice + fancy), negative indices, out-of-bounds error handling, empty result from boolean mask selecting nothing, and numerical verification against equivalent `torch.Tensor.__getitem__` on dense equivalents.\n\nNote: Do not run the test suite to verify your implementation. You may run build commands (e.g., `pip install -e .`) to check that the code compiles, but do not execute any pytest commands. The full test suite will be run separately during evaluation.",
"files": [],
"task_type": "feature",
"categories": [
"API/Indexing & Mutation/Read indexing"
]
},
{
"instance_id": "bobbyyyan__scorch-feature_clamp_clip_round",
"repo_id": "bobbyyyan__scorch",
"repo_url": "https://github.com/bobbyyyan/scorch.git",
"base_commit": "92fb190",
"language": "python",
"setup_commands": [],
"test_command": "/testbed/run_tests.sh",
"test_timeout": 3000,
"refactor_type": "",
"description": "Implement sparse element-wise clamp, clip, and rounding operations for N-dimensional sparse tensors. The existing unary operations (feature_7) cover abs, neg, relu, sqrt, exp, log, tanh, and sigmoid via CIN. This task adds operations with distinct sparsity-preservation semantics that are not covered: (1) `ops.clamp(input, min=None, max=None)` that clamps stored values to [min, max] range, handling the case where clamping can turn stored zeros into non-zeros (e.g. clamp(min=1)) or non-zeros into zeros (e.g. clamp(max=0) on positive values); (2) `ops.clip` as an alias for clamp; (3) `ops.floor(input)` rounding stored values down to nearest integer; (4) `ops.ceil(input)` rounding stored values up; (5) `ops.round(input, decimals=0)` rounding to specified decimal places; (6) `ops.fmod(input, divisor)` computing element-wise floating-point remainder. All operations must work on N-dimensional sparse tensors (2D through 5D) and preserve the storage format (COO or CSR). After applying operations that may create new zero values (e.g. round on small fractional entries), optionally coalesce to remove explicit zeros. Write comprehensive tests covering: clamp with only min, only max, both min and max, clamp(min=0) equivalence to relu, floor/ceil/round on tensors with fractional values, fmod with scalar and tensor divisors, 3D and 4D tensors, COO and CSR formats, dtype preservation (float32/float64), and verification against PyTorch dense equivalents.\n\nNote: Do not run the test suite to verify your implementation. You may run build commands (e.g., `pip install -e .`) to check that the code compiles, but do not execute any pytest commands. The full test suite will be run separately during evaluation.",
"files": [],
"task_type": "feature",
"categories": [
"API/Element-wise/Unary math"
]
},
{
"instance_id": "bobbyyyan__scorch-feature_einsum_multi_operand",
"repo_id": "bobbyyyan__scorch",
"repo_url": "https://github.com/bobbyyyan/scorch.git",
"base_commit": "92fb190",
"language": "python",
"setup_commands": [],
"test_command": "/testbed/run_tests.sh",
"test_timeout": 3000,
"refactor_type": "",
"description": "Extend the existing sparse einsum to support multi-tensor contractions with 3 or more operands in a single expression. The current einsum implementation only handles pairwise (2-tensor) contractions. Add support for expressions involving 3+ sparse tensors such as `einsum('ijk,jl,km->ilm', A, B, C)`. The implementation must: (1) parse the einsum string to identify all input subscripts and the output subscript; (2) plan a pairwise contraction order using a greedy strategy that minimizes the size of intermediate results (contract the pair with the smallest estimated intermediate first); (3) automatically infer the storage format for intermediate tensors; (4) manage workspace allocation for intermediate results; (5) handle the case where some operands share no indices (outer product chains). Support both explicit output notation ('->') and implicit output (alphabetically sorted free indices). Write comprehensive tests covering: 3-operand contraction (matrix-matrix-matrix chain), 4-operand contraction, a chain where intermediates have higher order than any input, outer product of 3 vectors, mixed sparse-dense operand chains, contraction producing a scalar, 3D tensor contracted with two matrices, verification of contraction order optimality for known cases, and numerical verification against `numpy.einsum` on dense equivalents for 3D, 4D, and 5D tensors.\n\nNote: Do not run the test suite to verify your implementation. You may run build commands (e.g., `pip install -e .`) to check that the code compiles, but do not execute any pytest commands. The full test suite will be run separately during evaluation.",
"files": [],
"task_type": "feature",
"categories": [
"API/Linear Algebra/Einsum"
]
},
{
"instance_id": "bobbyyyan__scorch-feature_qr_decomposition",
"repo_id": "bobbyyyan__scorch",
"repo_url": "https://github.com/bobbyyyan/scorch.git",
"base_commit": "92fb190",
"language": "python",
"setup_commands": [],
"test_command": "/testbed/run_tests.sh",
"test_timeout": 3000,
"refactor_type": "",
"description": "Implement sparse QR decomposition for 2D and batched 3D sparse matrices. Add `ops.qr(input, mode='reduced')` that computes A = Q * R where Q is orthogonal and R is upper triangular. Use a Householder reflection-based approach adapted for sparse storage: apply Householder transformations column by column, tracking fill-in in R while representing Q implicitly as a product of Householder reflectors, then optionally expand Q into explicit sparse or dense form. Support two modes: 'reduced' (thin QR, Q is mxk and R is kxn where k=min(m,n)) and 'complete' (full QR, Q is mxm). For batched 3D tensors of shape (B, M, N), decompose each slice independently. Return a named tuple `QRResult(Q, R)`. Handle rank-deficient matrices gracefully by detecting near-zero pivots with a configurable tolerance. Write comprehensive tests covering: square matrix QR, tall-skinny (m >> n) and short-wide (m << n) matrices, verification that Q is orthogonal (Q^T Q ~= I), verification that R is upper triangular, reconstruction accuracy (||A - QR|| < tol), batched 3D input, rank-deficient matrix, identity matrix, permutation matrix, both COO and CSR input formats, and comparison against `numpy.linalg.qr` on dense equivalents.\n\nNote: Do not run the test suite to verify your implementation. You may run build commands (e.g., `pip install -e .`) to check that the code compiles, but do not execute any pytest commands. The full test suite will be run separately during evaluation.",
"files": [],
"task_type": "feature",
"categories": [
"API/Linear Algebra/Decompositions"
]
},
{
"instance_id": "bobbyyyan__scorch-feature_truncated_svd",
"repo_id": "bobbyyyan__scorch",
"repo_url": "https://github.com/bobbyyyan/scorch.git",
"base_commit": "92fb190",
"language": "python",
"setup_commands": [],
"test_command": "/testbed/run_tests.sh",
"test_timeout": 3000,
"refactor_type": "",
"description": "Implement truncated Singular Value Decomposition (SVD) for sparse matrices using iterative methods that only require sparse matrix-vector products. Add `ops.truncated_svd(input, k, n_iter=5, method='randomized')` that computes the top-k singular triplets (U, S, V) of a sparse matrix without densifying it. Support two methods: (1) 'randomized' - randomized SVD using a random projection, power iteration for subspace refinement, and a final dense SVD of the small projected matrix; (2) 'lanczos' - Lanczos bidiagonalization that builds a Krylov subspace via repeated sparse matvec and matmul-transpose operations (reusing existing `ops.matmul` infrastructure). Return a named tuple `SVDResult(U, S, V)` where U is mxk, S is k, and V is kxn (or nxk transposed). For batched 3D sparse tensors of shape (B, M, N), decompose each slice independently. Write comprehensive tests covering: rank-1 matrix exact recovery, rank-k matrix with known singular values, tall and wide matrices, reconstruction accuracy (||A - U diag(S) V^T|| for top-k), orthogonality of U and V, singular values in descending order, batched input, convergence with increasing n_iter, comparison with `numpy.linalg.svd` top-k values on dense equivalents, both COO and CSR formats, and error on k > min(m,n).\n\nNote: Do not run the test suite to verify your implementation. You may run build commands (e.g., `pip install -e .`) to check that the code compiles, but do not execute any pytest commands. The full test suite will be run separately during evaluation.",
"files": [],
"task_type": "feature",
"categories": [
"API/Linear Algebra/Decompositions"
]
},
{
"instance_id": "bobbyyyan__scorch-feature_tucker_decomposition",
"repo_id": "bobbyyyan__scorch",
"repo_url": "https://github.com/bobbyyyan/scorch.git",
"base_commit": "92fb190",
"language": "python",
"setup_commands": [],
"test_command": "/testbed/run_tests.sh",
"test_timeout": 3000,
"refactor_type": "",
"description": "Implement Tucker decomposition for N-dimensional sparse tensors using Higher-Order Orthogonal Iteration (HOOI). Add `ops.tucker_decomposition(X, ranks, max_iter=100, tol=1e-8, init='random')` that decomposes an N-dimensional sparse tensor X into a core tensor G and a list of factor matrices [U_1, ..., U_N] such that X ~= G *_1 U_1 *_2 U_2 ... *_N U_N. The `ranks` parameter is a tuple specifying the desired rank for each mode. The HOOI algorithm alternates: for each mode n, compute the mode-n product of X with all factor matrices except the n-th, then set U_n to the leading left singular vectors of the mode-n unfolding of this result. Use existing mode-n product (feature_89) and unfold operations. Support 'random' and 'hosvd' initialization. Add `tucker_to_tensor(core, factors)` to reconstruct the full tensor. Write comprehensive tests covering: exact Tucker-rank tensor recovery for 3D and 4D tensors, convergence within max_iter, reconstruction accuracy (Frobenius norm of difference below tolerance), core tensor dimensions matching specified ranks, factor matrix orthogonality, HOSVD initialization path, both COO and CSR inputs, comparison with dense Tucker via explicit mode-n products, and error handling for invalid rank tuples.\n\nNote: Do not run the test suite to verify your implementation. You may run build commands (e.g., `pip install -e .`) to check that the code compiles, but do not execute any pytest commands. The full test suite will be run separately during evaluation.",
"files": [],
"task_type": "feature",
"categories": [
"API/Linear Algebra/Decompositions"
]
},
{
"instance_id": "bobbyyyan__scorch-feature_khatri_rao",
"repo_id": "bobbyyyan__scorch",
"repo_url": "https://github.com/bobbyyyan/scorch.git",
"base_commit": "92fb190",
"language": "python",
"setup_commands": [],
"test_command": "/testbed/run_tests.sh",
"test_timeout": 3000,
"refactor_type": "",
"description": "Implement the Khatri-Rao product (column-wise Kronecker product) for sparse matrices. Add `ops.khatri_rao(matrices)` that takes a list of 2D sparse matrices (or a mix of sparse and dense) all having the same number of columns R, with shapes (I_1, R), (I_2, R), ..., (I_N, R), and returns the Khatri-Rao product of shape (I_1 * I_2 * ... * I_N, R). For each column r, the result column is the Kronecker product of all input columns: col_r = kron(A_1[:,r], A_2[:,r], ..., A_N[:,r]). This is distinct from the full Kronecker product (feature_18) which operates on entire matrices. The implementation must handle sparse inputs efficiently by only computing products of non-zero entries, producing a sparse result. Also add `ops.khatri_rao_t(matrices)` for the transposed Khatri-Rao product where inputs are (R, I_k) shaped. Write comprehensive tests covering: Khatri-Rao of two matrices, Khatri-Rao of three and four matrices, identity matrices producing block-diagonal structure, single-column matrices reducing to Kronecker product of vectors, relationship to Kronecker product for R=1, mixed sparse-dense inputs, very sparse inputs (nnz << size), result shape verification, numerical verification against explicit column-wise Kronecker computation, and both COO and CSR input formats.\n\nNote: Do not run the test suite to verify your implementation. You may run build commands (e.g., `pip install -e .`) to check that the code compiles, but do not execute any pytest commands. The full test suite will be run separately during evaluation.",
"files": [],
"task_type": "feature",
"categories": [
"API/Linear Algebra/Tensor products"
]
},
{
"instance_id": "bobbyyyan__scorch-feature_elementwise_min_max",
"repo_id": "bobbyyyan__scorch",
"repo_url": "https://github.com/bobbyyyan/scorch.git",
"base_commit": "92fb190",
"language": "python",
"setup_commands": [],
"test_command": "/testbed/run_tests.sh",
"test_timeout": 3000,
"refactor_type": "",
"description": "Implement sparse element-wise binary `minimum` and `maximum` operations for N-dimensional sparse tensors. Add `ops.minimum(input, other)` and `ops.maximum(input, other)` that compute the element-wise min and max of two sparse tensors. These are distinct from `ops.min` and `ops.max` (feature_40), which are reduction operations that compute the minimum/maximum across a dimension with argmin/argmax indices. The binary min/max operations require union semantics: if a position has a stored value in one tensor but is implicitly zero in the other, the result depends on comparing the stored value with zero (e.g., `minimum(5, 0) = 0` means the position is zero in the result; `minimum(-3, 0) = -3` means it must be stored). The implementation must: (1) compute the union of sparsity patterns from both inputs; (2) compare values at matched positions and against implicit zeros at unmatched positions; (3) optionally drop explicit zeros from the result. Support broadcasting for tensors with compatible shapes. Work for 2D through 5D sparse tensors. Write comprehensive tests covering: element-wise min and max of same-shape 2D tensors, 3D and 4D tensors, tensors with disjoint sparsity patterns, tensors with overlapping patterns, negative values (where implicit zero is the max), broadcasting (e.g. (3,4) with (1,4)), scalar second argument, both COO and CSR formats, and verification against `torch.minimum`/`torch.maximum` on dense equivalents.\n\nNote: Do not run the test suite to verify your implementation. You may run build commands (e.g., `pip install -e .`) to check that the code compiles, but do not execute any pytest commands. The full test suite will be run separately during evaluation.",
"files": [],
"task_type": "feature",
"categories": [
"API/Element-wise/Comparison & predicate"
]
},
{
"instance_id": "bobbyyyan__scorch-feature_equality_compare",
"repo_id": "bobbyyyan__scorch",
"repo_url": "https://github.com/bobbyyyan/scorch.git",
"base_commit": "92fb190",
"language": "python",
"setup_commands": [],
"test_command": "/testbed/run_tests.sh",
"test_timeout": 3000,
"refactor_type": "",
"description": "Implement sparse tensor equality and approximate comparison operations for N-dimensional sparse tensors. Add the following methods and functions: (1) `STensor.__eq__(other)` returning a sparse boolean tensor with True at positions where both tensors have equal values (including positions where both are implicitly zero); (2) `ops.equal(input, other)` returning a single boolean True if two sparse tensors are identical in shape, sparsity pattern, and values; (3) `ops.allclose(input, other, rtol=1e-5, atol=1e-8)` returning True if all corresponding elements satisfy |a - b| <= atol + rtol * |b|, treating implicit zeros correctly; (4) `ops.isclose(input, other, rtol=1e-5, atol=1e-8)` returning a sparse boolean tensor indicating element-wise approximate equality. The implementation must handle the case where two tensors have different sparsity patterns but represent the same logical tensor (e.g., one has explicit zeros stored). Comparison must work across COO and CSR formats transparently. Write comprehensive tests covering: equal tensors with same sparsity pattern, equal tensors with different sparsity patterns (one has explicit zeros), unequal tensors, allclose with values within and outside tolerance, isclose producing correct boolean sparse tensor, 3D and 4D tensors, comparison of COO tensor with CSR tensor, shape mismatch returning False, dtype mismatch handling, comparison with scalar, and empty sparse tensors.\n\nNote: Do not run the test suite to verify your implementation. You may run build commands (e.g., `pip install -e .`) to check that the code compiles, but do not execute any pytest commands. The full test suite will be run separately during evaluation.",
"files": [],
"task_type": "feature",
"categories": [
"API/Element-wise/Comparison & predicate"
]
},
{
"instance_id": "bobbyyyan__scorch-feature_pad_nd",
"repo_id": "bobbyyyan__scorch",
"repo_url": "https://github.com/bobbyyyan/scorch.git",
"base_commit": "92fb190",
"language": "python",
"setup_commands": [],
"test_command": "/testbed/run_tests.sh",
"test_timeout": 3000,
"refactor_type": "",
"description": "Implement N-dimensional sparse tensor padding that adds entries to the coordinate structure without dense materialization. Add `ops.pad(input, pad_widths, fill_value=0)` where `pad_widths` is a sequence of (before, after) tuples, one per dimension, specifying how many zeros (or fill_value entries) to add on each side of each dimension. For the default fill_value=0, padding simply expands the tensor shape and shifts existing coordinates - no new stored entries are needed since the padded regions are implicitly zero. For non-zero fill_value, the padding region's entries must be explicitly stored in the sparse structure. Also add `ops.unpad(input, pad_widths)` to remove padding by filtering out coordinates that fall in the padded regions and adjusting coordinates and shape accordingly. The implementation must work for 2D through 5D tensors and both COO and CSR formats. Write comprehensive tests covering: symmetric padding (same before/after) of a 2D matrix, asymmetric padding, padding only along one dimension, zero-fill padding checking that nnz is unchanged, non-zero fill padding checking that nnz increases correctly, unpad reversing a previous pad (round-trip), 3D and 4D tensor padding, padding with negative values (error), padding an already padded tensor, shape correctness after padding, and numerical verification against `torch.nn.functional.pad` on dense equivalents.\n\nNote: Do not run the test suite to verify your implementation. You may run build commands (e.g., `pip install -e .`) to check that the code compiles, but do not execute any pytest commands. The full test suite will be run separately during evaluation.",
"files": [],
"task_type": "feature",
"categories": [
"API/Shape & Layout/Concat & pad"
]
},
{
"instance_id": "bobbyyyan__scorch-feature_sparsity_pattern_ops",
"repo_id": "bobbyyyan__scorch",
"repo_url": "https://github.com/bobbyyyan/scorch.git",
"base_commit": "92fb190",
"language": "python",
"setup_commands": [],
"test_command": "/testbed/run_tests.sh",
"test_timeout": 3000,
"refactor_type": "",
"description": "Implement sparsity pattern operations that extract, compare, and manipulate the structural (boolean) sparsity patterns of sparse tensors, independent of their values. Add the following functions: (1) `ops.sparsity_pattern(input)` returning a boolean sparse tensor (values all True) with the same coordinate structure as the input; (2) `ops.pattern_union(a, b)` returning a boolean sparse tensor whose non-zero positions are the set union of the patterns of a and b; (3) `ops.pattern_intersection(a, b)` returning a boolean sparse tensor whose non-zero positions are the set intersection; (4) `ops.pattern_difference(a, b)` returning positions in a but not in b; (5) `ops.pattern_symmetric_difference(a, b)` returning positions in exactly one of a or b; (6) `ops.pattern_equal(a, b)` returning True if both tensors have identical sparsity patterns regardless of values. All operations must work on N-dimensional sparse tensors (2D through 5D) and handle inputs in both COO and CSR formats. Write comprehensive tests covering: extracting pattern from a tensor and verifying all values are True/1, union of disjoint patterns, union of overlapping patterns, intersection of partially overlapping patterns, intersection of disjoint patterns (empty result), difference and symmetric difference, pattern_equal for matching and non-matching patterns, 3D and 4D tensor patterns, mixed COO/CSR inputs, nnz counting after each operation, and verification that pattern operations ignore values entirely.\n\nNote: Do not run the test suite to verify your implementation. You may run build commands (e.g., `pip install -e .`) to check that the code compiles, but do not execute any pytest commands. The full test suite will be run separately during evaluation.",
"files": [],
"task_type": "feature",
"categories": [
"API/Linear Algebra/Pattern algebra"
]
},
{
"instance_id": "bobbyyyan__scorch-feature_segment_reduction",
"repo_id": "bobbyyyan__scorch",
"repo_url": "https://github.com/bobbyyyan/scorch.git",
"base_commit": "92fb190",
"language": "python",
"setup_commands": [],
"test_command": "/testbed/run_tests.sh",
"test_timeout": 3000,
"refactor_type": "",
"description": "Implement sparse segment reduction operations for GNN-style message passing workloads. Add `ops.segment_coo(src, index, dim_size=None, reduce='sum')` that aggregates values from a sparse tensor `src` into segments defined by a 1D integer `index` tensor, where `index[i]` specifies which segment the i-th stored value belongs to. Support reduction types: 'sum', 'mean', 'min', 'max'. Also add `ops.segment_csr(src, indptr, reduce='sum')` that uses a CSR-style index pointer array where segment j contains entries from `indptr[j]` to `indptr[j+1]`. These operations are distinct from gather/scatter (feature_46), which use CIN-level computed indexing for individual element access. Segment reductions aggregate contiguous or indexed groups of entries, which is the core primitive for GNN neighbor aggregation. Support N-dimensional value tensors where segmentation is along the first dimension. Write comprehensive tests covering: segment_coo with sum reduction on 1D and 2D value tensors, mean/min/max reductions, unsorted indices, duplicate indices, empty segments (some segments receive no values), segment_csr equivalence with segment_coo for sorted indices, 3D value tensors, dim_size larger than max(index)+1, single-element segments, large number of segments, and numerical verification against a naive Python loop implementation.\n\nNote: Do not run the test suite to verify your implementation. You may run build commands (e.g., `pip install -e .`) to check that the code compiles, but do not execute any pytest commands. The full test suite will be run separately during evaluation.",
"files": [],
"task_type": "feature",
"categories": [
"API/Reductions & Scans/Scans & segment"
]
},
{
"instance_id": "bobbyyyan__scorch-feature_apply_callable",
"repo_id": "bobbyyyan__scorch",
"repo_url": "https://github.com/bobbyyyan/scorch.git",
"base_commit": "92fb190",
"language": "python",
"setup_commands": [],
"test_command": "/testbed/run_tests.sh",
"test_timeout": 3000,
"refactor_type": "",
"description": "Implement a sparse element-wise apply/map interface that applies user-defined callable functions to the stored non-zero entries of N-dimensional sparse tensors. This is distinct from feature_7 (which adds specific hardcoded unary operations to CIN). Add `ops.apply(input, func)` that takes an STensor and a Python callable, applies `func` to each stored value (or vectorized over the values tensor), and returns a new STensor with the transformed values and the same sparsity pattern. Also add `STensor.apply(func)` as an instance method. The callable receives the values tensor and should return a tensor of the same shape. After applying the function, optionally coalesce to remove any new explicit zeros. Support an `apply_with_coords(input, func)` variant where the callable receives both the coordinate tuples and values, enabling coordinate-dependent transformations (e.g., zeroing out entries on the diagonal). All operations must work for 2D through 5D sparse tensors in both COO and CSR formats. Write comprehensive tests covering: simple lambda (e.g. x * 2), math functions (torch.sin, torch.exp), function that introduces zeros (x - x), apply_with_coords zeroing diagonal entries, apply_with_coords implementing a distance-based filter, 3D and 4D tensors, dtype-changing function (float to bool via x > 0), identity function preserving values exactly, COO and CSR formats, and chaining multiple apply calls.\n\nNote: Do not run the test suite to verify your implementation. You may run build commands (e.g., `pip install -e .`) to check that the code compiles, but do not execute any pytest commands. The full test suite will be run separately during evaluation.",
"files": [],
"task_type": "feature",
"categories": [
"API/Element-wise/Unary math"
]
},
{
"instance_id": "bobbyyyan__scorch-feature_sort_entries",
"repo_id": "bobbyyyan__scorch",
"repo_url": "https://github.com/bobbyyyan/scorch.git",
"base_commit": "92fb190",
"language": "python",
"setup_commands": [],
"test_command": "/testbed/run_tests.sh",
"test_timeout": 3000,
"refactor_type": "",
"description": "Implement sparse tensor sorting operations that sort stored entries by value or by coordinate indices along specified dimensions for N-dimensional sparse tensors. Add the following functions: (1) `ops.sort_values(input, descending=False)` that returns a new STensor with stored entries sorted by their values (and coordinates reordered accordingly), along with a permutation index tensor; (2) `ops.sort_indices(input, dim=None)` that sorts stored entries by their coordinate indices - if dim is None, sort lexicographically by all coordinates (coalesce order); if dim is specified, sort by coordinates along that dimension with ties broken by subsequent dimensions; (3) `ops.argsort_values(input, descending=False)` returning only the permutation indices without constructing a new tensor; (4) `ops.topk(input, k, largest=True)` returning the top-k stored entries by value along with their coordinates. All operations must work for 2D through 5D sparse tensors in both COO and CSR formats. Write comprehensive tests covering: sort_values ascending and descending on 2D tensors, sort_indices along each dimension of a 3D tensor, lexicographic sort_indices matching coalesce order, argsort correctness, topk with k < nnz, topk with k = nnz, topk on 4D tensor, stability of sort (equal values preserve original order), permutation index correctness (applying permutation reproduces sorted result), both COO and CSR formats, and comparison with manual sorting of the values tensor.\n\nNote: Do not run the test suite to verify your implementation. You may run build commands (e.g., `pip install -e .`) to check that the code compiles, but do not execute any pytest commands. The full test suite will be run separately during evaluation.",
"files": [],
"task_type": "feature",
"categories": [
"API/Indexing & Mutation/Canonicalization"
]
},
{
"instance_id": "bobbyyyan__scorch-feature_sparse_embedding",
"repo_id": "bobbyyyan__scorch",
"repo_url": "https://github.com/bobbyyyan/scorch.git",
"base_commit": "92fb190",
"language": "python",
"setup_commands": [],
"test_command": "/testbed/run_tests.sh",
"test_timeout": 3000,
"refactor_type": "",
"description": "Implement a sparse embedding layer with sparse gradient support for NLP and recommendation system workloads. Add `SparseEmbedding(num_embeddings, embedding_dim, sparse=True)` as a module class that maintains a dense weight matrix of shape (num_embeddings, embedding_dim) but produces sparse gradients during backpropagation. The forward pass takes a 1D or 2D integer index tensor and returns the corresponding embedding vectors (dense output). The backward pass must produce a sparse gradient for the weight matrix where only the rows corresponding to the looked-up indices have non-zero gradient entries, represented as an STensor in COO format. Support padding_idx (an index whose embedding is fixed at zero and excluded from gradient updates), max_norm (renormalize embeddings whose L2 norm exceeds this value), and scale_grad_by_freq (scale gradients by the inverse of the frequency of the index in the mini-batch). Integrate with scorch's autograd by registering a custom backward function. Write comprehensive tests covering: forward pass correctness against `torch.nn.Embedding`, sparse gradient shape and sparsity pattern, gradient values matching dense embedding gradient at non-zero rows, padding_idx producing zero embedding and zero gradient, max_norm clipping, scale_grad_by_freq scaling, 2D index input (batch of sequences), gradient accumulation across multiple forward passes, embedding update via sparse SGD step, both float32 and float64 dtypes, and large num_embeddings with small batch verifying gradient sparsity.\n\nNote: Do not run the test suite to verify your implementation. You may run build commands (e.g., `pip install -e .`) to check that the code compiles, but do not execute any pytest commands. The full test suite will be run separately during evaluation.",
"files": [],
"task_type": "feature",
"categories": [
"API/ML Primitives/Attention & embedding"
]
},
{
"instance_id": "bobbyyyan__scorch-feature_cin_autodiff",
"repo_id": "bobbyyyan__scorch",
"repo_url": "https://github.com/bobbyyyan/scorch.git",
"base_commit": "92fb190",
"language": "python",
"setup_commands": [],
"test_command": "/testbed/run_tests.sh",
"test_timeout": 3000,
"refactor_type": "",
"description": "Add a compiler-side source-to-source reverse-mode automatic differentiation pass that operates on the CIN IR. This is distinct from a `torch.autograd.Function` wrapper at the Python level - the goal here is that, given a forward CIN statement, the compiler emits a *new* CIN statement that computes the partial derivative with respect to a chosen input tensor variable, and then lowers it through the existing CIN->LLIR->C++ pipeline. (1) In `src/scorch/compiler/cin.py`, add a `differentiate(cin: IndexStmt, wrt: TensorVar) -> IndexStmt` function that walks the CIN tree and produces a new CIN tree representing the partial derivative with respect to `wrt`. Support all four `Operation` enum values (ADD, SUB, MUL, DIV), `BinaryOp`, `TensorAccess`, `WorkspaceAccess`, `TensorAssign`, `ForAll`, and `Where`. The chain rule for a contraction such as `C[i,j,k] = A[i,j] * B[j,k]` must produce `dA[i,j] += dC[i,j,k] * B[j,k]` and `dB[j,k] += dC[i,j,k] * A[i,j]` - the index-variable structure of the gradient CIN follows directly from the forward CIN, with the gradient tensor variable swapped into the LHS and the partial expression on the RHS. (2) In `src/scorch/ops.py`, add a generic helper `ops.autograd_op(forward_cin, inputs, output)` that wraps any CIN computation in a `torch.autograd.Function` subclass. Forward lowers and runs `forward_cin`; backward calls `differentiate` once per input, lowers each gradient CIN, runs it, and returns the gradients in the order PyTorch expects. (3) Wire `STensor.__add__`, `__mul__`, `matmul`, and `einsum` so that when any operand has `requires_grad=True` (the field already exists in `stensor.py`), the op routes through `autograd_op` and the resulting `STensor` participates in the surrounding PyTorch autograd graph. (4) The differentiation pass must be rank-agnostic - it relies only on the CIN's index-variable structure and must not specialize on 2D operands. Produce gradient CIN that lowers correctly for inputs of rank 1, 2, 3, and higher. Document the gradient-sparsity rule: if the forward output is sparse with pattern P_out, the gradient w.r.t. an input whose forward access reads positions Q can have a denser sparsity pattern; emit either dense or COO gradients depending on a `gradient_format` argument. Write tests that compare gradients against `torch.autograd.gradcheck` on equivalent dense reference tensors for at least one 1D, one 2D, and one 3D test case per supported op.\n\nNote: Do not run the test suite to verify your implementation. You may run build commands (e.g., `pip install -e .`) to check that the code compiles, but do not execute any pytest commands. The full test suite will be run separately during evaluation.",
"files": [],
"task_type": "feature",
"categories": [
"API/ML Primitives/Autograd",
"IR/CIN nodes"
]
},
{
"instance_id": "bobbyyyan__scorch-feature_cin_ifthenelse",
"repo_id": "bobbyyyan__scorch",
"repo_url": "https://github.com/bobbyyyan/scorch.git",
"base_commit": "92fb190",
"language": "python",
"setup_commands": [],
"test_command": "/testbed/run_tests.sh",
"test_timeout": 3000,
"refactor_type": "",
"description": "Add a CIN-level `IfThenElse` IR node together with a comparison expression family for conditional sparse computation. Today CIN has no conditional construct - control flow is exclusively spatial (via `ForAll`) or producer-consumer (via `Where`). The LLIR has `IfThenElse` (`llir.py:510`) but it is only synthesized inside the lowerer for runtime guards; user-authored CIN cannot express conditional bodies. (1) In `src/scorch/compiler/cin.py`, add an `IfThenElse(cond, then_stmt, else_stmt)` IndexStmt subclass and an `IndexCondition` expression type supporting comparisons (`<`, `<=`, `==`, `>=`, `>`, `!=`) between `IndexVar`/`IndexVarExpr`/`IndexConstant` operands and logical combinations (`And`, `Or`, `Not`). Add an `IndexConstant(value)` IR node so literal scalars can appear in conditions. (2) In `src/scorch/compiler/cin_lowerer.py`, add `lower_IfThenElse` and `lower_IndexCondition` methods that emit the existing `llir.IfThenElse` and `llir.BinOp` constructs. The lowered branches must respect the surrounding iterator state - index variables defined by enclosing `ForAll`s remain in scope and the iteration over sparse operands inside the branches must continue to obey the lattice. (3) Update `IterationLattice` in `src/scorch/compiler/iter_lattice.py` so that the iteration space of an `IfThenElse` is the *union* of the spaces required by both branches, not just the union of their accessed tensors - a sparse iterator referenced only inside one branch must still iterate over its full extent, but the body executes only when the condition holds. (4) Add a high-level helper `ops.where(cond_stensor, a_stensor, b_stensor)` in `src/scorch/ops.py` that builds a CIN with `IfThenElse` for arbitrary-rank inputs (do not restrict to 2D - the index-variable list must match the rank of the operands; broadcasting follows the same rules as the existing elementwise binary ops). Also support a structural form `ops.where(cond_index_expr, a, b)` where `cond_index_expr` is a Python expression on `STensor.index_vars` (e.g. `i > j` for upper-triangular masking). (5) Write tests covering: branching on an index expression (e.g. `i > j` for upper-triangular masking) for ranks 1, 2, and 3 (the 2D case is illustrative - both 1D and 3D variants must exist); branching on a sparse boolean tensor; verifying the lowered C++ contains the expected `if (...)` block; and an end-to-end correctness comparison of `ops.where(c, a, b)` against `torch.where` on densified equivalents.\n\nNote: Do not run the test suite to verify your implementation. You may run build commands (e.g., `pip install -e .`) to check that the code compiles, but do not execute any pytest commands. The full test suite will be run separately during evaluation.",
"files": [],
"task_type": "feature",
"categories": [
"IR/CIN nodes"
]
},
{
"instance_id": "bobbyyyan__scorch-feature_rle_level",
"repo_id": "bobbyyyan__scorch",
"repo_url": "https://github.com/bobbyyyan/scorch.git",
"base_commit": "92fb190",
"language": "python",
"setup_commands": [],
"test_command": "/testbed/run_tests.sh",
"test_timeout": 3000,
"refactor_type": "",
"description": "Add `LevelType.RLE` (run-length encoding) to the format system and the full compiler pipeline. RLE is the right format for sparsity patterns where consecutive entries share a coordinate (segmented graphs, time-series with aligned events, certain post-permutation matrices). It compresses runs of identical coordinates into `(value, run_length)` pairs, which neither `COMPRESSED` (CSR-like) nor `COORDINATE` (COO-like) does today. (1) In `src/scorch/format.py`, extend `LevelType` with `RLE = \"rle\"` and update `_STR_TO_LEVEL_TYPE` and `_parse_level_type`. Define the storage layout: two parallel arrays `crd_values[k]` (the distinct coordinates, monotone non-decreasing) and `crd_runs[k]` (the run length of each coordinate). Implement `LevelPack.get_arrays()` for RLE so runtime code can read out the underlying torch tensors. (2) In `src/scorch/compiler/iterator.py`, add an RLE branch to `ModeIterator.get_init_stmts()` that emits initialization for both arrays plus a per-iteration counter that decrements through the current run before advancing to the next run. (3) In `src/scorch/compiler/iter_lattice.py`, extend `LatticePoint`'s advance logic: an RLE iterator at coordinate `c` represents `crd_runs[k]` consecutive logical positions, all at coordinate `c`. Co-iteration with another sparse format must skip the entire run when the other tensor's coordinate moves past `c`, and must align all logical positions within a run when the other format is DENSE. (4) In `src/scorch/compiler/cin_lowerer.py`, add coordinate-decode and run-advance LLIR emission so that index variables bound to an RLE level take the value of `crd_values[k]` for every logical position within the run. (5) In `src/scorch/stensor.py`, add `STensor.from_rle(crd_values, crd_runs, values, ...)` and `STensor.to_rle(dim)` that converts an existing format to RLE along the specified mode. The format must work for any tensor rank - the RLE level can sit at any depth in the level pack, and must compose correctly with DENSE/COMPRESSED/COORDINATE levels both above and below it. Write tests covering: RLE-only 1D; RLE-inner 2D (CSR-style outer with RLE inner) used as an illustrative 2D case; RLE-inner 3D; SpMV against the same tensor stored as COMPRESSED (results must agree); element-wise add across two RLE-inner tensors with different run boundaries.\n\nNote: Do not run the test suite to verify your implementation. You may run build commands (e.g., `pip install -e .`) to check that the code compiles, but do not execute any pytest commands. The full test suite will be run separately during evaluation.",
"files": [],
"task_type": "feature",
"categories": [
"Format/Compressed-style levels"
]
},
{
"instance_id": "bobbyyyan__scorch-feature_morton_level",
"repo_id": "bobbyyyan__scorch",
"repo_url": "https://github.com/bobbyyyan/scorch.git",
"base_commit": "92fb190",
"language": "python",
"setup_commands": [],
"test_command": "/testbed/run_tests.sh",
"test_timeout": 3000,
"refactor_type": "",
"description": "Add `LevelType.MORTON` to the format system. A Morton level encodes multiple logical dimensions into a single linear coordinate using a Morton (Z-order) bit-interleaving of those dimensions, producing a space-filling-curve traversal that improves cache locality across multi-dimensional sparse access patterns. The new level type is a structural compression of multiple consecutive logical dimensions into one stored level. (1) In `src/scorch/format.py`, extend `LevelType` with `MORTON = \"z\"`, update parsing, and add a `MortonLevelFormat` subclass of `LevelFormat` recording which logical dimensions it interleaves and their per-dimension bit-widths. The pack stores a single `morton_codes[k]` array (sorted), implicitly encoding all the interleaved dimensions. (2) Add `morton_encode(coords: List[int], bit_widths: List[int]) -> int` and `morton_decode(code: int, bit_widths: List[int]) -> List[int]` Python helpers (used during storage construction) and inline-C++ equivalents for codegen. Generate the codegen helpers as inline functions in `csrc/header.h`. The encoder must support arbitrary numbers of interleaved dimensions, not just two. (3) In `src/scorch/compiler/iterator.py`, add MORTON iteration logic: for each Morton code, emit a decode step that recovers the per-dimension coordinates for use in the body, binding each interleaved index variable to its decoded value. (4) In `src/scorch/compiler/cin_lowerer.py`, ensure that any consumer level above a MORTON level reads decoded coordinates (not the raw code) for index-variable bindings, and that co-iteration with another sparse level whose coordinate space matches one of the Morton-interleaved dimensions remains correct. (5) In `src/scorch/stensor.py`, add `STensor.to_morton(dims: List[int])` that compacts the listed adjacent levels into a single MORTON level, and `STensor.from_morton_dense(dense, dims)` for direct construction from a dense tensor. The encoding/decoding must work for arbitrary k >= 2 interleaved dimensions; do not specialize the implementation on k=2. Write tests covering: a 2D matrix stored Morton vs CSR with the same nnz running the same SpMM (verify equal results - 2D is illustrative); a 3D tensor stored with a Morton level over two inner modes and a DENSE outer mode; a 4D test case where Morton interleaves dims 1, 2, 3 and dim 0 is DENSE; round-trip equality of `from_morton_dense` followed by `to_dense`.\n\nNote: Do not run the test suite to verify your implementation. You may run build commands (e.g., `pip install -e .`) to check that the code compiles, but do not execute any pytest commands. The full test suite will be run separately during evaluation.",
"files": [],
"task_type": "feature",
"categories": [
"Format/Compressed-style levels"
]
},
{
"instance_id": "bobbyyyan__scorch-feature_ragged_level",
"repo_id": "bobbyyyan__scorch",
"repo_url": "https://github.com/bobbyyyan/scorch.git",
"base_commit": "92fb190",
"language": "python",
"setup_commands": [],
"test_command": "/testbed/run_tests.sh",
"test_timeout": 3000,
"refactor_type": "",
"description": "Add `LevelType.RAGGED` to the format system to natively represent jagged sublist structures (one variable-length list per parent coordinate) without padding to a maximum length. RAGGED differs from `COMPRESSED` in two ways: (a) the offset array points into a *value* array directly rather than into a coordinate array - there are no per-element coordinates because position within the sublist *is* the coordinate - and (b) the inner dimension is logically unbounded (no fixed shape constraint along the ragged mode). This is the right format for variable-length sequence batches, neural-network sparse activations, and any tensor whose sublists genuinely have no upper bound. (1) In `src/scorch/format.py`, extend `LevelType` with `RAGGED = \"r\"`, update `_STR_TO_LEVEL_TYPE` and `_parse_level_type`, and add a `RaggedLevelFormat` subclass capturing the parent-relative `offsets` array. Implement `LevelPack.get_arrays()` for the RAGGED case. (2) In `src/scorch/compiler/iterator.py`, extend `ModeIterator` with a RAGGED branch: the iterator walks `[offsets[parent], offsets[parent+1])`, and the index variable for this level is the *intra-sublist position* (not a stored coordinate). (3) Update `IterationLattice` in `src/scorch/compiler/iter_lattice.py` so that a RAGGED level participates in co-iteration only with DENSE or other RAGGED levels at the same depth (RAGGED + COMPRESSED would require coordinate alignment which is not defined for RAGGED) - emit a clear lowering error when an unsupported combination appears. (4) In `src/scorch/compiler/cin_lowerer.py`, generate level-array initialization and offset-driven loop headers for RAGGED. (5) In `src/scorch/stensor.py`, add `STensor.from_ragged(offsets, values, format)`, `STensor.ragged_lengths(dim)`, and `STensor.from_nested_list(list_of_lists, ...)` for ergonomic construction. The level must be usable at any depth of any rank - the RAGGED level can sit at any depth, with an arbitrary number of DENSE/COMPRESSED levels above it. Write tests covering: a 2D ragged (a list of variable-length 1D vectors) used as the illustrative 2D case; a 3D ragged-inner (a batch of ragged matrices); a co-iteration case where a 2D RAGGED is added element-wise to a 2D DENSE tensor of compatible padded shape; and an attempted RAGGED + COMPRESSED operation that must raise a clear error rather than silently producing wrong results.\n\nNote: Do not run the test suite to verify your implementation. You may run build commands (e.g., `pip install -e .`) to check that the code compiles, but do not execute any pytest commands. The full test suite will be run separately during evaluation.",
"files": [],
"task_type": "feature",
"categories": [
"Format/Hierarchical & multi-d"
]
},
{
"instance_id": "bobbyyyan__scorch-feature_packed_coords_bitwidth",
"repo_id": "bobbyyyan__scorch",
"repo_url": "https://github.com/bobbyyyan/scorch.git",
"base_commit": "92fb190",
"language": "python",
"setup_commands": [],
"test_command": "/testbed/run_tests.sh",
"test_timeout": 3000,
"refactor_type": "",
"description": "Implement variable-bit-width packed coordinate arrays for sparse levels. The `_bit_width: Optional[int]` field on `LevelFormat` already exists in `src/scorch/format.py:45` but is unused throughout the storage and codegen pipeline; wire it through end-to-end so that a sparse level whose dimension fits in 8 bits uses a `uint8_t` coordinate array rather than the current uniform `torch.int` (32-bit). (1) In `src/scorch/format.py`, add `LevelFormat.choose_bit_width(dim_size: int) -> int` that picks the smallest of {8, 16, 32, 64} sufficient to index the level (e.g. dim_size <= 256 -> 8, <= 65536 -> 16). When constructing a `Format`, default `_bit_width` per level to the chosen width if not explicitly given. Add a public `LevelFormat.bit_width` read-only property. (2) In `src/scorch/storage.py`, update `TensorIndex` so per-level coordinate arrays are allocated with the dtype dictated by `_bit_width` rather than uniformly `torch.int`. Update the `to_dense`/`to_sparse` round-trip helpers and any density-based factory methods to respect the per-level dtype. (3) In `src/scorch/compiler/cin_lowerer.py`, propagate the per-level coordinate dtype through lowering - emit `uint8_t*` / `uint16_t*` typed kernel parameters and emit explicit casts to `int64_t` wherever a coordinate participates in arithmetic that may overflow (multiplications by dimension strides; offsets into other tensors). Update `CINLowerer.get_level_arrays` (`cin_lowerer.py:478`) and the parameter-type calculation in the kernel signature emitter. (4) In `src/scorch/compiler/codegen.py`, ensure the new narrow integer types are emitted correctly from the `DataType` enum (extend `from_dtype` / add `uint8_t`, `uint16_t` cases if missing). (5) The implementation must support tensors of any rank and any combination of per-level bit widths - different levels of the same tensor may carry different widths. Make sure the kernel-cache key includes the per-level bit widths so a kernel compiled for `uint16` coordinates is not silently reused for a `uint32` operand. Write tests covering: a 3D tensor with three levels of differing bit widths; a SpMM kernel where one operand has uint8 and the other uint16 coordinates (results must agree with the uniform-int32 baseline); round-trip equality between the narrow-width and uniform-width paths on 1D, 2D (illustrative), and 4D test tensors; and assertions on the emitted C++ that the parameter types are the narrow types.\n\nNote: Do not run the test suite to verify your implementation. You may run build commands (e.g., `pip install -e .`) to check that the code compiles, but do not execute any pytest commands. The full test suite will be run separately during evaluation.",
"files": [],
"task_type": "feature",
"categories": [
"Format/Compressed-style levels"
]
},
{
"instance_id": "bobbyyyan__scorch-feature_cache_hierarchy_tiling",
"repo_id": "bobbyyyan__scorch",
"repo_url": "https://github.com/bobbyyyan/scorch.git",
"base_commit": "92fb190",
"language": "python",
"setup_commands": [],
"test_command": "/testbed/run_tests.sh",
"test_timeout": 3000,
"refactor_type": "",
"description": "Add a multi-level cache-aware tiling pass to the scheduler that produces nested tiles sized to fit L1, L2, and L3 caches respectively. Today, `Scheduler.add_tile()` (`scheduler.py:839`) performs a single level of tiling at a fixed `tile_size`; extend so that for a chosen index variable the scheduler can apply *multiple* tile levels with sizes derived from the cache hierarchy and the per-iteration working-set estimate. (1) In `src/scorch/compiler/scheduler.py`, add `Scheduler.add_cache_tile(cin, index_var)` that introspects the operands accessed under that index variable, computes a per-iteration working set in bytes, and emits two or three nested `add_tile` calls with sizes targeted to fit the L1, L2, and L3 working sets respectively. Cache sizes default to 32 KiB / 256 KiB / 8 MiB but must be overridable via a `CacheModel` dataclass (added to the same file) with fields `l1_bytes`, `l2_bytes`, `l3_bytes`, `cacheline_bytes`, and `num_threads`. (2) Extend `_CostModelConstants` (`scheduler.py:23`) with `cache_model: CacheModel`, `cache_miss_cost_l1: float`, `cache_miss_cost_l2: float`, `cache_miss_cost_l3: float`, and `bytes_per_value: int` (derived from the dtype of the involved tensors). (3) Add `Scheduler._estimate_working_set(cin, index_var)` that computes the total bytes of tensor data accessed within a single iteration of `index_var` for all enclosed tensor accesses - account for both value arrays and per-level position/coordinate arrays for sparse operands. The estimator must walk the full mode list of every accessed tensor, not assume two modes. (4) Integrate the new `cache_tile` transform into `auto_schedule` (`scheduler.py:1461`) gated by a heuristic: only apply when the working set per outer iteration exceeds the L3 size and the index variable has at least 1024 effective iterations under the chosen schedule. (5) The implementation must be rank-agnostic - index variables can come from a tensor of any number of modes; the working-set computation must handle rank 1, 2, 3 (the 2D matmul case is one of many - do not specialize), and higher. Write tests covering: a 2D SpMM where cache-tiling is enabled (assert the lowered CIN contains nested `is_tiled` index variables for multiple levels); a 3D mode-n contraction where cache-tiling fires on the contracting mode; an explicit cost-model unit test that asserts `_estimate_working_set` returns the right byte count for several rank-N inputs; and a case where the heuristic correctly declines to apply cache-tiling.\n\nNote: Do not run the test suite to verify your implementation. You may run build commands (e.g., `pip install -e .`) to check that the code compiles, but do not execute any pytest commands. The full test suite will be run separately during evaluation.",
"files": [],
"task_type": "feature",
"categories": [
"Scheduler/Loop transformations/Tiling"
]
},
{
"instance_id": "bobbyyyan__scorch-feature_workspace_pooling",
"repo_id": "bobbyyyan__scorch",
"repo_url": "https://github.com/bobbyyyan/scorch.git",
"base_commit": "92fb190",
"language": "python",
"setup_commands": [],
"test_command": "/testbed/run_tests.sh",
"test_timeout": 3000,
"refactor_type": "",
"description": "Implement workspace lifetime analysis and memory pooling across multiple `Where` clauses in a CIN program. Today, every `Where` lowers via `cin_lowerer.py:lower_Where` (line 860) into an independent workspace allocation; chained kernels (e.g. `D = (A @ B) + (A @ C)` lowering into two workspaces) allocate two separate buffers even when only one is live at a time. (1) Create `src/scorch/compiler/workspace_analysis.py` with a `WorkspaceLifetimeAnalysis` class that walks a CIN tree and produces `{workspace_id: (def_point, last_use_point)}` for every workspace in the program. The analysis must handle nested `Where` clauses and shared subexpressions, and must produce sound results for workspaces of arbitrary rank. (2) Add a `MemoryPool` class with `allocate(size_bytes, alignment) -> pool_offset` and `free(pool_offset)` operations. Use an interval-graph coloring algorithm to assign pool offsets such that workspaces with non-overlapping lifetimes can share the same pool region. The pool size is the maximum number of simultaneously-live bytes plus padding for alignment. (3) Extend `CINLowerer` (`cin_lowerer.py:444`) with an `enable_workspace_pooling: bool = False` constructor argument. When enabled, replace per-`Where` `llir.Allocate(...)` calls with pool offsets into a single shared `coo_workspace` allocation sized by the analysis. Free at the end of the lowered region. The shared pool must be a kernel parameter plumbed through `lower_function_definition` in `codegen.py:265`. (4) The pooling logic must work for workspaces of arbitrary rank - the lifetime tracking is structural (CIN tree positions), and the pool allocator works in bytes; do not specialize on 1D or 2D shapes. Workspaces of differing per-element dtypes (e.g. one float32, one float64) must be sized correctly. (5) Make sure the kernel cache key includes a flag indicating whether pooling was applied so a pooled and an un-pooled lowering of the same CIN do not collide. Write tests covering: a chain of three `Where` clauses where the analysis identifies that two can share storage; correctness of lowered kernels with pooling enabled vs disabled (must produce numerically identical outputs across 1D, 2D, and 3D test inputs); unit tests for the interval-graph coloring on synthetic lifetime sets including a 4-workspace case where exactly two pool slots suffice; and a 3D-tensor end-to-end test asserting the pool size matches the analytical maximum.\n\nNote: Do not run the test suite to verify your implementation. You may run build commands (e.g., `pip install -e .`) to check that the code compiles, but do not execute any pytest commands. The full test suite will be run separately during evaluation.",
"files": [],
"task_type": "feature",
"categories": [
"Scheduler/Sparse-specific passes/Workspace transforms"
]
},
{
"instance_id": "bobbyyyan__scorch-feature_auto_transpose_insertion",
"repo_id": "bobbyyyan__scorch",
"repo_url": "https://github.com/bobbyyyan/scorch.git",
"base_commit": "92fb190",
"language": "python",
"setup_commands": [],
"test_command": "/testbed/run_tests.sh",
"test_timeout": 3000,
"refactor_type": "",
"description": "Add an automatic transpose-insertion pass to the scheduler that, when the cost model favors it, physically permutes the mode order of an input operand so the chosen loop order matches its storage order. Today the scheduler picks a loop order based on sparsity (`Scheduler.sort_by_sparsity_descending`, `Scheduler.optimize_loop_order` at `scheduler.py:520`) but never modifies operand layout - kernels with mismatched mode order pay a strided-access cost forever. (1) In `src/scorch/compiler/scheduler.py`, add `Scheduler.maybe_insert_transpose(cin, tensor_var, new_mode_order)` that produces a new CIN with a synthetic `Workspace`-backed transpose-copy of `tensor_var` permuted to `new_mode_order` and rewrites all subsequent `TensorAccess`es of that variable to read from the workspace. The synthesized transpose CIN must lower correctly through the existing pipeline - for an N-dimensional tensor it lowers to an N-deep nested loop nest writing into a freshly-allocated workspace. (2) Add `Scheduler._estimate_strided_access_penalty(cin, tensor_var, loop_order) -> float` returning an estimated cost (in floating-point ops per element) of accessing the operand in `loop_order` while it is stored in its current mode order. The penalty must scale with both the rank and the per-level types - penalty for traversing a `COMPRESSED` level inner-out is much higher than traversing a `DENSE` level. (3) In `auto_schedule` (`scheduler.py:1461`), after loop-order optimization, walk the CIN's tensor accesses and, for each operand whose strided-access penalty exceeds `_compute_transposition_cost()` (`scheduler.py:429`), rewrite via `maybe_insert_transpose`. Update `_CostModelConstants` if extra constants are needed. (4) The pass must work for tensors of any rank - for a 4D operand, rotating any cyclic permutation of the four modes must produce a correct transpose. Do not specialize on 2D matrices. (5) Make sure the kernel-cache key includes the *post-transpose* operand mode order so two CINs that look identical on the surface but produce different transpose decisions don't share a cache slot. Write tests covering: a 2D mode-mismatched matmul (illustrative) where the pass inserts a transpose; a 3D mode-n product where the contracting mode is initially innermost in B's layout and the pass moves it; a 4D batched tensor contraction; an assertion that the pass correctly declines to insert a transpose when the strided penalty is below threshold; and an end-to-end correctness check against the un-transposed kernel.\n\nNote: Do not run the test suite to verify your implementation. You may run build commands (e.g., `pip install -e .`) to check that the code compiles, but do not execute any pytest commands. The full test suite will be run separately during evaluation.",
"files": [],
"task_type": "feature",
"categories": [
"Scheduler/Loop transformations/Reorder & restructure"
]
},
{
"instance_id": "bobbyyyan__scorch-feature_atomic_parallel_scatter",
"repo_id": "bobbyyyan__scorch",
"repo_url": "https://github.com/bobbyyyan/scorch.git",
"base_commit": "92fb190",
"language": "python",
"setup_commands": [],
"test_command": "/testbed/run_tests.sh",
"test_timeout": 3000,
"refactor_type": "",
"description": "Add atomic operations to the LLIR for safe parallel scatter into shared sparse outputs. Today the scheduler heuristically refuses to parallelize loops whose inner body writes to a shared output position (`CINLowerer._should_parallelize_outer_forall` at `cin_lowerer.py:2048`, `_is_openmp_compatible_for_loop` at `cin_lowerer.py:2066`); add atomics so those loops can be parallelized. (1) In `src/scorch/compiler/llir.py`, add an `AtomicOp(Stmt)` node with fields `op: AssignOp` (the existing enum at line 74), `target: Expr`, `value: Expr`, and `memory_order: str` (default `\"relaxed\"`, also support `\"acq_rel\"`, `\"seq_cst\"`). Restrict `op` to associative-commutative `AssignOp` cases (`+=`, `-=`, `*=`, `/=` - note `-=` and `/=` are not commutative; document which `memory_order` settings are required for each to be safe). (2) In `src/scorch/compiler/codegen.py`, extend `lower_llir` to emit `#pragma omp atomic update` for `+=`/`*=` cases under `relaxed` ordering, and emit `__atomic_fetch_add`/`__atomic_compare_exchange_n` for non-relaxed orderings. Add the necessary `<atomic>` include to the kernel preamble in `csrc/header.h`. (3) In `src/scorch/compiler/cin_lowerer.py`, replace the conservative parallelization-refusal in `_is_openmp_compatible_for_loop` (line 2066) with: when the loop is otherwise parallelizable but writes to a shared output position, rewrite the assignment to an `AtomicOp` and parallelize. The detection of \"shared output position\" must handle outputs of arbitrary rank - for a rank-N output where modes 0..K participate in the parallel loop and modes K..N do not, mode K..N may or may not introduce conflicts depending on the iteration pattern; reason about this structurally rather than special-casing 2D. (4) Add `Scheduler.parallelize_with_atomics(cin, index_var)` in `scheduler.py` that explicitly enables atomic-based parallelization for a chosen loop and updates the cost model to account for atomic-operation overhead (extend `_CostModelConstants` with `c_atomic`). Auto-select atomic parallelization in `auto_schedule` when the conservative `is_safely_parallelizable` test fails but an atomic rewrite would unlock a previously-serial outer loop with high estimated speedup. (5) Write tests covering: SpMV with atomic-based row-parallel output; an N-d (use 3D) reduction along an inner mode parallelized with atomics; correctness comparison against the non-atomic baseline across 1D, 2D, and 3D inputs; and assertion that the lowered C++ contains the expected `#pragma omp atomic` directive.\n\nNote: Do not run the test suite to verify your implementation. You may run build commands (e.g., `pip install -e .`) to check that the code compiles, but do not execute any pytest commands. The full test suite will be run separately during evaluation.",
"files": [],
"task_type": "feature",
"categories": [
"Codegen/Parallelism"
]
},
{
"instance_id": "bobbyyyan__scorch-feature_loop_distribution",
"repo_id": "bobbyyyan__scorch",
"repo_url": "https://github.com/bobbyyyan/scorch.git",
"base_commit": "92fb190",
"language": "python",
"setup_commands": [],
"test_command": "/testbed/run_tests.sh",
"test_timeout": 3000,
"refactor_type": "",
"description": "Add a loop distribution (loop fission) pass to the scheduler that splits a single multi-statement loop nest into multiple loop nests, enabling the existing fusion/parallelization machinery to operate on cleaner units. Today the scheduler only does loop reordering and tiling - there is no transformation that breaks one loop into many. (1) In `src/scorch/compiler/scheduler.py`, add `Scheduler.distribute_loop(cin, index_var)` that takes a CIN where multiple statements share a `ForAll(index_var)` parent and produces a new CIN with one `ForAll(index_var)` per statement (each cloned), provided the statements have no inter-statement loop-carried dependencies. (2) Add `Scheduler._has_loop_carried_dependence(stmt_a, stmt_b, index_var)` that performs a Bernstein-style read/write set analysis: compute the set of tensor positions written by `stmt_a` and read or written by `stmt_b` (and vice-versa) at iterations `i` and `i+1` of `index_var`, and return `True` if those sets intersect. The analysis must be sound for arbitrary tensor rank and for both dense and sparse tensors; for sparse, the read/write set is over the *abstract* coordinate space, not the stored positions. (3) Provide a user-facing `Scheduler.fission(cin, index_var, statement_groups: List[List[int]])` that takes an explicit grouping of statements per loop instead of the all-singleton split - this enables partial distribution. The grouping must respect dependence direction (statements in earlier groups must not depend on statements in later groups). (4) Integrate distribution into `auto_schedule` (`scheduler.py:1461`) as a precondition for fusion: apply distribution first to break wide bodies, then apply existing fusion logic to merge compatible adjacent loops. The combined pass must converge - implement a fixed-point iteration with a maximum of 10 rounds and document the rationale. (5) The pass must work for loops at any depth in any rank - distribution at an outer loop of a 4D nest must correctly clone all inner ForAlls. Do not specialize on 1D or 2D nests. Write tests covering: a CIN with three `TensorAssign` siblings under one `ForAll` where exactly two can be safely distributed (one has a true dependence on the third, asserted via the dependence test); a 3D tensor case to exercise the rank-agnostic dependence analysis; an integration test where distribution then re-fusion yields better generated code than either alone (compare lowered statement counts).\n\nNote: Do not run the test suite to verify your implementation. You may run build commands (e.g., `pip install -e .`) to check that the code compiles, but do not execute any pytest commands. The full test suite will be run separately during evaluation.",
"files": [],
"task_type": "feature",
"categories": [
"Scheduler/Loop transformations/Reorder & restructure"
]
},
{
"instance_id": "bobbyyyan__scorch-feature_simd_intrinsics",
"repo_id": "bobbyyyan__scorch",
"repo_url": "https://github.com/bobbyyyan/scorch.git",
"base_commit": "92fb190",
"language": "python",
"setup_commands": [],
"test_command": "/testbed/run_tests.sh",
"test_timeout": 3000,
"refactor_type": "",
"description": "Extend codegen to emit explicit SIMD intrinsics rather than relying solely on `#pragma omp simd` directives. Today, `ForLoop.simd: bool` (`llir.py:479`) only emits the SIMD pragma; extend so that vectorizable loops can be lowered to AVX-512 (x86) or NEON (ARM) intrinsics with a scalar epilogue and a fallback to the pragma when no target ISA is available. (1) In `src/scorch/compiler/llir.py`, add a `SIMDLoop(Stmt)` node with fields `vector_width: int`, `scalar_iter_var`, `simd_body: List[Stmt]`, `epilogue_body: List[Stmt]`, and `target_isa: str` (one of `\"avx512\"`, `\"avx2\"`, `\"neon\"`, `\"scalar\"`). The `simd_body` operates on `vector_width` consecutive iterations at once; the `epilogue_body` handles the scalar tail. (2) In `src/scorch/compiler/codegen.py`, extend `lower_llir` to emit ISA-specific intrinsic calls for `SIMDLoop`. For AVX-512, emit `_mm512_loadu_ps`/`_mm512_fmadd_ps`/`_mm512_storeu_ps`; for NEON, emit `vld1q_f32`/`vfmaq_f32`/`vst1q_f32`. The emitted code must include the appropriate header (`<immintrin.h>` or `<arm_neon.h>`) gated by an `#ifdef __AVX512F__` / `__ARM_NEON` block, with the scalar epilogue executing for the tail (and serving as the fallback when no SIMD ISA is available). Update `csrc/header.h` accordingly. (3) In `src/scorch/compiler/cin_lowerer.py`, add a post-pass `_promote_to_simd_intrinsics(stmts, target_isa)` that walks the lowered LLIR, identifies tight inner loops over contiguous dense values (no sparse coordinate dereferences in the body), and rewrites them as `SIMDLoop`. The promotion must handle bodies of arbitrary tensor rank - only the innermost loop is vectorized, but the surrounding loops may have any depth. Float32 and float64 must both be supported (different vector widths per dtype: 16 vs 8 for AVX-512, 4 vs 2 for NEON). (4) Add a `target_isa` parameter to `CINLowerer` (default detected from `platform.machine()` and a one-time probe that compiles a tiny test program with `-march=native`). The detected ISA is part of the kernel cache key. (5) Write tests covering: SpMV inner loop vectorized to AVX-512 intrinsics on x86 (skip on ARM via `pytest.mark.skipif`); element-wise add of two dense STensors with the vectorized inner loop; assertion that the generated C++ includes the right intrinsic names and the `#ifdef` guard; correctness comparison against the scalar pragma-only path on 1D, 2D (illustrative), and 3D inputs.\n\nNote: Do not run the test suite to verify your implementation. You may run build commands (e.g., `pip install -e .`) to check that the code compiles, but do not execute any pytest commands. The full test suite will be run separately during evaluation.",
"files": [],
"task_type": "feature",
"categories": [
"Codegen/Vectorization"
]
},
{
"instance_id": "bobbyyyan__scorch-feature_cin_simplify",
"repo_id": "bobbyyyan__scorch",
"repo_url": "https://github.com/bobbyyyan/scorch.git",
"base_commit": "92fb190",
"language": "python",
"setup_commands": [],
"test_command": "/testbed/run_tests.sh",
"test_timeout": 3000,
"refactor_type": "",
"description": "Add an algebraic simplification and canonicalization pass over the CIN IR that runs before scheduling and lowering. Today no such pass exists - every `BinaryOp` lowers verbatim, even when one operand is a multiplication by zero or addition of zero. (1) Create `src/scorch/compiler/cin_simplify.py` with a `SimplificationPass` class that walks a CIN tree and applies a fixed-point set of rewrites: (a) `A * 0 -> 0` (eliminate the entire `BinaryOp` subtree, replace with the zero-tensor expression - for non-fully-determined operands, the zero must propagate through enclosing reductions correctly); (b) `A + 0 -> A`, `A - 0 -> A`, `A * 1 -> A`, `A / 1 -> A` (eliminate the operation node); (c) constant folding for fully-constant subtrees (using a new `IndexConstant(value)` IR node); (d) associativity canonicalization for commutative ops (always order operands by a stable structural hash); (e) common-subexpression elimination across `BinaryOp`s with structurally identical children, materialized as a shared `Workspace`; (f) `Where` clause elimination when the workspace is unused after the producer body. (2) Add the `IndexConstant(value)` IR node to `src/scorch/compiler/cin.py` for scalar literals usable as a CIN operand; the lowerer must emit it as an `llir.Literal`. (3) Wire `SimplificationPass.run(cin)` into the lowering entry point `ops.lower_and_exec_cin` (`ops.py:837`) before the scheduler is invoked, controlled by a `simplify: bool = True` argument. (4) The pass must be sound for all tensor ranks - none of the rewrite rules may assume a specific number of modes. The canonicalization order must produce stable kernel hashes (so the existing kernel cache still hits on semantically-equivalent inputs). For sparse outputs, zero elimination must respect format inference: replacing `A * 0` with the zero tensor of the same shape and a sparse format with no stored entries (not a dense buffer of zeros). (5) Write tests covering: each individual rewrite rule on minimal CIN snippets; a property test that simplification preserves numerical results across 1D, 2D (illustrative), 3D and 4D test tensors; a cache-stability test that semantically-equivalent input CINs (e.g. `A+B` vs `B+A`) produce the same kernel hash after simplification; and a CSE test where two identical sub-expressions become a single workspace.\n\nNote: Do not run the test suite to verify your implementation. You may run build commands (e.g., `pip install -e .`) to check that the code compiles, but do not execute any pytest commands. The full test suite will be run separately during evaluation.",
"files": [],
"task_type": "feature",
"categories": [
"Scheduler/IR analyses & scalar opts/Algebraic rewrites"
]
},
{
"instance_id": "bobbyyyan__scorch-feature_streaming_backend",
"repo_id": "bobbyyyan__scorch",
"repo_url": "https://github.com/bobbyyyan/scorch.git",
"base_commit": "92fb190",
"language": "python",
"setup_commands": [],
"test_command": "/testbed/run_tests.sh",
"test_timeout": 3000,
"refactor_type": "",
"description": "Add a streaming (out-of-core) backend for sparse tensors that exceed available memory by processing them in coordinate-aligned chunks read from disk. (1) In `src/scorch/storage.py`, add a `StreamingTensorStorage(TensorStorage)` subclass that backs its value/coordinate arrays with a memory-mapped binary file plus a sequential chunk reader. The chunk size is set by `chunk_nnz: int` (default 2^20 nonzeros). The reader yields `(level_arrays_chunk, values_chunk)` tuples whose coordinates are guaranteed monotone within and across chunks for the outermost compressed/coordinate level. (2) Add `STensor.from_disk(path, format, shape, chunk_nnz)` and `STensor.to_disk(path, chunk_nnz)` in `src/scorch/stensor.py`. The on-disk format is a small JSON header (shape, dtype, format string, chunk offsets) followed by a concatenation of per-chunk coordinate and value blobs. (3) Extend `CINLowerer` in `src/scorch/compiler/cin_lowerer.py` with an outer chunk-driver loop that wraps the kernel when any operand is a `StreamingTensorStorage`: for each chunk, materialize a chunk-local tensor view, run the inner kernel, and accumulate the output into a sink. The sink may itself be streaming or in-memory depending on the output's storage; emit appropriate accumulation code for both cases. (4) The pipeline must work for tensors of any rank - chunking partitions only the outermost level, but the inner kernel must process the per-chunk slice of arbitrary shape correctly. Operations that require global state across chunks (e.g. `softmax` along the chunked axis, sort, top-k along the chunked axis) must raise a clear `StreamingNotSupported` exception that names the offending op and chunked axis. (5) Make the chunk-driver loop play correctly with the existing kernel cache - the chunk-driver wrapper has its own code shape but the inner kernel cache key must match the equivalent in-memory kernel so chunk processing doesn't recompile per chunk. Add a small benchmark in `bench/` that constructs an N-d (use 3D) sparse tensor whose materialized form would exceed 2 GiB, runs SpMV-equivalent and element-wise add, and verifies the result against a small in-memory subset. Write unit tests covering: chunk-boundary correctness for a 2D matrix split into three chunks (illustrative); a 3D streaming element-wise add against a 3D in-memory operand; round-trip `to_disk` / `from_disk` equality across multiple shapes; and the `StreamingNotSupported` raise path for `softmax` along the chunked axis.\n\nNote: Do not run the test suite to verify your implementation. You may run build commands (e.g., `pip install -e .`) to check that the code compiles, but do not execute any pytest commands. The full test suite will be run separately during evaluation.",
"files": [],
"task_type": "feature",
"categories": [
"Codegen/Backend targets",
"Runtime/Memory management"
]
},
{
"instance_id": "bobbyyyan__scorch-feature_work_stealing_scheduler",
"repo_id": "bobbyyyan__scorch",
"repo_url": "https://github.com/bobbyyyan/scorch.git",
"base_commit": "92fb190",
"language": "python",
"setup_commands": [],
"test_command": "/testbed/run_tests.sh",
"test_timeout": 3000,
"refactor_type": "",
"description": "Add a task-based work-stealing scheduler for sparse kernels with severely imbalanced row work distributions. Today parallelism is exclusively OpenMP `parallel for` with static or guided scheduling, decided at the LLIR `ForLoop.omp_schedule` level (`llir.py:476`). Skewed sparse workloads (power-law-distributed row nnz, common in graph and recommendation workloads) suffer significant load imbalance that neither static nor guided OpenMP schedules fully resolve. (1) In `src/scorch/compiler/llir.py`, add a `TaskLoop(Stmt)` node with fields `task_iter_var`, `task_init`, `task_cond`, `task_update`, `task_body: List[Stmt]`, `grain_size: int`, and `worker_count: Union[int, str]`. `worker_count` can be a literal int or `\"auto\"` (resolved to `omp_get_max_threads()` at runtime). (2) In `src/scorch/compiler/codegen.py`, lower `TaskLoop` to a Cilk/TBB-style work-stealing skeleton that does *not* depend on TBB or Cilk: emit a fixed-size thread pool (one `std::thread` per worker), per-thread work deques, a Chase-Lev steal protocol, and grain-size-bounded splitting. The runtime support code lives in a new `csrc/work_stealing.h` header included by generated kernels. (3) In `src/scorch/compiler/scheduler.py`, add `Scheduler.use_work_stealing(cin, index_var, grain_size)` that marks a parallel ForAll for work-stealing emission rather than OpenMP, and update `_compute_comp_cost` (`scheduler.py:278`) with a separate cost branch when work-stealing is selected (lower constant overhead per task, better imbalance tolerance). Auto-select work-stealing in `auto_schedule` when the row-nnz coefficient of variation exceeds a threshold (default 4.0); extend the cost-model constants with `c_task_overhead` and `cv_threshold`. (4) The scheduler decision and the task-loop body must be rank-agnostic - for a rank-N input where the parallel loop is on an outer mode, the inner body of arbitrary depth runs unmodified inside the task. Do not assume 2D matrices. (5) Write tests covering: a synthetic skewed 2D matrix (illustrative) where work-stealing beats OpenMP guided in wall time (use a robust assertion: the work-stealing version must complete within 1.5x of the best OMP version on a power-law row-nnz distribution); a 3D mode-n product with work-stealing on the outer mode; unit tests of the Chase-Lev deque (push, pop, steal) in isolation, including the well-known wraparound and ABA hazards; and assertions on the emitted C++ that the work_stealing.h header is included.\n\nNote: Do not run the test suite to verify your implementation. You may run build commands (e.g., `pip install -e .`) to check that the code compiles, but do not execute any pytest commands. The full test suite will be run separately during evaluation.",
"files": [],
"task_type": "feature",
"categories": [
"Runtime/Work scheduling"
]
},
{
"instance_id": "bobbyyyan__scorch-feature_contraction_order_opt",
"repo_id": "bobbyyyan__scorch",
"repo_url": "https://github.com/bobbyyyan/scorch.git",
"base_commit": "92fb190",
"language": "python",
"setup_commands": [],
"test_command": "/testbed/run_tests.sh",
"test_timeout": 3000,
"refactor_type": "",
"description": "Extend the einsum implementation with a cost-model-driven contraction-ordering optimizer for expressions over three or more sparse operands. Today `ops.einsum` (`ops.py:377`) handles multi-tensor contractions by sequential pairwise reduction in equation order; extend to choose a near-optimal contraction tree. (1) In a new file `src/scorch/contraction_order.py`, implement the dynamic-programming algorithm of Pfeifer et al. for tensor-network contraction: for each subset of operand indices, compute the optimal pairwise grouping that minimizes total floating-point ops summed over all pairwise contractions. The cost of a single pairwise contraction is `prod(dim_sizes_of_index_union)` weighted by an estimated sparsity factor `nnz / dense_size` for each operand. (2) Add a `ContractionPlan` dataclass with fields `tree: nested tuple of operand indices`, `intermediate_shapes: List[Tuple[int, ...]]`, `intermediate_formats: List[Format]`, `estimated_flops: int`, `estimated_bytes: int`. (3) The optimizer also picks a `Format` for each intermediate from `{COO, COMPRESSED, DENSE per level}` based on estimated nnz density of that intermediate - the format choice is cost-modeled jointly with the contraction order. (4) In `ops.einsum`, when the equation has >=3 operands, build the `ContractionPlan` and execute the contractions in tree order, materializing each intermediate as an `STensor` of the chosen format. The optimizer must work for any per-operand rank - there is no 2D restriction. The DP state space is exponential in the number of operands; cap the exact DP at 8 operands and fall back to a greedy lowest-cost-pair heuristic above that, with a clear `INFO`-level log message. (5) Make sure the chosen contraction order is part of the kernel cache key so two einsum calls over identical operands but written in different equation order share a cached kernel after planning. Write tests covering: a 3-operand contraction `ij,jk,kl->il` where the optimal order differs from left-to-right (illustrative 2D case among many); a 4-operand contraction with mixed ranks (a 1D vector, a 2D matrix, a 3D tensor, and a 2D matrix); a 5-operand contraction with a non-trivial optimal tree; a property test that the DP planner agrees with brute-force enumeration on small inputs (<=4 operands); and a fallback-heuristic test for a 10-operand expression.\n\nNote: Do not run the test suite to verify your implementation. You may run build commands (e.g., `pip install -e .`) to check that the code compiles, but do not execute any pytest commands. The full test suite will be run separately during evaluation.",
"files": [],
"task_type": "feature",
"categories": [
"Scheduler/Contraction planning"
]
},
{
"instance_id": "bobbyyyan__scorch-feature_scalar_param_specialize",
"repo_id": "bobbyyyan__scorch",
"repo_url": "https://github.com/bobbyyyan/scorch.git",
"base_commit": "92fb190",
"language": "python",
"setup_commands": [],
"test_command": "/testbed/run_tests.sh",
"test_timeout": 3000,
"refactor_type": "",
"description": "Add compile-time scalar-parameter specialization to the CIN->LLIR pipeline. Many sparse ops accept scalar parameters (e.g. `addmm`'s `alpha`/`beta`, `dropout`'s `p`, `pow`'s `exponent`, `mul`/`div` by a scalar, `clamp`'s bounds); today these are passed as runtime kernel arguments. Specializing on the scalar value at compile time enables stronger downstream C++ compiler optimizations (constant propagation, dead-code elimination, strength reduction) at the cost of more cache entries. (1) In `src/scorch/compiler/cin.py`, add a `ScalarParam(name, dtype, runtime_value, specialize: bool)` IR node for scalar parameters. The `runtime_value` is captured at CIN-construction time; `specialize` is a planner-set flag. (2) In `src/scorch/ops.py`, refactor `addmm`, `dropout`, `pow`, scalar `mul`, scalar `div`, and `clamp` to use `ScalarParam` nodes rather than hard-coding the scalar in the CIN. (3) In `src/scorch/compiler/cin_lowerer.py`, when `specialize=True`, embed the scalar's runtime value as an `llir.Literal` and emit no kernel parameter for it; when `specialize=False`, emit a kernel parameter as today. (4) Implement specialization heuristic `_should_specialize(scalar_param) -> bool` in a new `src/scorch/compiler/specialization.py` module: specialize when the scalar's value is exactly 0 or 1 (enables algebraic simplification interaction with feature_124); specialize when the kernel's CIN includes a tight inner loop and the scalar appears inside it (enables constant folding in the inner loop); otherwise don't (avoids cache fragmentation). (5) The kernel cache key must include the specialized scalar values so that a kernel compiled for `alpha=0.5` is reused on subsequent calls with `alpha=0.5` but not on `alpha=0.7`. Make sure the cache-key contribution is the *hash of specialized values*, not the raw values, so floating-point near-equality is handled deterministically. (6) The mechanism must be rank-agnostic; do not hard-code the scalar as a 1D or 2D special case. Write tests covering: `addmm(alpha=1.0, beta=0.0)` produces a kernel that elides the `beta * input` add; `dropout(p=0.0)` produces an identity kernel; `mul(scalar=2.0)` produces a kernel with a literal `2.0` constant in the inner loop; cache hit/miss correctness across two `addmm` calls with the same and different specialized values; and round-trip numerical equality with the non-specialized path on 1D, 2D, and 3D inputs.\n\nNote: Do not run the test suite to verify your implementation. You may run build commands (e.g., `pip install -e .`) to check that the code compiles, but do not execute any pytest commands. The full test suite will be run separately during evaluation.",
"files": [],
"task_type": "feature",
"categories": [
"Scheduler/Dense passes/Specialization"
]
},
{
"instance_id": "bobbyyyan__scorch-feature_dependence_analysis",
"repo_id": "bobbyyyan__scorch",
"repo_url": "https://github.com/bobbyyyan/scorch.git",
"base_commit": "92fb190",
"language": "python",
"setup_commands": [],
"test_command": "/testbed/run_tests.sh",
"test_timeout": 3000,
"refactor_type": "",
"description": "Implement a sound dependence-analysis pass that decides whether a candidate loop reordering preserves semantics for sparse computations. Today `Scheduler.optimize_loop_order` (`scheduler.py:520`) reorders loops by a sparsity heuristic; while this often produces faster code, it can silently mis-reorder for non-trivial reductions because no dependence test is performed. The result is correctness depending on heuristic-chosen loop order, which is a real footgun for advanced users. (1) Create `src/scorch/compiler/dependence.py` with a `DependenceAnalysis` class supporting (a) read/write set extraction per CIN statement (returns sets of `(tensor_var, mode_indices)` tuples), (b) loop-carried-dependence detection between any two index variables for a given statement (true if the statement writes a position at iteration `i` of the outer index var and reads or writes the same position at iteration `i+1` for some achievable inner-iter pattern), and (c) a `can_reorder(cin, ivar_a, ivar_b) -> bool` predicate that returns `True` iff swapping the two index variables in the loop nest preserves semantics for *all* tensor positions. (2) The analysis must handle reductions correctly - a `+=` accumulation into an output position is reorderable iff the accumulating operation is associative and commutative (true for `+`, `*`; false for `-`, `/`); add an `Operation.is_assoc_commutative` helper to `cin.py`. For non-AC accumulators, reordering is rejected outright. (3) Make the analysis sound for arbitrary tensor rank: dependence vectors are per-mode, not per-tensor. For sparse tensors, the abstract dependence is over the coordinate space, not the stored positions. (4) In `src/scorch/compiler/scheduler.py`, gate `optimize_loop_order`'s candidate reorderings on `DependenceAnalysis.can_reorder` and emit a `DEBUG`-level log when a reorder is rejected for dependence reasons. (5) Add `Scheduler.is_safely_parallelizable(cin, index_var) -> bool` that uses the same machinery: a loop is parallelizable iff there are no loop-carried dependences for that index variable. Replace the heuristic in `_should_parallelize_outer_forall` (`cin_lowerer.py:2048`) with this sound test. (6) Write tests covering: a CIN where the heuristic would (incorrectly) reorder a non-associative reduction and the new analysis correctly rejects it; a 3D tensor case demonstrating per-mode dependence vectors; integration tests showing that `auto_schedule` produces the same loop order as before for cases that are dependence-free (regression test) and a different loop order for cases that aren't; and a unit test of `is_assoc_commutative` for all four `Operation` enum values.\n\nNote: Do not run the test suite to verify your implementation. You may run build commands (e.g., `pip install -e .`) to check that the code compiles, but do not execute any pytest commands. The full test suite will be run separately during evaluation.",
"files": [],
"task_type": "feature",
"categories": [
"Scheduler/IR analyses & scalar opts/Dataflow analyses"
]
},
{
"instance_id": "bobbyyyan__scorch-feature_octree_level",
"repo_id": "bobbyyyan__scorch",
"repo_url": "https://github.com/bobbyyyan/scorch.git",
"base_commit": "92fb190",
"language": "python",
"setup_commands": [],
"test_command": "/testbed/run_tests.sh",
"test_timeout": 3000,
"refactor_type": "",
"description": "Add `LevelType.OCTREE` to the format system: a hierarchical level type that groups multiple consecutive dimensions of a sparse tensor under a single tree-structured index, accelerating range queries and locality-preserving traversal for high-dimensional sparsity. Despite the name, the level generalizes to k-d trees for any k consecutive dimensions >= 2 - not just 3D octrees. (1) In `src/scorch/format.py`, extend `LevelType` with `OCTREE = \"octree\"` and add an `OctreeLevelFormat` subclass of `LevelFormat` that records (a) which logical dimensions it covers (any number >= 2), (b) the per-dimension fanout `branching: List[int]` (default 2 per dim), and (c) the maximum depth. The on-disk layout is a flat array of nodes, each storing `(child_offsets[branching_factor_product], leaf_value_offset)` packed into a `cvector<int64_t>`. Implement `LevelPack.get_arrays()` for OCTREE. (2) In `src/scorch/compiler/iterator.py`, add an OCTREE branch to `ModeIterator` that emits depth-first traversal of the tree with stack-based state. The index variables for the dimensions covered by the OCTREE level are bound to the per-level coordinate path through the tree. (3) In `src/scorch/compiler/iter_lattice.py`, extend co-iteration: an OCTREE level can co-iterate with another OCTREE level of compatible structure (same dimensions, same branching) by descending both trees in lockstep; OCTREE + COMPRESSED requires materializing the OCTREE coordinates into a flat sorted list before co-iteration (emit this materialization in the lowerer with a clear comment). (4) In `src/scorch/compiler/cin_lowerer.py`, generate the depth-first iteration LLIR with explicit stack representation (use the existing `cvector<int>` workspace pattern for the stack). (5) In `src/scorch/stensor.py`, add `STensor.to_octree(dims: List[int], branching: List[int])` that compacts the listed adjacent dimensions into an OCTREE level, and `STensor.from_octree(...)` for direct construction. The implementation must support k-dimensional cases for any k >= 2; do not hardcode k=3. Write tests covering: 3D points (k=3 octree) with element-wise add (illustrative); 4D wavelet-style sparsity (k=4) round-trip; a co-iteration test of two OCTREE-3D tensors with identical branching; a fall-through test where OCTREE + COMPRESSED triggers the materialization fallback and produces correct results; and a 2D quadtree (k=2) case to demonstrate the rank-2 instance.\n\nNote: Do not run the test suite to verify your implementation. You may run build commands (e.g., `pip install -e .`) to check that the code compiles, but do not execute any pytest commands. The full test suite will be run separately during evaluation.",
"files": [],
"task_type": "feature",
"categories": [
"Format/Hierarchical & multi-d"
]
},
{
"instance_id": "bobbyyyan__scorch-feature_format_coercion_pass",
"repo_id": "bobbyyyan__scorch",
"repo_url": "https://github.com/bobbyyyan/scorch.git",
"base_commit": "92fb190",
"language": "python",
"setup_commands": [],
"test_command": "/testbed/run_tests.sh",
"test_timeout": 3000,
"refactor_type": "",
"description": "Add a compile-time format-coercion pass that decides for each operand of a CIN computation whether to convert its storage format mid-pipeline (before the kernel runs) when the cost model favors it. Today, `ops.matmul` (`ops.py:250`) dispatches to prebuilt kernels when input formats match a registered spec (`prebuilt_kernels.py:37`) and falls back to compiler-generated code otherwise; there is no mechanism that automatically converts e.g. `B: COO -> CSR` because the chosen schedule iterates B in row-major order. The agent must extend the scheduler to consider format coercion as a first-class transformation. (1) In `src/scorch/compiler/scheduler.py`, add `Scheduler.coerce_formats(cin) -> CIN` that, for each `TensorVar` operand in the CIN, computes (a) the iteration order chosen by `optimize_loop_order`, (b) the cost of accessing the operand in its current format under that order (using `_estimate_strided_access_penalty` from feature_120 if available; otherwise reproduce the analysis here), and (c) the cost of converting to a better-aligned format using `_compute_transposition_cost` (`scheduler.py:429`). Insert a synthetic conversion `Where` clause if and only if `conversion_cost + better_aligned_access_cost < unaligned_access_cost`. (2) The chosen target format may be any element of the LevelType cross-product that is implementable for the operand's rank. The pass must enumerate at most a fixed candidate set per level (e.g. `{DENSE, COMPRESSED, COORDINATE}`) to keep the search bounded. Implement a `FormatLatticeSearch` helper that enumerates these candidates, scores each, and returns the best, with pruning of clearly-bad combinations (e.g. all-DENSE for a tensor with density < 0.01). (3) The search must be rank-agnostic - for an N-d operand with N levels, candidates are an N-tuple of level types and the scoring respects level interactions. Do not specialize on 2D matrices. (4) Hook coercion into `auto_schedule` (`scheduler.py:1461`) as the last pass before lowering, gated by a `coerce_formats: bool = True` option. Make sure the chosen post-coercion format is part of the kernel-cache key so a coerced kernel and an un-coerced kernel for the same logical CIN do not collide. (5) Write tests covering: a 2D SpMM with `A: CSR, B: COO` where coercion converts B -> CSR (illustrative); a 3D mode-n contraction where coercion converts the contracting mode of one operand from COMPRESSED to DENSE; a 4D-tensor case to exercise the rank-agnostic candidate enumeration; a no-op test where coercion correctly decides not to convert (operands already aligned); and a kernel-cache test asserting that the coerced-kernel hash differs from the un-coerced version for the same CIN.\n\nNote: Do not run the test suite to verify your implementation. You may run build commands (e.g., `pip install -e .`) to check that the code compiles, but do not execute any pytest commands. The full test suite will be run separately during evaluation.",
"files": [],
"task_type": "feature",
"categories": [
"Scheduler/Sparse-specific passes/Format adaptation"
]
},
{
"instance_id": "bobbyyyan__scorch-feature_loop_skewing",
"repo_id": "bobbyyyan__scorch",
"repo_url": "https://github.com/bobbyyyan/scorch.git",
"base_commit": "92fb190",
"language": "python",
"setup_commands": [],
"test_command": "/testbed/run_tests.sh",
"test_timeout": 3000,
"refactor_type": "",
"description": "Implement a polyhedral-style loop-skewing transformation in the scheduler that\nrewrites an index pair `(i, j)` into `(i, i + j)` (and the general unimodular\naffine case `(i, c1 * i + c2 * j)` for small constants) so that inherently\nserial wavefronts can be exposed as parallel hyperplanes. This is a\nstrictly-more-general transformation than loop interchange and tiling and must\ncompose with both. Implementation steps: (1) In\n`src/scorch/compiler/scheduler.py`, add `Scheduler.skew(cin, outer_ivar,\ninner_ivar, factor: int = 1) -> CIN` that replaces `inner_ivar` in the loop\nnest with a new `IndexVar` whose logical coordinate is `factor * outer_ivar +\ninner_ivar` and rewrites all `TensorAccess` nodes that reference `inner_ivar`\naccordingly. Keep the outer loop unchanged. The transformation must preserve\nthe CIN's `inserted_workspace` and `no_tile_list` state. (2) Add the\n`IndexVarExpr` infrastructure in `src/scorch/compiler/cin.py` needed to\nrepresent `factor * outer + inner` as a first-class index expression -\ngeneralize the existing `IndexVarAdd` (line 494) to an `IndexVarAffine(coeffs:\nDict[IndexVar, int], const: int)` form and provide compatibility shims so\nexisting callers of `IndexVarAdd` continue to work. (3) Add\n`Scheduler.is_skew_legal(cin, outer_ivar, inner_ivar) -> bool` that consults\nthe dependence-analysis pass (`dependence.py`) to verify the skew preserves\nthe meaning of the program for arbitrary tensor rank - in particular, for\nevery tensor access that includes either ivar, verify the rewritten access\nstill indexes into the tensor within bounds and that no loop-carried\ndependence is violated. (4) Extend `src/scorch/compiler/cin_lowerer.py` and\n`src/scorch/compiler/iterator.py` to resolve the skewed coordinate into the\ncorrect dense-level index (subtract `factor * outer_ivar` before indexing) and\nto clamp or guard the inner-loop bounds so that skewed iterations that fall\noutside the original rectangular iteration space are skipped. (5) Generalize\n`Scheduler.auto_schedule` to optionally try skewing for nests where the\ndependence test rules out a desired interchange but a skew would make it\nlegal; guard behind an `enable_skew: bool` flag (default False). The skew\nimplementation must work for loop nests of arbitrary depth (not just the\n2D/3D examples in existing tests) and for tensors of any rank. Write\ncomprehensive tests covering: correctness parity for SpMV, SpMM, and a 4D\ntensor contraction against dense PyTorch references, generated C++ that\ncontains the expected skewed index expressions, rejection of illegal skews,\ninteraction with existing tiling (skew-then-tile and tile-then-skew), and\ncomposition with the transpose-insertion pass.\n\nNote: Do not run the test suite to verify your implementation. You may run build commands (e.g., `pip install -e .`) to check that the code compiles, but do not execute any pytest commands. The full test suite will be run separately during evaluation.",
"files": [],
"task_type": "feature",
"categories": [
"Scheduler/Loop transformations/Reorder & restructure"
]
},
{
"instance_id": "bobbyyyan__scorch-feature_llir_ssa",
"repo_id": "bobbyyyan__scorch",
"repo_url": "https://github.com/bobbyyyan/scorch.git",
"base_commit": "92fb190",
"language": "python",
"setup_commands": [],
"test_command": "/testbed/run_tests.sh",
"test_timeout": 3000,
"refactor_type": "",
"description": "Introduce an SSA (Static Single Assignment) form for the LLIR so downstream\noptimization passes (dead-code elimination, common subexpression elimination,\nloop-invariant code motion) have a principled substrate to operate on. Today,\n`src/scorch/compiler/llir.py` uses an imperative model where `Assign` and\n`VarInit` freely redefine variables, which makes use-def reasoning ad-hoc.\nImplementation steps: (1) Add an `SSAForm` module at\n`src/scorch/compiler/ssa.py` exposing a `to_ssa(stmts: List[llir.Stmt]) ->\nList[llir.Stmt]` function that walks the LLIR, renames every redefinition of\na variable to a fresh versioned name (`x` -> `x_1`, `x_2`, ...), and inserts\n`Phi` nodes at control-flow join points (loop headers and `IfThenElse`\nmerges). (2) Add an `llir.Phi(var, incoming: List[Tuple[str, Expr]])` node to\n`llir.py` representing a phi function with one incoming value per predecessor\nblock. Extend `src/scorch/compiler/codegen.py` to lower phi nodes at the\nentries of `ForLoop`, `WhileLoop`, and after `IfThenElse` via assignments on\nthe incoming edges (phi elimination in `from_ssa`). (3) Add `from_ssa(stmts)\n-> List[llir.Stmt]` that performs standard phi elimination by inserting\ncopies at the ends of predecessor blocks so the emitted C++ remains\nimperative. (4) Teach the SSA builder about the existing loop constructs\nincluding their `unroll`/`simd`/`omp_parallel_for` flags, which must be\npreserved across the to_ssa/from_ssa round-trip. (5) Add a\n`CINLowerer.ssa_mode: bool` flag (default False for back-compat) that, when\nset, runs `to_ssa` on the lowered LLIR and then runs `from_ssa` before\ncodegen. When False, the pipeline bypasses SSA entirely. (6) Ensure the SSA\nbuilder is correct for programs lowered from CIN of arbitrary tensor rank -\nthe number of loops, workspaces, and lattice branches scales with rank, and\nthe SSA pass must not assume any fixed depth. Write comprehensive tests\ncovering: SSA round-trip preserves program semantics (execute before/after on\nSpMV, SpMM, 3D tensor contraction, and a 4D einsum; outputs must be\nbyte-identical); phi insertion is correct across nested loops and\nif-then-else; from_ssa produces code that compiles and is equivalent to the\nnon-SSA path; verification that `ssa_mode=False` leaves the old pipeline\nuntouched; and correctness for all existing sparse formats (dense, CSR, COO,\nCSC).\n\nNote: Do not run the test suite to verify your implementation. You may run build commands (e.g., `pip install -e .`) to check that the code compiles, but do not execute any pytest commands. The full test suite will be run separately during evaluation.",
"files": [],
"task_type": "feature",
"categories": [
"IR/LLIR form"
]
},
{
"instance_id": "bobbyyyan__scorch-feature_unroll_and_jam",
"repo_id": "bobbyyyan__scorch",
"repo_url": "https://github.com/bobbyyyan/scorch.git",
"base_commit": "92fb190",
"language": "python",
"setup_commands": [],
"test_command": "/testbed/run_tests.sh",
"test_timeout": 3000,
"refactor_type": "",
"description": "Implement a loop unroll-and-jam optimization pass for dense inner-loop bodies\nof tensor operations of arbitrary rank. Unroll-and-jam simultaneously\nunrolls an outer loop by a factor U and fuses the U copies of the inner loop\nbody together, exposing register-level reuse opportunities that plain loop\nunrolling cannot. Implementation steps: (1) Add\n`Scheduler.unroll_and_jam(cin, outer_ivar, inner_ivar, factor: int) -> CIN`\nin `src/scorch/compiler/scheduler.py`. The transformation strip-mines\n`outer_ivar` by `factor`, interchanges the new inner strip-mine loop with\n`inner_ivar`, and jams `factor` copies of the inner-loop body into a single\nbody with renamed induction variables. (2) Promote scalar accumulators that\nappear on the LHS of `+=` inside the unrolled body into `factor` separate\nregister variables (`acc_0`, `acc_1`, ..., `acc_{factor-1}`), combined at the\nend of the inner loop. For sparse reductions that use a workspace scalar,\nexpand the workspace into a small fixed-size array of length `factor`. (3)\nThe pass must handle arbitrary nest depths - beyond the outer and inner ivar\nnamed in the API, surrounding loops may be of any depth, and the pass must\npreserve their structure. It must also handle dense-innermost-loop bodies\nthat reference tensors of any rank (mode count >= 1). (4) Add a scalar\nremainder epilogue that runs the last `outer_size % factor` iterations\nunchanged so correctness is preserved when the outer loop's trip count is not\na multiple of `factor`. (5) Add a legality check `_is_unroll_jam_legal(cin,\nouter, inner)` that consults the dependence analyzer to verify: no\nloop-carried dependence along `outer_ivar` that would be violated by the\njam, and no aliased writes to the same tensor position across the unrolled\ncopies. (6) Integrate with `auto_schedule`: when the innermost loop is dense\nand small (size estimable from `IndexVar.size_llir_var`) and the outer loop\nis dense with trip count >= `factor`, try unroll-and-jam with\n`factor in (2, 4, 8)` under a cost-model guard. Write comprehensive tests\ncovering: correctness parity on a 2D example (explicitly labeled as an\nexample, not a restriction of the pass), a 3D `einsum`, and a 4D tensor\ncontraction; verification that the emitted C++ contains `factor`-way unrolled\nbodies and separate register accumulators; remainder epilogue correctness\nwhen `outer_size` is not divisible by `factor`; legality rejection for a\nsynthetic CIN with a carried dependence; and numerical parity vs the\nunrolled=1 baseline.\n\nNote: Do not run the test suite to verify your implementation. You may run build commands (e.g., `pip install -e .`) to check that the code compiles, but do not execute any pytest commands. The full test suite will be run separately during evaluation.",
"files": [],
"task_type": "feature",
"categories": [
"Scheduler/Loop transformations/Reorder & restructure"
]
},
{
"instance_id": "bobbyyyan__scorch-feature_software_prefetch",
"repo_id": "bobbyyyan__scorch",
"repo_url": "https://github.com/bobbyyyan/scorch.git",
"base_commit": "92fb190",
"language": "python",
"setup_commands": [],
"test_command": "/testbed/run_tests.sh",
"test_timeout": 3000,
"refactor_type": "",
"description": "Add a software-prefetch insertion pass that emits `__builtin_prefetch` (and\n`_mm_prefetch` on x86 where supported) calls ahead of sparse coordinate-array\nand value-array loads so the CPU can hide pointer-chasing latency. Today the\ncompiler emits no explicit prefetches; sparse kernels with indirect access\npatterns are bandwidth-bound and leave performance on the table.\nImplementation steps: (1) Add `src/scorch/compiler/prefetch.py` with a\n`PrefetchInserter` visitor that walks the lowered LLIR, identifies sparse\nindirect loads (array-accesses of the form `{tensor}{level}_crd[p]`,\n`{tensor}{level}_pos[...]`, and value-array loads downstream of them), and\ninserts prefetch statements `prefetch_distance` iterations ahead. (2) Add\n`llir.Prefetch(target: Expr, locality: int, rw: str)` where `locality` is\n0..3 (matching `__builtin_prefetch` semantics) and `rw` is `\"r\"` or `\"w\"`.\nExtend `LLIRLowerer.lower_llir` in `src/scorch/compiler/codegen.py` to emit\n`__builtin_prefetch((void*)(&{target}), {rw_flag}, {locality});`. (3) The\npass must compute `prefetch_distance` from a `CacheModel`-derived cost model\n- small enough that the prefetched line survives until consumption, large\nenough to hide memory latency. Default to a distance that fills half of L1,\nwith an override in `Scheduler`. (4) Add `csrc/header.h` compatibility shim:\na `scorch_prefetch(addr, locality, rw)` inline function that falls through to\n`__builtin_prefetch` on GCC/Clang and to `_mm_prefetch` on MSVC-like\ncompilers; emitted code calls `scorch_prefetch` rather than the raw builtin.\n(5) The pass must work for tensors of arbitrary rank. Sparse accesses may be\nnested many levels deep (each level contributes one `{tensor}{l}_pos` and one\n`{tensor}{l}_crd` array); the pass must identify indirect loads regardless of\nthe level they occur at and must not assume a 2D structure. (6) Interact\ncorrectly with the existing `omp_parallel_for` loops: prefetches must be\nemitted per-thread, which is already correct as long as they appear inside\nthe parallel region. (7) Guard behind `CINLowerer.insert_prefetches: bool`\n(default False). When False the pipeline is unchanged. Write comprehensive\ntests covering: presence of prefetch calls in emitted C++ for SpMV, SpMM,\n3D tensor contraction, and 4D einsum; absence of prefetches when the flag is\nFalse; correctness parity with and without prefetches; and correct prefetch\ndistance propagation from the cache model.\n\nNote: Do not run the test suite to verify your implementation. You may run build commands (e.g., `pip install -e .`) to check that the code compiles, but do not execute any pytest commands. The full test suite will be run separately during evaluation.",
"files": [],
"task_type": "feature",
"categories": [
"Scheduler/IR analyses & scalar opts/Classical passes"
]
},
{
"instance_id": "bobbyyyan__scorch-feature_loop_invariant_code_motion",
"repo_id": "bobbyyyan__scorch",
"repo_url": "https://github.com/bobbyyyan/scorch.git",
"base_commit": "92fb190",
"language": "python",
"setup_commands": [],
"test_command": "/testbed/run_tests.sh",
"test_timeout": 3000,
"refactor_type": "",
"description": "Implement position-level loop-invariant code motion (LICM) that hoists sparse\nposition/coordinate array loads out of inner loops whenever the enclosing\nposition variable is invariant with respect to the inner loops. Today the\nlowered LLIR frequently re-reads `A1_crd[pA1]`, `A1_pos[pA0+1]`, and\n`A2_size` on every iteration of an inner loop even when the position is\nprovably constant across those iterations, leading to redundant memory\ntraffic. Implementation steps: (1) Add `src/scorch/compiler/licm.py` with a\n`PositionLICM` visitor that walks the lowered LLIR and classifies every\n`ArrayAccess` into `{tensor}{level}_crd`, `{tensor}{level}_pos`, and\n`{tensor}{level}_size` categories. (2) For each such access, compute the set\nof loop-induction variables it depends on (by symbolic evaluation of the\nindex expression over `IndexVar` references in scope). For each enclosing\nloop in the nest, check whether the access depends on that loop's induction\nvariable; if not, hoist the `VarInit` for the access's temporary to the\nnearest enclosing block where all dependencies are in scope. (3) The hoist\nmust preserve ordering with respect to other statements that may have\nside-effects (workspace writes, function calls); conservatively refuse to\nmove a load above any statement that mutates the same array. (4) Generalize\nto arbitrary tensor rank: the pass must work on loop nests whose depth\nmatches the sum of tensor ranks across all operands, which can be large\n(>=10 for 4D-by-4D contractions). Do not hardwire any particular depth. (5)\nExtend `CINLowerer` with an `apply_licm: bool = True` flag. When True, call\n`PositionLICM.run(stmts)` after cin-lowering and before codegen. (6) Ensure\ncorrectness under `omp_parallel_for`: loop-invariant loads that occur inside\na parallel region must either stay inside (if they depend on a loop index\nbound to the parallel loop) or be hoisted outside the parallel region as\nshared reads. Compute and honor this distinction. (7) Interoperate with\nworkspace-bearing `Where` clauses: do not hoist loads across a producer\nwriting to a workspace that the consumer reads. Write comprehensive tests\ncovering: reduction in the number of `{tensor}{l}_crd[p]` loads emitted for\nSpMV, SpMM, a 3D tensor contraction, and a 4D einsum; correctness parity vs\nthe `apply_licm=False` baseline; verification that side-effecting statements\nare not reordered; and correctness when the pass runs after kernel fusion\nand after tiling.\n\nNote: Do not run the test suite to verify your implementation. You may run build commands (e.g., `pip install -e .`) to check that the code compiles, but do not execute any pytest commands. The full test suite will be run separately during evaluation.",
"files": [],
"task_type": "feature",
"categories": [
"Scheduler/IR analyses & scalar opts/Classical passes"
]
},
{
"instance_id": "bobbyyyan__scorch-feature_coord_cse",
"repo_id": "bobbyyyan__scorch",
"repo_url": "https://github.com/bobbyyyan/scorch.git",
"base_commit": "92fb190",
"language": "python",
"setup_commands": [],
"test_command": "/testbed/run_tests.sh",
"test_timeout": 3000,
"refactor_type": "",
"description": "Implement a sparse-coordinate common subexpression elimination (CSE) pass\nthat deduplicates coordinate-address arithmetic across the branches of a\nlowered iteration lattice. In the lowered C++ today, the same position-to-\nlinear-address computation (for example `pA0 * A1_size + iA1` or\n`A1_pos[pA0]` + offset chains) is recomputed inside each lattice branch\n(union, intersection, sparse-only, dense-only), wasting cycles and cache.\nImplementation steps: (1) Add `src/scorch/compiler/cse.py` with a\n`CoordinateCSE` visitor that walks the lowered LLIR, canonicalizes every\nexpression that looks like an affine combination of coord/pos/size vars\n(`a * var_1 + b * var_2 + ...`), and assigns a unique temporary (`tmp_0`,\n`tmp_1`, ...) to each distinct canonical form. Replace every occurrence with\nthe temporary and emit a single `VarInit` at the nearest common dominator.\n(2) The canonicalization must be rank-agnostic: an expression like\n`((pA0 * A1_size + iA1) * A2_size + iA2) * A3_size + iA3` (4D dense\nflattening) must canonicalize to the same form regardless of which\nassociativity the LLIR happens to have produced. Implement a sum-of-products\nnormal form. (3) The pass must correctly scope temporaries: a canonical form\nthat references `iA1` must live inside the loop that defines `iA1`. Compute\nthe LCA (lowest common ancestor) over the loop-nest tree and emit the\ntemporary there. (4) Handle `ArrayAccess` nodes: a repeated load\n`A1_crd[pA1]` in two distinct lattice branches should be hoisted to a single\nload if the lattice construction guarantees both branches reach the same\nvalue of `pA1` (for example in the coiteration merge). Cross-reference the\niteration-lattice to determine when two lattice points share a position var.\n(5) Integrate at `CINLowerer.apply_cse: bool = True` after LICM (feature_136)\nand before codegen. (6) Generalize to any tensor rank, any level-type\ncombination, and any loop depth. Do not assume 2D. Write comprehensive tests\ncovering: counting `VarInit` statements in generated C++ before and after\nCSE for SpMV, SpMM, 3D tensor contraction, and 4D tensor contraction (CSE\nshould strictly reduce the count); correctness parity; interaction with\nloop fusion and with tiled loops (CSE must respect tile-induction variables);\nand a negative test where two superficially-equal expressions reference\nvariables from different scopes and must not be merged.\n\nNote: Do not run the test suite to verify your implementation. You may run build commands (e.g., `pip install -e .`) to check that the code compiles, but do not execute any pytest commands. The full test suite will be run separately during evaluation.",
"files": [],
"task_type": "feature",
"categories": [
"Scheduler/IR analyses & scalar opts/Classical passes"
]
},
{
"instance_id": "bobbyyyan__scorch-feature_affine_canonicalize",
"repo_id": "bobbyyyan__scorch",
"repo_url": "https://github.com/bobbyyyan/scorch.git",
"base_commit": "92fb190",
"language": "python",
"setup_commands": [],
"test_command": "/testbed/run_tests.sh",
"test_timeout": 3000,
"refactor_type": "",
"description": "Implement an affine-index canonicalization pass over the CIN IR that rewrites\nevery index expression into a canonical sum-of-products form so that\ndownstream passes (strength reduction, CSE, dependence analysis, alias\nanalysis) can rely on syntactic equality of equivalent expressions. Today\n`IndexVarAdd` (`src/scorch/compiler/cin.py:494`) represents only binary\naddition, and higher-arity combinations such as `2*i + j + k + 3` are\nexpressed by nested `IndexVarAdd`s with no guarantee of a canonical order.\nImplementation steps: (1) Introduce a new `AffineIndexExpr(coeffs:\nDict[IndexVar, int], const: int)` dataclass in\n`src/scorch/compiler/cin.py`. It represents a single affine combination\n`sum_v coeffs[v] * v + const`. Provide `__add__`, `__sub__`,\n`__mul__` (by an `int` only), and `__neg__` that produce the canonical form.\nKeep `IndexVarAdd` as a thin wrapper that instantiates an `AffineIndexExpr`\ninternally for back-compat. (2) Add `canonicalize_affine(expr: IndexExpr) ->\nAffineIndexExpr` that recursively walks any `IndexExpr` tree whose leaves\nare `IndexVar` and integer literals and produces the unique canonical form\n(coefficient dict keyed by ivar name alphabetical order, zero-coefficient\nentries omitted, combined constant term). (3) Extend `TensorAccess.__str__`\nand repr to print the canonical affine form when present, so generated\nkernels have stable, predictable indexing expressions. (4) Add a\n`CINCanonicalizer` visitor in `src/scorch/compiler/canonicalize.py` that\nwalks a CIN statement and rewrites every occurrence of `IndexVarAdd` and\nany other `IndexVarExpr` subclass into the canonical `AffineIndexExpr`. The\nvisitor must descend through `ForAll`, `Where`, `TensorAssign`, `BinaryOp`,\nand `UnaryOp` (feature_7) nodes. (5) The canonicalizer must be correct for\nCIN programs over tensors of arbitrary rank: affine combinations involving\n5+ index variables (not uncommon in 4D + 4D tensor contractions) must\ncanonicalize to a single canonical form regardless of input construction\norder. (6) Run the canonicalizer as the first step of `CINLowerer` before\nany other transformation. Ensure idempotence - running it twice produces the\nsame IR. Write comprehensive tests covering: canonicalization produces the\nsame output for syntactically different but semantically equal expressions;\nidempotence on 100 randomly-generated affine expressions over 6 index\nvariables; correctness of lowered code against the pre-canonicalization path\non SpMV, 3D tensor contraction, and 4D tensor contraction; and a test that\nverifies the canonical form is deterministic across Python runs (dict\nordering).\n\nNote: Do not run the test suite to verify your implementation. You may run build commands (e.g., `pip install -e .`) to check that the code compiles, but do not execute any pytest commands. The full test suite will be run separately during evaluation.",
"files": [],
"task_type": "feature",
"categories": [
"Scheduler/IR analyses & scalar opts/Algebraic rewrites"
]
},
{
"instance_id": "bobbyyyan__scorch-feature_ragged_level_unsorted",
"repo_id": "bobbyyyan__scorch",
"repo_url": "https://github.com/bobbyyyan/scorch.git",
"base_commit": "92fb190",
"language": "python",
"setup_commands": [],
"test_command": "/testbed/run_tests.sh",
"test_timeout": 3000,
"refactor_type": "",
"description": "Add a ragged (jagged) sparse level type `LevelType.RAGGED` that represents a\nvariable-length dimension without the sorted/compressed invariant of\n`LevelType.COMPRESSED`. Each group at the parent position has an integer\nlength and a flat run of entries; unlike COMPRESSED, the coordinates within a\ngroup need not be sorted and the level acts as a lightweight \"list of lists\"\nat any depth of the format. This format is essential for NLP (variable-\nlength sequences) and graph (variable-degree neighborhoods) workloads.\nImplementation steps: (1) In `src/scorch/format.py`, add `RAGGED = \"r\"` to\n`LevelType` (after line 11) and wire up `_STR_TO_LEVEL_TYPE` with aliases\n`\"ragged\"`, `\"jagged\"`, `\"r\"`. (2) In `src/scorch/utils.py`, extend\n`parse_format` to accept `\"r\"` as a valid format character. (3) Define the\nstorage layout: for a RAGGED level `l`, store (a) `{tensor}{l}_lens`: an\nint32 array of length equal to the number of parent-level entries, giving\nthe length of each group, and (b) `{tensor}{l}_crd`: a flat coord array whose\nentries are concatenated in parent-position order. The level omits the\n`pos` prefix-sum array that COMPRESSED uses; runtime iteration computes\nrunning offsets from `lens` on the fly (or lazily materializes a pos array\nas a workspace). (4) In `src/scorch/compiler/iterator.py`, add a\n`ModeIterator` variant for RAGGED levels: iteration maintains a running\noffset into `crd`, increments by `lens[parent_pos]` when advancing the\nparent, and scans in insertion order (unsorted). Provide a locate operation\nthat does a linear scan since coordinates are unordered. (5) In\n`src/scorch/compiler/cin_lowerer.py`, handle `LevelType.RAGGED` in all\nplaces that currently switch on `LevelType` (iteration bounds, coord\nresolution, result assembly, merge-lattice branch selection). (6) In\n`src/scorch/compiler/iter_lattice.py`, include RAGGED levels in the sparse\nbranch of every union/intersection and treat them like COORDINATE levels\nexcept for the offset-into-crd computation. (7) Add `STensor.from_ragged`,\n`STensor.to_ragged()` conversions. (8) Add a C++ helper in `csrc/header.h`\nthat encapsulates RAGGED iteration for use by generated kernels. The\nimplementation must be correct when RAGGED appears at any level of a tensor\nof arbitrary rank (not only as the innermost level and not only in 2D\ntensors) and must compose with any parent level type (DENSE, COMPRESSED,\nCOORDINATE, RAGGED-of-RAGGED, etc.). Write comprehensive tests covering:\nround-trip conversion to/from COO and dense on 2D, 3D, and 4D tensors with\nRAGGED at different levels (as examples, not as limits); element-wise add of\nRAGGED tensors; SpMV with a RAGGED row-level matrix; nested RAGGED (format\n`\"rr\"` or `\"drr\"`) correctness; empty-group handling; and numerical parity\nagainst dense equivalents.\n\nNote: Do not run the test suite to verify your implementation. You may run build commands (e.g., `pip install -e .`) to check that the code compiles, but do not execute any pytest commands. The full test suite will be run separately during evaluation.",
"files": [],
"task_type": "feature",
"categories": [
"Format/Hierarchical & multi-d"
]
},
{
"instance_id": "bobbyyyan__scorch-feature_nested_level",
"repo_id": "bobbyyyan__scorch",
"repo_url": "https://github.com/bobbyyyan/scorch.git",
"base_commit": "92fb190",
"language": "python",
"setup_commands": [],
"test_command": "/testbed/run_tests.sh",
"test_timeout": 3000,
"refactor_type": "",
"description": "Add a nested (recursive) format level type `LevelType.NESTED` whose entries\nare themselves sparse sub-tensors with their own `TensorFormat`, enabling\nhierarchical blocking beyond the fixed block-sparse format (feature_1). This\nis the sparse analogue of a B+tree or an arbitrarily-nested ragged array\nand is required for adaptive-mesh and multiresolution workloads.\nImplementation steps: (1) In `src/scorch/format.py`, add `NESTED = \"n\"` to\n`LevelType` with alias `\"nested\"`/`\"n\"`. Extend `LevelFormat` with an\n`inner_format: Optional[TensorFormat]` that describes the format of the\nsub-tensor stored at each parent position; validate that `inner_format` is\nnon-None iff `mode == NESTED`. (2) In `src/scorch/compiler/cin.py`, extend\n`TensorVar.levels` and related metadata so a NESTED level reports its inner\nlevel count; generalize `get_level_types()` to return a flat list of level\ntypes for iteration purposes (inner levels are appended). (3) Define the\nstorage layout: a NESTED level at position `l` stores per-parent offsets\ninto a contiguous blob of concatenated inner tensors, each carrying its own\n`mode_indices` at its levels and a shared value array slice. Add a\ncompact header per inner tensor that points into the shared blob.\n(4) In `src/scorch/compiler/iterator.py`, add a `NestedModeIterator` that\ndescends into the inner tensor's levels when iterating past the NESTED\nlevel, recursively delegating to inner-format iterators. (5) In\n`src/scorch/compiler/cin_lowerer.py`, generate C++ that at each parent\niteration constructs the inner-tensor descriptor and loops over its levels\nusing the already-generalized level-type switches. The implementation must\nhandle an arbitrary nesting depth and an arbitrary number of levels per\ninner tensor - do not hardcode 2. (6) In `src/scorch/compiler/codegen.py`,\nupdate result-assembly so nested outputs are written correctly: construct\nthe inner tensor, then splice it into the parent's blob via the offset\narray. (7) Add `STensor.from_nested(outer_format, inner_tensors: List[List[\nSTensor]])` for construction and `STensor.flatten_nested()` for a\nround-trip to flat COO. (8) Extend `csrc/header.h` with a `nested_tensor`\nhelper that encapsulates the offset-plus-blob layout for use by generated\nkernels. The tests must exercise a tensor of outer rank >= 2 with inner\ntensors of rank >= 2, and must include a case where the inner format is\nitself hybrid (for example `\"ds\"` inside a `\"dn\"` outer). Write\ncomprehensive tests covering: construction and flatten round-trip for 2D\nouter + 2D inner, 3D outer + 1D inner, and 2D outer + 3D inner nesting\n(explicitly labeled as examples); element-wise add and SpMV where one\noperand is NESTED; correctness vs the dense-PyTorch baseline produced by\n`flatten_nested().to_dense()`; and behavior when some inner tensors are\nempty.\n\nNote: Do not run the test suite to verify your implementation. You may run build commands (e.g., `pip install -e .`) to check that the code compiles, but do not execute any pytest commands. The full test suite will be run separately during evaluation.",
"files": [],
"task_type": "feature",
"categories": [
"Format/Hierarchical & multi-d"
]
},
{
"instance_id": "bobbyyyan__scorch-feature_bidirectional_iteration",
"repo_id": "bobbyyyan__scorch",
"repo_url": "https://github.com/bobbyyyan/scorch.git",
"base_commit": "92fb190",
"language": "python",
"setup_commands": [],
"test_command": "/testbed/run_tests.sh",
"test_timeout": 3000,
"refactor_type": "",
"description": "Extend the compiler to support bidirectional (descending and arbitrary-\npermutation) iteration over compressed and coordinate sparse levels.\nCurrently `ModeIterator` in `src/scorch/compiler/iterator.py` emits strictly\nascending `for (int pA1 = A1_pos[pA0]; pA1 < A1_pos[pA0+1]; pA1++)` loops;\nalgorithms such as reverse-lexicographic merge, reverse Cuthill-McKee\nreordering, tail-peeling, and backward-sweep Gauss-Seidel require iterating\nsparse coordinates in descending or user-specified order. Implementation\nsteps: (1) Add `IterationDirection` enum (`ASC`, `DESC`,\n`PERMUTED`) in `src/scorch/compiler/iterator.py`. Extend `ModeIterator` with\na `direction: IterationDirection = ASC` field and a\n`permutation: Optional[List[int]] = None` field (used only when direction is\nPERMUTED). (2) Generalize `get_init_stmt`, `get_cond`, and `get_update`\n(see existing methods near line 72 onward) to emit the correct bounds and\nstep: for DESC, generate `for (int pA1 = A1_pos[pA0+1] - 1; pA1 >=\nA1_pos[pA0]; pA1--)`; for PERMUTED, emit a loop over a secondary\npermutation array `perm_A1` so that `pA1 = perm_A1[step]`. (3) In\n`src/scorch/compiler/cin.py`, extend the `ForAll` node with a `direction`\nfield and update accept/visit methods. In `src/scorch/compiler/scheduler.py`\nadd `Scheduler.set_iter_direction(cin, index_var, direction, permutation=\nNone) -> CIN` to set the direction for a particular loop. (4) In\n`src/scorch/compiler/cin_lowerer.py`, teach the lowerer to call the\ndirection-aware iterator methods when generating the loop; propagate the\ndirection through the iteration lattice for all branches that include that\nindex variable. (5) In `src/scorch/compiler/iter_lattice.py`, ensure that\ncoiteration merges remain correct when participants iterate in different\ndirections - conservatively refuse to merge a DESC branch with an ASC\nbranch, and insert a runtime reverse-scan buffer if forced. (6) The\ntransformation must work at arbitrary tensor rank and at any level of the\nformat (not only innermost). Generalize size/offset computations\naccordingly. (7) Add a legality check that forbids DESC iteration when a\nloop-carried `+=` reduction depends on the ascending order (for example,\nnon-associative reductions; associativity info comes from the existing\n`Operation.is_assoc_commutative` helper used by feature_129). Write\ncomprehensive tests covering: correctness parity between ASC and DESC\niteration for SpMV, SpMM, and a 4D tensor contraction (reduction ordering\nmust yield the same result within floating-point tolerance); verification of\nthe emitted C++ shape (decrement vs increment, `>=` vs `<`); a PERMUTED\niteration correctness test with a random permutation; rejection of illegal\ndirection changes under non-AC reductions; and correctness when combined\nwith tiling and fusion.\n\nNote: Do not run the test suite to verify your implementation. You may run build commands (e.g., `pip install -e .`) to check that the code compiles, but do not execute any pytest commands. The full test suite will be run separately during evaluation.",
"files": [],
"task_type": "feature",
"categories": [
"IR/Iteration semantics"
]
},
{
"instance_id": "bobbyyyan__scorch-feature_zero_propagation",
"repo_id": "bobbyyyan__scorch",
"repo_url": "https://github.com/bobbyyyan/scorch.git",
"base_commit": "92fb190",
"language": "python",
"setup_commands": [],
"test_command": "/testbed/run_tests.sh",
"test_timeout": 3000,
"refactor_type": "",
"description": "Implement a structural-zero propagation pass over the CIN IR that proves\nportions of an iteration space produce provably-zero output and eliminates\nthe corresponding lattice branches and loops before lowering. Today the\ncompiler emits code for every branch of the iteration lattice regardless of\nwhether any branch's operand is known to be structurally zero at that\niteration. Implementation steps: (1) Add `src/scorch/compiler/zero_prop.py`\nwith a `StructuralZeroAnalyzer` that infers, per tensor access and per\nreachable lattice point, a boolean \"may be nonzero\" flag using only format\ninformation (no runtime values). Rules: intersections of two `COMPRESSED`\nlevels are never both-nonzero when one operand's pos array is empty; a\nmultiplication `A[i,j] * B[j,k]` is zero whenever either operand is zero at\nthe shared index; a sum `A + B` is zero only when both are zero. The\nanalyzer must generalize to unary ops via the zero-preservation flag from\nfeature_7 (e.g., `abs(0) == 0`, `exp(0) != 0`). (2) Add a\n`CINZeroEliminator` pass in the same file that walks a CIN statement, drops\n`TensorAssign` nodes whose RHS is provably zero (and whose LHS is an output\ntensor being zero-initialized), drops lattice branches that are proven\nempty, and collapses `Where` producer/consumer pairs where the producer is\nempty. (3) Integrate into the `CINLowerer` pipeline ahead of lattice\nconstruction; guard behind `apply_zero_prop: bool = True`. (4) The pass must\nbe sound for tensors of arbitrary rank and for all level-type combinations\n(DENSE, COMPRESSED, COORDINATE, SINGLETON, and any new types such as\nfeature_139's RAGGED and feature_140's NESTED once implemented).\nImportantly, for hybrid formats (for example `\"ds\"` or `\"dss\"`) the sparse\nlevel governs the nonzero pattern while the dense levels do not - encode\nthis correctly. (5) Preserve correctness in the presence of workspaces: a\nworkspace is not structurally zero if any producer is not structurally zero\nfor any iteration. (6) Add `CIN.structural_zero_report()` returning a\ndiagnostic list of eliminated branches for debugging. Write comprehensive\ntests covering: counts of lattice branches emitted before and after the\npass for an `A * 0` identity, a `sparse_A intersect sparse_B` with disjoint\nindex patterns, a broadcast-multiply where one operand is an all-sparse\ntensor with known-empty modes; correctness parity on SpMV, SpMM, a 3D\ncontraction, and a 4D einsum; soundness on a chain of unary ops that do\nnot preserve zero (exp then log); and an interaction test with the\nalgebraic canonicalizer (feature_124) that verifies canonicalization does\nnot erase zero-propagation opportunities.\n\nNote: Do not run the test suite to verify your implementation. You may run build commands (e.g., `pip install -e .`) to check that the code compiles, but do not execute any pytest commands. The full test suite will be run separately during evaluation.",
"files": [],
"task_type": "feature",
"categories": [
"Scheduler/Sparse-specific passes/Iter-space pruning"
]
},
{
"instance_id": "bobbyyyan__scorch-feature_shared_traversal",
"repo_id": "bobbyyyan__scorch",
"repo_url": "https://github.com/bobbyyyan/scorch.git",
"base_commit": "92fb190",
"language": "python",
"setup_commands": [],
"test_command": "/testbed/run_tests.sh",
"test_timeout": 3000,
"refactor_type": "",
"description": "Introduce a first-class shared-traversal CIN construct that expresses\n\"compute k output tensors from a single sparse traversal of the input\noperands\" and emit one fused kernel per group, saving redundant index\nmanipulation and memory traffic. This generalizes feature_40's multi-output\nmax/argmax and feature_21's binary-op fusion into an arbitrary-arity,\nuser-constructible primitive. Implementation steps: (1) In\n`src/scorch/compiler/cin.py`, add a `MultiAssign(IndexStmt)` node with\nfields `lhs_list: List[TensorAccess]`, `rhs_list: List[IndexExpr]`, and\n`ops: List[Optional[Operation]]`. All LHS accesses must share the same\nfree-index-var set; RHS expressions may share sub-expressions. Extend the\nvisitor pattern in `CINVisitor`, `CINVisitorAccept`, `IndexStmt.get_result_\ntensor_accesses`, and `get_rhs_tensor_accesses` to traverse both lists. (2)\nIn `src/scorch/compiler/cin_lowerer.py`, lower a `MultiAssign` by building a\nsingle iteration lattice for the union of RHS accesses, then emitting one\nstore per LHS inside the innermost body. Common RHS sub-expressions must be\nemitted once and reused via LLIR temporaries (relies on feature_137 CSE or\nis independent of it). (3) Add the user-facing helper\n`ops.shared_traversal(inputs, exprs: List[IndexExpr], outputs: List[\nTensorVar]) -> List[STensor]` in `src/scorch/ops.py` that builds a\n`MultiAssign` CIN and executes it. (4) The construct must work for tensors\nof arbitrary rank. In particular, the example workload `mean, var =\nshared_traversal([A], [reduce_sum(A, axis=-1), reduce_sum(A**2, axis=-1)],\n...)` over a 4D input must produce both tensors in a single traversal\nwithout materializing an intermediate `A**2` tensor. (5) Generalize\n`Scheduler.auto_schedule` to choose a schedule that minimizes cost across\nthe union of RHS accesses rather than per-output. (6) Extend the algebraic\ncanonicalizer (feature_124) to recognize opportunities for\n`MultiAssign`-formation: if two adjacent `TensorAssign`s under the same\n`ForAll` free vars use RHS expressions that share >= 50% of tensor\naccesses, merge them into a `MultiAssign`. (7) Add introspection:\n`MultiAssign.get_shared_tensor_accesses()` returns the set of operand\ntensors that appear in more than one RHS. Write comprehensive tests\ncovering: a 2D mean+var traversal (example only) reduces kernel count from\n2 to 1 while preserving numerical parity; a 3D and a 4D input produce\nidentical results to the per-output path; correctness when one RHS is zero\nfor certain iterations (must interact correctly with feature_142 zero-\npropagation); correctness with workspaces (each output owning its own\nworkspace); and rejection when the LHS free-var sets differ.\n\nNote: Do not run the test suite to verify your implementation. You may run build commands (e.g., `pip install -e .`) to check that the code compiles, but do not execute any pytest commands. The full test suite will be run separately during evaluation.",
"files": [],
"task_type": "feature",
"categories": [
"IR/CIN nodes"
]
},
{
"instance_id": "bobbyyyan__scorch-feature_empty_intersection_prove",
"repo_id": "bobbyyyan__scorch",
"repo_url": "https://github.com/bobbyyyan/scorch.git",
"base_commit": "92fb190",
"language": "python",
"setup_commands": [],
"test_command": "/testbed/run_tests.sh",
"test_timeout": 3000,
"refactor_type": "",
"description": "Implement a symbolic intersection-emptiness prover that inspects operand\nformats, shapes, and any available structural metadata to decide whether a\nsparse-sparse intersection is provably empty - and when it is, eliminates\nthe corresponding lattice branch at compile time before any C++ is emitted.\nToday, even when two operands are disjoint (for example one is an upper-\ntriangular mask and the other is a strictly-lower-triangular mask) the\ncompiler still emits the full coiteration merge. Implementation steps: (1)\nAdd `src/scorch/compiler/intersection.py` with an `IntersectionProver` that\naccepts a pair of `TensorAccess` nodes sharing one or more index vars and\nreturns one of `{PROVABLY_EMPTY, POSSIBLY_NONEMPTY, PROVABLY_NONEMPTY}`.\nRules to implement: (a) if either operand's `TensorFormat` has a level with\nshape 0 for the shared ivar, return PROVABLY_EMPTY; (b) if both operands\ncarry an explicit structural annotation (for example a `symmetric=True`\nflag from feature_95 combined with a known strict-triangular mask), reason\nsymbolically about the sparsity supports; (c) if both operands have\nconcrete nnz metadata attached at `TensorVar` construction time and their\ncoord ranges are disjoint, return PROVABLY_EMPTY. (2) Extend `TensorVar`\nwith optional `coord_range_hint: Optional[List[Tuple[int, int]]]` (per\nlevel lo/hi) populated at construction time from `torch`-level metadata;\nthe prover consumes this. (3) In `src/scorch/compiler/iter_lattice.py`,\nbefore materializing each lattice point, query the prover for the\nintersection of its required accesses. When it returns PROVABLY_EMPTY,\ndrop the lattice point and its subtree. When PROVABLY_NONEMPTY, skip the\nouter-bound check (a minor optimization). (4) The analysis must be sound\nfor tensors of arbitrary rank: intersections across 3+ operands must\niterate over all pairs and return PROVABLY_EMPTY if any pair is; the\nstructural reasoning must work at any level of the format. (5) Interact\nwith the canonical-form pass (feature_138) so that the prover sees\ncanonicalized affine index expressions when reasoning about coord ranges.\n(6) Add `CINLowerer.apply_intersection_prover: bool = True`. Write\ncomprehensive tests covering: provably-empty intersection between a strict\nupper- and strict lower-triangular mask on a 2D tensor yields zero lattice\nbranches; a 3D tensor contraction where two operands have disjoint\ncoord-range hints compiles to a no-op; correctness parity on SpMV and SpMM\n(nonempty intersections must still lower correctly); soundness when\ncoord-range hints are absent (prover returns POSSIBLY_NONEMPTY and the\ncompile path is unchanged); and interaction with kernel fusion.\n\nNote: Do not run the test suite to verify your implementation. You may run build commands (e.g., `pip install -e .`) to check that the code compiles, but do not execute any pytest commands. The full test suite will be run separately during evaluation.",
"files": [],
"task_type": "feature",
"categories": [
"Scheduler/Sparse-specific passes/Iter-space pruning"
]
},
{
"instance_id": "bobbyyyan__scorch-feature_density_specialization",
"repo_id": "bobbyyyan__scorch",
"repo_url": "https://github.com/bobbyyyan/scorch.git",
"base_commit": "92fb190",
"language": "python",
"setup_commands": [],
"test_command": "/testbed/run_tests.sh",
"test_timeout": 3000,
"refactor_type": "",
"description": "Add a density-class trip-count specialization pass that compiles multiple\nvariants of each CIN kernel - one per \"density class\" of the sparse operands\n- and emits a runtime dispatcher that selects among them based on the\nobserved nnz-to-size ratio at call time. Sparse kernel performance varies by\norders of magnitude across density regimes (mostly-dense vs ultra-sparse);\na single kernel optimized for one regime is a poor choice for the other.\nImplementation steps: (1) Add a `DensityClass` enum in\n`src/scorch/compiler/dispatch.py` with values `ULTRA_SPARSE` (nnz/size <=\n1e-4), `SPARSE` (1e-4 < nnz/size <= 1e-2), `MEDIUM` (1e-2 < nnz/size <=\n0.5), and `MOSTLY_DENSE` (nnz/size > 0.5). (2) Add a `Scheduler.specialize_\nfor_density(cin, density_class) -> CIN` that returns a density-class-\noptimized CIN: for ULTRA_SPARSE prefer coordinate-outer iteration, no\ndense-fill workspaces, omit SIMD hints; for MOSTLY_DENSE prefer dense-outer\niteration, full-size workspaces, emit `#pragma omp simd`. (3) In\n`src/scorch/ops.py`, add a `compile_specialized(cin, classes=[ULTRA_SPARSE,\nSPARSE, MEDIUM, MOSTLY_DENSE])` helper that compiles one kernel per class\nand returns a dict keyed by class. (4) Add a `DensityDispatcher` class in\n`src/scorch/compiler/dispatch.py` with a single entry point `dispatch(\noperands: List[STensor]) -> Callable` that inspects the nnz/size ratio of\neach operand, computes an aggregate class (the minimum of individual\nclasses to err on the sparse side), looks up the specialized kernel in a\nper-signature cache, and calls it. (5) Integrate into all major ops:\n`ops.matmul`, `ops.einsum`, `ops.spmv`, `ops.spmm`, and fused ops - each\nshould opt in via a `specialize_by_density: bool = False` kwarg. (6) The\ndispatch must be correct for tensors of arbitrary rank: the operand-\naggregation must include every tensor regardless of its mode count, and the\ndensity-class thresholds must apply at the whole-tensor level (total nnz\ndivided by total number of positions). (7) Ensure cache correctness: the\ncache key must include dtype, shape, format, mode_order, and density class.\n(8) Record dispatch decisions in a thread-safe log that tests can consume.\nWrite comprehensive tests covering: correctness parity between dispatched\nand non-dispatched paths on SpMV, SpMM, 3D tensor contraction, and 4D\neinsum; measurement of the dispatched class matches the operand nnz; cache\nreuse across repeated calls with the same signature; cache miss when class\nchanges; and soundness when one operand is dense and another is ultra-\nsparse.\n\nNote: Do not run the test suite to verify your implementation. You may run build commands (e.g., `pip install -e .`) to check that the code compiles, but do not execute any pytest commands. The full test suite will be run separately during evaluation.",
"files": [],
"task_type": "feature",
"categories": [
"Scheduler/Sparse-specific passes/Format adaptation"
]
},
{
"instance_id": "bobbyyyan__scorch-feature_bump_pool_arena",
"repo_id": "bobbyyyan__scorch",
"repo_url": "https://github.com/bobbyyyan/scorch.git",
"base_commit": "92fb190",
"language": "python",
"setup_commands": [],
"test_command": "/testbed/run_tests.sh",
"test_timeout": 3000,
"refactor_type": "",
"description": "Implement a thread-local bump-pool arena allocator for kernel workspaces so\nthat generated kernels stop paying `malloc`/`free` costs on every\ninvocation. Today the emitted C++ calls `malloc` for each workspace and\n`free` at kernel end (see `csrc/header.cpp`); under OpenMP each thread's\nworkspaces incur separate system allocator calls, which serialize on the\nallocator lock and dominate runtime for small kernels. Implementation\nsteps: (1) Add a `thread_arena` class to `csrc/header.h` that maintains a\nper-thread bump-allocator with a configurable initial capacity (default 4\nMiB) and a grow-by-2x policy. Provide `void* thread_arena::alloc(size_t\nbytes, size_t align)` that returns an aligned pointer into the thread-local\nregion and `void thread_arena::reset()` that rewinds the bump pointer to\nzero. Mark the arena `thread_local` at file scope so each OpenMP thread\ngets an independent copy. (2) Add `src/scorch/compiler/workspace_arena.py`\nwith a `WorkspaceArenaPass` that walks the lowered LLIR, identifies\n`Allocate`/`Free` pairs whose target is a workspace (rather than a long-\nlived output tensor), and rewrites them to `thread_arena::alloc` and an\narena-reset at kernel end. (3) Perform workspace lifetime analysis\n(building on feature_119): for each workspace, compute its live range in\nthe lowered statements. When the workspace is single-use and scoped to a\nkernel, route it through the arena. Multi-use workspaces that outlive a\nsingle kernel invocation must continue to use `malloc`/`free`. (4) Guard\nthe pass behind `CINLowerer.use_arena_allocator: bool = True`. When False,\nemit the old `malloc`/`free` path. (5) The pass must be correct for\ntensors of arbitrary rank - workspaces may be 1D scalar or N-D dense, and\nthe arena must correctly handle alignment requirements of any dtype\n(float32, float64, complex, int8). (6) Ensure interoperation with the\nexisting `coo_workspace<T, D>` template: the coo workspace's internal\nbuffers must also be arena-backed when the pass fires. Extend `coo_\nworkspace` with an arena-aware constructor. (7) Generalize over the\nexisting `cvector<T>` too: when a cvector lives within a kernel scope, back\nit by the arena. Write comprehensive tests covering: performance parity (no\nregression) and correctness on SpMV, SpMM, a 3D contraction, and a 4D\neinsum with and without the arena; correctness under OpenMP with\n`num_threads` in {1, 2, 4, 8}; arena reset between calls leaves no cross-\ncall residue; and a negative test where a workspace explicitly outlives a\nkernel scope (the pass must leave it on `malloc`).\n\nNote: Do not run the test suite to verify your implementation. You may run build commands (e.g., `pip install -e .`) to check that the code compiles, but do not execute any pytest commands. The full test suite will be run separately during evaluation.",
"files": [],
"task_type": "feature",
"categories": [
"Runtime/Memory management"
]
},
{
"instance_id": "bobbyyyan__scorch-feature_dead_code_elimination",
"repo_id": "bobbyyyan__scorch",
"repo_url": "https://github.com/bobbyyyan/scorch.git",
"base_commit": "92fb190",
"language": "python",
"setup_commands": [],
"test_command": "/testbed/run_tests.sh",
"test_timeout": 3000,
"refactor_type": "",
"description": "Add a dead-code elimination (DCE) pass over the lowered LLIR that removes\nstatements whose defined variables are never used and whose only side\neffect is the definition itself. Today the compiler emits many dead\ntemporaries because each pass (cin_lowerer, iter_lattice, iterator)\nconservatively introduces local variables that may or may not be consumed\nby downstream lattice branches; the combined output contains significant\ndead code that slows the C++ compile path. Implementation steps: (1) Add\n`src/scorch/compiler/dce.py` with a `DeadCodeEliminator` visitor that\nwalks an LLIR statement list and builds a def-use graph: each `VarInit` or\n`Assign` defines a variable, each `Var` reference uses it. The graph must\nbe built correctly across nested `ForLoop`, `WhileLoop`, `IfThenElse`, and\n`Function` bodies of arbitrary depth. (2) Classify each definition as\nside-effectful or pure. Side-effectful definitions include: calls to\nfunctions (for example `malloc`, `memset`, `push_back`, user-defined\nkernels), writes to arrays (`ArrayAccess` on the LHS of an `Assign`),\n`Print` nodes, `omp`-annotated updates, and any `RawStmt` whose code\ncontains `[`-assignments or function calls. Classify all other definitions\n(`VarInit` of an affine expression, a literal, a pure expression) as pure.\nPure defs whose variable is not used are removed; side-effectful defs are\nkept even when unused. (3) Iterate to fixpoint: removing one def may render\nother defs unused. (4) The pass must be correct for SSA (if feature_133 is\napplied first) and for imperative LLIR (if not). (5) Handle loop\nconstructs specially: a variable defined inside a loop is considered used\nif any iteration uses it; the pass must conservatively keep such defs even\nwhen a single iteration appears not to use them. (6) Integrate as the last\npass before codegen, guarded by `CINLowerer.apply_dce: bool = True`. (7)\nEnsure correctness for tensors of arbitrary rank - the number of\ncoordinate/position temporaries grows with rank, and the DCE must scale\nlinearly with LLIR size. Write comprehensive tests covering: reduction in\nline count and in `VarInit` count between DCE-on and DCE-off lowering on\nSpMV, SpMM, 3D contraction, and 4D einsum; correctness parity with DCE off;\na negative test where a side-effectful call's unused return value must NOT\nbe removed; a test verifying the pass reaches fixpoint in one sweep for\nnon-pathological programs; and an interaction test with LICM (feature_136)\nand CSE (feature_137) run in sequence.\n\nNote: Do not run the test suite to verify your implementation. You may run build commands (e.g., `pip install -e .`) to check that the code compiles, but do not execute any pytest commands. The full test suite will be run separately during evaluation.",
"files": [],
"task_type": "feature",
"categories": [
"Scheduler/IR analyses & scalar opts/Classical passes"
]
},
{
"instance_id": "bobbyyyan__scorch-feature_value_layout_rewrite",
"repo_id": "bobbyyyan__scorch",
"repo_url": "https://github.com/bobbyyyan/scorch.git",
"base_commit": "92fb190",
"language": "python",
"setup_commands": [],
"test_command": "/testbed/run_tests.sh",
"test_timeout": 3000,
"refactor_type": "",
"description": "Implement a value-array layout rewrite pass that chooses per-operand\nStructure-of-Arrays (SoA) vs Array-of-Structures (AoS) for multi-output\nkernels, based on downstream access patterns. Today every tensor stores its\nvalue array as a single contiguous `cvector<T>`; when a kernel writes to\nmultiple output tensors that share a traversal (see feature_143), the\nlayout choice per output materially affects cache behavior - writing `k`\nvalues to `k` separate arrays (SoA) gives best streaming performance for\nwrite-once outputs, while interleaving them (AoS) is better when a\ndownstream kernel reads all `k` at the same position. Implementation\nsteps: (1) In `src/scorch/compiler/cin.py`, extend `TensorVar` with an\noptional `layout_hint: Literal[\"SoA\", \"AoS\", \"auto\"] = \"auto\"` field. (2)\nAdd `src/scorch/compiler/layout.py` with a `LayoutRewriter` pass that walks\na fused multi-output CIN (from feature_143) and decides per-output group:\nif `k` outputs of the same shape and dtype are always written together and\nread together, set their `layout_hint` to AoS and emit a single\n`cvector<struct_of_k>` buffer; otherwise set them to SoA and emit k\nseparate `cvector<T>`s. (3) In `src/scorch/compiler/cin_lowerer.py`,\ngenerate LLIR that respects the `layout_hint`. For AoS, define a struct\ntype `{kernel_name}_aos_t` with k fields, emit a single buffer, and route\nwrites through `buf[p].field_i = ...`. For SoA, emit k buffers with\nseparate names and normal array writes. (4) In\n`src/scorch/compiler/codegen.py`, emit the struct type ahead of the kernel\nfunction when AoS is chosen. Propagate the struct definition through\npybind bindings so the caller can reconstruct individual output tensors\nfrom the shared buffer. (5) The pass must be correct for tensors of\narbitrary rank. The struct layout includes alignment padding to the next\npower of two; for ranks where the value-array entries are complex dtypes\n(from feature_59), handle alignment per-field. (6) Add a diagnostic\n`TensorVar.actual_layout()` that returns the layout ultimately chosen (not\nthe hint) so tests and profilers can verify. Write comprehensive tests\ncovering: a 2D mean+var multi-output kernel's emitted code contains an AoS\nstruct when the downstream consumer reads both; an SoA layout is chosen\nwhen only one consumer reads one output; correctness parity on a 3D and a\n4D multi-output kernel; round-trip from generated output arrays back into\nindividual `STensor` instances; and a negative test where layout_hint is\nforced to AoS but legality fails (different dtypes across outputs) and the\npass falls back to SoA.\n\nNote: Do not run the test suite to verify your implementation. You may run build commands (e.g., `pip install -e .`) to check that the code compiles, but do not execute any pytest commands. The full test suite will be run separately during evaluation.",
"files": [],
"task_type": "feature",
"categories": [
"Scheduler/Dense passes/Layout rewrite"
]
},
{
"instance_id": "bobbyyyan__scorch-feature_user_reduction_op",
"repo_id": "bobbyyyan__scorch",
"repo_url": "https://github.com/bobbyyyan/scorch.git",
"base_commit": "92fb190",
"language": "python",
"setup_commands": [],
"test_command": "/testbed/run_tests.sh",
"test_timeout": 3000,
"refactor_type": "",
"description": "Introduce a first-class user-defined `ReductionOp` CIN IR node with a\ndeterministic parallel tree-reduction lowering so that arbitrary associative\nand commutative combiners (not just `+` and `*`) can be expressed and\nparallelized correctly under OpenMP. The existing semiring matmul\n(feature_38) piggybacks on the hard-wired `+=` reduction and cannot express\n`logsumexp`, `max-plus`, `or-and` (boolean semiring), or user-supplied\n(combine, identity) pairs for general reductions. Implementation steps: (1)\nIn `src/scorch/compiler/cin.py`, add `ReductionOp(IndexExpr)` with fields\n`combine: Callable[[IndexExpr, IndexExpr], IndexExpr]`, `identity:\nIndexExpr`, `operand: IndexExpr`, and `reduction_vars: List[IndexVar]`. The\ncombiner is either a built-in `Operation` (for backward compatibility) or a\nuser-provided LLIR-level emitter `combine_llir: Callable[[llir.Expr,\nllir.Expr], llir.Expr]`. (2) In `src/scorch/compiler/cin_lowerer.py`,\nlower `ReductionOp` into a two-phase tree reduction when the enclosing\nloop is `omp_parallel_for`: each thread maintains a private partial\naccumulator initialized to `identity`; at the end of the parallel region,\npartial accumulators are combined via a deterministic pairwise tree (not\nvia OpenMP `reduction()` clause, which has non-deterministic floating-\npoint rounding). Emit the tree-reduce explicitly as nested loops over\npartial-accumulator indices `[0, num_threads)`. (3) For non-parallel\nreductions, lower to a straight-line fold with `identity` start. (4) The\ncombiner emitter must be rank-agnostic: the reduction variable list may\nhave any length, and the enclosing loop nest may have any depth. Do not\nassume a 1D or 2D reduction. (5) Provide built-in combiners for\n`Operation.MAX_PLUS`, `Operation.MIN_PLUS`, `Operation.LOGSUMEXP`,\n`Operation.OR_AND` (logical semiring), and `Operation.USER` (accepts a\nPython-level lambda). (6) Update `ops.matmul` and `ops.einsum` to optionally\ntake a `reduction: Union[Operation, Tuple[Callable, IndexExpr]]` kwarg; the\nexisting `semiring=...` kwarg from feature_38 should map through this new\ninfrastructure. (7) Guarantee reproducibility: the same input and the same\nnumber of threads must produce byte-identical output across runs. Write\ncomprehensive tests covering: `max-plus` matmul correctness on 2D and 3D\nexamples; `logsumexp` reduction over a 4D sparse tensor; reproducibility\nacross 100 repeated calls with `num_threads in {1, 2, 4, 8}`;\ninteroperation with feature_38's semiring matmul; and correctness of the\nidentity element across all built-in combiners (for `MAX_PLUS`, identity\nis `-inf`; for `MIN_PLUS`, `+inf`; for `LOGSUMEXP`, `-inf`; for `OR_AND`,\n`false`).\n\nNote: Do not run the test suite to verify your implementation. You may run build commands (e.g., `pip install -e .`) to check that the code compiles, but do not execute any pytest commands. The full test suite will be run separately during evaluation.",
"files": [],
"task_type": "feature",
"categories": [
"IR/CIN nodes"
]
},
{
"instance_id": "bobbyyyan__scorch-feature_codegen_refactor",
"repo_id": "bobbyyyan__scorch",
"repo_url": "https://github.com/bobbyyyan/scorch.git",
"base_commit": "92fb190",
"language": "python",
"setup_commands": [],
"test_command": "/testbed/run_tests.sh",
"test_timeout": 3000,
"refactor_type": "",
"description": "Refactor `src/scorch/compiler/codegen.py` from a monolithic\n`LLIRLowerer.lower_llir` dispatch into a `CodegenBackend` abstraction with\na default `CppOpenMPBackend` that preserves existing behavior, plus a\n`CppScalarBackend` (no OpenMP, useful for debugging and reference) and a\nstub `IRPrinter` backend (emits a pretty-printed LLIR rather than C++).\nThe existing pipeline must continue to work for every sparse format and\nevery tensor rank without modification. Implementation steps: (1) Create a\nnew abstract base class `CodegenBackend` in\n`src/scorch/compiler/backends/__init__.py` with methods `lower_stmt(self,\nstmt: llir.Stmt, indent: int) -> str`, `lower_expr(self, expr: llir.Expr)\n-> str`, `file_preamble(self) -> str`, and `file_postamble(self) -> str`.\nEvery existing dispatch branch of `LLIRLowerer.lower_llir` becomes a\nmethod on the base class (with C++-specific implementations in the\ndefault backend). (2) Add `src/scorch/compiler/backends/cpp_openmp.py`\nwith `CppOpenMPBackend` that replicates the current `LLIRLowerer` output\nexactly - all existing tests must pass byte-for-byte against this\nbackend's output. Use a snapshot comparison in a dedicated test. (3) Add\n`src/scorch/compiler/backends/cpp_scalar.py` with `CppScalarBackend` that\noverrides the OpenMP pragma emission to emit no pragmas and the\n`omp_parallel_for` flag to emit plain `for` loops. (4) Add\n`src/scorch/compiler/backends/ir_printer.py` with `IRPrinter` that emits a\ndeterministic pretty-printed LLIR tree rather than C++ - useful for\ndebugging and test assertions. (5) Rewrite `LLIRLowerer` as a thin wrapper\nthat delegates to a `CodegenBackend` instance (default\n`CppOpenMPBackend`). Add `LLIRLowerer(backend=...)` to allow backend\noverride. Keep the existing `LLIRLowerer` public API stable. (6) The\nbackend interface must carry no state specific to rank: for every tensor\naccess of any rank and every level-type combination, the backend's\n`lower_expr` and `lower_stmt` must produce correct output without special-\ncased dimension checks. (7) Update downstream callers in `src/scorch/\nops.py` to construct a `LLIRLowerer()` without arguments (preserving\ndefault behavior). Add a `CINLowerer.backend: Optional[CodegenBackend] =\nNone` pass-through so users can select backends without touching ops.py.\nWrite comprehensive tests covering: byte-exact equivalence of the default\nbackend's output with the pre-refactor `LLIRLowerer` output on a fixture\nCIN; `CppScalarBackend` emitting no OpenMP pragmas for an otherwise-\nidentical CIN; `IRPrinter` emitting a deterministic string that round-\ntrips back to LLIR via a tiny parser (add the parser as part of the task);\ncorrectness of all existing SpMV/SpMM/3D/4D tests under the default\nbackend; and rejection of backend-specific mismatches (e.g., the IR\nprinter passed to a caller that expects C++).\n\nNote: Do not run the test suite to verify your implementation. You may run build commands (e.g., `pip install -e .`) to check that the code compiles, but do not execute any pytest commands. The full test suite will be run separately during evaluation.",
"files": [],
"task_type": "feature",
"categories": [
"IR/Codegen architecture"
]
},
{
"instance_id": "bobbyyyan__scorch-feature_cin_call_inline",
"repo_id": "bobbyyyan__scorch",
"repo_url": "https://github.com/bobbyyyan/scorch.git",
"base_commit": "92fb190",
"language": "python",
"setup_commands": [],
"test_command": "/testbed/run_tests.sh",
"test_timeout": 3000,
"refactor_type": "",
"description": "Introduce a `CINCall` higher-order IR node and a sub-CIN inlining pass that\nexpands calls into their parent CIN, enabling a vmap/batched-apply pattern\nat the CIN level so that a block of computation parameterized over a slice\nindex can be expressed once and reused across arbitrary outer ranks without\nduplicating user code. This is the IR-level substrate for batched\noperations: `batched_matmul`, per-row softmax, per-block normalization,\netc. Implementation steps: (1) In `src/scorch/compiler/cin.py`, add\n`CINCall(IndexStmt)` with fields `sub_cin: IndexStmt`, `param_map: Dict[\nIndexVar, IndexVar]` (maps inner/sub-CIN free ivars to outer-CIN ivars that\nbind them at the call site), and `tensor_map: Dict[TensorVar, TensorVar]`\n(maps inner-CIN tensor params to outer-CIN tensor arguments). Extend the\nvisitor protocol. (2) Add `src/scorch/compiler/inline.py` with a\n`CINInliner` pass that walks a CIN statement, locates every `CINCall`, and\nreplaces it with a deep-copied and renamed copy of `sub_cin` using the\nsupplied `param_map` and `tensor_map`. All ivars in the inlined body must\nbe renamed to avoid collisions with the outer nest. (3) Provide a user-\nfacing helper `ops.vmap(fn, in_axes, out_axes)` that takes a Python\ncallable `fn` which constructs an inner CIN (parameterized by the\nper-slice ivars), builds a `CINCall` for each slice, and wraps them in the\nouter `ForAll` loop over the batch dimension. The pass then inlines and\nthe whole thing lowers as one kernel. (4) The inlining must be correct for\ninner sub-CINs of arbitrary rank - the sub-CIN may be a 1D kernel or a 4D\nkernel, and it may itself contain `CINCall` nodes (nested vmap). Support\nnesting depth of at least 3. (5) Integrate the algebraic canonicalizer\n(feature_124) to run on the post-inlined CIN so that ivar renames and\nshared accesses are canonicalized before scheduling. (6) Ensure workspace\nhandling is correct: each `CINCall` instance must get its own fresh\nworkspaces after inlining, not shared across sibling calls. (7) Add an\noptional `preserve_call_site: bool = False` flag that, if True, leaves\n`CINCall` nodes in place and emits a real function call at codegen\ntime instead of inlining (useful for separate-compilation workflows).\nDefault is `False` (inline). Write comprehensive tests covering: a batched\nmatmul `C[b, i, k] = sum_j A[b, i, j] * B[b, j, k]` expressed as a vmap\nover a 2D inner matmul produces the same result as a directly-authored 3D\nCIN; a 4D outer batch of 3D inner operations (examples only, not a rank\nlimit) yields correct results; nested vmap depth 3 correctness; workspace\nisolation between call sites; the `preserve_call_site=True` path compiles\nand executes correctly with a real function call; and rejection when\n`param_map` or `tensor_map` is incomplete.\n\nNote: Do not run the test suite to verify your implementation. You may run build commands (e.g., `pip install -e .`) to check that the code compiles, but do not execute any pytest commands. The full test suite will be run separately during evaluation.",
"files": [],
"task_type": "feature",
"categories": [
"IR/CIN nodes"
]
},
{
"instance_id": "bobbyyyan__scorch-feature_register_blocking",
"repo_id": "bobbyyyan__scorch",
"repo_url": "https://github.com/bobbyyyan/scorch.git",
"base_commit": "92fb190",
"language": "python",
"setup_commands": [],
"test_command": "/testbed/run_tests.sh",
"test_timeout": 3000,
"refactor_type": "",
"description": "Implement a register-blocking (inner-kernel micro-tiling) scheduler pass that\nspecializes the innermost loops of dense-level nests into BLIS-style register\nmicrokernels with explicit accumulator registers. This is distinct from cache\ntiling (`Scheduler.add_tile`, `src/scorch/compiler/scheduler.py:839`): register\nblocking is applied only to the innermost dense loops of a nest after all other\nscheduling has run, and it transforms the loop body rather than just the bounds.\nThe pass must generalize to tensor operands of arbitrary rank - dense inputs may\nbe 1D, 2D, 3D or higher, and the number of inner loops eligible for register\nblocking depends on how many trailing levels are `LevelType.DENSE`. The\nillustration below uses a 3-nest for exposition only; the implementation must\nnot special-case rank 2 or 3. Implementation steps: (1) Add\n`Scheduler.add_register_block(cin, index_vars: List[IndexVar], sizes: List[int])\n-> CIN` that unrolls each listed ivar by its tile size, replaces the innermost\nbody with an array of scalar accumulator `Var`s, and rewrites the body so that\neach unrolled position accumulates into a distinct register. The pass must\nrefuse to run when any listed ivar is over a sparse level or when any\n`TensorAccess` that uses those ivars is non-affine. (2) In\n`src/scorch/compiler/llir.py`, add a `RegisterTile(Stmt)` node carrying the\nunrolled body, the list of accumulator `Var`s, and the flush sequence that\nstores accumulators back to memory after the innermost loop completes. (3)\nExtend `LLIRLowerer.lower_llir` (`src/scorch/compiler/codegen.py:22`) to emit\nthe accumulator declarations before the innermost loop, the unrolled body\ninside, and the flush after; the emitted C++ must use `restrict` qualifiers on\npointer arguments to the microkernel where legal. (4) Teach\n`Scheduler.auto_schedule` to consider register blocking as the last scheduling\nstep when all trailing levels are dense, with a cost model that prefers sizes\nwhere the accumulator working set fits in 16 named registers. (5) Make sure\nthe pass composes with `Scheduler.add_tile` (cache tiling) and with SIMD\npragmas (`ForLoop.simd`, `llir.py:479`): the register-tile body should sit\ninside the SIMD innermost loop, not replace it. Write comprehensive tests\ncovering: correctness parity on dense matmul of ranks 2 and 4 against torch;\ngenerated C++ contains the expected accumulator declarations and no spurious\nmemory writes inside the unrolled body; rejection when any of the target ivars\niterate over a sparse level; correct composition with cache tiling; and a\nperformance smoke test that the microkernel path is taken when expected.\n\nNote: Do not run the test suite to verify your implementation. You may run build commands (e.g., `pip install -e .`) to check that the code compiles, but do not execute any pytest commands. The full test suite will be run separately during evaluation.",
"files": [],
"task_type": "feature",
"categories": [
"Scheduler/Loop transformations/Tiling"
]
},
{
"instance_id": "bobbyyyan__scorch-feature_blis_operand_packing",
"repo_id": "bobbyyyan__scorch",
"repo_url": "https://github.com/bobbyyyan/scorch.git",
"base_commit": "92fb190",
"language": "python",
"setup_commands": [],
"test_command": "/testbed/run_tests.sh",
"test_timeout": 3000,
"refactor_type": "",
"description": "Implement a BLIS/GotoBLAS-style operand packing pass for dense tensors that\ncopies cache-resident panels of large dense operands into a packed buffer\nbefore the inner kernel runs and makes the inner kernel address the packed\nbuffer instead of the original operand. The pass must work for dense operands\nof any rank (not restricted to 2D matrices) and for any valid mode ordering.\nImplementation steps: (1) Add `Scheduler.add_pack(cin, tensor_access:\nTensorAccess, panel_sizes: Dict[IndexVar, int]) -> CIN` that, for the given\ndense tensor access, inserts a `Where`-style workspace of shape `panel_sizes`,\nemits a copy-in loop nest that populates the workspace from the original\ntensor according to the outer cache-tile coordinates, and rewrites every\ndownstream access of that tensor inside the inner kernel to index into the\npacked workspace. The insertion point is immediately after the cache-tile\nloops and immediately before the inner loops. (2) In\n`src/scorch/compiler/cin.py`, extend `Workspace` (line 627) with a\n`packed_from: Optional[TensorAccess]` field and a `pack_layout: List[\nIndexVar]` that records the permutation used when copying into the workspace\n(so the inner kernel sees a layout optimized for its loop order, which may\ndiffer from the original operand's layout). (3) In\n`src/scorch/compiler/cin_lowerer.py`, add the copy-in lowering: for each\npacked workspace, emit a loop nest of depth equal to `len(pack_layout)` that\nreads the original tensor and writes the packed buffer. The emitted code must\nbe parallelizable by OpenMP at the outermost pack loop. (4) Generalize the\npack pass to cooperate with register blocking (feature_152): the packed panel\nis sized to the cache tile, and the inner register kernel reads from the\npacked panel with unit stride in its innermost ivar regardless of the\noperand's original storage order. (5) Make `Scheduler.auto_schedule` call\n`add_pack` for every dense operand whose innermost-loop access has non-unit\nstride under the chosen mode order and whose panel size fits in L2. The cost\nmodel must account for the copy-in overhead and decline to pack when the\noperand is small or the copy-in dwarfs the inner-kernel compute. (6) Pack\ninvalidation must be handled correctly across repeated kernel invocations:\nif the source tensor is unchanged between two calls, the packed buffer may\nbe reused; expose this as a `reuse_pack: bool = False` flag on\n`CINLowerer`. Write comprehensive tests covering: correctness for dense\nmatmul of ranks 2, 3, and 4 with and without packing; generated C++ emits a\ndedicated copy-in loop; the inner kernel accesses the packed buffer with\nunit stride; pack-buffer reuse across back-to-back calls; rejection when\nthe operand dimension is below a configurable threshold; and composition\nwith register blocking and cache tiling.\n\nNote: Do not run the test suite to verify your implementation. You may run build commands (e.g., `pip install -e .`) to check that the code compiles, but do not execute any pytest commands. The full test suite will be run separately during evaluation.",
"files": [],
"task_type": "feature",
"categories": [
"Scheduler/Dense passes/Pattern match"
]
},
{
"instance_id": "bobbyyyan__scorch-feature_loop_collapse",
"repo_id": "bobbyyyan__scorch",
"repo_url": "https://github.com/bobbyyyan/scorch.git",
"base_commit": "92fb190",
"language": "python",
"setup_commands": [],
"test_command": "/testbed/run_tests.sh",
"test_timeout": 3000,
"refactor_type": "",
"description": "Add a loop-collapse (nest flattening) scheduler pass that fuses a chain of\nperfectly-nested dense `ForAll` loops into a single `ForAll` over a collapsed\nivar whose extent is the product of the original extents. This generalizes\nthe OpenMP `collapse(n)` clause to the full CIN scheduling surface and must\nwork for nests of arbitrary depth. The pass is important for dense workloads\nwhose outer loops each individually have too little work for good OpenMP\nload balancing but whose product iteration space is large. Implementation\nsteps: (1) In `src/scorch/compiler/cin.py`, add\n`IndexVarCollapse(IndexVarExpr)` that records the collapsed ivar and an\nordered list of original ivars together with their extents. Add a helper\n`IndexVarCollapse.expand(flat: IndexVar) -> List[IndexVar]` that reconstructs\nthe original ivars from a flat coordinate using division and modulo. (2) In\n`src/scorch/compiler/scheduler.py`, add `Scheduler.collapse(cin, ivars:\nList[IndexVar]) -> CIN` that verifies the ivars correspond to a perfect\nnest over purely-dense levels (reject otherwise), introduces a fresh\ncollapsed ivar, rewrites the nest as a single `ForAll`, and substitutes\n`IndexVarCollapse.expand` into every `TensorAccess` that depends on any\ncollapsed ivar. Preserve `inserted_workspace` and `no_tile_list` state. (3)\nIn `src/scorch/compiler/cin_lowerer.py`, lower the collapsed ForAll so the\ngenerated C++ recovers the original ivars via integer division/modulo at\nloop-body entry. When OpenMP parallelism is enabled, the collapsed outer\nloop carries the `#pragma omp parallel for` rather than each inner loop.\n(4) Generalize `Scheduler.auto_schedule` to apply `collapse` when all of\nthe following hold: the outer N loops are over dense levels, the product\nof their extents exceeds an OpenMP-work threshold, none of the inner\nstatements write back into any of the collapsed dimensions through a\nnon-affine access, and no intervening `Where` clause separates the\ncandidate loops. (5) The collapsed lowering must correctly cooperate with\ncache tiling: if a loop has been tiled, its outer tile loop may still be\ncollapsed with other outer ivars, but its inner tile loop must not. Write\ncomprehensive tests covering: numerical correctness for dense elementwise\noperations and dense reductions on tensors of ranks 2 and 4; generated\nC++ contains exactly one collapsed outer loop instead of N nested loops;\nrejection when any collapsed ivar touches a sparse level; correct\ndivision/modulo recovery of original ivars in generated code; rejection\nof collapse across `Where` boundaries; and composition with cache tiling\nso that only the outer tile loops collapse.\n\nNote: Do not run the test suite to verify your implementation. You may run build commands (e.g., `pip install -e .`) to check that the code compiles, but do not execute any pytest commands. The full test suite will be run separately during evaluation.",
"files": [],
"task_type": "feature",
"categories": [
"Scheduler/Loop transformations/Reorder & restructure"
]
},
{
"instance_id": "bobbyyyan__scorch-feature_broadcast_specialize",
"repo_id": "bobbyyyan__scorch",
"repo_url": "https://github.com/bobbyyyan/scorch.git",
"base_commit": "92fb190",
"language": "python",
"setup_commands": [],
"test_command": "/testbed/run_tests.sh",
"test_timeout": 3000,
"refactor_type": "",
"description": "Add a broadcast-dimension specialization pass that detects, at compile time,\ndense operand dimensions of size 1 (broadcast axes) and emits a specialized\nkernel that eliminates the broadcast loops entirely, replacing broadcast\naccesses with a scalar load hoisted out of the inner nest. The pass must\nhandle arbitrary rank (the shape `[B, 1, H, W]` broadcast against `[B, C, H,\nW]` is a 4D example used for illustration only; the pass must work for any\nrank and any combination of broadcast axes). Implementation steps: (1) In\n`src/scorch/compiler/cin.py`, extend `TensorVar` (line 532) with a\n`broadcast_mask: Optional[List[bool]]` metadata field (one entry per level;\nTrue means size-1/broadcast). Populate this field from\n`TensorFormat`/`TensorIndex` at `ops`-layer entry. (2) In\n`src/scorch/compiler/scheduler.py`, add\n`Scheduler.specialize_broadcasts(cin) -> CIN` that walks every\n`TensorAccess`, detects accesses into any level whose `broadcast_mask` is\nTrue, rewrites the access to project out the broadcast coordinate (since\nit can only take the value 0), and marks the corresponding ivar in the\nsurrounding `ForAll` as eligible for removal. Then it removes the `ForAll`\nloops that no longer bind any non-trivial access. (3) The pass must\npreserve semantics when a broadcast ivar is a reduction ivar: a sum over\na broadcast axis of extent 1 is just the operand; a product likewise. Add\nthe corresponding rewrite rules in the pass. (4) In\n`src/scorch/compiler/cin_lowerer.py`, emit the specialized kernel: any\ntensor whose innermost access was over a broadcast axis now becomes a\nscalar variable hoisted to the outermost scope shared by its consumers,\nnot reloaded on every iteration. (5) Wire the pass into\n`Scheduler.auto_schedule` so it runs before tiling and before loop\ninterchange, because it can dramatically reduce the effective nest depth.\nGuard it behind `specialize_broadcasts: bool = True` on the scheduler\nconfig. (6) Integrate with `ops.matmul`, `ops.einsum`, and the unary/binary\nop dispatchers so that broadcast metadata is propagated from user-level\ncalls (e.g., a batched matmul `A[B, 1, M, K] @ B[B, N, K, P]` sees the\nsecond operand as non-broadcast along the N axis and specializes\naccordingly). Write comprehensive tests covering: numerical correctness\nagainst torch for elementwise ops, reductions, and matmul over ranks 2\nthrough 5 with various broadcast masks; generated C++ omits the\nbroadcast loops; generated C++ hoists the broadcast operand to a scalar\nload; correctness when broadcast axes are the innermost, middle, or\noutermost dimensions; and rejection (no specialization) when shape\ninformation is not available at compile time.\n\nNote: Do not run the test suite to verify your implementation. You may run build commands (e.g., `pip install -e .`) to check that the code compiles, but do not execute any pytest commands. The full test suite will be run separately during evaluation.",
"files": [],
"task_type": "feature",
"categories": [
"Scheduler/Dense passes/Specialization"
]
},
{
"instance_id": "bobbyyyan__scorch-feature_dense_strided_view",
"repo_id": "bobbyyyan__scorch",
"repo_url": "https://github.com/bobbyyyan/scorch.git",
"base_commit": "92fb190",
"language": "python",
"setup_commands": [],
"test_command": "/testbed/run_tests.sh",
"test_timeout": 3000,
"refactor_type": "",
"description": "Introduce a first-class `DenseStridedView` representation in `STensor`\nstorage and propagate it through the full compilation pipeline so that\noperations like slicing, transposition, and broadcasting on dense tensors\nproduce views rather than materialized copies and the downstream CIN sees\nthe view with the correct strides. Today dense slicing and transposition\ngo through `to_dense`/reshape paths that materialize copies; the view\ninfrastructure exists as `TensorStorageView` (used by sparse paths) but\ndense levels do not participate. The pass must work for views over dense\noperands of any rank. Implementation steps: (1) In\n`src/scorch/storage.py`, add `DenseStridedView` that wraps a dense\n`Storage` with `offset: int`, `strides: List[int]` (one per logical level,\nin elements), and `shape: List[int]`. Add helpers\n`DenseStridedView.from_slice(storage, slices)`,\n`DenseStridedView.transpose(perm)`, and\n`DenseStridedView.broadcast_to(shape)`. (2) Extend `STensor` to accept a\n`DenseStridedView` as its `storage` field and teach `to_torch`,\n`to_dense`, and the op-dispatch layer to preserve views when possible and\nto materialize on demand when required (for example, when handing the\ntensor to a sparse op that cannot consume a strided view). (3) In\n`src/scorch/compiler/cin.py`, extend `TensorVar` with an optional\n`strides: Optional[List[IndexExpr]]` field (one per dense level; None\nmeans the default contiguous stride is used). (4) In\n`src/scorch/compiler/cin_lowerer.py`, when emitting dense-level access\nfor a tensor whose `strides` is set, compute the flat offset as\n`sum(strides[l] * ivar_l) + base_offset` instead of the default\ncontiguous layout. Every existing dense-tensor access path must fall\nback to the contiguous case when `strides` is None. (5) Teach\n`ops.einsum`, `ops.matmul`, and the unary/binary op entry points to\nconstruct `DenseStridedView`-backed inputs when their inputs are already\nviews, avoiding a materialization round-trip. (6) Add a `materialize()`\nhelper that forces a view to become contiguous storage when required\n(for instance, before handing a tensor to a sparse compilation path that\ndoes not support strided operands). Write comprehensive tests covering:\ncorrectness of sliced, transposed, and broadcast views under elementwise\nops and matmul for ranks 2, 3, and 4; view fusion (two slices back to\nback produce a single view, not a view of a view); correct lowering of\nnon-unit strides in generated C++ (the access expression must contain\nthe explicit stride multiplication); materialization fallback when a\nsparse op consumes a view; and verification that no intermediate copy\nis made when a view flows directly into a dense op.\n\nNote: Do not run the test suite to verify your implementation. You may run build commands (e.g., `pip install -e .`) to check that the code compiles, but do not execute any pytest commands. The full test suite will be run separately during evaluation.",
"files": [],
"task_type": "feature",
"categories": [
"API/Shape & Layout/Views"
]
},
{
"instance_id": "bobbyyyan__scorch-feature_mixed_precision_accum",
"repo_id": "bobbyyyan__scorch",
"repo_url": "https://github.com/bobbyyyan/scorch.git",
"base_commit": "92fb190",
"language": "python",
"setup_commands": [],
"test_command": "/testbed/run_tests.sh",
"test_timeout": 3000,
"refactor_type": "",
"description": "Add end-to-end mixed-precision accumulator support for dense arithmetic so\nthat `torch.float16` and `torch.bfloat16` operands can flow through the\ncompile pipeline while reductions accumulate in `torch.float32` inside the\ngenerated kernel, with a final cast back to the operand dtype at store\ntime. This is the standard correctness-preserving precision recipe for\ntransformer workloads and must work for dense tensors of any rank - the\nexamples below use GEMM for exposition but the pass must apply to any\ndense reduction (sum, norm, dot, contraction). Implementation steps: (1)\nIn `src/scorch/compiler/llir.py`, add an `accum_dtype: Optional[DataType]`\nfield to `TensorAssign` (CIN) and extend the LLIR `AssignOp` accumulator\nnodes to carry a distinct accumulator type. Add\n`DataType.fp16`/`DataType.bf16` if they are not already present; ensure\n`DataType.from_dtype` covers `torch.float16` and `torch.bfloat16` (extend\n`src/scorch/compiler/llir.py:199` if needed). (2) In\n`src/scorch/compiler/cin.py`, add a `mixed_precision: bool = False`\nattribute on `BinaryOp` / `OpExpr` that flags reduction-like operations\neligible for mixed precision. The ops-layer sets this when operands are\nfp16/bf16 and the op is a reduction. (3) In\n`src/scorch/compiler/cin_lowerer.py`, when `mixed_precision` is set,\nintroduce an explicit accumulator `Var` declared in fp32 before the\nreduction loop, emit casts on each operand read (`fp16 -> fp32` / `bf16 ->\nfp32`), and emit a single cast back to the operand dtype when the\naccumulator is stored to the output at the end of the reduction. All\nintermediate arithmetic must be fp32. (4) Teach the C++ codegen to emit\n`__fp16`/`__bf16` operand types and `float` accumulators and to use the\ncorrect intrinsics/casts (`__fp16_to_float` where available,\n`__bfloat16_to_float32` emulations otherwise); the build must still\ncompile on the existing `python:3.11-slim` toolchain. (5) In `ops.py`,\nwhen the user calls a reduction op on an fp16 or bf16 tensor, set\n`mixed_precision=True` on the emitted CIN so the feature is on by\ndefault; expose `allow_mixed_precision: bool = True` on the op-layer\nentry points for users who want full operand-dtype arithmetic. (6) Make\nsure the pass composes correctly with register blocking (feature_152):\nthe accumulator registers must be fp32 even when the operand registers\nare fp16. Write comprehensive tests covering: numerical correctness of\nfp16/bf16 dense matmul against fp32 torch references for ranks 2, 3,\nand 4 within a documented tolerance; generated C++ declares a `float`\naccumulator and emits per-read casts; opting out via\n`allow_mixed_precision=False` restores full fp16 arithmetic; the mixed-\nprecision path is picked only for reduction ops (not for elementwise\nops); and rejection when the operand dtype is not fp16/bf16.\n\nNote: Do not run the test suite to verify your implementation. You may run build commands (e.g., `pip install -e .`) to check that the code compiles, but do not execute any pytest commands. The full test suite will be run separately during evaluation.",
"files": [],
"task_type": "feature",
"categories": [
"API/Type System/Promotion & mixed precision",
"Codegen/Vectorization"
]
},
{
"instance_id": "bobbyyyan__scorch-feature_stencil_halo_tiling",
"repo_id": "bobbyyyan__scorch",
"repo_url": "https://github.com/bobbyyyan/scorch.git",
"base_commit": "92fb190",
"language": "python",
"setup_commands": [],
"test_command": "/testbed/run_tests.sh",
"test_timeout": 3000,
"refactor_type": "",
"description": "Implement a stencil detection and halo-tiling pass for dense tensor\ncomputations whose access patterns reference shifted neighbors of a\ncentral index. The pass identifies CIN expressions whose dense\n`TensorAccess`es use `IndexVarAdd` / affine index expressions of the form\n`i + k` (for small constants k), classifies them as stencils, and emits a\ntiled kernel that loads a ghost-region block into a local workspace and\nruns the stencil body over the interior. The pass must handle stencils\nover tensors of arbitrary rank (1D finite-difference, 2D Laplacian,\n3D/4D image or volume stencils, and higher). Implementation steps: (1)\nIn `src/scorch/compiler/scheduler.py`, add\n`Scheduler.detect_stencil(cin) -> Optional[StencilShape]` where\n`StencilShape` records, per ivar, the min and max offsets observed across\nall `TensorAccess`es in the CIN. Return `None` if the CIN is not a pure\nstencil (e.g., if any access uses a non-affine ivar). (2) Add\n`Scheduler.halo_tile(cin, tile_sizes: Dict[IndexVar, int]) -> CIN` that\nuses the detected stencil shape to size a ghost-padded workspace, insert\na copy-in loop that populates the workspace with `tile_size + halo`\nelements along each ivar, and rewrite the stencil body to index into the\nworkspace. The boundary condition (zero-fill vs replicate vs wrap) must\nbe selectable via a `boundary: str = \"zero\"` parameter. (3) Extend\n`src/scorch/compiler/cin_lowerer.py` so the halo copy-in is emitted as a\ndedicated loop nest that runs once per tile; the stencil inner loop\noperates entirely on the workspace so that every neighbor access is\ncache-local. (4) In `src/scorch/compiler/cin.py`, add a\n`StencilHalo(IndexStmt)` node that wraps the copy-in, the interior body,\nand the halo metadata; this makes the transformation visible to\ndownstream passes rather than hidden in codegen. (5) Wire the pass into\n`Scheduler.auto_schedule` so stencils are detected and halo-tiled before\ngeneral tiling runs. Guard behind `enable_stencil_tiling: bool = True`.\n(6) Expose a user-visible `ops.stencil(inputs, offsets, compute_fn)`\nconvenience API in `src/scorch/ops.py` that builds a stencil CIN\ndirectly. Write comprehensive tests covering: numerical correctness of\na 1D 3-point, a 2D 5-point Laplacian, and a 3D 7-point stencil against\na torch reference (the 2D and 3D cases are illustrative; no rank\nassumption should appear in the pass itself); generated C++ contains a\ndedicated halo copy-in loop and the inner loop indexes the workspace;\ncorrect handling of zero, replicate, and wrap boundary conditions;\nrejection (no halo tiling) when the CIN is not a pure stencil; and\ncomposition with register blocking such that the register kernel reads\nfrom the halo workspace rather than the original operand.\n\nNote: Do not run the test suite to verify your implementation. You may run build commands (e.g., `pip install -e .`) to check that the code compiles, but do not execute any pytest commands. The full test suite will be run separately during evaluation.",
"files": [],
"task_type": "feature",
"categories": [
"Scheduler/Loop transformations/Tiling"
]
},
{
"instance_id": "bobbyyyan__scorch-feature_blas_pattern_match",
"repo_id": "bobbyyyan__scorch",
"repo_url": "https://github.com/bobbyyyan/scorch.git",
"base_commit": "92fb190",
"language": "python",
"setup_commands": [],
"test_command": "/testbed/run_tests.sh",
"test_timeout": 3000,
"refactor_type": "",
"description": "Implement a pattern-matching pass that detects dense subtrees of a CIN\ncorresponding to standard BLAS primitives (GEMM, GEMV, GER, SYRK, TRMM,\nDOT, AXPY) and rewrites them to emit direct calls into a detected BLAS\nlibrary (OpenBLAS or MKL) instead of generating C++ loop nests. The pass\nmust handle arbitrary batch ranks: a batched matmul of shape\n`[B1, B2, M, K] @ [B1, B2, K, N]` should lower to a sequence of (or a\nbatched) GEMM calls, not a monolithic fallback to the generic loop-nest\npath. The inner operation is the BLAS primitive; everything outside is\nleft to the existing scheduler. Implementation steps: (1) Add\n`src/scorch/compiler/patterns.py` with `detect_gemm(cin) -> Optional[\nGEMMMatch]` that walks a CIN and returns a `GEMMMatch` whenever the\nsubtree matches `C[..., i, j] += A[..., i, k] * B[..., k, j]` (and\nanalogous patterns for GEMV, GER, SYRK, TRMM, DOT, and AXPY). The match\nmust record the batch dimensions, the reduction dimension, the operand\nstrides, transpose flags for A and B, and alpha/beta scalars if\nexpressible. (2) Add `src/scorch/compiler/blas_backend.py` with\n`BlasBackend` that exposes `emit_gemm(match) -> llir.Stmt` etc.; the\nemitted LLIR must be a `FunctionCall` to `cblas_sgemm` /\n`cblas_dgemm` / the batched variants as appropriate. Wrap batch loops\nthat iterate over the leading batch dimensions around the per-batch\ncall; where CBLAS-batched extensions exist\n(`cblas_sgemm_batch_strided`), emit those directly. (3) Add a\nlink-time detector: `build.py`/`setup.py` probes for OpenBLAS or MKL\nand records the chosen backend in a config module\n(`src/scorch/_blas_config.py`). If neither is present, the pass no-ops\nand the existing loop-nest path is used. (4) In `src/scorch/ops.py`,\nwhen the matmul dispatcher sees dense-dense inputs of any rank, hand\nthe CIN through the pattern pass before the generic scheduler. If a\nmatch is produced, skip scheduling entirely and lower the match\ndirectly. (5) Ensure correctness of transpose flags: the pass must\nrecognize both contiguous and strided dense layouts, and when a\n`DenseStridedView` (feature_156) indicates non-contiguous storage the\npass sets the appropriate `CblasTrans` flag instead of materializing.\n(6) Handle alpha/beta scaling: detect `C = alpha * A @ B + beta * C`\npatterns (with `alpha` and `beta` statically known or runtime-scalar)\nand pass the correct arguments. Write comprehensive tests covering:\ncorrectness of dense matmul, batched matmul (rank 3 and rank 4),\nmatvec, outer product, and syrk against torch references; generated\nC++ contains a `cblas_*` call and no matmul loop nest; behavioral\nfallback to the generic path when no BLAS library is detected;\ncorrect transpose handling for non-contiguous views; correct alpha/\nbeta handling; and correct behavior when the BLAS pattern is nested\ninside a larger CIN that also contains non-GEMM computation\n(matmul-then-ReLU style).\n\nNote: Do not run the test suite to verify your implementation. You may run build commands (e.g., `pip install -e .`) to check that the code compiles, but do not execute any pytest commands. The full test suite will be run separately during evaluation.",
"files": [],
"task_type": "feature",
"categories": [
"Scheduler/Dense passes/Pattern match"
]
},
{
"instance_id": "bobbyyyan__scorch-feature_dense_producer_consumer_fusion",
"repo_id": "bobbyyyan__scorch",
"repo_url": "https://github.com/bobbyyyan/scorch.git",
"base_commit": "92fb190",
"language": "python",
"setup_commands": [],
"test_command": "/testbed/run_tests.sh",
"test_timeout": 3000,
"refactor_type": "",
"description": "Add a dense producer-consumer fusion pass that fuses a dense CIN\nproducer into its immediate dense consumer, sharing an on-chip workspace\nso the producer's output is consumed directly in the consumer's inner\nloop without ever materializing a full intermediate tensor. This is the\ndense counterpart of the existing sparse workspace insertion\n(`Scheduler.insert_workspace`, `src/scorch/compiler/scheduler.py:1098`)\nand must work for producer and consumer CINs of arbitrary rank.\nImplementation steps: (1) Add `Scheduler.fuse_dense(producer: CIN,\nconsumer: CIN) -> CIN` that verifies the producer's output tensor is\nthe consumer's sole input-writer, that the producer and consumer share a\ncompatible loop structure over the common ivars, and that the producer\nhas no other downstream reader. On success, returns a fused CIN whose\nouter loop nest is the union of the producer and consumer loop\nstructures and whose inner body runs the producer's compute followed by\nthe consumer's compute on the same per-iteration element without going\nthrough global memory. (2) In `src/scorch/compiler/cin.py`, add a\n`FusedProducer(IndexStmt)` node that carries both bodies and the shared\nworkspace. Workspace lifetime is a single inner iteration - add a\n`transient: bool = True` flag on `Workspace` to signal this. (3) In\n`src/scorch/compiler/cin_lowerer.py`, lower the transient workspace to\na stack-allocated scalar (not a heap buffer) whenever the consumer\nconsumes each producer output exactly once along the shared ivar; if\nthe consumer reads the producer output K times within the same\niteration, emit a tile-resident buffer of size K. (4) Handle the case\nwhere the producer's output shape is a proper prefix of the consumer's\ninput shape (broadcasting): the fused loop must run the producer\noutside the broadcast dimension so its result is reused across the\nbroadcast axis rather than recomputed. (5) Wire `auto_schedule` to\nconsider fusion whenever two CINs are composed at the `ops` layer\nwithout an intervening user-visible tensor (for example, `relu(matmul(\nA, B))` should fuse the ReLU into the matmul epilogue without\nmaterializing the matmul's full output). Guard behind\n`enable_dense_fusion: bool = True`. (6) Ensure the pass composes\ncorrectly with BLAS dispatch (feature_159): when the producer is a\nBLAS-matched GEMM, emit the GEMM call followed by the fused consumer\nepilogue inside the same outer batch loop, respecting BLAS's alpha/\nbeta if the consumer is a simple linear combination. Write\ncomprehensive tests covering: correctness of `relu(matmul(A, B))`,\n`matmul(A, B) + bias` with broadcast bias, and `layer_norm(matmul(A,\nB))` for ranks 2 and 3; generated C++ does not declare a full\nintermediate tensor; the scalar/tile workspace is declared at the\ncorrect scope; fusion is correctly declined when the producer has\nmultiple consumers; and composition with BLAS dispatch emits a GEMM\ncall plus an inlined epilogue.\n\nNote: Do not run the test suite to verify your implementation. You may run build commands (e.g., `pip install -e .`) to check that the code compiles, but do not execute any pytest commands. The full test suite will be run separately during evaluation.",
"files": [],
"task_type": "feature",
"categories": [
"Scheduler/Loop transformations/Fusion"
]
},
{
"instance_id": "bobbyyyan__scorch-feature_dataflow_selection",
"repo_id": "bobbyyyan__scorch",
"repo_url": "https://github.com/bobbyyyan/scorch.git",
"base_commit": "92fb190",
"language": "python",
"setup_commands": [],
"test_command": "/testbed/run_tests.sh",
"test_timeout": 3000,
"refactor_type": "",
"description": "Add a dataflow-selection scheduler pass that, for dense contractions,\nchooses between output-stationary, weight-stationary, and input-stationary\ndataflows and emits a specialized kernel per choice. In output-stationary\ndataflow the output tile is held in registers across the reduction in K;\nin weight-stationary, one input tile is held across the outputs; in input-\nstationary, the other input is. Each choice has different arithmetic\nintensity characteristics and different cache-reuse patterns. The pass\nmust work for dense contractions of arbitrary rank - the 2D GEMM\ndescription below is illustrative only. Implementation steps: (1) In\n`src/scorch/compiler/scheduler.py`, add a `Dataflow` enum\n(`OUTPUT_STATIONARY`, `WEIGHT_STATIONARY`, `INPUT_STATIONARY`) and\n`Scheduler.set_dataflow(cin, dataflow: Dataflow) -> CIN` that rewrites\nthe loop order and the register-tile direction to match the choice. In\nthe output-stationary case the reduction ivar(s) are innermost and the\nregister tile holds the output; in weight-stationary the reduction ivar\nis interchanged to sit outside one of the operand ivars and the register\ntile holds that operand; likewise for input-stationary. (2) Add a\ndataflow cost model\n`Scheduler._dataflow_cost(cin, dataflow, _CostModelConstants) -> float`\nthat estimates the arithmetic intensity and register-footprint of each\nchoice and picks the minimum-cost one when `auto_schedule` is invoked\nwithout an explicit dataflow override. (3) The pass must compose\ncorrectly with register blocking (feature_152): in each dataflow the\nregister tile is formed around a different set of ivars, so\n`add_register_block` must receive the dataflow-appropriate ivar list.\n(4) The pass must compose with BLAS dispatch (feature_159): BLAS\nlibraries internally choose their own dataflow, so if a BLAS match\noccurs the dataflow pass is skipped for that subtree and the BLAS call\nis used unchanged. (5) Expose a user-override\n`ops.matmul(A, B, dataflow=\"output_stationary\")` so users can pin a\ndataflow for benchmarking. The user-level kwarg is propagated through\n`CINLowerer` into `Scheduler.set_dataflow`. (6) Generalize to higher-\nrank contractions: for a `[..., i, j, k]` rank-3 contraction with\nreduction over `k`, output-stationary holds the `(i, j)` output tile;\nfor a rank-4 contraction over two reduction axes, output-stationary\nholds the non-reduction tile regardless of operand rank. Write\ncomprehensive tests covering: correctness across ranks 2, 3, and 4\nagainst torch for each of the three dataflows; generated C++ differs\nin loop order and register declaration between dataflows; the auto-\nchosen dataflow is the predicted minimum-cost one; user override wins\nover the cost model; BLAS match skips the dataflow pass; and\ncomposition with register blocking yields the expected tile footprint\nfor each dataflow.\n\nNote: Do not run the test suite to verify your implementation. You may run build commands (e.g., `pip install -e .`) to check that the code compiles, but do not execute any pytest commands. The full test suite will be run separately during evaluation.",
"files": [],
"task_type": "feature",
"categories": [
"Scheduler/Dense passes/Specialization"
]
},
{
"instance_id": "bobbyyyan__scorch-feature_broadcast_sparse_aware",
"repo_id": "bobbyyyan__scorch",
"repo_url": "https://github.com/bobbyyyan/scorch.git",
"base_commit": "92fb190",
"language": "python",
"setup_commands": [],
"test_command": "/testbed/run_tests.sh",
"test_timeout": 3000,
"refactor_type": "",
"description": "Make broadcasting work consistently for every elementwise `STensor` operation, including sparse-sparse, sparse-dense, dense-sparse, scalar, and torch scalar operands. The public behavior should match PyTorch broadcasting, but the implementation must stay sparse-aware: dimensions of size 1 are broadcast logically and should not force dense materialization unless the result's implicit value changes. This must work for tensors of arbitrary rank; any 2D examples in tests are illustrative only. Implementation details: add a broadcast-shape resolver in `src/scorch/ops.py`, represent broadcasted axes in `TensorVar` metadata, and teach `CINLowerer`/`ModeIterator` to map a logical output ivar to constant 0 for broadcasted operand axes while preserving mode order. Format inference must choose dense only when implicit zeros become explicit nonzeros; otherwise it should preserve the sparse operand's structure when sound. Support negative axis normalization, mixed mode orders, and chained broadcasts such as rank-4 sparse plus rank-1 dense. Write tests for addition, subtraction, multiplication, division, comparisons, and unary-after-binary expressions across ranks 1 through 5; include cases with disjoint sparse patterns, scalar left and right operands, non-default mode_order, and generated C++ checks proving broadcast axes are not iterated as full sparse levels.\n\nNote: Do not run the test suite to verify your implementation. You may run build commands (e.g., `pip install -e .`) to check that the code compiles, but do not execute any pytest commands. The full test suite will be run separately during evaluation.",
"files": [],
"task_type": "feature",
"categories": [
"API/Constructors & I/O/Broadcasting"
]
},
{
"instance_id": "bobbyyyan__scorch-feature_lazy_permute",
"repo_id": "bobbyyyan__scorch",
"repo_url": "https://github.com/bobbyyyan/scorch.git",
"base_commit": "92fb190",
"language": "python",
"setup_commands": [],
"test_command": "/testbed/run_tests.sh",
"test_timeout": 3000,
"refactor_type": "",
"description": "Add lazy `permute`, `moveaxis`, and `movedim` support for `STensor` so dimension reordering updates logical metadata without physically rewriting storage until an operation truly requires it. The hard requirement is that every existing op consuming a permuted tensor must see the correct logical indices, shapes, formats, and mode_order for tensors of arbitrary rank. Implement a `TensorStorageView` subclass that records a logical-to-physical axis permutation, extend `TensorIndex.mode_order` handling so nested permutations compose to one normalized mapping, and update `TensorVar`/`TensorAccess` construction to emit physical level accesses from logical index variables. Materialization should be explicit via `.contiguous()` or automatic only at unsupported boundaries. Tests should cover rank-2 through rank-5 tensors, sparse and dense formats, double permutation cancellation, permuted operands in `einsum`, `matmul`, elementwise ops, reductions, and conversions to torch. Include tests where the output format's mode_order differs from both inputs, where one operand is permuted and broadcast, and where generated C++ accesses physical coordinate arrays in the correct permuted order.\n\nNote: Do not run the test suite to verify your implementation. You may run build commands (e.g., `pip install -e .`) to check that the code compiles, but do not execute any pytest commands. The full test suite will be run separately during evaluation.",
"files": [],
"task_type": "feature",
"categories": [
"API/Shape & Layout/Transpose & permute"
]
},
{
"instance_id": "bobbyyyan__scorch-feature_dtype_promotion",
"repo_id": "bobbyyyan__scorch",
"repo_url": "https://github.com/bobbyyyan/scorch.git",
"base_commit": "92fb190",
"language": "python",
"setup_commands": [],
"test_command": "/testbed/run_tests.sh",
"test_timeout": 3000,
"refactor_type": "",
"description": "Implement PyTorch-style dtype promotion for all scalar, dense, and sparse binary operations. The result dtype should follow `torch.result_type` for tensor-tensor, tensor-scalar, and scalar-tensor cases, including bool, integer, float32, float64, and complex dtypes that Scorch supports or needs to add. Promotion must flow through `STensor.dtype`, `TensorVar.dtype`, LLIR `DataType`, generated pointer types, cvector types, workspace accumulator types, and result assembly. This is rank-independent and must work for arbitrary-dimensional tensors. Be careful that reduction accumulators may use a different dtype from the output dtype, while elementwise ops should not. Tests should cover all binary arithmetic and comparison ops, scalar left and right operands, sparse-sparse and sparse-dense combinations, rank-0 scalar-like tensors through rank-5 tensors, generated C++ type declarations, format conversion after promotion, and parity with torch dense references. Include negative tests for unsupported promotions that must raise clear errors instead of producing invalid C++.\n\nNote: Do not run the test suite to verify your implementation. You may run build commands (e.g., `pip install -e .`) to check that the code compiles, but do not execute any pytest commands. The full test suite will be run separately during evaluation.",
"files": [],
"task_type": "feature",
"categories": [
"API/Type System/Promotion & mixed precision"
]
},
{
"instance_id": "bobbyyyan__scorch-feature_pad_crop_nd",
"repo_id": "bobbyyyan__scorch",
"repo_url": "https://github.com/bobbyyyan/scorch.git",
"base_commit": "92fb190",
"language": "python",
"setup_commands": [],
"test_command": "/testbed/run_tests.sh",
"test_timeout": 3000,
"refactor_type": "",
"description": "Implement `pad` and `crop` for sparse and dense STensors with PyTorch-like padding specifications generalized to arbitrary rank. Padding with zero should rewrite shapes and coordinates without touching values; padding with a nonzero constant may require densification unless the format supports an explicit fill value. Cropping should be a view when possible and should preserve sparse storage by shifting coordinates. Support asymmetric padding, negative padding as cropping, batch dimensions, non-default mode_order, and composition with slicing/permute views. Update format inference so zero padding preserves sparse formats and nonzero padding records a clear densification reason. Tests should cover ranks 1 through 5, sparse and dense inputs, zero and nonzero pad values, negative padding, crop after permute, pad before contraction, empty result shapes, generated C++ for padded operands flowing into elementwise ops, and dense parity with `torch.nn.functional.pad` or equivalent indexing references.\n\nNote: Do not run the test suite to verify your implementation. You may run build commands (e.g., `pip install -e .`) to check that the code compiles, but do not execute any pytest commands. The full test suite will be run separately during evaluation.",
"files": [],
"task_type": "feature",
"categories": [
"API/Shape & Layout/Concat & pad"
]
},
{
"instance_id": "bobbyyyan__scorch-feature_parallel_output_merge",
"repo_id": "bobbyyyan__scorch",
"repo_url": "https://github.com/bobbyyyan/scorch.git",
"base_commit": "92fb190",
"language": "python",
"setup_commands": [],
"test_command": "/testbed/run_tests.sh",
"test_timeout": 3000,
"refactor_type": "",
"description": "Implement deterministic parallel sparse output assembly using thread-local coordinate buffers followed by a stable merge. When generated kernels run with OpenMP, every thread should append candidate output coordinates and values into a private buffer, then a final merge should sort by logical output coordinates, coalesce duplicates with the operation's reduction operator, and build the requested output format. This should replace unsafe shared `push_back` patterns for coordinate outputs while still supporting serial execution. The implementation must work for arbitrary-rank outputs, reductions that write duplicate coordinates, non-default mode_order, empty outputs, and mixed dtype values. Add a rank-generic `parallel_coo_builder` in `csrc/header.cpp`/`.h`, extend LLIR/codegen to use it for parallel sparse outputs, and ensure the merge is deterministic independent of thread count. Tests should run the same SpMM, scatter_reduce, elementwise union, 3D contraction, and 4D einsum with 1, 2, and 4 threads; verify identical indices and values; check duplicate coalescing; inspect generated C++ for thread-local builders; and confirm no data races under repeated runs.\n\nNote: Do not run the test suite to verify your implementation. You may run build commands (e.g., `pip install -e .`) to check that the code compiles, but do not execute any pytest commands. The full test suite will be run separately during evaluation.",
"files": [],
"task_type": "feature",
"categories": [
"Codegen/Parallelism"
]
},
{
"instance_id": "bobbyyyan__scorch-feature_argmin_argmax",
"repo_id": "bobbyyyan__scorch",
"repo_url": "https://github.com/bobbyyyan/scorch.git",
"base_commit": "92fb190",
"language": "python",
"setup_commands": [],
"test_command": "/testbed/run_tests.sh",
"test_timeout": 3000,
"refactor_type": "",
"description": "Implement `STensor.argmin(dim=None, keepdim=False)`, `STensor.argmax(dim=None, keepdim=False)`, and the corresponding `torch.argmin` / `torch.argmax` for STensors of any rank and any format. With `dim=None`, return a 0-d int64 STensor giving the flat index of the global min/max in the dense materialization. With a `dim` argument (int, may be negative), reduce only that axis to produce an int64 STensor of one lower rank (or the same rank when `keepdim=True`). Tie-breaking matches PyTorch: the smallest index wins. The result must match `torch.argmin/argmax(input.to_dense(), dim=...)` exactly on every supported input. Subtleties: implicit fill values participate in the reduction (a sparse row `[3.0, 0, 0]` along that axis has argmin=1, the first implicit zero, not argmin=0); when `dim=None` the flat-index calculation must follow the canonical mode_order, not the physical storage order; an empty tensor along the reduced dim raises matching PyTorch's error. Tests should cover ranks 1..4, every format, both `dim=None` and explicit dim (positive and negative), `keepdim` true/false, ties between explicit and implicit values, an all-zero tensor, and parity with `torch.argmin`/`torch.argmax` on every case. Rank examples are illustrative.\n\nNote: Do not run the test suite to verify your implementation. You may run build commands (e.g., `pip install -e .`) to check that the code compiles, but do not execute any pytest commands. The full test suite will be run separately during evaluation.",
"files": [],
"task_type": "feature",
"categories": [
"API/Reductions & Scans/Argmax-style"
]
},
{
"instance_id": "bobbyyyan__scorch-feature_async_jit_compile",
"repo_id": "bobbyyyan__scorch",
"repo_url": "https://github.com/bobbyyyan/scorch.git",
"base_commit": "92fb190",
"language": "python",
"setup_commands": [],
"test_command": "/testbed/run_tests.sh",
"test_timeout": 3000,
"refactor_type": "",
"description": "Make JIT kernel compilation non-blocking. `_load_kernel(name, sources, ...)` in `src/scorch/utils.py` currently blocks the calling thread until `torch.utils.cpp_extension.load_inline(...)` finishes - a 5-30 second wait that dominates first-use latency for any new CIN. Add an `_load_kernel_async(...)` variant that returns immediately with a future-like wrapper, kicks off the build in a background thread, and blocks only on the first `module.evaluate(...)` call. The wrapper supports the same arguments as `_load_kernel` and integrates with `_kernel_cache` (a hit returns the wrapper synchronously; a miss starts a single compile thread per unique source set, deduping concurrent submissions for the same kernel name). Compile errors must surface at first-use time, not be silently swallowed. The wrapper must be picklable in the multi-process kernel-cache case so that pre-compiled futures can be transferred. Public API additions are limited to the new variant and an opt-in `scorch.set_async_compile(True)` toggle. Tests should cover repeated submissions of the same kernel from multiple threads (only one compile fires), a successful compile, a deliberately broken source that surfaces the C++ error on first use, kernel-cache hits during in-flight compiles, two concurrent calls that block on the same future correctly, and verify that the wall-clock time to first-use is bounded by the slowest concurrent compile. Rank examples are illustrative.\n\nNote: Do not run the test suite to verify your implementation. You may run build commands (e.g., `pip install -e .`) to check that the code compiles, but do not execute any pytest commands. The full test suite will be run separately during evaluation.",
"files": [],
"task_type": "feature",
"categories": [
"Runtime/Caching & dispatch"
]
},
{
"instance_id": "bobbyyyan__scorch-feature_torch_meta_tensor",
"repo_id": "bobbyyyan__scorch",
"repo_url": "https://github.com/bobbyyyan/scorch.git",
"base_commit": "92fb190",
"language": "python",
"setup_commands": [],
"test_command": "/testbed/run_tests.sh",
"test_timeout": 3000,
"refactor_type": "",
"description": "Add support for STensors created with `device='meta'`, where shape, dtype, format, and mode_order are tracked but values and mode_indices are not allocated. Used for shape/format inference without computation. Add a `device` parameter to `STensor.__init__` (default `'cpu'`); when `device == 'meta'`, the `_storage`'s value and mode_indices arrays are `None` and any access to them raises a `RuntimeError`. Every public op (`__add__`, `__matmul__`, `einsum`, `change_mode_order`, `to_format`, `to_sparse`, `to_dense`, slicing, `narrow`, `select`, etc.) must check for meta inputs and return a meta result with the correct output shape and format inferred via `infer_output_format` - without invoking any C++ kernel. Mixing meta and non-meta inputs in a single op raises a clear error. The op must work for any rank and any format. Tests should cover meta-tensor construction, `+`/`*`/`@` on meta inputs of ranks 1..4 across multiple formats, `einsum` between meta tensors of various rank patterns, format conversions on a meta tensor, an attempted access to `meta.values` (must raise), mixing meta and CPU inputs (must raise), and parity with the corresponding `torch.empty(..., device='meta')` shape inference where applicable. Rank examples are illustrative.\n\nNote: Do not run the test suite to verify your implementation. You may run build commands (e.g., `pip install -e .`) to check that the code compiles, but do not execute any pytest commands. The full test suite will be run separately during evaluation.",
"files": [],
"task_type": "feature",
"categories": [
"API/Constructors & I/O/Torch dispatch"
]
},
{
"instance_id": "bobbyyyan__scorch-feature_strided_dense_zerocopy",
"repo_id": "bobbyyyan__scorch",
"repo_url": "https://github.com/bobbyyyan/scorch.git",
"base_commit": "92fb190",
"language": "python",
"setup_commands": [],
"test_command": "/testbed/run_tests.sh",
"test_timeout": 3000,
"refactor_type": "",
"description": "Add a zero-copy fast path to `STensor.from_torch` for non-contiguous (strided) dense torch tensors. Currently `from_torch` calls `.contiguous()` which copies; the new path retains the original storage and stride information. The optimization must work for any rank. Per-axis `stride` metadata is added to `TensorStorage` (defaulting to the contiguous strides); the codegen multiplies indices by stride at every dimension. `to_torch` round-trips correctly (returns a strided view of the original storage with the original strides). The kernel cache key must include strides - a strided kernel is not interchangeable with a contiguous one of the same shape. Subtleties: PyTorch strides may be negative (reversed view), zero (broadcast view), or arbitrary positive values; broadcast and contiguous strides must both work. Operations that previously assumed contiguous storage (memset zero-init, `cvector` length-derivation, indexed reductions) must consult the new stride field. Tests should cover a transposed 2D tensor, a strided 3D tensor with one stride zero (broadcast-style), a 4D tensor with negative strides, round-trip via `to_torch`/`from_torch`, parity with `torch.add`/`torch.mul`/`torch.matmul` on the equivalent contiguous tensor, and a negative test that an unsupported stride pattern raises clearly rather than silently densifying. Rank examples are illustrative.\n\nNote: Do not run the test suite to verify your implementation. You may run build commands (e.g., `pip install -e .`) to check that the code compiles, but do not execute any pytest commands. The full test suite will be run separately during evaluation.",
"files": [],
"task_type": "feature",
"categories": [
"API/Constructors & I/O/Torch dispatch"
]
},
{
"instance_id": "bobbyyyan__scorch-feature_threadlocal_dispatch_cache",
"repo_id": "bobbyyyan__scorch",
"repo_url": "https://github.com/bobbyyyan/scorch.git",
"base_commit": "92fb190",
"language": "python",
"setup_commands": [],
"test_command": "/testbed/run_tests.sh",
"test_timeout": 3000,
"refactor_type": "",
"description": "Make the global `_einsum_dispatch_cache` and `_kernel_cache` in `src/scorch/ops.py` and `src/scorch/utils.py` safe under concurrent calls from multiple Python threads without sacrificing the cache-hit speedup. The current code reads/writes both dicts without synchronization, which has caused intermittent KeyError and partial-update issues under multi-threaded inference. The fix must be a lock-free or copy-on-write pattern (or a fine-grained sharded-lock pattern) - a single coarse lock around every cache access is unacceptable as it serializes the hot path. The fix must preserve the existing API: callers do not change. The dispatch cache must work correctly when two threads concurrently submit the same expression for the first time (only one compile fires; both threads observe the same compiled module). The kernel cache must do the same. Optionally add `scorch.cache_stats()` returning per-thread hit/miss counts. Tests should cover concurrent submissions of the same einsum from N threads (only one compile), N threads with all-different einsums (each gets compiled exactly once), a stress test of 1000 mixed hits/misses across 16 threads, correctness parity with the single-threaded baseline on every case, and verify that `cache_stats` totals match the underlying counts. Rank examples are illustrative.\n\nNote: Do not run the test suite to verify your implementation. You may run build commands (e.g., `pip install -e .`) to check that the code compiles, but do not execute any pytest commands. The full test suite will be run separately during evaluation.",
"files": [],
"task_type": "feature",
"categories": [
"Runtime/Caching & dispatch"
]
},
{
"instance_id": "bobbyyyan__scorch-feature_block_iter",
"repo_id": "bobbyyyan__scorch",
"repo_url": "https://github.com/bobbyyyan/scorch.git",
"base_commit": "92fb190",
"language": "python",
"setup_commands": [],
"test_command": "/testbed/run_tests.sh",
"test_timeout": 3000,
"refactor_type": "",
"description": "Add `STensor.iter_blocks(block_shape)` that yields STensor blocks of the input, of size `block_shape`, walking the input in canonical mode_order. Each block is itself an STensor with the same dtype and a format chosen via `infer_output_format`; the last block along each axis may be smaller than `block_shape[d]`. `block_shape` is a tuple with one entry per input dimension; an entry of `-1` means 'do not chunk this axis' (yield the full slice along that axis). The iterator must work for any input rank, any format, and any block_shape. For sparse formats, blocks should be views into the input's coordinate arrays (no copy) when possible - for CSR with row-blocking, sub-`crow_indices` are derived; for COO, coordinate columns are filtered per block. For dense formats, blocks may be views via the existing `narrow` path. The 2D row-blocking example is illustrative; rank-N inputs with arbitrary block_shape patterns must work. Tests should cover ranks 1..4, every format, blocks that exactly tile the input vs blocks with a remainder along one or more axes, a single `-1` axis, all `-1` axes (yields the whole input once), an empty input (yields one empty block), and verify that reassembling the blocks via `torch.cat`/`torch.stack` recovers the original tensor exactly. Rank examples are illustrative.\n\nNote: Do not run the test suite to verify your implementation. You may run build commands (e.g., `pip install -e .`) to check that the code compiles, but do not execute any pytest commands. The full test suite will be run separately during evaluation.",
"files": [],
"task_type": "feature",
"categories": [
"API/Shape & Layout/Views"
]
},
{
"instance_id": "bobbyyyan__scorch-feature_grad_through_format",
"repo_id": "bobbyyyan__scorch",
"repo_url": "https://github.com/bobbyyyan/scorch.git",
"base_commit": "92fb190",
"language": "python",
"setup_commands": [],
"test_command": "/testbed/run_tests.sh",
"test_timeout": 3000,
"refactor_type": "",
"description": "Make format-conversion operations differentiable so gradients flow back to the source format and mode_order. Currently `STensor.change_mode_order`, `to_format`, `to_sparse`, `to_dense`, `from_torch`, and `to_torch` are not registered with autograd; gradient computations through these ops silently zero out. Wrap each as a `torch.autograd.Function` whose backward reverses the format/mode_order change and forwards the upstream gradient. The wrappers must work for STensors of any rank and any format. Subtleties: `to_dense` backward must filter the dense gradient through the original sparsity pattern (only stored coordinates receive gradient; implicit positions get nothing); `to_sparse` backward must densify the sparse gradient to the original dense shape; `change_mode_order` backward applies the inverse permutation; chaining several conversions must produce a gradient identical to PyTorch's `torch.autograd.grad` on the equivalent dense ops. The wrappers must compose correctly with `feature_autograd`'s element-wise/matmul wrappers (so a chain like `STensor -> to_dense -> torch.matmul -> some_loss` differentiates end-to-end). Tests should cover gradients of `to_dense`, `to_sparse`, `change_mode_order`, `to_format`, and chains of two/three conversions, ranks 1..4, several format combinations, finite-difference parity vs PyTorch's dense reference, and an interaction test with `feature_autograd`'s `SparseAddFunction`/`SparseMatMulFunction`. Rank examples are illustrative.\n\nNote: Do not run the test suite to verify your implementation. You may run build commands (e.g., `pip install -e .`) to check that the code compiles, but do not execute any pytest commands. The full test suite will be run separately during evaluation.",
"files": [],
"task_type": "feature",
"categories": [
"API/ML Primitives/Autograd"
]
},
{
"instance_id": "bobbyyyan__scorch-feature_persistent_workspace_buffer",
"repo_id": "bobbyyyan__scorch",
"repo_url": "https://github.com/bobbyyyan/scorch.git",
"base_commit": "92fb190",
"language": "python",
"setup_commands": [],
"test_command": "/testbed/run_tests.sh",
"test_timeout": 3000,
"refactor_type": "",
"description": "Optimize repeated kernel invocations by reusing per-module workspace buffers across calls. Currently every call to a compiled kernel that uses a CIN `Workspace` allocates a fresh buffer via `malloc`/`free` inside the C++ body; in tight inference loops this dominates execution time. Add a per-module persistent workspace buffer: allocated lazily on first call, grown to a high-water mark when a later call requires more space, freed when the module is evicted from `_kernel_cache`. The buffer is private to the module instance, threaded through as an extra argument to the kernel `evaluate` function. Thread safety under OpenMP requires per-thread sub-buffers (slice the persistent buffer by thread id at parallel-region entry). The optimization must compose with `feature_workspace_pooling`'s lifetime analysis (which decides how many distinct workspace slots a CIN needs); the persistent buffer is keyed by `(slot_id, dtype, shape_high_water_mark)`. Subtleties: shape changes across calls must trigger a buffer grow, not a reallocation; calls with smaller shape than the high-water mark reuse the existing buffer; the buffer must be valid memory across the C++/Python boundary (i.e., owned by a module-level `std::vector<uint8_t>`, not Python-managed). Tests should cover SpMV, SpMM, 3D and 4D einsum repeated 100 times on identical-shape inputs (zero allocations after the first), repeated invocations with growing shape (correct grow behavior), parallel invocations from multiple threads (no data races), eviction from `_kernel_cache` (buffer freed), and verify final results match the non-optimized baseline on every case. Rank examples are illustrative.\n\nNote: Do not run the test suite to verify your implementation. You may run build commands (e.g., `pip install -e .`) to check that the code compiles, but do not execute any pytest commands. The full test suite will be run separately during evaluation.",
"files": [],
"task_type": "feature",
"categories": [
"Runtime/Memory management"
]
},
{
"instance_id": "bobbyyyan__scorch-feature_jit_compile_pool",
"repo_id": "bobbyyyan__scorch",
"repo_url": "https://github.com/bobbyyyan/scorch.git",
"base_commit": "92fb190",
"language": "python",
"setup_commands": [],
"test_command": "/testbed/run_tests.sh",
"test_timeout": 3000,
"refactor_type": "",
"description": "Replace the synchronous, ad-hoc compile path in `_load_kernel` (`src/scorch/utils.py:32`) with a managed JIT compile pool. Today every miss in `_so_cache` (utils.py:29) calls `torch.utils.cpp_extension.load_inline` on the calling thread, blocking it for 5-30s; concurrent callers requesting the same kernel each pay the full cost and may also race on the build directory. Implement a `_KernelCompilePool` with a bounded worker pool (default `min(4, os.cpu_count())`, configurable via `scorch.set_jit_compile_workers(n)`), a FIFO queue of `(kernel_name, sources, cflags, ldflags)` work items, and a per-name in-flight registry so concurrent `_load_kernel(name=X, ...)` callers from different threads coalesce onto the same future. The hot path on a `_so_cache` hit must remain lock-free (no acquisition of the pool's mutex). Failed compiles surface as `RuntimeError` to every caller waiting on the same future, and the failed entry is removed from the in-flight registry so a subsequent retry triggers a fresh build (do not cache compile failures here \u2014 that is a separate feature). Cancellation: `scorch.shutdown_jit_compile_pool()` must drain pending work, raise `RuntimeError(\"pool shut down\")` on any caller blocked on a future, and reject new submissions. Tests must cover: (a) two threads requesting the same `kernel_name` compile exactly once and both receive the same module, (b) two threads requesting distinct `kernel_name`s compile concurrently (wall time \u2264 1.4\u00d7 the single-thread compile of the slower one), (c) a deliberately-broken source raises `RuntimeError` on every concurrent caller and a re-submit triggers a fresh compile, (d) `shutdown_jit_compile_pool()` does not deadlock when called from a thread that is itself holding a not-yet-resolved future, and (e) `_so_cache` reads on hit do not contend with the pool mutex (use `threading.Lock` ownership inspection or a microbenchmark). Existing tests that exercise `_load_kernel` synchronously must still pass without code changes.\n\nNote: Do not run the test suite to verify your implementation. You may run build commands (e.g., `pip install -e .`) to check that the code compiles, but do not execute any pytest commands. The full test suite will be run separately during evaluation.",
"files": [],
"task_type": "feature",
"categories": [
"Runtime/Caching & dispatch"
]
},
{
"instance_id": "bobbyyyan__scorch-feature_polymorphic_dispatch_inline_cache",
"repo_id": "bobbyyyan__scorch",
"repo_url": "https://github.com/bobbyyyan/scorch.git",
"base_commit": "92fb190",
"language": "python",
"setup_commands": [],
"test_command": "/testbed/run_tests.sh",
"test_timeout": 3000,
"refactor_type": "",
"description": "Add a per-call-site polymorphic inline cache (PIC) on top of the global `_einsum_dispatch_cache` (`src/scorch/ops.py:31`). Profiling shows that even the existing fast-dispatch path spends nontrivial time hashing the `_dispatch_key` tuple (ops.py:402-410) for every `einsum` call. Implement a small (capacity 4) inline cache attached to each *call site*: use `sys._getframe(1).f_code.co_filename + co_firstlineno` (or an equivalent stable call-site identifier) to key a per-call-site list of `(dispatch_key, cached_entry)` pairs. On hit, skip the global dict lookup entirely and return the entry; on miss within the inline cache but hit in the global, promote the entry to the call site's PIC (LRU eviction within the size-4 list); on global miss, fall through to the existing compile path. The PIC must be safe under concurrent callers: two Python threads calling the same `einsum(...)` line must not corrupt the per-call-site list. Megamorphic (>4 distinct dispatch keys at one call site) call sites must permanently disable the PIC for that site to avoid thrashing. API: `scorch.dispatch_pic_stats(call_site=None) -> Dict[str, int]` returning hit/miss/megamorphic counts per call site (or aggregated when `call_site is None`); `scorch.reset_dispatch_pic()` clears all PIC state. Tests must include: (a) a 1000-iteration loop of `einsum(\"ik,kj->ij\", a, b)` registers 1 miss + 999 hits in the call site's PIC; (b) the same loop with two interleaved sparse formats (4 total keys) stays in PIC; (c) interleaving 5+ distinct dispatch keys at one site flips the PIC to megamorphic and disables further insertion; (d) two `concurrent.futures.ThreadPoolExecutor` workers calling the same line do not corrupt PIC state and the merged hit count equals the call count; (e) PIC-hit dispatch is at least 2\u00d7 faster than the global-cache-hit path under a microbenchmark. Pre-existing dispatch-cache tests must continue to pass.\n\nNote: Do not run the test suite to verify your implementation. You may run build commands (e.g., `pip install -e .`) to check that the code compiles, but do not execute any pytest commands. The full test suite will be run separately during evaluation.",
"files": [],
"task_type": "feature",
"categories": [
"Runtime/Caching & dispatch"
]
},
{
"instance_id": "bobbyyyan__scorch-feature_so_path_atomic_publish",
"repo_id": "bobbyyyan__scorch",
"repo_url": "https://github.com/bobbyyyan/scorch.git",
"base_commit": "92fb190",
"language": "python",
"setup_commands": [],
"test_command": "/testbed/run_tests.sh",
"test_timeout": 3000,
"refactor_type": "",
"description": "Eliminate the data-race in `_load_kernel` (`src/scorch/utils.py:32`) where one thread observes a partially-written `.so` file produced by another thread's in-progress `load_inline`. Today the function checks `os.path.isfile(so_path)` (utils.py:48) and immediately calls `importlib.util.spec_from_file_location(...)`, which can `dlopen` a truncated or zero-length `.so`. Implement an atomic-publish protocol around `torch.utils.cpp_extension.load_inline`: (1) intercept the build via a custom `build_directory` per compile attempt; (2) compile to a `.so.tmp.<unique>` path; (3) `os.fsync()` the file; (4) `os.replace()` (atomic on POSIX) it onto the canonical `<name>.so`; (5) write a sibling sentinel `<name>.ok` file containing the SHA256 of the `.so` after rename (also via tmp+rename). Reader side must verify both that `<name>.ok` exists *and* that it matches the on-disk `.so` hash before `dlopen`; on mismatch it must wait (bounded, \u2264 5 s) for the publisher to finish, then re-validate. A torn read (`.so` exists but `.ok` missing) must trigger a clean rebuild rather than a `dlopen` of incomplete content. Implementation must work across PyTorch's `cpp_extension` versions used in `requirements.txt` (do not depend on private symbols). Add a fault-injection test mode: `scorch.utils._so_publish_fault_mode = \"truncate\"` makes the publisher exit between fsync and rename, so a reader sees a `.so.tmp` and no canonical file (must rebuild). Other modes: `\"hash_mismatch\"`, `\"missing_ok\"`, `\"slow_publisher\"` (publisher sleeps 200 ms after fsync, reader must wait). Tests cover all four modes from a multi-threaded reader/writer setup, plus a stress test with 8 threads racing on the same kernel name asserting that exactly one full compile occurs and no thread observes a torn `.so`. Pre-existing tests in `test_kernels.py` must continue to pass on cold and warm `_so_cache`.\n\nNote: Do not run the test suite to verify your implementation. You may run build commands (e.g., `pip install -e .`) to check that the code compiles, but do not execute any pytest commands. The full test suite will be run separately during evaluation.",
"files": [],
"task_type": "feature",
"categories": [
"Runtime/Caching & dispatch"
]
},
{
"instance_id": "bobbyyyan__scorch-feature_negative_compile_cache",
"repo_id": "bobbyyyan__scorch",
"repo_url": "https://github.com/bobbyyyan/scorch.git",
"base_commit": "92fb190",
"language": "python",
"setup_commands": [],
"test_command": "/testbed/run_tests.sh",
"test_timeout": 3000,
"refactor_type": "",
"description": "Add a negative compilation cache that records permanently-failing kernel sources so subsequent `_load_kernel` calls fail fast without invoking the C++ compiler. Today, every miss in `_so_cache` (`src/scorch/utils.py:29`) re-runs `load_inline`, even for sources that produced a deterministic `RuntimeError` (e.g., a syntactically invalid emitted kernel from a buggy schedule). Add `_neg_so_cache` keyed by `_kernel_name(...)` storing the most recent compile error. The hard part: distinguish *permanent* failures (compiler-emitted diagnostics: \"error:\", \"undefined reference\", \"no matching function\") from *transient* failures (`Killed signal terminated program cc1plus`, `ENOMEM`, `nonzero exit status -9` from the OOM killer, ninja IO errors, \"Permission denied\"). Permanent failures cache the error. Transient failures retry up to 3 times with exponential backoff (100 ms, 400 ms, 1.6 s) before being recorded as permanent. Bound the negative cache via `set_neg_compile_cache_max_size(n)` (default 256, FIFO eviction). Expose `clear_negative_compile_cache()` and `negative_compile_cache_stats() -> Dict[str, int]`. Must be thread-safe: two threads requesting the same broken kernel must coalesce onto the first error (not produce two separate compile attempts). Must integrate with `_so_cache`: a successful compile after eviction from the negative cache repopulates `_so_cache` normally. Tests cover: (a) a syntactically-invalid `cpp_source` is recorded after one attempt; second attempt raises a wrapped `RuntimeError` mentioning \"negative cache hit\"; (b) a simulated transient OOM (raise `RuntimeError(\"...Killed signal terminated program cc1plus...\")` on first 2 attempts, then succeed) eventually compiles and is *not* in the negative cache; (c) `clear_negative_compile_cache()` enables retry; (d) cache size bound enforced; (e) two concurrent threads with the same broken source produce exactly one compile attempt and both see the negative-cache verdict on retry. Pre-existing `_load_kernel` tests must remain valid (success path identical).\n\nNote: Do not run the test suite to verify your implementation. You may run build commands (e.g., `pip install -e .`) to check that the code compiles, but do not execute any pytest commands. The full test suite will be run separately during evaluation.",
"files": [],
"task_type": "feature",
"categories": [
"Runtime/Caching & dispatch"
]
},
{
"instance_id": "bobbyyyan__scorch-feature_dispatch_cache_lru_with_pinning",
"repo_id": "bobbyyyan__scorch",
"repo_url": "https://github.com/bobbyyyan/scorch.git",
"base_commit": "92fb190",
"language": "python",
"setup_commands": [],
"test_command": "/testbed/run_tests.sh",
"test_timeout": 3000,
"refactor_type": "",
"description": "Replace the unbounded `_einsum_dispatch_cache` and `_kernel_cache` dicts (`src/scorch/ops.py:30-31`) with a thread-safe LRU cache that respects a configurable byte budget and supports kernel pinning. Today the cache grows unbounded; in long-running serving processes this causes RSS bloat that has been observed to OOM at ~3 GB on the worker node. Implement a `_DispatchCache` class that: (1) tracks per-entry approximate size (sum of CPU weight tensors held by the compiled torch module + a fixed overhead estimate; you must measure actual `module.evaluate` reachable storage, not assume zero); (2) enforces `set_dispatch_cache_size_bytes(n)` (default 256 MiB) by evicting cold entries via standard LRU until the budget is met after every insertion; (3) supports `pin_dispatch_entry(dispatch_key)` and `unpin_dispatch_entry(dispatch_key)` so hot kernels are never evicted regardless of LRU position; pinned entries do not count toward eviction-eligible bytes but *do* count toward the displayed total. Eviction must release the underlying torch C++ module (no `_so_cache` zombie references); a re-request after eviction triggers a fresh compile via the existing `_load_kernel` path. Concurrency: hot-path lookups must remain lock-free (use a sharded read-mostly structure or RCU-like immutable snapshot for the `get`); inserts/evictions/pin operations may take a writer lock. Pin/unpin must be safe to call from any Python thread including a thread that is itself blocked inside a kernel's `evaluate()`. Tests cover: (a) inserting 100 entries with a 50-entry-equivalent budget evicts the oldest 50, in LRU order; (b) pinning the 5 oldest entries and re-inserting evicts the next-oldest unpinned; (c) `set_dispatch_cache_size_bytes(0)` immediately drains the cache to only pinned entries; (d) a thread-stress test with 8 readers + 1 writer (insertions causing evictions) reports zero `KeyError` on lookups for keys that the readers re-insert on miss; (e) eviction releases the actual `torch.classes.Module` (use `weakref.ref` to assert collectability after eviction + `gc.collect()`). The pre-existing fast-dispatch path in `einsum` (ops.py:401-461) must continue to work unmodified through the new cache surface.\n\nNote: Do not run the test suite to verify your implementation. You may run build commands (e.g., `pip install -e .`) to check that the code compiles, but do not execute any pytest commands. The full test suite will be run separately during evaluation.",
"files": [],
"task_type": "feature",
"categories": [
"Runtime/Caching & dispatch"
]
},
{
"instance_id": "bobbyyyan__scorch-feature_kernel_warmup_prefetch",
"repo_id": "bobbyyyan__scorch",
"repo_url": "https://github.com/bobbyyyan/scorch.git",
"base_commit": "92fb190",
"language": "python",
"setup_commands": [],
"test_command": "/testbed/run_tests.sh",
"test_timeout": 3000,
"refactor_type": "",
"description": "Add speculative kernel pre-compilation. When `einsum(expr, a, b, ...)` is called for the first time with a particular `(expression, formats, dtypes)` triple, asynchronously begin compiling the same expression for a small set of *neighboring* format variants the user is likely to request next: (1) the all-dense variant, (2) the all-sparse-CSR variant, (3) the variant with the leftmost sparse format swapped to its dense equivalent, (4) the variant with the dtype promoted from float32 to float64 if input is float32. Implement as a background thread pool (separate from the JIT compile pool \u2014 workers must be lower priority and yield to user-driven compiles), configurable via `scorch.set_warmup_workers(n)` (default 1) and toggleable via `scorch.set_warmup_enabled(bool)`. Speculative compiles must populate `_einsum_dispatch_cache` (`src/scorch/ops.py:31`) so a subsequent user request hits cache. Constraints: (a) a speculative compile must abort within 5 ms if a user-driven compile arrives that needs the same `_kernel_name` (avoid blocking user latency); (b) speculative work must *not* be initiated for a `(expression, formats, dtypes)` triple already present in the negative compile cache (assume the existence of `negative_compile_cache_stats` from the negative-cache feature, but do not hard-depend \u2014 handle its absence gracefully); (c) on process shutdown, all in-flight warmup tasks must be cancelled within 100 ms (no lingering ninja processes); (d) warmup must never run before the user's call returns \u2014 it is fire-and-forget after the dispatch returns. Tests must cover: (a) calling `einsum(\"ik,kj->ij\", ds_a, dd_b)` triggers warmup of at least 2 neighbor variants within 30 s, observed via `_einsum_dispatch_cache` keys; (b) warmup is cancellable: `set_warmup_enabled(False)` while a job is in flight stops it within 200 ms; (c) `shutdown_warmup_pool()` is idempotent and joinable from a thread holding the GIL; (d) warmup never delays the foreground compile \u2014 measure user-visible wall time with warmup on/off on a cold cache, max regression 5%; (e) dispatch cache hit on a warmed variant returns within the same time as a normal hit (no extra synchronization on the hot path). Pre-existing einsum tests must continue to pass with warmup disabled and enabled.\n\nNote: Do not run the test suite to verify your implementation. You may run build commands (e.g., `pip install -e .`) to check that the code compiles, but do not execute any pytest commands. The full test suite will be run separately during evaluation.",
"files": [],
"task_type": "feature",
"categories": [
"Runtime/Caching & dispatch"
]
},
{
"instance_id": "bobbyyyan__scorch-feature_huge_page_workspace",
"repo_id": "bobbyyyan__scorch",
"repo_url": "https://github.com/bobbyyyan/scorch.git",
"base_commit": "92fb190",
"language": "python",
"setup_commands": [],
"test_command": "/testbed/run_tests.sh",
"test_timeout": 3000,
"refactor_type": "",
"description": "Reduce TLB pressure on large workspaces by allocating any workspace \u2265 2 MiB from transparent huge pages. Currently `csrc/header.cpp` (the file read at every kernel build, `src/scorch/utils.py:166-167` and the per-op equivalents in `src/scorch/ops.py:93-94, 204-205, 708-709, 858-859`) and the codegen-emitted bodies (`src/scorch/compiler/cin_lowerer.py:71-86`) call `malloc(...)` for workspaces; every byte access pays a 4 KiB-page TLB miss on first touch. Add a `scorch_aligned_alloc(size_t bytes)` and `scorch_aligned_free(void* p, size_t bytes)` pair in `csrc/header.h` that: (a) for `bytes \u2265 2 MiB` on Linux, calls `mmap(MAP_ANONYMOUS|MAP_PRIVATE)` followed by `madvise(MADV_HUGEPAGE)` and on free `munmap`s; (b) for `bytes < 2 MiB` on Linux, falls through to the existing `malloc`/`free`; (c) on Darwin/non-Linux, always falls through to `malloc`/`free` (huge pages are Linux-specific). Update `src/scorch/compiler/cin_lowerer.py` so workspace allocation emits `scorch_aligned_alloc(N)` and the matching `scorch_aligned_free(ptr, N)` (size must be threaded to the free site, since `munmap` requires it \u2014 this is a non-trivial codegen change that must update every workspace allocation site). Add runtime stats `scorch_huge_page_alloc_count` and `scorch_huge_page_bytes_in_use` exposed via a new pybind helper. Tests must cover: (a) a 4D einsum that allocates a 4 MiB workspace shows `scorch_huge_page_alloc_count == 1` after one call on Linux (skip the assertion on Darwin); (b) a 2D matmul with a 64 KiB workspace does not increment huge-page stats; (c) numerical parity between huge-page and `malloc` paths on SpMV, SpMM, 3D and 4D einsum across 1D/2D/3D/4D output ranks; (d) workspace allocation/free pairs are balanced after 100 calls (no `bytes_in_use` leak); (e) thread-safety: two `concurrent.futures.ThreadPoolExecutor` workers calling huge-page-eligible kernels do not corrupt stats. Pre-existing `tests/test_scorch/codegen/test_codegen_perf_optimizations.py` and `tests/test_scorch/test_kernels.py` must continue to pass \u2014 note especially that any pre-existing test which greps the emitted C++ for `new float[`, `malloc(`, or `free(` may need its expectation updated, but you must keep equivalent semantic coverage rather than deleting the regex check.\n\nNote: Do not run the test suite to verify your implementation. You may run build commands (e.g., `pip install -e .`) to check that the code compiles, but do not execute any pytest commands. The full test suite will be run separately during evaluation.",
"files": [],
"task_type": "feature",
"categories": [
"Runtime/Memory management"
]
},
{
"instance_id": "bobbyyyan__scorch-feature_lifetime_grouped_arena",
"repo_id": "bobbyyyan__scorch",
"repo_url": "https://github.com/bobbyyyan/scorch.git",
"base_commit": "92fb190",
"language": "python",
"setup_commands": [],
"test_command": "/testbed/run_tests.sh",
"test_timeout": 3000,
"refactor_type": "",
"description": "Implement a per-call arena allocator that exploits workspace lifetime non-overlap. Today the codegen (`src/scorch/compiler/cin_lowerer.py`) emits a `malloc` per workspace and a `free` at the end of `evaluate()` (see `csrc/header.cpp` for the emitted style); workspaces with disjoint lifetimes still each get their own bytes. Build (1) a static lifetime analysis on the lowered LLIR that computes a *workspace liveness graph*: each workspace's `(birth_loop, death_loop)` pair within the kernel body, then a graph-coloring assignment so workspaces with non-overlapping lifetimes share the same arena slot; (2) a `class CallArena` in `csrc/header.h` that manages a stack-discipline bump allocator with `T* alloc<T>(size_t n, size_t alignment)`, `void scope_enter()`, `void scope_exit()`, and `void reset()`; the arena retains its peak size across calls but resets the bump pointer on reset. (3) Codegen change: every workspace `malloc(...)` site becomes `arena.alloc<T>(n, alignof(T))`, every matching `free(...)` becomes `arena.scope_exit()`, and every kernel `evaluate()` signature gains an `int64_t arena_ptr` first argument that is the address of a process-wide `CallArena` instance (one per `(_kernel_name, thread_id)` pair, looked up from a `thread_local` map at call-site entry). (4) The Python-side `module.evaluate(...)` call site (in `src/scorch/ops.py` at every kernel invocation, including `lower_and_exec_cin`, `spmv`, `matmul_wksp`, the einsum cache hit path at ops.py:441 and the slow path at ops.py:758, plus the cached-kernel re-use at ops.py:695) must be updated to thread the arena address through. Tests must cover: (a) a kernel with three sequentially-used workspaces shares one arena slot of `max(size)` bytes \u2014 verifiable by a peak-RSS or arena-stat probe; (b) a kernel with three concurrent-lifetime workspaces uses three slots; (c) repeated invocation with growing input shape grows the arena monotonically and never shrinks; (d) two threads running the same kernel use independent thread-local arenas; (e) eviction from `_kernel_cache` (`src/scorch/ops.py:30`) frees the associated arena memory; (f) numerical parity vs the current malloc-per-workspace baseline on SpMV, SpMM, 3D einsum, 4D einsum (cross-rank coverage required). Pre-existing call sites that reach `module.evaluate(...)` from non-modified code paths must continue to work \u2014 if any pre-existing test in `tests/test_scorch/test_kernels.py` calls `module.evaluate` directly without the arena pointer, you must provide an overload or a default-arena path so those tests still pass; you may not silently change their public signature without preserving compatibility.\n\nNote: Do not run the test suite to verify your implementation. You may run build commands (e.g., `pip install -e .`) to check that the code compiles, but do not execute any pytest commands. The full test suite will be run separately during evaluation.",
"files": [],
"task_type": "feature",
"categories": [
"Runtime/Memory management"
]
},
{
"instance_id": "bobbyyyan__scorch-feature_workspace_torchptr_zero_copy",
"repo_id": "bobbyyyan__scorch",
"repo_url": "https://github.com/bobbyyyan/scorch.git",
"base_commit": "92fb190",
"language": "python",
"setup_commands": [],
"test_command": "/testbed/run_tests.sh",
"test_timeout": 3000,
"refactor_type": "",
"description": "Eliminate the result-copy at the end of dense-output kernels by aliasing the workspace onto the result tensor's `data_ptr<T>()`. Currently many emitted kernels (see the dense-output path in `src/scorch/compiler/cin_lowerer.py`) allocate a separate dense `wksp[...]` workspace, accumulate into it, then run a final loop copying `wksp[j] -> C_values[pC1 + j]` (visible in any spmm-Gustavson dump). When the workspace shape is identical to the result-slab shape and the workspace's last use is the copy-out loop, the copy is redundant. Implement (1) a lifetime / shape compatibility analyzer on the lowered LLIR that detects this pattern: a `Workspace` whose every write is into `wksp[idx]` with `idx` matching a result-tensor slice's flattened index, and whose only read is the copy-out loop; (2) a codegen path that, when the analyzer matches, replaces the workspace's `malloc(...)` with `auto* wksp = result_tensor_data_ptr + slice_offset` and elides the copy-out loop entirely; (3) a fallback when the result tensor's allocation is itself deferred until after the workspace's first use \u2014 in that case allocate the result first and *then* alias. Cross-rank: must work for 1D, 2D, 3D, 4D dense outputs, and must continue to allocate a separate workspace when the result is sparse (CSR/COO storage) or the slice layout doesn't match. Must also work when the kernel parallelizes over the outermost result dimension (each OpenMP thread aliases its own slice of `result_tensor`). Tests must cover: (a) emitted code for a 2D Gustavson SpMM with dense output contains *zero* `malloc(` for `wksp` and *zero* `wksp[j] = ` followed by `C_values[...] += wksp[j]` copy-out; (b) numerical parity vs the current path on SpMV (1D out), SpMM-Gustavson-2D (2D out), 3D einsum-out-dense (3D out), 4D contraction (4D out); (c) when result is `COO` the alias path is *not* taken and the original workspace path runs; (d) parallel SpMM with 4 OpenMP threads still aliases (each thread writes disjoint result rows); (e) repeated invocation 100 times with the same shape allocates the result tensor 100 times but allocates the workspace 0 times. Pre-existing tests in `tests/test_scorch/codegen/test_codegen_perf_optimizations.py` and `tests/test_scorch/test_kernels.py` must continue to pass \u2014 adjust the regex tests' expectations consistently when the alias path applies, and add new regex tests for cases where the path should *not* apply.\n\nNote: Do not run the test suite to verify your implementation. You may run build commands (e.g., `pip install -e .`) to check that the code compiles, but do not execute any pytest commands. The full test suite will be run separately during evaluation.",
"files": [],
"task_type": "feature",
"categories": [
"Runtime/Memory management"
]
},
{
"instance_id": "bobbyyyan__scorch-feature_calloc_zero_init_workspace",
"repo_id": "bobbyyyan__scorch",
"repo_url": "https://github.com/bobbyyyan/scorch.git",
"base_commit": "92fb190",
"language": "python",
"setup_commands": [],
"test_command": "/testbed/run_tests.sh",
"test_timeout": 3000,
"refactor_type": "",
"description": "Replace the emitted `malloc(N) + memset(p, 0, N)` workspace pattern with `calloc(1, N)` for workspaces \u2265 64 KiB, while keeping `malloc + memset` for smaller workspaces (calloc has measurable per-call overhead in glibc for tiny allocations because the kernel-page-zeroing optimization only applies when the allocator returns fresh demand-zero pages from the OS, not from glibc's internal arena). The exact split point is decided at compile time from the workspace size annotation in CIN, not at runtime. Today the malloc/memset emission is in `src/scorch/compiler/cin_lowerer.py:64-86` (look for the `malloc + cast` block). Update the codegen so: (a) when the static-analysis-known size in bytes is `\u2265 64 * 1024`, emit `calloc(1, N)` and elide the `memset`; (b) otherwise emit the existing `malloc + memset`. This change touches every workspace allocation site in the compiler \u2014 be sure to cover Gustavson SpMM, COO accumulators, dense-out workspaces, and the 3D/4D contraction paths. Add a new runtime introspection function `scorch.codegen.workspace_alloc_strategy(cin_stmt) -> List[Tuple[str, int]]` returning per-workspace `(strategy, bytes)` for tests to assert on. Cross-rank coverage required: tests must verify the calloc path is taken on a 4D einsum with a > 64 KiB workspace and the malloc+memset path is taken on a 2D SpMV with a < 1 KiB workspace. Tests must cover: (a) emitted C++ for a large (256 KiB) workspace contains `calloc(1, ` and contains *no* `memset(...,0,` for that workspace; (b) emitted C++ for a small (256-byte) workspace contains `malloc(` and `memset(`; (c) numerical parity between calloc and malloc+memset paths across SpMV, SpMM, 3D, 4D; (d) the existing pre-existing test `tests/test_scorch/codegen/test_codegen_perf_optimizations.py::test_non_tiled_dense_workspace_is_zero_initialized_and_freed` must continue to pass \u2014 its current expectation `\"new float[\" in cpp_code and \"]()\" in cpp_code` is for a workspace size that may now exceed the threshold; you must update the expectation consistently if the threshold flips this case to calloc, but you must not delete the test or weaken its semantic intent (zero-initialization of the dense workspace must still be verified, just possibly via `calloc(1,` rather than via `()`-init); add new tests covering both branches of the threshold. (e) a static-analysis edge case: when the workspace size is data-dependent (cannot be bounded at compile time), the codegen falls back to the malloc+memset path safely.\n\nNote: Do not run the test suite to verify your implementation. You may run build commands (e.g., `pip install -e .`) to check that the code compiles, but do not execute any pytest commands. The full test suite will be run separately during evaluation.",
"files": [],
"task_type": "feature",
"categories": [
"Runtime/Memory management"
]
},
{
"instance_id": "bobbyyyan__scorch-feature_numa_local_workspace",
"repo_id": "bobbyyyan__scorch",
"repo_url": "https://github.com/bobbyyyan/scorch.git",
"base_commit": "92fb190",
"language": "python",
"setup_commands": [],
"test_command": "/testbed/run_tests.sh",
"test_timeout": 3000,
"refactor_type": "",
"description": "On NUMA machines, allocate per-thread workspaces from the NUMA node the OpenMP worker thread is currently scheduled on, instead of the always-node-0 default of glibc's malloc. Workspaces are emitted in `src/scorch/compiler/cin_lowerer.py:64-86` and consumed inside `#pragma omp parallel for` regions emitted via `src/scorch/compiler/llir.py:467-489`. Add (1) a `csrc/header.h` helper `void* scorch_numa_local_alloc(size_t bytes)` and matching `scorch_numa_local_free(void*, size_t bytes)` that detects libnuma at compile time (preprocessor `#if __has_include(<numa.h>) && !defined(__APPLE__)`) and uses `numa_alloc_local(...)` when available, with a fallback to `malloc/free` otherwise \u2014 the choice must be visible at runtime via `scorch_numa_aware_alloc_supported() -> bool`; (2) a Python toggle `scorch.set_numa_aware_workspaces(True/False)` (default `False` so behavior is unchanged unless opted in); (3) codegen support so workspace allocations *inside* an `omp_parallel_for` body emit the NUMA-local helper instead of `malloc`, but allocations *outside* parallel regions (e.g., result-tensor accumulators) keep using `malloc` (NUMA-local in serial code is meaningless and `numa_alloc_local` is slower than `malloc`); (4) a `numa_node_of_workspace(addr) -> int` helper exported via pybind that returns the NUMA node a pointer was allocated from (uses `move_pages(2)` syscall on Linux; returns `-1` on Darwin or when libnuma is absent). Tests must cover: (a) on Linux with libnuma present, a 4-thread parallel SpMM kernel runs and `numa_node_of_workspace(thread_id)` matches `numa_node_of_cpu(sched_getcpu())` for the worker that allocated it; (b) on Darwin or hosts without libnuma, the toggle is silently ignored and `scorch_numa_aware_alloc_supported()` returns False; (c) numerical parity with the default malloc path on SpMV, SpMM, 3D einsum, 4D einsum; (d) repeated invocation does not leak NUMA-allocated bytes (track `numa_alloc_local_bytes_in_use` \u2194 free sites); (e) flipping the toggle off mid-process reverts behavior on the next compile (the thread-local strategy choice is captured per-kernel-build, so existing modules continue to use the strategy they were built with \u2014 document this contract in the task and assert on it). Pre-existing tests must continue to pass with the toggle off (default).\n\nNote: Do not run the test suite to verify your implementation. You may run build commands (e.g., `pip install -e .`) to check that the code compiles, but do not execute any pytest commands. The full test suite will be run separately during evaluation.",
"files": [],
"task_type": "feature",
"categories": [
"Runtime/Memory management"
]
},
{
"instance_id": "bobbyyyan__scorch-feature_simd_aligned_workspace",
"repo_id": "bobbyyyan__scorch",
"repo_url": "https://github.com/bobbyyyan/scorch.git",
"base_commit": "92fb190",
"language": "python",
"setup_commands": [],
"test_command": "/testbed/run_tests.sh",
"test_timeout": 3000,
"refactor_type": "",
"description": "Align workspace allocations to the natural SIMD width so vectorized inner loops can use aligned loads/stores. Today emitted kernels use plain `malloc(...)` (`src/scorch/compiler/cin_lowerer.py:64-86`), which is only guaranteed `alignof(std::max_align_t)` (16 bytes on most x86_64 toolchains) \u2014 for `float` workloads with AVX-512 this leaves up to 60-byte misalignment per allocation, forcing the compiler to emit unaligned `vmovups`. Implement: (1) a CIN-side workspace size annotation that propagates the *value dtype* and the *vector width hint* (32 bytes for AVX2 `float`, 64 bytes for AVX-512 `float`, equivalent for double, default 16 if unknown \u2014 derive from `__AVX512F__`/`__AVX2__` macros at compile time and expose as a `scorch_simd_alignment` constexpr in `csrc/header.h`); (2) codegen change emitting `(T*)std::aligned_alloc(scorch_simd_alignment, round_up(N, scorch_simd_alignment))` with the matching `std::free` instead of plain `malloc/free`; (3) every workspace size in bytes must be rounded up to a multiple of the alignment \u2014 this must be done by the codegen, not at runtime, since C++17 `std::aligned_alloc` requires the size to be a multiple of the alignment. (4) Add `__builtin_assume_aligned(wksp, scorch_simd_alignment)` directly after every workspace pointer is reloaded inside an inner loop so the compiler can vectorize. Cross-rank coverage required: kernels for 1D/2D/3D/4D outputs all emit aligned alloc + assume_aligned. Tests must cover: (a) emitted C++ for a 2D matmul Gustavson workspace contains `std::aligned_alloc(64,` (or `32,` depending on detected width) and contains `__builtin_assume_aligned(`; (b) the rounded-up size is divisible by the alignment (check: `N % scorch_simd_alignment == 0` after rounding); (c) numerical parity vs unaligned baseline across SpMV, SpMM, 3D, 4D einsum; (d) at runtime, every emitted `aligned_alloc` succeeds (returns non-null) on inputs spanning a 4\u00d7 range of sizes; (e) a tight-loop microbenchmark of a memory-bound SpMV is at most 5% slower than a hand-written aligned baseline (within noise); (f) the alignment value reflects the host's actual SIMD width: assert `scorch_simd_alignment == 64` on `__AVX512F__` builds and `32` on `__AVX2__` builds. Pre-existing tests must keep passing \u2014 the size-rounding may change the literal `[N]` in `new float[N]` patterns asserted by `tests/test_scorch/codegen/test_codegen_perf_optimizations.py`, so update the assertions consistently or add separate cases for the aligned path while preserving original-intent coverage.\n\nNote: Do not run the test suite to verify your implementation. You may run build commands (e.g., `pip install -e .`) to check that the code compiles, but do not execute any pytest commands. The full test suite will be run separately during evaluation.",
"files": [],
"task_type": "feature",
"categories": [
"Runtime/Memory management"
]
},
{
"instance_id": "bobbyyyan__scorch-feature_grainsize_autotuner",
"repo_id": "bobbyyyan__scorch",
"repo_url": "https://github.com/bobbyyyan/scorch.git",
"base_commit": "92fb190",
"language": "python",
"setup_commands": [],
"test_command": "/testbed/run_tests.sh",
"test_timeout": 3000,
"refactor_type": "",
"description": "Replace the static OpenMP `schedule` heuristics emitted via `src/scorch/compiler/llir.py:476-486` with a runtime grain-size autotuner. Today `ForLoop.omp_schedule` is one of `\"static\"`, `\"dynamic\"`, `\"guided\"` decided at compile time, with no chunk size; this leaves performance on the table for sparse workloads where the optimal chunk size depends on per-row nnz distribution. Implement: (1) a `_GrainSizeTuner` keyed by `(kernel_name, omp_loop_id)` that records wall time of the first 5 invocations of a parallelized loop under chunk sizes \u2208 {1, 4, 16, 64, 256} (one per warmup call), then adopts the winner for subsequent calls; (2) a codegen change so emitted `#pragma omp parallel for schedule(...)` becomes `#pragma omp parallel for schedule(runtime)` with `omp_set_schedule(...)` called from a small thunk just before the parallel region \u2014 the thunk reads the current best chunk size from the tuner; (3) a Python API `scorch.set_grainsize_tuning(True/False)` (default True), `scorch.grainsize_tuning_stats() -> Dict[Tuple[str, int], Dict]` returning per-loop sample counts and adopted chunk size, `scorch.reset_grainsize_tuning()` to start over; (4) persistence: after each adopted decision, write to `~/.cache/scorch/grainsizes.json` (use `os.replace` for atomic writes); on process startup, if the file exists, preload the best-known chunk size for each `(kernel_name, omp_loop_id)` so the warmup pays no cost on subsequent runs. Concurrency: two threads driving the same kernel must coalesce on a single tuning experiment (no double-attribution of timings). Tests must cover: (a) a 256-row, power-law-skew SpMV kernel converges to a non-default chunk size within 5 invocations; (b) the *same* kernel signature has *different* adopted chunk sizes when run with different input nnz distributions (the tuner must re-tune on observed input shape, not just on `kernel_name`); (c) persistence: warm-start from `~/.cache/scorch/grainsizes.json` skips warmup and hits the adopted chunk size on call 1; (d) a multi-thread driver running the same kernel from 4 Python threads produces exactly one tuning trace, not four; (e) `set_grainsize_tuning(False)` falls back to the original static `schedule(static)` emission; (f) numerical parity across all chunk sizes on SpMV, SpMM, 3D and 4D einsum (the tuner must never break correctness even with chunk size 1 or 256). Pre-existing tests must continue to pass with tuning on (default).\n\nNote: Do not run the test suite to verify your implementation. You may run build commands (e.g., `pip install -e .`) to check that the code compiles, but do not execute any pytest commands. The full test suite will be run separately during evaluation.",
"files": [],
"task_type": "feature",
"categories": [
"Runtime/Work scheduling",
"Runtime/Tuning & user control"
]
},
{
"instance_id": "bobbyyyan__scorch-feature_priority_aware_dispatch",
"repo_id": "bobbyyyan__scorch",
"repo_url": "https://github.com/bobbyyyan/scorch.git",
"base_commit": "92fb190",
"language": "python",
"setup_commands": [],
"test_command": "/testbed/run_tests.sh",
"test_timeout": 3000,
"refactor_type": "",
"description": "Add priority-aware kernel dispatch so latency-sensitive callers preempt batch callers. Today `einsum(...)`, `matmul(...)`, and `_load_kernel(...)` are dispatched FIFO by the underlying Python GIL/thread interleaving. Implement: (1) a thread-safe priority queue at the dispatch level \u2014 every call to `einsum`/`matmul` enters a per-process `_DispatchScheduler` that tracks the calling thread's priority level; (2) a context manager `scorch.priority_scope(level: str)` with levels `\"interactive\"` (highest), `\"normal\"` (default), `\"batch\"` (lowest); the scope is *thread-local* so simultaneous interactive and batch callers from different threads are correctly attributed; (3) priority inheritance: when a high-priority caller is blocked waiting for a kernel currently being compiled at a lower priority (because some other thread initiated the compile under a `\"batch\"` scope), the compile temporarily inherits the higher priority \u2014 implementation must update the JIT compile pool's queue position for the in-flight item; (4) the priority queue must guarantee no priority inversion deadlocks even under arbitrary nesting (`with priority_scope(\"interactive\"): ... with priority_scope(\"batch\"): ...`) and arbitrary cross-thread dependencies. Tests must cover: (a) interleaved interactive + batch callers from two threads \u2014 interactive completion latency is bounded by \u2264 1.5\u00d7 single-threaded time, even when the batch caller has 10\u00d7 more work queued; (b) priority inheritance: caller A under `\"batch\"` initiates a compile; caller B under `\"interactive\"` requests the same kernel before A's compile finishes; B's wait time is bounded by single-thread compile time, not by any preceding queue depth; (c) nested priority scopes restore the outer level on exit (`with batch: with interactive: ...` exits correctly); (d) a chain of A \u2192 B \u2192 C dependencies under mixed priorities completes without deadlock; (e) priorities are thread-local \u2014 thread X's `priority_scope(\"interactive\")` does not affect thread Y's calls. Pre-existing tests must work with `\"normal\"` priority (default scope) yielding the same observable behavior as today.\n\nNote: Do not run the test suite to verify your implementation. You may run build commands (e.g., `pip install -e .`) to check that the code compiles, but do not execute any pytest commands. The full test suite will be run separately during evaluation.",
"files": [],
"task_type": "feature",
"categories": [
"Runtime/Work scheduling"
]
},
{
"instance_id": "bobbyyyan__scorch-feature_load_aware_thread_pool",
"repo_id": "bobbyyyan__scorch",
"repo_url": "https://github.com/bobbyyyan/scorch.git",
"base_commit": "92fb190",
"language": "python",
"setup_commands": [],
"test_command": "/testbed/run_tests.sh",
"test_timeout": 3000,
"refactor_type": "",
"description": "Make OpenMP worker count adaptive to system load instead of using `OMP_NUM_THREADS` at process start. Today emitted kernels run with whatever `omp_get_max_threads()` returns when the kernel is loaded, frozen for the process lifetime. Implement a manager that periodically (every `omp_load_check_interval_ms`, default 250 ms, configurable) reads `/proc/loadavg` (Linux) or returns a no-op (Darwin/Windows) and rescales the active worker pool: when 1-minute load > `omp_load_high_water * num_cores` (default 0.85), drop active workers to `max(1, num_cores // 2)`; when load < `omp_load_low_water * num_cores` (default 0.4), restore to `num_cores`; cap at `OMP_NUM_THREADS` env var when set. Implementation must (1) integrate with OpenMP's `omp_set_num_threads(n)` mechanism; (2) handle nested parallel regions correctly \u2014 never double-allocate threads (a thread already inside a parallel region must not spawn another full-width team), use `omp_set_max_active_levels(1)` or equivalent; (3) be safe to scale *down* mid-flight: never deadlock by reducing the team size while threads in the current region are still active \u2014 the rescale takes effect at the *next* parallel region entry, not within the current one; (4) expose `scorch.thread_pool_stats() -> Dict` reporting current active workers, pool history, last rescale reason. Tests must cover: (a) on Linux, mocking `/proc/loadavg` to a high value drops `omp_get_max_threads()` to half within 1 second of the next kernel call; (b) on Darwin, the load check is a no-op and the worker count stays at the OS default; (c) nested parallel regions in user code (e.g., user already inside `with parallel_backend(\"openmp\", n=4):` calling `einsum`) do *not* trigger another rescale; (d) numerical parity across worker count transitions on SpMV and SpMM (correctness independent of team size); (e) no deadlock under a stress test that toggles load every 100 ms while 4 threads call `einsum` continuously for 30 seconds; (f) shutdown (process exit) does not hang \u2014 the load-check thread must be a daemon and joinable. Pre-existing tests must continue to pass with the manager disabled (`scorch.set_load_aware_threads(False)` is the default opt-out).\n\nNote: Do not run the test suite to verify your implementation. You may run build commands (e.g., `pip install -e .`) to check that the code compiles, but do not execute any pytest commands. The full test suite will be run separately during evaluation.",
"files": [],
"task_type": "feature",
"categories": [
"Runtime/Work scheduling"
]
},
{
"instance_id": "bobbyyyan__scorch-feature_kernel_cpu_affinity",
"repo_id": "bobbyyyan__scorch",
"repo_url": "https://github.com/bobbyyyan/scorch.git",
"base_commit": "92fb190",
"language": "python",
"setup_commands": [],
"test_command": "/testbed/run_tests.sh",
"test_timeout": 3000,
"refactor_type": "",
"description": "Pin OpenMP threads to specific CPU cores per-kernel based on a user-supplied policy. Today emitted parallel kernels rely on the OS scheduler to place threads, which on machines with > 1 NUMA node leads to cross-NUMA traffic for memory-bound kernels. Implement: (1) a Python user hook `scorch.set_kernel_affinity_policy(callable)` where the callable receives `(kernel_signature: str, num_threads: int) -> List[int]` (returning a list of CPU IDs to bind to, length `num_threads`) \u2014 `kernel_signature` includes the kernel's CIN string, formats, and dtypes; (2) a default policy that returns `list(range(num_threads))` (no specific binding) so behavior is unchanged when the user does not set a policy; (3) a built-in helper `scorch.bandwidth_bound_affinity_policy` that detects bandwidth-bound kernels (those with workspace \u2265 L2 cache size, threshold default 1 MiB) and returns CPU IDs all within the same NUMA node (use `numa_node_of_cpu` and `numa_num_configured_cpus` from libnuma when available); a built-in `scorch.compute_bound_affinity_policy` distributing CPU IDs across all NUMA nodes; (4) a small C++ helper in `csrc/header.h` `void scorch_set_thread_affinity(int thread_id, int cpu_id)` using `pthread_setaffinity_np` on Linux, no-op on Darwin; (5) codegen-level integration: at the entry of each `#pragma omp parallel for` region, call the policy via a small thunk and apply the affinity for each worker thread; (6) state cleanup: when the parallel region exits, restore each thread's previous affinity mask (do not leak affinity state across kernels). Tests must cover: (a) on Linux with libnuma, setting `bandwidth_bound_affinity_policy` for an SpMM kernel binds workers to a single NUMA node \u2014 verify by querying each worker thread's affinity inside the parallel region; (b) on Darwin or hosts without libnuma, `scorch_set_thread_affinity` is a no-op and tests assert that `bandwidth_bound_affinity_policy` returns successfully but has no observable effect; (c) numerical parity across all affinity policies on SpMV, SpMM, 3D, 4D; (d) restoring affinity on parallel-region exit: assert that calling thread's affinity mask is unchanged before and after the kernel; (e) the user hook receives the correct `kernel_signature` and `num_threads` for each kernel call; (f) a stress test setting different policies for different kernels in rapid succession does not interfere across kernel boundaries. Pre-existing tests must continue to pass with the default no-op policy.\n\nNote: Do not run the test suite to verify your implementation. You may run build commands (e.g., `pip install -e .`) to check that the code compiles, but do not execute any pytest commands. The full test suite will be run separately during evaluation.",
"files": [],
"task_type": "feature",
"categories": [
"Runtime/Work scheduling"
]
},
{
"instance_id": "bobbyyyan__scorch-feature_shape_specialized_recompile",
"repo_id": "bobbyyyan__scorch",
"repo_url": "https://github.com/bobbyyyan/scorch.git",
"base_commit": "92fb190",
"language": "python",
"setup_commands": [],
"test_command": "/testbed/run_tests.sh",
"test_timeout": 3000,
"refactor_type": "",
"description": "Add runtime kernel specialization that recompiles a hot kernel with input shapes baked in as compile-time constants. Today `einsum(...)` compiles a kernel parameterized over input shapes; loop bounds are runtime variables. For a hot path called repeatedly with the same shape (typical in inference), specializing yields 2-5\u00d7 speedup via constant propagation, loop unrolling, and SIMD vectorization that the compiler cannot apply when bounds are dynamic. Implement: (1) a per-`(dispatch_key, shape_tuple)` invocation counter in `src/scorch/ops.py`; when count reaches `_SPECIALIZE_THRESHOLD` (default 100, configurable via `scorch.set_specialization_threshold(n)`), schedule a *background* recompile of the kernel with `result_shape`, `A.shape`, `B.shape` baked in as `constexpr` ints; (2) integrate with the dispatch cache so a specialized hit beats the generic hit \u2014 the lookup order is: PIC (if exists) \u2192 specialized dispatch \u2192 generic dispatch; (3) thread-safety: the counter must be atomic and the swap of generic-\u2192-specialized in the dispatch cache must be atomic and visible to all readers without a stop-the-world; (4) eviction: when the dispatch cache evicts the *generic* entry, also evict any specializations attached to it; (5) when the user calls with a *different* shape than the specialized one, fall through to the generic kernel \u2014 do not recompile a fresh specialization for every shape variant; track `_specialization_unique_shapes_seen` and stop specializing once it exceeds 8 (avoid pathological code-cache bloat). Must be safe under concurrent calls: 8 threads hammering the same dispatch key with the same shape must trigger exactly one specialization compile, not 8. Tests must cover: (a) a 1000-iteration SpMV at fixed shape produces one specialized module after `_SPECIALIZE_THRESHOLD` calls, observable via `scorch.specialization_stats()`; (b) the specialized module's emitted C++ contains literal integers for the loop bounds (e.g. `for (int i = 0; i < 1024; i++)` instead of `for (int i = 0; i < N; i++)`); (c) numerical parity vs generic on SpMV, SpMM, 3D einsum, 4D einsum; (d) calling with 9 distinct shapes for the same dispatch key triggers at most 8 specializations; (e) eviction of the generic entry from the dispatch cache also evicts the specialization (use `weakref` to assert collectability); (f) 4 concurrent threads at threshold trigger exactly 1 specialization compile (deduplication); (g) a microbenchmark shows specialized SpMV is at least 1.5\u00d7 faster than generic on a 1024\u00d71024 fixed-shape workload (sanity bar). Pre-existing tests must continue to pass with specialization disabled (`set_specialization_threshold(0)`) and with the default threshold.\n\nNote: Do not run the test suite to verify your implementation. You may run build commands (e.g., `pip install -e .`) to check that the code compiles, but do not execute any pytest commands. The full test suite will be run separately during evaluation.",
"files": [],
"task_type": "feature",
"categories": [
"Runtime/Caching & dispatch",
"Runtime/Tuning & user control"
]
},
{
"instance_id": "bobbyyyan__scorch-feature_compile_flag_scope",
"repo_id": "bobbyyyan__scorch",
"repo_url": "https://github.com/bobbyyyan/scorch.git",
"base_commit": "92fb190",
"language": "python",
"setup_commands": [],
"test_command": "/testbed/run_tests.sh",
"test_timeout": 3000,
"refactor_type": "",
"description": "Add a context manager that re-routes kernel compilation through alternate compiler flags. Today `_load_kernel` (`src/scorch/utils.py:32`) is called with `extra_cflags=get_extra_cflags()` (utils.py:68) \u2014 a hard-coded set including `-O3`. Some workloads need `-O2 -fno-fast-math` for IEEE-strict numerics; some need `-O0` for debugging; profiling builds want `-pg`. Implement: (1) `scorch.with_compile_flags(extra_cflags: List[str], extra_ldflags: List[str] = None)` as a thread-local context manager; (2) within the scope, every `_load_kernel(...)` call (called from `einsum`, `matmul`, `spmv`, `lower_and_exec_cin`) merges the scoped flags onto the base `get_extra_cflags()` set, with scoped flags taking precedence on conflict (same flag with different value: the inner scope wins); (3) the cache key (currently `_kernel_name(*sources)` in utils.py:19, hashing only sources) must be extended to include the *resolved* flag set, so that an `-O3` kernel and an `-O0` kernel under the same source produce distinct cache entries \u2014 note `_kernel_name` is referenced from `src/scorch/ops.py:24,98,209,712,862` and possibly other call sites; you must update every call site consistently; (4) the dispatch cache (`_einsum_dispatch_cache`) must also be flag-aware: a hit with a flag-scope active must verify the cached entry's flags match the active scope, else miss; (5) the scope is *thread-local* \u2014 thread A inside `with_compile_flags(...)` does not affect thread B's compiles. Concurrency safety: the global flag stack is per-thread; no locks on the hot path. Tests must cover: (a) compiling the same `einsum` source under `with_compile_flags([\"-O0\"])` and again under default produces *two* `_so_cache` entries with different `_kernel_name` hashes; (b) numerical parity between `-O3` and `-O0` on SpMV, SpMM, 3D einsum, 4D einsum; (c) a `-fno-fast-math` scope changes the IEEE rounding behavior on a deliberate sum-of-near-cancellation test (the result differs from `-ffast-math`); (d) leaving the scope reverts to default flags for subsequent calls (the next compile uses default cache key and is independent); (e) two threads, one in `-O0` scope and one in `-O3` default, compile concurrently and the `-O0` thread does not see the `-O3` `_so_cache` entry; (f) any pre-existing test that asserts `_kernel_name(...)` is stable across runs with no scope change must continue to pass \u2014 the new flag-aware key must be backward-compatible when the resolved flag set equals the default. Pre-existing tests must continue to pass without any active scope.\n\nNote: Do not run the test suite to verify your implementation. You may run build commands (e.g., `pip install -e .`) to check that the code compiles, but do not execute any pytest commands. The full test suite will be run separately during evaluation.",
"files": [],
"task_type": "feature",
"categories": [
"Runtime/Tuning & user control"
]
},
{
"instance_id": "bobbyyyan__scorch-feature_workspace_memory_budget",
"repo_id": "bobbyyyan__scorch",
"repo_url": "https://github.com/bobbyyyan/scorch.git",
"base_commit": "92fb190",
"language": "python",
"setup_commands": [],
"test_command": "/testbed/run_tests.sh",
"test_timeout": 3000,
"refactor_type": "",
"description": "Add a user-controllable workspace memory budget that triggers tiled fallback schedules. Today `Scheduler.auto_schedule(cin_stmt)` (`src/scorch/compiler/scheduler.py:1461`) chooses a schedule that may demand workspaces of arbitrary size \u2014 for a 4096\u00d74096 dense\u00d7dense Gustavson SpMM the workspace can exceed 1 GiB. Implement: (1) `scorch.set_workspace_memory_budget(bytes)` (default `None` = unlimited); (2) when the budget is set and the auto-scheduled CIN's projected workspace exceeds the budget, the planner falls back to a *tiled* schedule with tile dimensions chosen so the largest workspace \u2264 budget \u2014 implement this by adding a new pass `Scheduler.budget_tile(cin_stmt, budget_bytes)` that inspects every `Workspace` node, computes its projected size from input shapes (use the shape inference path that already exists for `Scheduler.auto_schedule`), and inserts tiling along the most-deep-reduction axis until projected size fits; (3) hard-fail (`raise RuntimeError`) when even tiling to the smallest sensible tile (= 1) cannot fit, with an actionable error message naming the offending workspace and minimum size; (4) introspection: `scorch.query_workspace_demand(expression: str, *shapes_and_formats: Tuple) -> int` returning the projected workspace bytes for a given einsum-style call without compiling. Tests must cover: (a) a 4096\u00d74096 dense\u00d7dense `einsum(\"ik,kj->ij\", ...)` with `set_workspace_memory_budget(64 * 1024 * 1024)` does *not* OOM and produces a result equal to the unbudgeted path; (b) `query_workspace_demand(\"ik,kj->ij\", ((\"ds\", 4096, 4096), (\"dd\", 4096, 4096)))` returns a positive integer matching what the unbudgeted compile would consume; (c) a budget too small for any tiling (e.g. 1 byte for a 1-row SpMV) raises `RuntimeError` mentioning the workspace name and the minimum bytes; (d) numerical parity between budgeted-tiled and unbudgeted paths on SpMV, SpMM, 3D einsum, 4D einsum (cross-rank coverage required); (e) the budget is process-global but not thread-isolated \u2014 setting it in thread A affects thread B (the choice is intentional; document it); (f) the dispatch cache (`_einsum_dispatch_cache` in `src/scorch/ops.py:31`) must include the active budget in its key, so a budget change invalidates dispatch hits compiled under a different budget. Pre-existing tests must continue to pass with the default `None` budget.\n\nNote: Do not run the test suite to verify your implementation. You may run build commands (e.g., `pip install -e .`) to check that the code compiles, but do not execute any pytest commands. The full test suite will be run separately during evaluation.",
"files": [],
"task_type": "feature",
"categories": [
"Runtime/Tuning & user control",
"Runtime/Memory management"
]
},
{
"instance_id": "bobbyyyan__scorch-feature_thread_local_dispatch_observers",
"repo_id": "bobbyyyan__scorch",
"repo_url": "https://github.com/bobbyyyan/scorch.git",
"base_commit": "92fb190",
"language": "python",
"setup_commands": [],
"test_command": "/testbed/run_tests.sh",
"test_timeout": 3000,
"refactor_type": "",
"description": "Allow user-supplied callbacks to intercept dispatch decisions for tracing and profiling. Implement: (1) `scorch.register_dispatch_observer(fn: Callable[[DispatchEvent], None]) -> ObserverHandle` and the matching `unregister_dispatch_observer(handle)`; observers are *thread-local* \u2014 each Python thread maintains its own observer chain in a `threading.local()` object so a thread that has not registered an observer sees zero overhead on the dispatch hot path; (2) a `DispatchEvent` dataclass exposing `event_type` \u2208 `{\"hit\", \"miss\", \"compile_start\", \"compile_end\", \"evict\"}`, `dispatch_key`, `kernel_name`, `wall_time_ns`, `cache_size_after`, plus an optional `error` field for compile failures; (3) integrate emission points throughout `src/scorch/ops.py` (the `_einsum_dispatch_cache` `get` and insert paths around lines 410, 723), `src/scorch/utils.py` (`_load_kernel` and the `_so_cache` interaction), and any LRU/eviction path you add elsewhere. Critical performance constraint: when no observer is registered on the calling thread, dispatch wall-time overhead must be \u2264 1% versus baseline (microbenchmark on a 1000-call einsum hot loop, baseline measured with `register_dispatch_observer` defined but no observers attached). Tests must cover: (a) registering an observer in thread A and calling `einsum(...)` records `hit` and `miss` events in thread A's observer; thread B's observer (if any) sees nothing of A's calls; (b) registering 3 observers in the same thread fires all 3 in registration order on each event; (c) `unregister_dispatch_observer(h)` removes only the matching observer; (d) an observer that raises an exception does not corrupt dispatch state and does not prevent other observers from firing \u2014 the exception is logged via `warnings.warn` and dispatch proceeds; (e) hot-path overhead with no observers: < 1% slowdown on a 10k-call `einsum` microbenchmark vs the same code without `register_dispatch_observer` defined \u2014 measure with `time.perf_counter_ns` over multiple trials; (f) `compile_start`/`compile_end` are emitted in pairs, with `wall_time_ns` of `compile_end` reflecting the actual compile latency; (g) observer is *not* fired from inside the `_kernel_cache` evict path's GC finalizer (no Python re-entry from C++ destructor). Pre-existing tests must continue to pass with no observers registered (the default).\n\nNote: Do not run the test suite to verify your implementation. You may run build commands (e.g., `pip install -e .`) to check that the code compiles, but do not execute any pytest commands. The full test suite will be run separately during evaluation.",
"files": [],
"task_type": "feature",
"categories": [
"Runtime/Tuning & user control"
]
}
]