instance_id: string
repo_id: string
repo_url: string
base_commit: string
language: string
setup_commands: list
test_command: string
test_timeout: int64
refactor_type: string
description: string
files: list
task_type: string
categories: list
bobbyyyan__scorch-feature_cp_decomposition
bobbyyyan__scorch
https://github.com/bobbyyyan/scorch.git
92fb190
python
[]
/testbed/run_tests.sh
3000
Implement CP (CANDECOMP/PARAFAC) tensor decomposition via Alternating Least Squares (ALS). Add `ops.cp_decomposition(X, rank, max_iter=100, tol=1e-8)` that decomposes an N-dimensional sparse tensor X into a sum of rank-one components. The algorithm alternates over each mode, computing the Matricized Tensor Times Khatri...
[]
feature
[ "API/Linear Algebra/Decompositions" ]
bobbyyyan__scorch-feature_nd_advanced_indexing
bobbyyyan__scorch
https://github.com/bobbyyyan/scorch.git
92fb190
python
[]
/testbed/run_tests.sh
3000
Implement N-dimensional advanced indexing for `STensor.__getitem__` that supports mixed indexing modes across arbitrary dimensions of 3D, 4D, and 5D sparse tensors. The existing `__getitem__` (feature_10) only handles 2D CSR/COO row and column slicing. Extend it to support the following indexing types applied to any di...
[]
feature
[ "API/Indexing & Mutation/Read indexing" ]
bobbyyyan__scorch-feature_clamp_clip_round
bobbyyyan__scorch
https://github.com/bobbyyyan/scorch.git
92fb190
python
[]
/testbed/run_tests.sh
3000
Implement sparse element-wise clamp, clip, and rounding operations for N-dimensional sparse tensors. The existing unary operations (feature_7) cover abs, neg, relu, sqrt, exp, log, tanh, and sigmoid via CIN. This task adds operations with distinct sparsity-preservation semantics that are not covered: (1) `ops.clamp(inp...
[]
feature
[ "API/Element-wise/Unary math" ]
bobbyyyan__scorch-feature_einsum_multi_operand
bobbyyyan__scorch
https://github.com/bobbyyyan/scorch.git
92fb190
python
[]
/testbed/run_tests.sh
3000
Extend the existing sparse einsum to support multi-tensor contractions with 3 or more operands in a single expression. The current einsum implementation only handles pairwise (2-tensor) contractions. Add support for expressions involving 3+ sparse tensors such as `einsum('ijk,jl,km->ilm', A, B, C)`. The implementation ...
[]
feature
[ "API/Linear Algebra/Einsum" ]
bobbyyyan__scorch-feature_qr_decomposition
bobbyyyan__scorch
https://github.com/bobbyyyan/scorch.git
92fb190
python
[]
/testbed/run_tests.sh
3000
Implement sparse QR decomposition for 2D and batched 3D sparse matrices. Add `ops.qr(input, mode='reduced')` that computes A = Q * R where Q is orthogonal and R is upper triangular. Use a Householder reflection-based approach adapted for sparse storage: apply Householder transformations column by column, tracking fill-...
[]
feature
[ "API/Linear Algebra/Decompositions" ]
bobbyyyan__scorch-feature_truncated_svd
bobbyyyan__scorch
https://github.com/bobbyyyan/scorch.git
92fb190
python
[]
/testbed/run_tests.sh
3000
Implement truncated Singular Value Decomposition (SVD) for sparse matrices using iterative methods that only require sparse matrix-vector products. Add `ops.truncated_svd(input, k, n_iter=5, method='randomized')` that computes the top-k singular triplets (U, S, V) of a sparse matrix without densifying it. Support two m...
[]
feature
[ "API/Linear Algebra/Decompositions" ]
bobbyyyan__scorch-feature_tucker_decomposition
bobbyyyan__scorch
https://github.com/bobbyyyan/scorch.git
92fb190
python
[]
/testbed/run_tests.sh
3000
Implement Tucker decomposition for N-dimensional sparse tensors using Higher-Order Orthogonal Iteration (HOOI). Add `ops.tucker_decomposition(X, ranks, max_iter=100, tol=1e-8, init='random')` that decomposes an N-dimensional sparse tensor X into a core tensor G and a list of factor matrices [U_1, ..., U_N] such that X ...
[]
feature
[ "API/Linear Algebra/Decompositions" ]
bobbyyyan__scorch-feature_khatri_rao
bobbyyyan__scorch
https://github.com/bobbyyyan/scorch.git
92fb190
python
[]
/testbed/run_tests.sh
3000
Implement the Khatri-Rao product (column-wise Kronecker product) for sparse matrices. Add `ops.khatri_rao(matrices)` that takes a list of 2D sparse matrices (or a mix of sparse and dense) all having the same number of columns R, with shapes (I_1, R), (I_2, R), ..., (I_N, R), and returns the Khatri-Rao product of shape ...
[]
feature
[ "API/Linear Algebra/Tensor products" ]
bobbyyyan__scorch-feature_elementwise_min_max
bobbyyyan__scorch
https://github.com/bobbyyyan/scorch.git
92fb190
python
[]
/testbed/run_tests.sh
3000
Implement sparse element-wise binary `minimum` and `maximum` operations for N-dimensional sparse tensors. Add `ops.minimum(input, other)` and `ops.maximum(input, other)` that compute the element-wise min and max of two sparse tensors. These are distinct from `ops.min` and `ops.max` (feature_40), which are reduction ope...
[]
feature
[ "API/Element-wise/Comparison & predicate" ]
bobbyyyan__scorch-feature_equality_compare
bobbyyyan__scorch
https://github.com/bobbyyyan/scorch.git
92fb190
python
[]
/testbed/run_tests.sh
3000
Implement sparse tensor equality and approximate comparison operations for N-dimensional sparse tensors. Add the following methods and functions: (1) `STensor.__eq__(other)` returning a sparse boolean tensor with True at positions where both tensors have equal values (including positions where both are implicitly zero)...
[]
feature
[ "API/Element-wise/Comparison & predicate" ]
bobbyyyan__scorch-feature_pad_nd
bobbyyyan__scorch
https://github.com/bobbyyyan/scorch.git
92fb190
python
[]
/testbed/run_tests.sh
3000
Implement N-dimensional sparse tensor padding that adds entries to the coordinate structure without dense materialization. Add `ops.pad(input, pad_widths, fill_value=0)` where `pad_widths` is a sequence of (before, after) tuples, one per dimension, specifying how many zeros (or fill_value entries) to add on each side o...
[]
feature
[ "API/Shape & Layout/Concat & pad" ]
bobbyyyan__scorch-feature_sparsity_pattern_ops
bobbyyyan__scorch
https://github.com/bobbyyyan/scorch.git
92fb190
python
[]
/testbed/run_tests.sh
3000
Implement sparsity pattern operations that extract, compare, and manipulate the structural (boolean) sparsity patterns of sparse tensors, independent of their values. Add the following functions: (1) `ops.sparsity_pattern(input)` returning a boolean sparse tensor (values all True) with the same coordinate structure as ...
[]
feature
[ "API/Linear Algebra/Pattern algebra" ]
bobbyyyan__scorch-feature_segment_reduction
bobbyyyan__scorch
https://github.com/bobbyyyan/scorch.git
92fb190
python
[]
/testbed/run_tests.sh
3000
Implement sparse segment reduction operations for GNN-style message passing workloads. Add `ops.segment_coo(src, index, dim_size=None, reduce='sum')` that aggregates values from a sparse tensor `src` into segments defined by a 1D integer `index` tensor, where `index[i]` specifies which segment the i-th stored value bel...
[]
feature
[ "API/Reductions & Scans/Scans & segment" ]
bobbyyyan__scorch-feature_apply_callable
bobbyyyan__scorch
https://github.com/bobbyyyan/scorch.git
92fb190
python
[]
/testbed/run_tests.sh
3000
Implement a sparse element-wise apply/map interface that applies user-defined callable functions to the stored non-zero entries of N-dimensional sparse tensors. This is distinct from feature_7 (which adds specific hardcoded unary operations to CIN). Add `ops.apply(input, func)` that takes an STensor and a Python callab...
[]
feature
[ "API/Element-wise/Unary math" ]
bobbyyyan__scorch-feature_sort_entries
bobbyyyan__scorch
https://github.com/bobbyyyan/scorch.git
92fb190
python
[]
/testbed/run_tests.sh
3000
Implement sparse tensor sorting operations that sort stored entries by value or by coordinate indices along specified dimensions for N-dimensional sparse tensors. Add the following functions: (1) `ops.sort_values(input, descending=False)` that returns a new STensor with stored entries sorted by their values (and coordi...
[]
feature
[ "API/Indexing & Mutation/Canonicalization" ]
bobbyyyan__scorch-feature_sparse_embedding
bobbyyyan__scorch
https://github.com/bobbyyyan/scorch.git
92fb190
python
[]
/testbed/run_tests.sh
3000
Implement a sparse embedding layer with sparse gradient support for NLP and recommendation system workloads. Add `SparseEmbedding(num_embeddings, embedding_dim, sparse=True)` as a module class that maintains a dense weight matrix of shape (num_embeddings, embedding_dim) but produces sparse gradients during backpropagat...
[]
feature
[ "API/ML Primitives/Attention & embedding" ]
bobbyyyan__scorch-feature_cin_autodiff
bobbyyyan__scorch
https://github.com/bobbyyyan/scorch.git
92fb190
python
[]
/testbed/run_tests.sh
3000
Add a compiler-side source-to-source reverse-mode automatic differentiation pass that operates on the CIN IR. This is distinct from a `torch.autograd.Function` wrapper at the Python level - the goal here is that, given a forward CIN statement, the compiler emits a *new* CIN statement that computes the partial derivativ...
[]
feature
[ "API/ML Primitives/Autograd", "IR/CIN nodes" ]
bobbyyyan__scorch-feature_cin_ifthenelse
bobbyyyan__scorch
https://github.com/bobbyyyan/scorch.git
92fb190
python
[]
/testbed/run_tests.sh
3000
Add a CIN-level `IfThenElse` IR node together with a comparison expression family for conditional sparse computation. Today CIN has no conditional construct - control flow is exclusively spatial (via `ForAll`) or producer-consumer (via `Where`). The LLIR has `IfThenElse` (`llir.py:510`) but it is only synthesized insid...
[]
feature
[ "IR/CIN nodes" ]
bobbyyyan__scorch-feature_rle_level
bobbyyyan__scorch
https://github.com/bobbyyyan/scorch.git
92fb190
python
[]
/testbed/run_tests.sh
3000
Add `LevelType.RLE` (run-length encoding) to the format system and the full compiler pipeline. RLE is the right format for sparsity patterns where consecutive entries share a coordinate (segmented graphs, time-series with aligned events, certain post-permutation matrices). It compresses runs of identical coordinates in...
[]
feature
[ "Format/Compressed-style levels" ]
bobbyyyan__scorch-feature_morton_level
bobbyyyan__scorch
https://github.com/bobbyyyan/scorch.git
92fb190
python
[]
/testbed/run_tests.sh
3000
Add `LevelType.MORTON` to the format system. A Morton level encodes multiple logical dimensions into a single linear coordinate using a Morton (Z-order) bit-interleaving of those dimensions, producing a space-filling-curve traversal that improves cache locality across multi-dimensional sparse access patterns. The new l...
[]
feature
[ "Format/Compressed-style levels" ]
bobbyyyan__scorch-feature_ragged_level
bobbyyyan__scorch
https://github.com/bobbyyyan/scorch.git
92fb190
python
[]
/testbed/run_tests.sh
3000
Add `LevelType.RAGGED` to the format system to natively represent jagged sublist structures (one variable-length list per parent coordinate) without padding to a maximum length. RAGGED differs from `COMPRESSED` in two ways: (a) the offset array points into a *value* array directly rather than into a coordinate array - ...
[]
feature
[ "Format/Hierarchical & multi-d" ]
bobbyyyan__scorch-feature_packed_coords_bitwidth
bobbyyyan__scorch
https://github.com/bobbyyyan/scorch.git
92fb190
python
[]
/testbed/run_tests.sh
3000
Implement variable-bit-width packed coordinate arrays for sparse levels. The `_bit_width: Optional[int]` field on `LevelFormat` already exists in `src/scorch/format.py:45` but is unused throughout the storage and codegen pipeline; wire it through end-to-end so that a sparse level whose dimension fits in 8 bits uses a `...
[]
feature
[ "Format/Compressed-style levels" ]
bobbyyyan__scorch-feature_cache_hierarchy_tiling
bobbyyyan__scorch
https://github.com/bobbyyyan/scorch.git
92fb190
python
[]
/testbed/run_tests.sh
3000
Add a multi-level cache-aware tiling pass to the scheduler that produces nested tiles sized to fit L1, L2, and L3 caches respectively. Today, `Scheduler.add_tile()` (`scheduler.py:839`) performs a single level of tiling at a fixed `tile_size`; extend so that for a chosen index variable the scheduler can apply *multiple...
[]
feature
[ "Scheduler/Loop transformations/Tiling" ]
bobbyyyan__scorch-feature_workspace_pooling
bobbyyyan__scorch
https://github.com/bobbyyyan/scorch.git
92fb190
python
[]
/testbed/run_tests.sh
3000
Implement workspace lifetime analysis and memory pooling across multiple `Where` clauses in a CIN program. Today, every `Where` lowers via `cin_lowerer.py:lower_Where` (line 860) into an independent workspace allocation; chained kernels (e.g. `D = (A @ B) + (A @ C)` lowering into two workspaces) allocate two separate b...
[]
feature
[ "Scheduler/Sparse-specific passes/Workspace transforms" ]
bobbyyyan__scorch-feature_auto_transpose_insertion
bobbyyyan__scorch
https://github.com/bobbyyyan/scorch.git
92fb190
python
[]
/testbed/run_tests.sh
3000
Add an automatic transpose-insertion pass to the scheduler that, when the cost model favors it, physically permutes the mode order of an input operand so the chosen loop order matches its storage order. Today the scheduler picks a loop order based on sparsity (`Scheduler.sort_by_sparsity_descending`, `Scheduler.optimiz...
[]
feature
[ "Scheduler/Loop transformations/Reorder & restructure" ]
bobbyyyan__scorch-feature_atomic_parallel_scatter
bobbyyyan__scorch
https://github.com/bobbyyyan/scorch.git
92fb190
python
[]
/testbed/run_tests.sh
3000
Add atomic operations to the LLIR for safe parallel scatter into shared sparse outputs. Today the scheduler heuristically refuses to parallelize loops whose inner body writes to a shared output position (`CINLowerer._should_parallelize_outer_forall` at `cin_lowerer.py:2048`, `_is_openmp_compatible_for_loop` at `cin_low...
[]
feature
[ "Codegen/Parallelism" ]
bobbyyyan__scorch-feature_loop_distribution
bobbyyyan__scorch
https://github.com/bobbyyyan/scorch.git
92fb190
python
[]
/testbed/run_tests.sh
3000
Add a loop distribution (loop fission) pass to the scheduler that splits a single multi-statement loop nest into multiple loop nests, enabling the existing fusion/parallelization machinery to operate on cleaner units. Today the scheduler only does loop reordering and tiling - there is no transformation that breaks one ...
[]
feature
[ "Scheduler/Loop transformations/Reorder & restructure" ]
bobbyyyan__scorch-feature_simd_intrinsics
bobbyyyan__scorch
https://github.com/bobbyyyan/scorch.git
92fb190
python
[]
/testbed/run_tests.sh
3000
Extend codegen to emit explicit SIMD intrinsics rather than relying solely on `#pragma omp simd` directives. Today, `ForLoop.simd: bool` (`llir.py:479`) only emits the SIMD pragma; extend so that vectorizable loops can be lowered to AVX-512 (x86) or NEON (ARM) intrinsics with a scalar epilogue and a fallback to the pra...
[]
feature
[ "Codegen/Vectorization" ]
bobbyyyan__scorch-feature_cin_simplify
bobbyyyan__scorch
https://github.com/bobbyyyan/scorch.git
92fb190
python
[]
/testbed/run_tests.sh
3000
Add an algebraic simplification and canonicalization pass over the CIN IR that runs before scheduling and lowering. Today no such pass exists - every `BinaryOp` lowers verbatim, even when one operand is a multiplication by zero or addition of zero. (1) Create `src/scorch/compiler/cin_simplify.py` with a `Simplification...
[]
feature
[ "Scheduler/IR analyses & scalar opts/Algebraic rewrites" ]
bobbyyyan__scorch-feature_streaming_backend
bobbyyyan__scorch
https://github.com/bobbyyyan/scorch.git
92fb190
python
[]
/testbed/run_tests.sh
3000
Add a streaming (out-of-core) backend for sparse tensors that exceed available memory by processing them in coordinate-aligned chunks read from disk. (1) In `src/scorch/storage.py`, add a `StreamingTensorStorage(TensorStorage)` subclass that backs its value/coordinate arrays with a memory-mapped binary file plus a sequ...
[]
feature
[ "Codegen/Backend targets", "Runtime/Memory management" ]
bobbyyyan__scorch-feature_work_stealing_scheduler
bobbyyyan__scorch
https://github.com/bobbyyyan/scorch.git
92fb190
python
[]
/testbed/run_tests.sh
3000
Add a task-based work-stealing scheduler for sparse kernels with severely imbalanced row work distributions. Today parallelism is exclusively OpenMP `parallel for` with static or guided scheduling, decided at the LLIR `ForLoop.omp_schedule` level (`llir.py:476`). Skewed sparse workloads (power-law-distributed row nnz, ...
[]
feature
[ "Runtime/Work scheduling" ]
bobbyyyan__scorch-feature_contraction_order_opt
bobbyyyan__scorch
https://github.com/bobbyyyan/scorch.git
92fb190
python
[]
/testbed/run_tests.sh
3000
Extend the einsum implementation with a cost-model-driven contraction-ordering optimizer for expressions over three or more sparse operands. Today `ops.einsum` (`ops.py:377`) handles multi-tensor contractions by sequential pairwise reduction in equation order; extend to choose a near-optimal contraction tree. (1) In a ...
[]
feature
[ "Scheduler/Contraction planning" ]
bobbyyyan__scorch-feature_scalar_param_specialize
bobbyyyan__scorch
https://github.com/bobbyyyan/scorch.git
92fb190
python
[]
/testbed/run_tests.sh
3000
Add compile-time scalar-parameter specialization to the CIN->LLIR pipeline. Many sparse ops accept scalar parameters (e.g. `addmm`'s `alpha`/`beta`, `dropout`'s `p`, `pow`'s `exponent`, `mul`/`div` by a scalar, `clamp`'s bounds); today these are passed as runtime kernel arguments. Specializing on the scalar value at co...
[]
feature
[ "Scheduler/Dense passes/Specialization" ]
bobbyyyan__scorch-feature_dependence_analysis
bobbyyyan__scorch
https://github.com/bobbyyyan/scorch.git
92fb190
python
[]
/testbed/run_tests.sh
3000
Implement a sound dependence-analysis pass that decides whether a candidate loop reordering preserves semantics for sparse computations. Today `Scheduler.optimize_loop_order` (`scheduler.py:520`) reorders loops by a sparsity heuristic; while this often produces faster code, it can silently mis-reorder for non-trivial r...
[]
feature
[ "Scheduler/IR analyses & scalar opts/Dataflow analyses" ]
bobbyyyan__scorch-feature_octree_level
bobbyyyan__scorch
https://github.com/bobbyyyan/scorch.git
92fb190
python
[]
/testbed/run_tests.sh
3000
Add `LevelType.OCTREE` to the format system: a hierarchical level type that groups multiple consecutive dimensions of a sparse tensor under a single tree-structured index, accelerating range queries and locality-preserving traversal for high-dimensional sparsity. Despite the name, the level generalizes to k-d trees for...
[]
feature
[ "Format/Hierarchical & multi-d" ]
bobbyyyan__scorch-feature_format_coercion_pass
bobbyyyan__scorch
https://github.com/bobbyyyan/scorch.git
92fb190
python
[]
/testbed/run_tests.sh
3000
Add a compile-time format-coercion pass that decides for each operand of a CIN computation whether to convert its storage format mid-pipeline (before the kernel runs) when the cost model favors it. Today, `ops.matmul` (`ops.py:250`) dispatches to prebuilt kernels when input formats match a registered spec (`prebuilt_ke...
[]
feature
[ "Scheduler/Sparse-specific passes/Format adaptation" ]
bobbyyyan__scorch-feature_loop_skewing
bobbyyyan__scorch
https://github.com/bobbyyyan/scorch.git
92fb190
python
[]
/testbed/run_tests.sh
3000
Implement a polyhedral-style loop-skewing transformation in the scheduler that rewrites an index pair `(i, j)` into `(i, i + j)` (and the general unimodular affine case `(i, c1 * i + c2 * j)` for small constants) so that inherently serial wavefronts can be exposed as parallel hyperplanes. This is a strictly-more-genera...
[]
feature
[ "Scheduler/Loop transformations/Reorder & restructure" ]
bobbyyyan__scorch-feature_llir_ssa
bobbyyyan__scorch
https://github.com/bobbyyyan/scorch.git
92fb190
python
[]
/testbed/run_tests.sh
3000
Introduce an SSA (Static Single Assignment) form for the LLIR so downstream optimization passes (dead-code elimination, common subexpression elimination, loop-invariant code motion) have a principled substrate to operate on. Today, `src/scorch/compiler/llir.py` uses an imperative model where `Assign` and `VarInit` free...
[]
feature
[ "IR/LLIR form" ]
bobbyyyan__scorch-feature_unroll_and_jam
bobbyyyan__scorch
https://github.com/bobbyyyan/scorch.git
92fb190
python
[]
/testbed/run_tests.sh
3000
Implement a loop unroll-and-jam optimization pass for dense inner-loop bodies of tensor operations of arbitrary rank. Unroll-and-jam simultaneously unrolls an outer loop by a factor U and fuses the U copies of the inner loop body together, exposing register-level reuse opportunities that plain loop unrolling cannot. Im...
[]
feature
[ "Scheduler/Loop transformations/Reorder & restructure" ]
bobbyyyan__scorch-feature_software_prefetch
bobbyyyan__scorch
https://github.com/bobbyyyan/scorch.git
92fb190
python
[]
/testbed/run_tests.sh
3000
Add a software-prefetch insertion pass that emits `__builtin_prefetch` (and `_mm_prefetch` on x86 where supported) calls ahead of sparse coordinate-array and value-array loads so the CPU can hide pointer-chasing latency. Today the compiler emits no explicit prefetches; sparse kernels with indirect access patterns are b...
[]
feature
[ "Scheduler/IR analyses & scalar opts/Classical passes" ]
bobbyyyan__scorch-feature_loop_invariant_code_motion
bobbyyyan__scorch
https://github.com/bobbyyyan/scorch.git
92fb190
python
[]
/testbed/run_tests.sh
3000
Implement position-level loop-invariant code motion (LICM) that hoists sparse position/coordinate array loads out of inner loops whenever the enclosing position variable is invariant with respect to the inner loops. Today the lowered LLIR frequently re-reads `A1_crd[pA1]`, `A1_pos[pA0+1]`, and `A2_size` on every iterat...
[]
feature
[ "Scheduler/IR analyses & scalar opts/Classical passes" ]
bobbyyyan__scorch-feature_coord_cse
bobbyyyan__scorch
https://github.com/bobbyyyan/scorch.git
92fb190
python
[]
/testbed/run_tests.sh
3000
Implement a sparse-coordinate common subexpression elimination (CSE) pass that deduplicates coordinate-address arithmetic across the branches of a lowered iteration lattice. In the lowered C++ today, the same position-to-linear-address computation (for example `pA0 * A1_size + iA1` or `A1_pos[pA0]` + offset chains) is...
[]
feature
[ "Scheduler/IR analyses & scalar opts/Classical passes" ]
bobbyyyan__scorch-feature_affine_canonicalize
bobbyyyan__scorch
https://github.com/bobbyyyan/scorch.git
92fb190
python
[]
/testbed/run_tests.sh
3000
Implement an affine-index canonicalization pass over the CIN IR that rewrites every index expression into a canonical sum-of-products form so that downstream passes (strength reduction, CSE, dependence analysis, alias analysis) can rely on syntactic equality of equivalent expressions. Today `IndexVarAdd` (`src/scorch/c...
[]
feature
[ "Scheduler/IR analyses & scalar opts/Algebraic rewrites" ]
bobbyyyan__scorch-feature_ragged_level_unsorted
bobbyyyan__scorch
https://github.com/bobbyyyan/scorch.git
92fb190
python
[]
/testbed/run_tests.sh
3000
Add a ragged (jagged) sparse level type `LevelType.RAGGED` that represents a variable-length dimension without the sorted/compressed invariant of `LevelType.COMPRESSED`. Each group at the parent position has an integer length and a flat run of entries; unlike COMPRESSED, the coordinates within a group need not be sorte...
[]
feature
[ "Format/Hierarchical & multi-d" ]
bobbyyyan__scorch-feature_nested_level
bobbyyyan__scorch
https://github.com/bobbyyyan/scorch.git
92fb190
python
[]
/testbed/run_tests.sh
3000
Add a nested (recursive) format level type `LevelType.NESTED` whose entries are themselves sparse sub-tensors with their own `TensorFormat`, enabling hierarchical blocking beyond the fixed block-sparse format (feature_1). This is the sparse analogue of a B+tree or an arbitrarily-nested ragged array and is required for ...
[]
feature
[ "Format/Hierarchical & multi-d" ]
bobbyyyan__scorch-feature_bidirectional_iteration
bobbyyyan__scorch
https://github.com/bobbyyyan/scorch.git
92fb190
python
[]
/testbed/run_tests.sh
3000
Extend the compiler to support bidirectional (descending and arbitrary-permutation) iteration over compressed and coordinate sparse levels. Currently `ModeIterator` in `src/scorch/compiler/iterator.py` emits strictly ascending `for (int pA1 = A1_pos[pA0]; pA1 < A1_pos[pA0+1]; pA1++)` loops; algorithms such as reverse-...
[]
feature
[ "IR/Iteration semantics" ]
bobbyyyan__scorch-feature_zero_propagation
bobbyyyan__scorch
https://github.com/bobbyyyan/scorch.git
92fb190
python
[]
/testbed/run_tests.sh
3000
Implement a structural-zero propagation pass over the CIN IR that proves portions of an iteration space produce provably-zero output and eliminates the corresponding lattice branches and loops before lowering. Today the compiler emits code for every branch of the iteration lattice regardless of whether any branch's ope...
[]
feature
[ "Scheduler/Sparse-specific passes/Iter-space pruning" ]
bobbyyyan__scorch-feature_shared_traversal
bobbyyyan__scorch
https://github.com/bobbyyyan/scorch.git
92fb190
python
[]
/testbed/run_tests.sh
3000
Introduce a first-class shared-traversal CIN construct that expresses "compute k output tensors from a single sparse traversal of the input operands" and emit one fused kernel per group, saving redundant index manipulation and memory traffic. This generalizes feature_40's multi-output max/argmax and feature_21's binary...
[]
feature
[ "IR/CIN nodes" ]
bobbyyyan__scorch-feature_empty_intersection_prove
bobbyyyan__scorch
https://github.com/bobbyyyan/scorch.git
92fb190
python
[]
/testbed/run_tests.sh
3000
Implement a symbolic intersection-emptiness prover that inspects operand formats, shapes, and any available structural metadata to decide whether a sparse-sparse intersection is provably empty - and when it is, eliminates the corresponding lattice branch at compile time before any C++ is emitted. Today, even when two o...
[]
feature
[ "Scheduler/Sparse-specific passes/Iter-space pruning" ]
bobbyyyan__scorch-feature_density_specialization
bobbyyyan__scorch
https://github.com/bobbyyyan/scorch.git
92fb190
python
[]
/testbed/run_tests.sh
3000
Add a density-class trip-count specialization pass that compiles multiple variants of each CIN kernel - one per "density class" of the sparse operands - and emits a runtime dispatcher that selects among them based on the observed nnz-to-size ratio at call time. Sparse kernel performance varies by orders of magnitude ac...
[]
feature
[ "Scheduler/Sparse-specific passes/Format adaptation" ]
bobbyyyan__scorch-feature_bump_pool_arena
bobbyyyan__scorch
https://github.com/bobbyyyan/scorch.git
92fb190
python
[]
/testbed/run_tests.sh
3000
Implement a thread-local bump-pool arena allocator for kernel workspaces so that generated kernels stop paying `malloc`/`free` costs on every invocation. Today the emitted C++ calls `malloc` for each workspace and `free` at kernel end (see `csrc/header.cpp`); under OpenMP each thread's workspaces incur separate system ...
[]
feature
[ "Runtime/Memory management" ]
bobbyyyan__scorch-feature_dead_code_elimination
bobbyyyan__scorch
https://github.com/bobbyyyan/scorch.git
92fb190
python
[]
/testbed/run_tests.sh
3000
Add a dead-code elimination (DCE) pass over the lowered LLIR that removes statements whose defined variables are never used and whose only side effect is the definition itself. Today the compiler emits many dead temporaries because each pass (cin_lowerer, iter_lattice, iterator) conservatively introduces local variable...
[]
feature
[ "Scheduler/IR analyses & scalar opts/Classical passes" ]
bobbyyyan__scorch-feature_value_layout_rewrite
bobbyyyan__scorch
https://github.com/bobbyyyan/scorch.git
92fb190
python
[]
/testbed/run_tests.sh
3000
Implement a value-array layout rewrite pass that chooses per-operand Structure-of-Arrays (SoA) vs Array-of-Structures (AoS) for multi-output kernels, based on downstream access patterns. Today every tensor stores its value array as a single contiguous `cvector<T>`; when a kernel writes to multiple output tensors that s...
[]
feature
[ "Scheduler/Dense passes/Layout rewrite" ]
bobbyyyan__scorch-feature_user_reduction_op
bobbyyyan__scorch
https://github.com/bobbyyyan/scorch.git
92fb190
python
[]
/testbed/run_tests.sh
3000
Introduce a first-class user-defined `ReductionOp` CIN IR node with a deterministic parallel tree-reduction lowering so that arbitrary associative and commutative combiners (not just `+` and `*`) can be expressed and parallelized correctly under OpenMP. The existing semiring matmul (feature_38) piggybacks on the hard-w...
[]
feature
[ "IR/CIN nodes" ]
bobbyyyan__scorch-feature_codegen_refactor
bobbyyyan__scorch
https://github.com/bobbyyyan/scorch.git
92fb190
python
[]
/testbed/run_tests.sh
3000
Refactor `src/scorch/compiler/codegen.py` from a monolithic `LLIRLowerer.lower_llir` dispatch into a `CodegenBackend` abstraction with a default `CppOpenMPBackend` that preserves existing behavior, plus a `CppScalarBackend` (no OpenMP, useful for debugging and reference) and a stub `IRPrinter` backend (emits a pretty-p...
[]
feature
[ "IR/Codegen architecture" ]
bobbyyyan__scorch-feature_cin_call_inline
bobbyyyan__scorch
https://github.com/bobbyyyan/scorch.git
92fb190
python
[]
/testbed/run_tests.sh
3000
Introduce a `CINCall` higher-order IR node and a sub-CIN inlining pass that expands calls into their parent CIN, enabling a vmap/batched-apply pattern at the CIN level so that a block of computation parameterized over a slice index can be expressed once and reused across arbitrary outer ranks without duplicating user c...
[]
feature
[ "IR/CIN nodes" ]
bobbyyyan__scorch-feature_register_blocking
bobbyyyan__scorch
https://github.com/bobbyyyan/scorch.git
92fb190
python
[]
/testbed/run_tests.sh
3000
Implement a register-blocking (inner-kernel micro-tiling) scheduler pass that specializes the innermost loops of dense-level nests into BLIS-style register microkernels with explicit accumulator registers. This is distinct from cache tiling (`Scheduler.add_tile`, `src/scorch/compiler/scheduler.py:839`): register blocki...
[]
feature
[ "Scheduler/Loop transformations/Tiling" ]
bobbyyyan__scorch-feature_blis_operand_packing
bobbyyyan__scorch
https://github.com/bobbyyyan/scorch.git
92fb190
python
[]
/testbed/run_tests.sh
3000
Implement a BLIS/GotoBLAS-style operand packing pass for dense tensors that copies cache-resident panels of large dense operands into a packed buffer before the inner kernel runs and makes the inner kernel address the packed buffer instead of the original operand. The pass must work for dense operands of any rank (not ...
[]
feature
[ "Scheduler/Dense passes/Pattern match" ]
bobbyyyan__scorch-feature_loop_collapse
bobbyyyan__scorch
https://github.com/bobbyyyan/scorch.git
92fb190
python
[]
/testbed/run_tests.sh
3000
Add a loop-collapse (nest flattening) scheduler pass that fuses a chain of perfectly-nested dense `ForAll` loops into a single `ForAll` over a collapsed ivar whose extent is the product of the original extents. This generalizes the OpenMP `collapse(n)` clause to the full CIN scheduling surface and must work for nests o...
[]
feature
[ "Scheduler/Loop transformations/Reorder & restructure" ]
bobbyyyan__scorch-feature_broadcast_specialize
bobbyyyan__scorch
https://github.com/bobbyyyan/scorch.git
92fb190
python
[]
/testbed/run_tests.sh
3000
Add a broadcast-dimension specialization pass that detects, at compile time, dense operand dimensions of size 1 (broadcast axes) and emits a specialized kernel that eliminates the broadcast loops entirely, replacing broadcast accesses with a scalar load hoisted out of the inner nest. The pass must handle arbitrary rank...
[]
feature
[ "Scheduler/Dense passes/Specialization" ]
bobbyyyan__scorch-feature_dense_strided_view
bobbyyyan__scorch
https://github.com/bobbyyyan/scorch.git
92fb190
python
[]
/testbed/run_tests.sh
3000
Introduce a first-class `DenseStridedView` representation in `STensor` storage and propagate it through the full compilation pipeline so that operations like slicing, transposition, and broadcasting on dense tensors produce views rather than materialized copies and the downstream CIN sees the view with the correct stri...
[]
feature
[ "API/Shape & Layout/Views" ]
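A minimal model of the view semantics this task describes, assuming the usual (offset, strides) formulation; the class name and fields are hypothetical, not Scorch's actual `DenseStridedView`:

```python
# Illustrative strided view: (offset, strides) metadata over a flat
# buffer, with transposition as a pure metadata edit (no copy).
class View:
    def __init__(self, buf, shape, strides, offset=0):
        self.buf, self.shape = buf, shape
        self.strides, self.offset = strides, offset

    def __getitem__(self, idx):          # idx: full tuple of ints
        flat = self.offset + sum(i * s for i, s in zip(idx, self.strides))
        return self.buf[flat]

    def transpose(self):                 # swap last two dims: metadata only
        sh, st = list(self.shape), list(self.strides)
        sh[-2], sh[-1] = sh[-1], sh[-2]
        st[-2], st[-1] = st[-1], st[-2]
        return View(self.buf, tuple(sh), tuple(st), self.offset)

buf = list(range(12))                    # 3x4 row-major
v = View(buf, (3, 4), (4, 1))
vt = v.transpose()                       # 4x3 view sharing the same buffer
```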
bobbyyyan__scorch-feature_mixed_precision_accum
bobbyyyan__scorch
https://github.com/bobbyyyan/scorch.git
92fb190
python
[]
/testbed/run_tests.sh
3,000
Add end-to-end mixed-precision accumulator support for dense arithmetic so that `torch.float16` and `torch.bfloat16` operands can flow through the compile pipeline while reductions accumulate in `torch.float32` inside the generated kernel, with a final cast back to the operand dtype at store time. This is the standard ...
[]
feature
[ "API/Type System/Promotion & mixed precision", "Codegen/Vectorization" ]
bobbyyyan__scorch-feature_stencil_halo_tiling
bobbyyyan__scorch
https://github.com/bobbyyyan/scorch.git
92fb190
python
[]
/testbed/run_tests.sh
3,000
Implement a stencil detection and halo-tiling pass for dense tensor computations whose access patterns reference shifted neighbors of a central index. The pass identifies CIN expressions whose dense `TensorAccess`es use `IndexVarAdd` / affine index expressions of the form `i + k` (for small constants k), classifies the...
[]
feature
[ "Scheduler/Loop transformations/Tiling" ]
bobbyyyan__scorch-feature_blas_pattern_match
bobbyyyan__scorch
https://github.com/bobbyyyan/scorch.git
92fb190
python
[]
/testbed/run_tests.sh
3,000
Implement a pattern-matching pass that detects dense subtrees of a CIN corresponding to standard BLAS primitives (GEMM, GEMV, GER, SYRK, TRMM, DOT, AXPY) and rewrites them to emit direct calls into a detected BLAS library (OpenBLAS or MKL) instead of generating C++ loop nests. The pass must handle arbitrary batch ranks...
[]
feature
[ "Scheduler/Dense passes/Pattern match" ]
bobbyyyan__scorch-feature_dense_producer_consumer_fusion
bobbyyyan__scorch
https://github.com/bobbyyyan/scorch.git
92fb190
python
[]
/testbed/run_tests.sh
3,000
Add a dense producer-consumer fusion pass that fuses a dense CIN producer into its immediate dense consumer, sharing an on-chip workspace so the producer's output is consumed directly in the consumer's inner loop without ever materializing a full intermediate tensor. This is the dense counterpart of the existing sparse...
[]
feature
[ "Scheduler/Loop transformations/Fusion" ]
bobbyyyan__scorch-feature_dataflow_selection
bobbyyyan__scorch
https://github.com/bobbyyyan/scorch.git
92fb190
python
[]
/testbed/run_tests.sh
3,000
Add a dataflow-selection scheduler pass that, for dense contractions, chooses between output-stationary, weight-stationary, and input-stationary dataflows and emits a specialized kernel per choice. In output-stationary dataflow the output tile is held in registers across the reduction in K; in weight-stationary, one in...
[]
feature
[ "Scheduler/Dense passes/Specialization" ]
bobbyyyan__scorch-feature_broadcast_sparse_aware
bobbyyyan__scorch
https://github.com/bobbyyyan/scorch.git
92fb190
python
[]
/testbed/run_tests.sh
3,000
Make broadcasting work consistently for every elementwise `STensor` operation, including sparse-sparse, sparse-dense, dense-sparse, scalar, and torch scalar operands. The public behavior should match PyTorch broadcasting, but the implementation must stay sparse-aware: dimensions of size 1 are broadcast logically and sh...
[]
feature
[ "API/Constructors & I/O/Broadcasting" ]
bobbyyyan__scorch-feature_lazy_permute
bobbyyyan__scorch
https://github.com/bobbyyyan/scorch.git
92fb190
python
[]
/testbed/run_tests.sh
3,000
Add lazy `permute`, `moveaxis`, and `movedim` support for `STensor` so dimension reordering updates logical metadata without physically rewriting storage until an operation truly requires it. The hard requirement is that every existing op consuming a permuted tensor must see the correct logical indices, shapes, formats...
[]
feature
[ "API/Shape & Layout/Transpose & permute" ]
bobbyyyan__scorch-feature_dtype_promotion
bobbyyyan__scorch
https://github.com/bobbyyyan/scorch.git
92fb190
python
[]
/testbed/run_tests.sh
3,000
Implement PyTorch-style dtype promotion for all scalar, dense, and sparse binary operations. The result dtype should follow `torch.result_type` for tensor-tensor, tensor-scalar, and scalar-tensor cases, including bool, integer, float32, float64, and complex dtypes that Scorch supports or needs to add. Promotion must fl...
[]
feature
[ "API/Type System/Promotion & mixed precision" ]
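A toy promotion lattice in the spirit of `torch.result_type`, showing only the shape of the table-driven approach the description calls for; the real rule set (including scalar "weak" types and int/float interactions) is richer than this rank comparison:

```python
# Toy dtype-promotion lattice: higher rank wins. Illustrative only.
_RANK = {"bool": 0, "int32": 1, "int64": 2,
         "float32": 3, "float64": 4, "complex64": 5}

def promote(a, b):
    """Result dtype of a binary op between two tensor dtypes."""
    return a if _RANK[a] >= _RANK[b] else b
```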
bobbyyyan__scorch-feature_pad_crop_nd
bobbyyyan__scorch
https://github.com/bobbyyyan/scorch.git
92fb190
python
[]
/testbed/run_tests.sh
3,000
Implement `pad` and `crop` for sparse and dense STensors with PyTorch-like padding specifications generalized to arbitrary rank. Padding with zero should rewrite shapes and coordinates without touching values; padding with a nonzero constant may require densification unless the format supports an explicit fill value. C...
[]
feature
[ "API/Shape & Layout/Concat & pad" ]
bobbyyyan__scorch-feature_parallel_output_merge
bobbyyyan__scorch
https://github.com/bobbyyyan/scorch.git
92fb190
python
[]
/testbed/run_tests.sh
3,000
Implement deterministic parallel sparse output assembly using thread-local coordinate buffers followed by a stable merge. When generated kernels run with OpenMP, every thread should append candidate output coordinates and values into a private buffer, then a final merge should sort by logical output coordinates, coales...
[]
feature
[ "Codegen/Parallelism" ]
bobbyyyan__scorch-feature_argmin_argmax
bobbyyyan__scorch
https://github.com/bobbyyyan/scorch.git
92fb190
python
[]
/testbed/run_tests.sh
3,000
Implement `STensor.argmin(dim=None, keepdim=False)`, `STensor.argmax(dim=None, keepdim=False)`, and the corresponding `torch.argmin` / `torch.argmax` for STensors of any rank and any format. With `dim=None`, return a 0-d int64 STensor giving the flat index of the global min/max in the dense materialization. With a `dim...
[]
feature
[ "API/Reductions & Scans/Argmax-style" ]
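The subtlety here is that a sparse argmax must account for implicit zeros. A sketch over a COO-style flat vector (helper name illustrative; ties between a stored zero and an implicit zero are resolved in favor of stored entries for brevity):

```python
# Global argmax over a sparse vector with implicit zeros: if the best
# stored value is negative (or nothing is stored) and implicit zeros
# exist, the first implicit-zero position wins.
def sparse_argmax(nnz, size):
    """nnz: dict {flat_index: value}; size: logical length."""
    best_i, best_v = None, None
    for i in sorted(nnz):
        if best_v is None or nnz[i] > best_v:
            best_i, best_v = i, nnz[i]
    if len(nnz) < size and (best_v is None or best_v < 0):
        stored = set(nnz)
        best_i = next(i for i in range(size) if i not in stored)
    return best_i
```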
bobbyyyan__scorch-feature_async_jit_compile
bobbyyyan__scorch
https://github.com/bobbyyyan/scorch.git
92fb190
python
[]
/testbed/run_tests.sh
3,000
Make JIT kernel compilation non-blocking. `_load_kernel(name, sources, ...)` in `src/scorch/utils.py` currently blocks the calling thread until `torch.utils.cpp_extension.load_inline(...)` finishes - a 5-30 second wait that dominates first-use latency for any new CIN. Add an `_load_kernel_async(...)` variant that retur...
[]
feature
[ "Runtime/Caching & dispatch" ]
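The non-blocking shape this task asks for can be sketched with a single-worker executor standing in for the real compile path; the function names mirror the description but the wiring is illustrative:

```python
# _load_kernel_async returns a Future immediately; callers block on
# .result() only when they actually need the compiled module.
from concurrent.futures import ThreadPoolExecutor, Future

_executor = ThreadPoolExecutor(max_workers=1)

def _load_kernel_blocking(name, sources):
    # placeholder for torch.utils.cpp_extension.load_inline(...)
    return f"module:{name}"

def _load_kernel_async(name, sources) -> Future:
    return _executor.submit(_load_kernel_blocking, name, sources)

fut = _load_kernel_async("spmm", ["..."])
module = fut.result()          # block only at first use of the module
```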
bobbyyyan__scorch-feature_torch_meta_tensor
bobbyyyan__scorch
https://github.com/bobbyyyan/scorch.git
92fb190
python
[]
/testbed/run_tests.sh
3,000
Add support for STensors created with `device='meta'`, where shape, dtype, format, and mode_order are tracked but values and mode_indices are not allocated. Used for shape/format inference without computation. Add a `device` parameter to `STensor.__init__` (default `'cpu'`); when `device == 'meta'`, the `_storage`'s va...
[]
feature
[ "API/Constructors & I/O/Torch dispatch" ]
bobbyyyan__scorch-feature_strided_dense_zerocopy
bobbyyyan__scorch
https://github.com/bobbyyyan/scorch.git
92fb190
python
[]
/testbed/run_tests.sh
3,000
Add a zero-copy fast path to `STensor.from_torch` for non-contiguous (strided) dense torch tensors. Currently `from_torch` calls `.contiguous()` which copies; the new path retains the original storage and stride information. The optimization must work for any rank. Per-axis `stride` metadata is added to `TensorStorage`...
[]
feature
[ "API/Constructors & I/O/Torch dispatch" ]
bobbyyyan__scorch-feature_threadlocal_dispatch_cache
bobbyyyan__scorch
https://github.com/bobbyyyan/scorch.git
92fb190
python
[]
/testbed/run_tests.sh
3,000
Make the global `_einsum_dispatch_cache` and `_kernel_cache` in `src/scorch/ops.py` and `src/scorch/utils.py` safe under concurrent calls from multiple Python threads without sacrificing the cache-hit speedup. The current code reads/writes both dicts without synchronization, which has caused intermittent KeyError and p...
[]
feature
[ "Runtime/Caching & dispatch" ]
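The lock-free-read / locked-miss discipline this task implies is the classic double-checked pattern; a minimal sketch (class and field names illustrative):

```python
# Thread-safe memo cache: unlocked dict read on the hit path (atomic in
# CPython), double-checked under one lock on the miss path so the build
# runs exactly once per key.
import threading

class SafeCache:
    def __init__(self):
        self._d = {}
        self._lock = threading.Lock()
        self.builds = 0                      # how many times build ran

    def get(self, key, build):
        hit = self._d.get(key)               # fast path: no lock
        if hit is not None:
            return hit
        with self._lock:
            hit = self._d.get(key)           # re-check under the lock
            if hit is None:
                self.builds += 1
                hit = self._d[key] = build()
        return hit

cache = SafeCache()
results = []
threads = [threading.Thread(
               target=lambda: results.append(cache.get("k", lambda: "kernel")))
           for _ in range(8)]
for t in threads: t.start()
for t in threads: t.join()
```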
bobbyyyan__scorch-feature_block_iter
bobbyyyan__scorch
https://github.com/bobbyyyan/scorch.git
92fb190
python
[]
/testbed/run_tests.sh
3,000
Add `STensor.iter_blocks(block_shape)` that yields STensor blocks of the input, of size `block_shape`, walking the input in canonical mode_order. Each block is itself an STensor with the same dtype and a format chosen via `infer_output_format`; the last block along each axis may be smaller than `block_shape[d]`. `block...
[]
feature
[ "API/Shape & Layout/Views" ]
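The block walk `iter_blocks` implies reduces to index arithmetic; a sketch that yields slice tuples in canonical order, with ragged last blocks per axis (the real method would yield STensors):

```python
# Yield per-block slice tuples of block_shape over an N-d shape, last
# block along each axis clipped to the tensor boundary.
from itertools import product
from math import ceil

def iter_block_slices(shape, block_shape):
    counts = [ceil(s / b) for s, b in zip(shape, block_shape)]
    for coord in product(*(range(c) for c in counts)):
        yield tuple(slice(c * b, min((c + 1) * b, s))
                    for c, b, s in zip(coord, block_shape, shape))

blocks = list(iter_block_slices((5, 4), (2, 3)))
```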
bobbyyyan__scorch-feature_grad_through_format
bobbyyyan__scorch
https://github.com/bobbyyyan/scorch.git
92fb190
python
[]
/testbed/run_tests.sh
3,000
Make format-conversion operations differentiable so gradients flow back to the source format and mode_order. Currently `STensor.change_mode_order`, `to_format`, `to_sparse`, `to_dense`, `from_torch`, and `to_torch` are not registered with autograd; gradient computations through these ops silently zero out. Wrap each as...
[]
feature
[ "API/ML Primitives/Autograd" ]
bobbyyyan__scorch-feature_persistent_workspace_buffer
bobbyyyan__scorch
https://github.com/bobbyyyan/scorch.git
92fb190
python
[]
/testbed/run_tests.sh
3,000
Optimize repeated kernel invocations by reusing per-module workspace buffers across calls. Currently every call to a compiled kernel that uses a CIN `Workspace` allocates a fresh buffer via `malloc`/`free` inside the C++ body; in tight inference loops this dominates execution time. Add a per-module persistent workspace...
[]
feature
[ "Runtime/Memory management" ]
bobbyyyan__scorch-feature_jit_compile_pool
bobbyyyan__scorch
https://github.com/bobbyyyan/scorch.git
92fb190
python
[]
/testbed/run_tests.sh
3,000
Replace the synchronous, ad-hoc compile path in `_load_kernel` (`src/scorch/utils.py:32`) with a managed JIT compile pool. Today every miss in `_so_cache` (utils.py:29) calls `torch.utils.cpp_extension.load_inline` on the calling thread, blocking it for 5-30s; concurrent callers requesting the same kernel each pay the ...
[]
feature
[ "Runtime/Caching & dispatch" ]
bobbyyyan__scorch-feature_polymorphic_dispatch_inline_cache
bobbyyyan__scorch
https://github.com/bobbyyyan/scorch.git
92fb190
python
[]
/testbed/run_tests.sh
3,000
Add a per-call-site polymorphic inline cache (PIC) on top of the global `_einsum_dispatch_cache` (`src/scorch/ops.py:31`). Profiling shows that even the existing fast-dispatch path spends nontrivial time hashing the `_dispatch_key` tuple (ops.py:402-410) for every `einsum` call. Implement a small (capacity 4) inline ca...
[]
feature
[ "Runtime/Caching & dispatch" ]
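A toy capacity-4 inline cache of the kind described: a short list scanned linearly (no hashing) before falling back to the global lookup, with MRU reordering. Hook points and names are hypothetical, not Scorch's dispatch code:

```python
# Per-call-site polymorphic inline cache: linear scan of up to 4
# (key, value) entries, fallback on miss, LRU eviction past capacity.
class InlineCache:
    CAP = 4

    def __init__(self, fallback):
        self.entries = []            # (key, value) pairs, MRU first
        self.fallback = fallback     # e.g. the hashed global cache lookup
        self.misses = 0

    def lookup(self, key):
        for i, (k, v) in enumerate(self.entries):
            if k == key:             # hit: move to front (MRU)
                if i:
                    self.entries.insert(0, self.entries.pop(i))
                return v
        self.misses += 1
        v = self.fallback(key)
        self.entries.insert(0, (key, v))
        del self.entries[self.CAP:]  # evict beyond capacity
        return v

pic = InlineCache(fallback=lambda k: f"kernel<{k}>")
for k in ("a", "b", "a", "a", "c"):
    pic.lookup(k)
```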
bobbyyyan__scorch-feature_so_path_atomic_publish
bobbyyyan__scorch
https://github.com/bobbyyyan/scorch.git
92fb190
python
[]
/testbed/run_tests.sh
3,000
Eliminate the data-race in `_load_kernel` (`src/scorch/utils.py:32`) where one thread observes a partially-written `.so` file produced by another thread's in-progress `load_inline`. Today the function checks `os.path.isfile(so_path)` (utils.py:48) and immediately calls `importlib.util.spec_from_file_location(...)`, whi...
[]
feature
[ "Runtime/Caching & dispatch" ]
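The write-then-atomic-rename publish this task calls for is a standard pattern: compile output goes to a unique temp file in the same directory, then `os.replace()` publishes it atomically, so readers see either the complete `.so` or nothing. A sketch (paths and the payload are illustrative):

```python
# Atomic publish: temp file in the target directory, fsync, then rename.
import os, tempfile

def atomic_publish(so_path: str, payload: bytes) -> None:
    d = os.path.dirname(so_path) or "."
    fd, tmp = tempfile.mkstemp(dir=d, suffix=".tmp")
    try:
        with os.fdopen(fd, "wb") as f:
            f.write(payload)
            f.flush()
            os.fsync(f.fileno())       # data durable before the rename
        os.replace(tmp, so_path)       # atomic within one filesystem
    except BaseException:
        os.unlink(tmp)
        raise

with tempfile.TemporaryDirectory() as d:
    target = os.path.join(d, "kernel.so")
    atomic_publish(target, b"\x7fELF-fake")
    with open(target, "rb") as f:
        published = f.read()
```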
bobbyyyan__scorch-feature_negative_compile_cache
bobbyyyan__scorch
https://github.com/bobbyyyan/scorch.git
92fb190
python
[]
/testbed/run_tests.sh
3,000
Add a negative compilation cache that records permanently-failing kernel sources so subsequent `_load_kernel` calls fail fast without invoking the C++ compiler. Today, every miss in `_so_cache` (`src/scorch/utils.py:29`) re-runs `load_inline`, even for sources that produced a deterministic `RuntimeError` (e.g., a synta...
[]
feature
[ "Runtime/Caching & dispatch" ]
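The fail-fast behavior can be sketched by memoizing the exception keyed by a source hash; `compile_fn` stands in for `load_inline`, and all names are illustrative:

```python
# Negative compilation cache: deterministic failures are recorded and
# re-raised without re-invoking the compiler.
import hashlib

_neg_cache = {}
calls = {"n": 0}

def compile_fn(src):
    calls["n"] += 1
    if "syntax error" in src:
        raise RuntimeError("compilation failed")
    return f"module<{hashlib.sha256(src.encode()).hexdigest()[:8]}>"

def load_kernel(src):
    key = hashlib.sha256(src.encode()).hexdigest()
    if key in _neg_cache:
        raise _neg_cache[key]          # fail fast: compiler never runs
    try:
        return compile_fn(src)
    except RuntimeError as e:
        _neg_cache[key] = e
        raise

bad = "int main( { syntax error"
errors = 0
for _ in range(3):
    try:
        load_kernel(bad)
    except RuntimeError:
        errors += 1
```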
bobbyyyan__scorch-feature_dispatch_cache_lru_with_pinning
bobbyyyan__scorch
https://github.com/bobbyyyan/scorch.git
92fb190
python
[]
/testbed/run_tests.sh
3,000
Replace the unbounded `_einsum_dispatch_cache` and `_kernel_cache` dicts (`src/scorch/ops.py:30-31`) with a thread-safe LRU cache that respects a configurable byte budget and supports kernel pinning. Today the cache grows unbounded; in long-running serving processes this causes RSS bloat that has been observed to OOM a...
[]
feature
[ "Runtime/Caching & dispatch" ]
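The byte-budgeted LRU with pinning can be sketched with an `OrderedDict` kept in MRU order; sizing and locking are simplified, and the class is illustrative rather than the proposed implementation:

```python
# LRU under a byte budget; pinned entries are never evicted.
from collections import OrderedDict

class PinnedLRU:
    def __init__(self, budget):
        self.budget, self.used = budget, 0
        self.d = OrderedDict()         # key -> (value, nbytes, pinned)

    def put(self, key, value, nbytes, pinned=False):
        self.d[key] = (value, nbytes, pinned)
        self.d.move_to_end(key, last=False)     # MRU at the front
        self.used += nbytes
        for k in [k for k in reversed(self.d) if not self.d[k][2]]:
            if self.used <= self.budget:
                break
            self.used -= self.d.pop(k)[1]       # evict coldest unpinned

    def get(self, key):
        self.d.move_to_end(key, last=False)
        return self.d[key][0]

lru = PinnedLRU(budget=100)
lru.put("pinned", "A", 60, pinned=True)
lru.put("b", "B", 30)
lru.put("c", "C", 30)      # over budget: "b" (coldest unpinned) evicted
```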
bobbyyyan__scorch-feature_kernel_warmup_prefetch
bobbyyyan__scorch
https://github.com/bobbyyyan/scorch.git
92fb190
python
[]
/testbed/run_tests.sh
3,000
Add speculative kernel pre-compilation. When `einsum(expr, a, b, ...)` is called for the first time with a particular `(expression, formats, dtypes)` triple, asynchronously begin compiling the same expression for a small set of *neighboring* format variants the user is likely to request next: (1) the all-dense variant,...
[]
feature
[ "Runtime/Caching & dispatch" ]
bobbyyyan__scorch-feature_huge_page_workspace
bobbyyyan__scorch
https://github.com/bobbyyyan/scorch.git
92fb190
python
[]
/testbed/run_tests.sh
3,000
Reduce TLB pressure on large workspaces by allocating any workspace ≥ 2 MiB from transparent huge pages. Currently `csrc/header.cpp` (the file read at every kernel build, `src/scorch/utils.py:166-167` and the per-op equivalents in `src/scorch/ops.py:93-94, 204-205, 708-709, 858-859`) and the codegen-emitted bodies (`sr...
[]
feature
[ "Runtime/Memory management" ]
bobbyyyan__scorch-feature_lifetime_grouped_arena
bobbyyyan__scorch
https://github.com/bobbyyyan/scorch.git
92fb190
python
[]
/testbed/run_tests.sh
3,000
Implement a per-call arena allocator that exploits workspace lifetime non-overlap. Today the codegen (`src/scorch/compiler/cin_lowerer.py`) emits a `malloc` per workspace and a `free` at the end of `evaluate()` (see `csrc/header.cpp` for the emitted style); workspaces with disjoint lifetimes still each get their own by...
[]
feature
[ "Runtime/Memory management" ]
bobbyyyan__scorch-feature_workspace_torchptr_zero_copy
bobbyyyan__scorch
https://github.com/bobbyyyan/scorch.git
92fb190
python
[]
/testbed/run_tests.sh
3,000
Eliminate the result-copy at the end of dense-output kernels by aliasing the workspace onto the result tensor's `data_ptr<T>()`. Currently many emitted kernels (see the dense-output path in `src/scorch/compiler/cin_lowerer.py`) allocate a separate dense `wksp[...]` workspace, accumulate into it, then run a final loop c...
[]
feature
[ "Runtime/Memory management" ]
bobbyyyan__scorch-feature_calloc_zero_init_workspace
bobbyyyan__scorch
https://github.com/bobbyyyan/scorch.git
92fb190
python
[]
/testbed/run_tests.sh
3,000
Replace the emitted `malloc(N) + memset(p, 0, N)` workspace pattern with `calloc(1, N)` for workspaces ≥ 64 KiB, while keeping `malloc + memset` for smaller workspaces (calloc has measurable per-call overhead in glibc for tiny allocations because the kernel-page-zeroing optimization only applies when the allocator retu...
[]
feature
[ "Runtime/Memory management" ]
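Since the switch happens in codegen, the decision is a string-emission choice in Python; a sketch of the threshold logic (emitter shape illustrative, not `cin_lowerer.py`'s actual API):

```python
# Emit calloc(1, N) for zero-initialized workspaces >= 64 KiB, the
# existing malloc+memset pair below the threshold.
CALLOC_THRESHOLD = 64 * 1024

def emit_workspace_alloc(name: str, nbytes: int) -> str:
    if nbytes >= CALLOC_THRESHOLD:
        return f"float* {name} = (float*)calloc(1, {nbytes});"
    return (f"float* {name} = (float*)malloc({nbytes});\n"
            f"memset({name}, 0, {nbytes});")

small = emit_workspace_alloc("wksp", 4096)
large = emit_workspace_alloc("wksp", 1 << 20)
```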
bobbyyyan__scorch-feature_numa_local_workspace
bobbyyyan__scorch
https://github.com/bobbyyyan/scorch.git
92fb190
python
[]
/testbed/run_tests.sh
3,000
On NUMA machines, allocate per-thread workspaces from the NUMA node the OpenMP worker thread is currently scheduled on, instead of the always-node-0 default of glibc's malloc. Workspaces are emitted in `src/scorch/compiler/cin_lowerer.py:64-86` and consumed inside `#pragma omp parallel for` regions emitted via `src/sco...
[]
feature
[ "Runtime/Memory management" ]
bobbyyyan__scorch-feature_simd_aligned_workspace
bobbyyyan__scorch
https://github.com/bobbyyyan/scorch.git
92fb190
python
[]
/testbed/run_tests.sh
3,000
Align workspace allocations to the natural SIMD width so vectorized inner loops can use aligned loads/stores. Today emitted kernels use plain `malloc(...)` (`src/scorch/compiler/cin_lowerer.py:64-86`), which is only guaranteed `alignof(std::max_align_t)` (16 bytes on most x86_64 toolchains) — for `float` workloads with...
[]
feature
[ "Runtime/Memory management" ]
bobbyyyan__scorch-feature_grainsize_autotuner
bobbyyyan__scorch
https://github.com/bobbyyyan/scorch.git
92fb190
python
[]
/testbed/run_tests.sh
3,000
Replace the static OpenMP `schedule` heuristics emitted via `src/scorch/compiler/llir.py:476-486` with a runtime grain-size autotuner. Today `ForLoop.omp_schedule` is one of `"static"`, `"dynamic"`, `"guided"` decided at compile time, with no chunk size; this leaves performance on the table for sparse workloads where t...
[]
feature
[ "Runtime/Work scheduling", "Runtime/Tuning & user control" ]
bobbyyyan__scorch-feature_priority_aware_dispatch
bobbyyyan__scorch
https://github.com/bobbyyyan/scorch.git
92fb190
python
[]
/testbed/run_tests.sh
3,000
Add priority-aware kernel dispatch so latency-sensitive callers preempt batch callers. Today `einsum(...)`, `matmul(...)`, and `_load_kernel(...)` are dispatched FIFO by the underlying Python GIL/thread interleaving. Implement: (1) a thread-safe priority queue at the dispatch level — every call to `einsum`/`matmul` ent...
[]
feature
[ "Runtime/Work scheduling" ]
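The queue discipline described (latency-sensitive before batch, FIFO within a priority level) is a heap keyed by `(priority, seq)`; a sketch with illustrative names and no real dispatch wiring:

```python
# Priority dispatch queue: lower number = more urgent; a monotonically
# increasing sequence keeps equal priorities FIFO.
import heapq, itertools

class PriorityDispatcher:
    def __init__(self):
        self.q, self.seq = [], itertools.count()

    def submit(self, priority, job):
        heapq.heappush(self.q, (priority, next(self.seq), job))

    def drain(self):
        order = []
        while self.q:
            _, _, job = heapq.heappop(self.q)
            order.append(job)
        return order

d = PriorityDispatcher()
d.submit(1, "batch-a")
d.submit(0, "latency-1")
d.submit(1, "batch-b")
d.submit(0, "latency-2")
executed = d.drain()
```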
bobbyyyan__scorch-feature_load_aware_thread_pool
bobbyyyan__scorch
https://github.com/bobbyyyan/scorch.git
92fb190
python
[]
/testbed/run_tests.sh
3,000
Make OpenMP worker count adaptive to system load instead of using `OMP_NUM_THREADS` at process start. Today emitted kernels run with whatever `omp_get_max_threads()` returns when the kernel is loaded, frozen for the process lifetime. Implement a manager that periodically (every `omp_load_check_interval_ms`, default 250...
[]
feature
[ "Runtime/Work scheduling" ]
bobbyyyan__scorch-feature_kernel_cpu_affinity
bobbyyyan__scorch
https://github.com/bobbyyyan/scorch.git
92fb190
python
[]
/testbed/run_tests.sh
3,000
Pin OpenMP threads to specific CPU cores per-kernel based on a user-supplied policy. Today emitted parallel kernels rely on the OS scheduler to place threads, which on machines with > 1 NUMA node leads to cross-NUMA traffic for memory-bound kernels. Implement: (1) a Python user hook `scorch.set_kernel_affinity_policy(c...
[]
feature
[ "Runtime/Work scheduling" ]
bobbyyyan__scorch-feature_shape_specialized_recompile
bobbyyyan__scorch
https://github.com/bobbyyyan/scorch.git
92fb190
python
[]
/testbed/run_tests.sh
3,000
Add runtime kernel specialization that recompiles a hot kernel with input shapes baked in as compile-time constants. Today `einsum(...)` compiles a kernel parameterized over input shapes; loop bounds are runtime variables. For a hot path called repeatedly with the same shape (typical in inference), specializing yields ...
[]
feature
[ "Runtime/Caching & dispatch", "Runtime/Tuning & user control" ]
bobbyyyan__scorch-feature_compile_flag_scope
bobbyyyan__scorch
https://github.com/bobbyyyan/scorch.git
92fb190
python
[]
/testbed/run_tests.sh
3,000
Add a context manager that re-routes kernel compilation through alternate compiler flags. Today `_load_kernel` (`src/scorch/utils.py:32`) is called with `extra_cflags=get_extra_cflags()` (utils.py:68) — a hard-coded set including `-O3`. Some workloads need `-O2 -fno-fast-math` for IEEE-strict numerics; some need `-O0` ...
[]
feature
[ "Runtime/Tuning & user control" ]
bobbyyyan__scorch-feature_workspace_memory_budget
bobbyyyan__scorch
https://github.com/bobbyyyan/scorch.git
92fb190
python
[]
/testbed/run_tests.sh
3,000
Add a user-controllable workspace memory budget that triggers tiled fallback schedules. Today `Scheduler.auto_schedule(cin_stmt)` (`src/scorch/compiler/scheduler.py:1461`) chooses a schedule that may demand workspaces of arbitrary size — for a 4096×4096 dense×dense Gustavson SpMM the workspace can exceed 1 GiB. Impleme...
[]
feature
[ "Runtime/Tuning & user control", "Runtime/Memory management" ]
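The budget check reduces to comparing an estimated workspace footprint against the cap and shrinking a tile until it fits; a sketch under the assumption of square float32 tiles (the estimator and schedule labels are illustrative):

```python
# Choose the untiled schedule when the estimated workspace fits the
# budget; otherwise halve a square tile until its workspace does.
def choose_schedule(workspace_bytes, budget_bytes, tile=256, itemsize=4):
    if workspace_bytes <= budget_bytes:
        return ("untiled", None)
    while tile * tile * itemsize > budget_bytes and tile > 1:
        tile //= 2
    return ("tiled", tile)

fits = choose_schedule(1000, 64 * 1024)
tiled = choose_schedule(4096 * 4096 * 4, 64 * 1024)
```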
bobbyyyan__scorch-feature_thread_local_dispatch_observers
bobbyyyan__scorch
https://github.com/bobbyyyan/scorch.git
92fb190
python
[]
/testbed/run_tests.sh
3,000
Allow user-supplied callbacks to intercept dispatch decisions for tracing and profiling. Implement: (1) `scorch.register_dispatch_observer(fn: Callable[[DispatchEvent], None]) -> ObserverHandle` and the matching `unregister_dispatch_observer(handle)`; observers are *thread-local* — each Python thread maintains its own ...
[]
feature
[ "Runtime/Tuning & user control" ]