Problem Setting
---------------
Consider a CSV file with $N$ rows and $M$ columns, where $M \leq 10$. We feed each row to an LLM inference engine (with a prefix KV cache) by concatenating all column values in that row. For the $i$-th row with entries $A[i,1], A[i,2], \ldots, A[i,M]$, we construct the input string:
```math
S_i = \text{Concat}(\text{string}(A[i,1]), \text{string}(A[i,2]), \ldots, \text{string}(A[i,M]))
```
When requesting $S_i$ for $i > 1$, the prefix KV-cache hit rate depends on the longest common prefix with any previously seen request:
```math
\text{hit\_rate}_i =
\frac{\max_{1 \le j < i} \text{LCP}(S_i, S_j)}{|S_i|}
```
where $\text{LCP}(S, T)$ is the length of the longest common prefix of strings $S$ and $T$.
You are allowed to reorder the CSV columns. Let $p$ be a permutation of $\{1, 2, \ldots, M\}$. The reordered string for row $i$ becomes:
```math
S'_i = \text{Concat}(\text{string}(A[i,p_1]), \text{string}(A[i,p_2]), \ldots, \text{string}(A[i,p_M]))
```
The goal is to choose a permutation $p$ that maximizes the overall KV-cache hit rate:
```math
\max_p\;
\frac{\sum_{i=2}^N \max_{1 \le j < i} \text{LCP}(S'_i, S'_j)}
{\sum_{i=1}^N |S'_i|}
```
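For small inputs, the objective above can be evaluated directly. A brute-force sketch (function names are illustrative, not part of the required API); it is $O(N^2 \cdot L)$ in the row count and string length, so it is only suitable for toy examples, not for $N \approx 28\text{k}$:

```python
def lcp(s: str, t: str) -> int:
    """Length of the longest common prefix of s and t."""
    n = 0
    for a, b in zip(s, t):
        if a != b:
            break
        n += 1
    return n

def hit_rate(rows: list[list[str]], perm: list[int]) -> float:
    """Objective value for one column permutation: the sum over rows i >= 2
    of the best LCP against any earlier row, divided by the total length."""
    strings = ["".join(row[p] for p in perm) for row in rows]
    matched = sum(
        max(lcp(strings[i], strings[j]) for j in range(i))
        for i in range(1, len(strings))
    )
    total = sum(len(s) for s in strings)
    return matched / total if total else 0.0
```

For example, with rows `[["a", "xx"], ["a", "yy"]]`, the permutation `[0, 1]` yields strings `"axx"` and `"ayy"` (matched prefix length 1 out of 6 total characters), while `[1, 0]` yields no shared prefix at all.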
Target
---
Maximize prefix hit rate shown above (higher is better)
- **Hard Constraints**: average runtime per dataset must be $\leq 10$ seconds (score = 0 if exceeded), and column merge specifications must be handled correctly.
**Column Merges**:
- Column merge specs are provided per dataset
- Columns in each merge group are concatenated into a single column
- The merged column replaces the original columns
- Merge operations are applied before column reordering
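The merge rules above can be sketched in pandas as follows. This is an illustrative assumption about the mechanics: the merged column here takes the name and position of the first group member, which the evaluation harness may or may not mirror exactly:

```python
import pandas as pd

def apply_merges(df: pd.DataFrame, col_merge: list) -> pd.DataFrame:
    """Concatenate each merge group into a single column (values as strings).

    Assumption: the merged column keeps the first member's name and position;
    the remaining group members are dropped.
    """
    out = df.copy()
    for group in col_merge or []:
        out[group[0]] = out[group].astype(str).agg("".join, axis=1)
        out = out.drop(columns=group[1:])
    return out
```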
API Specification
---
Implement a `Solution` class:
```python
import pandas as pd
class Solution:
def solve(
self,
df: pd.DataFrame,
early_stop: int = 100000,
row_stop: int = 4,
col_stop: int = 2,
col_merge: list = None,
one_way_dep: list = None,
distinct_value_threshold: float = 0.7,
parallel: bool = True,
) -> pd.DataFrame:
"""
Reorder columns in the DataFrame to maximize prefix hit rate.
Args:
df: Input DataFrame to optimize
early_stop: Early stopping parameter (default: 100000)
row_stop: Row stopping parameter (default: 4)
col_stop: Column stopping parameter (default: 2)
col_merge: List of column groups to merge (columns in each group are merged into one)
one_way_dep: List of one-way dependencies (not used in this variant)
distinct_value_threshold: Threshold for distinct values (default: 0.7)
parallel: Whether to use parallel processing (default: True)
Returns:
DataFrame with reordered columns (same rows, different column order)
"""
# Your implementation
pass
```
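As a starting point, here is a minimal greedy baseline conforming to the interface, not the intended optimal method: it places low-cardinality columns first, since values repeated across many rows at the front of the string maximize shared prefixes. The merge handling (merged column keeps the first member's name) is an assumption, and the tuning parameters from the spec are accepted via `**kwargs` but unused:

```python
import pandas as pd

class Solution:
    def solve(self, df: pd.DataFrame, col_merge: list = None, **kwargs) -> pd.DataFrame:
        """Greedy baseline: apply merges, then sort columns by ascending
        distinct-value count so the most repetitive columns lead the string."""
        work = df.copy()
        for group in col_merge or []:
            # assumption: merged column takes the first member's name
            work[group[0]] = work[group].astype(str).agg("".join, axis=1)
            work = work.drop(columns=group[1:])
        order = sorted(work.columns, key=lambda c: work[c].nunique())
        return work[order]
```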
**Evaluation Process**:
1. Column merges are applied if specified
2. Your `solve()` method reorders the remaining columns
3. Rows are concatenated (no spaces) and prefix hit rate is calculated
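Step 3 can be computed in $O(\text{total characters})$ with a character trie instead of $O(N^2)$ pairwise comparisons: the depth reached while walking row $i$'s string through a trie of all earlier strings equals $\max_{j < i} \text{LCP}(S'_i, S'_j)$. A sketch (not the harness's actual implementation):

```python
def prefix_hit_rate(strings: list) -> float:
    """Prefix hit rate over concatenated row strings, in row order."""
    root: dict = {}
    matched = total = 0
    for idx, s in enumerate(strings):
        # walk: depth reached = longest prefix of s seen in any earlier string
        node, depth = root, 0
        for ch in s:
            if ch not in node:
                break
            node = node[ch]
            depth += 1
        if idx > 0:
            matched += depth
        # insert s so later rows can match against it
        node = root
        for ch in s:
            node = node.setdefault(ch, {})
        total += len(s)
    return matched / total if total else 0.0
```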
Scoring (0-100)
---
- `baseline_hit_rate`: average prefix hit rate using the original column order (the 0-point anchor)
- `avg_hit_rate`: your solution's average prefix hit rate across all datasets
- Per dataset: `dataset_score = ((hit_rate - baseline_hit_rate) / (1.0 - baseline_hit_rate)) × 100`
- `final_score` = average of the individual dataset scores, clamped to [0, 100]
**Runtime Constraint**:
- Average runtime per dataset must be ≤ 10 seconds
- If average runtime exceeds 10 seconds, score = 0.0
**Scoring Examples**:
- baseline_hit_rate = 0.0 (worst), avg_hit_rate = 1.0 (perfect) → Score = 100
- baseline_hit_rate = 0.5, avg_hit_rate = 0.5 → Score = 0
- baseline_hit_rate = 0.5, avg_hit_rate = 0.75 → Score = 50
- baseline_hit_rate = 0.5, avg_hit_rate = 1.0 → Score = 100
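The per-dataset formula reproduces those examples; whether clamping is applied per dataset or only to the final average is not fully specified, so this sketch assumes per-dataset clamping:

```python
def dataset_score(hit_rate: float, baseline: float) -> float:
    """Normalized dataset score: 0 at the baseline, 100 at a perfect hit rate."""
    if baseline >= 1.0:
        return 100.0  # degenerate case; assumed harness behavior
    raw = (hit_rate - baseline) / (1.0 - baseline) * 100.0
    return max(0.0, min(100.0, raw))
```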
Implementation Notes
---
- Row values are concatenated without spaces: `"".join(row.values)`
- Column reordering should optimize for maximum prefix overlap in the concatenated string representation
- Consider column dependencies, distinct value distributions, and merge requirements when reordering
- Datasets with $M > 10$ columns would require more efficient algorithms, since the permutation search space grows factorially
- In the smaller datasets here, $15\text{k} \leq N \leq 28\text{k}$ and $4 \leq M \leq 9$
**Example input**
---
(The example below has more than 10 columns; please ignore that it exceeds the $M \leq 10$ bound from the problem setting.)
```csv
ID,LIMIT_BAL,SEX,EDUCATION,MARRIAGE,AGE,PAY_0,PAY_2,PAY_3,PAY_4,PAY_5,PAY_6,BILL_AMT1,BILL_AMT2,BILL_AMT3,BILL_AMT4,BILL_AMT5,BILL_AMT6,PAY_AMT1,PAY_AMT2,PAY_AMT3,PAY_AMT4,PAY_AMT5,PAY_AMT6,default payment next month
1,20000,2,2,1,24,2,2,-1,-1,-2,-2,3913,3102,689,0,0,0,0,689,0,0,0,0,1
2,120000,2,2,2,26,-1,2,0,0,0,2,2682,1725,2682,3272,3455,3261,0,1000,1000,1000,0,2000,1
3,90000,2,2,2,34,0,0,0,0,0,0,29239,14027,13559,14331,14948,15549,1518,1500,1000,1000,1000,5000,0
4,50000,2,2,1,37,0,0,0,0,0,0,46990,48233,49291,28314,28959,29547,2000,2019,1200,1100,1069,1000,0
5,50000,1,2,1,57,-1,0,-1,0,0,0,8617,5670,35835,20940,19146,19131,2000,36681,10000,9000,689,679,0
...
```
**Example output**
---
```csv
ID,LIMIT_BAL,SEX,EDUCATION,MARRIAGE,AGE,PAY_0,PAY_2,PAY_3,PAY_4,PAY_5,PAY_6,BILL_AMT1,BILL_AMT2,BILL_AMT3,BILL_AMT4,BILL_AMT5,BILL_AMT6,PAY_AMT1,PAY_AMT2,PAY_AMT3,PAY_AMT4,PAY_AMT5,PAY_AMT6,default payment next month
1,20000,2,2,1,24,2,2,-1,-1,-2,-2,3913,3102,689,0,0,0,0,689,0,0,0,0,1
2,120000,2,2,2,26,-1,2,0,0,0,2,2682,1725,2682,3272,3455,3261,0,1000,1000,1000,0,2000,1
3,90000,2,2,2,34,0,0,0,0,0,0,29239,14027,13559,14331,14948,15549,1518,1500,1000,1000,1000,5000,0
4,50000,2,2,1,37,0,0,0,0,0,0,46990,48233,49291,28314,28959,29547,2000,2019,1200,1100,1069,1000,0
5,50000,1,2,1,57,-1,0,-1,0,0,0,8617,5670,35835,20940,19146,19131,2000,36681,10000,9000,689,679,0
```
(the identity permutation $p = (1, 2, \ldots, M)$, i.e. the original column order)