Figure 5: Optical metasurface design problem configuration and results. (A-B) Design variables (materials, layer thicknesses, and cross-section geometry types). (C) A sample absorbance spectrum and the wavelength intervals (highlighted wavelength regions) corresponding to absorbance above the threshold t. The design objective is to generate new optical metasurface designs that exhibit higher absorbance than a threshold t at the user-defined wavelength interval(s). (D) KDE of the estimated likelihood for generated designs. (E) Satisfaction rates, average scores, and selection rates for RIGID designs under varying sampling thresholds (solid lines), in comparison to the satisfaction rate of GA designs (horizontal dashed line). (F) Designs (geometries and material selections) and corresponding absorbance spectra of five metasurfaces generated by RIGID. These five solutions are generated designs with the highest likelihood of satisfying specified target high-absorbance regions (marked as highlighted wavelength regions). All the layer thicknesses (hl, l = 1, 2, 3) have a unit of nm. Here all five designs satisfy the target. Generated designs for other targets are shown in SI Appendix, Figs. S7-S9. (G) Distributions of design variables for satisfactory solutions generated by RIGID and GA (for the same target defined in Panel F). GA designs are highly localized and lack diversity compared to RIGID designs.
Figure 1: Schematic diagram of the RIGID method. We first train a random forest on a design-response dataset to learn the forward design-response relation, i.e., predicting qualitative responses (e.g., bandgap existence at any given wave frequency) of designs. Then, given a design target, we can infer the likelihood of any design satisfying the target by probing the trained random forest. New designs with tailored responses can then be generated by sampling the design space based on the likelihood estimate.
Figure 7: Visualization of estimated likelihood and validation metrics for synthetic problems. (A) Likelihood function values for randomly created design targets. Orange lines show boundaries of actual feasible regions associated with the targets T = {I(a, b; s) = 1|∀s ∈ Ω}. Points show satisfactory RIGID designs and GA designs. (B) Likelihood function values estimated by random forests with varying numbers of decision trees. The design target is set as T = {I(a, b; s) = 1|∀s ∈ [0.45, 0.48]} for the SqExp problem and T = {I(a, b; s) = 1|∀s ∈ [0.63, 0.68] ∪ [0.69, 0.71]} for the SupSin problem. (C) Validation metrics for inverse design generation.
Figure 3: Acoustic metamaterial design problem configuration and results. (A) Design variables of center and corner mass radii (rcenter and rcorner) and strut radius (rstrut). (B) High symmetry points of the cubic irreducible Brillouin zone. (C) A sample dispersion relation and bandgap (marked by the highlighted zone). The design objective is to generate new acoustic metamaterial designs with target bandgaps. (D) KDE of the estimated likelihood for generated designs. (E) Satisfaction rates, average scores, and selection rates for RIGID designs under varying sampling thresholds (solid lines), in comparison to the satisfaction rate of GA designs (horizontal dashed line). The horizontal dotted line indicates 100% satisfaction. (F) Geometries and corresponding dispersion relations of five RIGID designs with the highest likelihood of satisfying a specified target bandgap (marked as highlighted frequency regions). All the radii (rstrut, rcenter, and rcorner) have a unit of µm. Here only the fourth design fails to meet a small portion (at around 6 MHz) of the target bandgap, whereas the others meet it. Generated designs for other targets are shown in SI Appendix, Figs. S2-S6.
Figure 6: Synthetic data creation for (A) the SqExp problem and (B) the SupSin problem. For each problem, the left panel shows 100 functions with randomly sampled parameters a and b. We treat a and b as synthetic design variables and the corresponding functions as quantitative responses (e.g., absorbance spectra of optical metasurfaces). The right panel shows the qualitative responses (e.g., high-absorbance wavelength ranges or bandgaps), simulated by synthetic ranges derived by thresholding the 100 functions (Equations 2 and 3 with threshold t = 0.9).
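The thresholding step above can be sketched in a few lines. This is a minimal illustration, not the paper's code: the squared-exponential form `z(s) = exp(-(s - a)^2 / (2 b^2))` and the sampling bounds for (a, b) are assumptions, since the exact Equations 2 and 3 are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)

def sq_exp(s, a, b):
    # Assumed squared-exponential form; the paper's exact Equations 2-3
    # are not reproduced in this dump.
    return np.exp(-((s - a) ** 2) / (2.0 * b ** 2))

def synthetic_ranges(z, s, t=0.9):
    """Intervals of s on which the response z meets or exceeds threshold t."""
    above = z >= t
    ranges, start = [], None
    for i, flag in enumerate(above):
        if flag and start is None:
            start = s[i]
        elif not flag and start is not None:
            ranges.append((start, s[i - 1]))
            start = None
    if start is not None:
        ranges.append((start, s[-1]))
    return ranges

s = np.linspace(0.0, 1.0, 1001)
# 100 synthetic designs: (a, b) pairs sampled uniformly (illustrative bounds)
designs = rng.uniform([0.0, 0.05], [1.0, 0.2], size=(100, 2))
dataset = [(a, b, synthetic_ranges(sq_exp(s, a, b), s)) for a, b in designs]
```

Each dataset entry pairs a synthetic design (a, b) with its "qualitative response", the list of s-intervals where the function exceeds t, mirroring the right panels of the figure.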
Figure 4: Distributions of satisfactory solutions for two bandgap targets. The off-diagonal plots show the pairwise bivariate distributions of design variables, and the diagonal plots show the marginal distributions of the data in each column. The left panel shows that GA designs are highly localized while RIGID can lead to diverse solutions. The right panel indicates that none of the GA designs satisfy the target, while satisfactory RIGID designs are diverse and can be very different from feasible designs from data. Solutions from data include feasible designs in both training and test data.
Figure 2: The inverse design pipeline of the proposed RIGID method (using the inverse design of acoustic metamaterials as an example). Given design parameters x and the auxiliary variable s (e.g., wave frequency), a trained random forest predicts the probability of the qualitative response y (e.g., bandgap existence). Each tree in the random forest splits the joint space of x and s into regions, each associated with a specific prediction (shown on leaf nodes). The splitting criteria are encoded in tree nodes. “T” means meeting a criterion and “F” means not meeting it. RIGID first identifies leaf nodes that are relevant to the considered range of auxiliary variable s by checking splitting criteria related to s and pruning tree branches that are irrelevant (Step 1). If the considered range of s has multiple parts, we repeat this step for each part and take the intersection of relevant leaves (Step 2). Each relevant leaf node corresponds to a decision path indicating a region in the design space, as well as a predicted probability of target satisfaction, which is a score we assign to the corresponding design space region (Step 3). With multiple trees in a random forest, we can average the scores predicted by each tree and use the average score as our likelihood estimation (Step 4). We can then sample from the design space based on the likelihood distribution to generate new designs tailored to the target (Step 5). Note that the 2-dimensional likelihood maps are only for visualization purposes. The actual dimension will be the same as the design space dimension (i.e., the number of design variables).
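The leaf-scoring and averaging logic of Steps 1-4 can be sketched on hand-built toy trees. This is a minimal illustration, not the authors' implementation: the trees, the pessimistic `min` over an s-split that straddles the target range, and the product across multiple target parts are all assumptions of this sketch.

```python
# Toy trees as nested dicts: feature "s" is the auxiliary variable,
# integer features index design variables x; leaves hold P(target met).

def leaf_score(node, x, s_lo, s_hi):
    """Score of design x over the whole auxiliary range [s_lo, s_hi]."""
    if "score" in node:                      # leaf node
        return node["score"]
    f, t = node["feature"], node["thresh"]
    if f == "s":
        if s_hi <= t:                        # range entirely on the left
            return leaf_score(node["left"], x, s_lo, s_hi)
        if s_lo > t:                         # range entirely on the right
            return leaf_score(node["right"], x, s_lo, s_hi)
        # Range straddles the split: both halves must be satisfied, so take
        # the pessimistic (minimum) score -- an assumption of this sketch.
        return min(leaf_score(node["left"], x, s_lo, t),
                   leaf_score(node["right"], x, t, s_hi))
    branch = "left" if x[f] <= t else "right"
    return leaf_score(node[branch], x, s_lo, s_hi)

def likelihood(forest, x, ranges):
    """Average over trees (Step 4); target parts are intersected by taking
    the product of per-part scores (independence assumption)."""
    total = 0.0
    for tree in forest:
        p = 1.0
        for lo, hi in ranges:
            p *= leaf_score(tree, x, lo, hi)
        total += p
    return total / len(forest)

# Two toy trees in the spirit of Figure 2: split on s, then on design vars.
t1 = {"feature": "s", "thresh": 5,
      "left":  {"feature": 0, "thresh": 2,
                "left": {"score": 0.9}, "right": {"score": 0.3}},
      "right": {"feature": 1, "thresh": 4,
                "left": {"score": 0.2}, "right": {"score": 0.7}}}
t2 = {"feature": "s", "thresh": 6,
      "left":  {"score": 0.6},
      "right": {"feature": 0, "thresh": 2,
                "left": {"score": 1.0}, "right": {"score": 0.2}}}

# Likelihood that design x = (1, 5) meets the target over both the 3-4 and
# 6-7 MHz bands (the example target in the caption).
x = (1, 5)
L = likelihood([t1, t2], x, [(3, 4), (6, 7)])
```

Sampling designs proportionally to `likelihood` over the design space then reproduces Step 5.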
Table 1: Evaluation on different datasets. "ALL" denotes the proportional mixture of the four base datasets.
Dataset | Precision | Precision (Choice) | Recall | Recall (Choice) | F1    | F1 (Choice)
ALL     | 0.555     | 0.685              | 0.505  | 0.621           | 0.529 | 0.652
HI      | 0.435     | 0.611              | 0.361  | 0.517           | 0.395 | 0.560
AI      | 0.474     | 0.611              | 0.419  | 0.532           | 0.445 | 0.569
SI      | 0.444     | 0.601              | 0.413  | 0.539           | 0.428 | 0.568
RI      | 0.499     | 0.633              | 0.414  | 0.526           | 0.453 | 0.574

We conducted an evaluation of Gllm through two comparative experiments on GPT-4-generated instruction datasets, aiming to investigate the impact of different instruction datasets and fine-tuning paradigms. The evaluation metrics employed encompass precision, recall, and F1 score. It is worth noting a potential issue in determining the precision of generated sub-goals that are close in semantics. For instance, associating the sub-goal "moving speed" with the value "very fast" versus "fast" may be perceived as a negative instance under precision measurement. Consequently, we argue that the generation of such sub-goals should weigh choosing the sub-goal more heavily than determining its value. Thus, we further propose three choice-based metrics: precision (choice), recall (choice), and F1 (choice). Table 1 provides …
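Under one plausible reading of these metrics (in the "choice" variants, a prediction is credited whenever the sub-goal name is selected, regardless of its value), they can be sketched as follows. The `prf` helper and the toy dictionaries are illustrative, not the paper's code.

```python
def prf(pred, true, by_choice=False):
    """Precision/recall/F1 over sub-goal predictions.

    pred / true map sub-goal name -> value. With by_choice=True, a
    prediction is credited whenever the sub-goal *name* was chosen,
    regardless of its value (assumed reading of the 'choice' metrics).
    """
    if by_choice:
        hits = sum(1 for k in pred if k in true)
    else:
        hits = sum(1 for k, v in pred.items() if true.get(k) == v)
    precision = hits / len(pred) if pred else 0.0
    recall = hits / len(true) if true else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

# Toy example: "very fast" vs "fast" is wrong on value but right on choice.
pred = {"moving speed": "very fast", "health level": "High"}
true = {"moving speed": "fast", "health level": "High", "prone": "True"}
```

On this example the strict precision is 0.5 while precision (choice) is 1.0, matching the motivation that near-synonymous values should not be penalized as heavily as picking the wrong sub-goal.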
Figure 4: Preprocessing for an observation with four types of features.
Table 15: Language model performance evaluation with different sizes of the fine-tuning training set. The underlined "Improve Rate" values represent the improvement percentage of the "CoFT → SFT" method relative to the "SFT" method.
Dataset Size | Training Method | Precision | Precision (Choice) | Recall  | Recall (Choice) | F1      | F1 (Choice)
3%           | CoFT            | 39.42%    | 58.35%             | 34.28%  | 50.10%          | 36.67%  | 53.91%
3%           | CoFT → SFT      | 41.61%    | 60.40%             | 34.55%  | 50.32%          | 37.75%  | 54.90%
3%           | SFT             | 17.66%    | 38.28%             | 13.33%  | 29.47%          | 15.20%  | 33.30%
3%           | Improve Rate    | 135.52%   | 57.78%             | 159.15% | 70.79%          | 148.43% | 64.88%
10%          | CoFT            | 45.58%    | 60.84%             | 41.77%  | 53.92%          | 43.59%  | 57.17%
10%          | CoFT → SFT      | 48.06%    | 61.01%             | 43.15%  | 54.31%          | 45.47%  | 57.47%
10%          | SFT             | 42.08%    | 55.31%             | 30.86%  | 41.45%          | 35.61%  | 47.39%
10%          | Improve Rate    | 14.20%    | 10.32%             | 39.80%  | 31.03%          | 27.69%  | 21.28%
30%          | CoFT            | 49.43%    | 62.66%             | 44.45%  | 56.29%          | 46.81%  | 59.30%
30%          | CoFT → SFT      | 49.92%    | 62.59%             | 45.51%  | 57.65%          | 47.61%  | 60.02%
30%          | SFT             | 46.12%    | 60.96%             | 33.68%  | 45.39%          | 38.93%  | 52.03%
30%          | Improve Rate    | 8.25%     | 2.68%              | 35.11%  | 27.01%          | 22.30%  | 15.34%
100%         | CoFT            | 51.85%    | 64.84%             | 46.68%  | 57.91%          | 49.13%  | 61.18%
100%         | CoFT → SFT      | 55.48%    | 68.52%             | 50.48%  | 62.10%          | 52.86%  | 65.15%
100%         | SFT             | 54.70%    | 65.20%             | 49.00%  | 60.20%          | 51.70%  | 63.20%
100%         | Improve Rate    | 1.42%     | 5.09%              | 3.02%   | 3.16%           | 2.25%   | 3.09%
Table 16: Chain-of-thought prompt for GPT-4.
In order to complete the command 'You should lie in wait', let us plan the states of the agent step by step using the following template:
1. Analyze the verbal orders of teammates and players, what do you want to do? According to the command, also analysis the relevant states of teammates and enemies that need attention. The verbal command of the teammate player is [Command], which means teammate player wants the agent...
2. Analyze which states of the agents are most relevant to the verbal commands of teammate player. The agents in the unselected states will adjust themselves to complete your plan (analyze the reason first, then select key states one by one as few as possible and as important as possible according to the degree of importance)? According to the teammate's command:
2.1. [Reason1]: [State1]
2.2. [Reason2]: [State2]
...
3. Plan how these key states need to be adjusted (analyze the reason first, and then make adjustments one state by one state, the state can be changed or remain the same, and must be selected from the value range of the game state [Choice 1, Choice 2, ...])? According to the teammate's command:
3.1. [State1]: [Reason1]: [Current_value1] -> [Target_value2]
3.2. [State2]: [Reason2]: [Current_value1] -> [Target_value2]
...
4. Modify the adjustment that may be wrong, and refer to the Rules to analyze which state adjustments may conflict, repeat or be unnecessary, and output the modified adjustment plan: According to the states adjustments in 3...
4.1. [State1]: [Current_value1] -> [Target_value2]
4.2. [State2]: [Current_value1] -> [Target_value2]
...
5. According to the analyze and the planing of the verbal command, further analyze the behavior tendency required in the adjustment process (the proportion of Mobile, Offense, Waiting, Supplies, Scouting, first analyze the reason, and then calculate the percentage)
Mobile: [Reason1]: [Percent1]
Offense: [Reason2]: [Percent2]
Waiting: [Reason3]: [Percent3]
Supplies: [Reason4]: [Percent4]
Scouting: [Reason5]: [Percent5]
6. Analyze how long the current command needs to be kept (for example, the command of 'killing the enemy' needs to be kept for a 'short term', and the command of 'pay attention to reconnaissance' needs to be kept for a 'long term'. First analyze the reason and then make a judgment). According to the command of the teammate, [Analysis]: The current command needs to be kept by '[XX term]'.
If you see phrases like [Context] in answer template, replace the entire phrase according to the meaning of the Context, do not repeat the content; make analogy expansion for '...'; keep ':'; absolutely do not modify others in template.
Table 7: Overview of sub-goal classes; we show a part of them here.
Sub-goal Class | Candidates
Damage to enemy | [Zero, Low, Little low, Medium, Little high, High]
Whether knock down enemy | [True, False]
Whether kill enemy | [True, False]
Whether seen enemy | [True, False]
Whether seen by enemy | [True, False]
Number of enemies have ever seen | [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15]
Length of distance moved | [No movement, Short, Medium, Long, Very long]
Average velocity | [Static, Slow, Medium, Fast, Falling]
Horizontal direction of movement | [West, Northwest, North, NorthEast, East, Southeast, South, Southwest]
Horizontal direction of view | [West, Northwest, North, NorthEast, East, Southeast, South, Southwest]
Pitch direction of view | [Low, Little low, Medium, Little high, High]
Health level | [Empty, Low, Medium, High, Full]
Whether to restore health | [True, False]
Whether the health is damaged | [True, False]
Whether rescued teammate | [True, False]
Whether be knocked down | [True, False]
Whether prone position | [True, False]
Whether have a gun | [True, False]
Whether have bullets | [True, False]
Whether have medical kits | [True, False]
Distance with nearest enemy | [Touch, Nearby, Moderate, Far, Out of reach, Extreme Far]
Whether closer with nearest enemy | [True, False]
Whether crouch position | [True, False]
Whether hold a gun | [True, False]
Length of distance from agent to teammate | [Touch, Nearby, Moderate, Far, Out of reach, Extreme Far]
Whether seen by teammate | [True, False]
Teammate's position relative to agent | [West, Northwest, North, NorthEast, East, Southeast, South, Southwest]
Whether follow with the views of teammate | [True, False]
Whether target the same enemy as teammate | [True, False]
Whether follow with the movement direction of teammate | [True, False]
Horizontal direction of movement of enemy | [West, Northwest, North, NorthEast, East, Southeast, South, Southwest, None]
Velocity of enemy | [Static, Slow, Medium, Fast, Falling, None]
Enemy's position relative to agent | [West, Northwest, North, NorthEast, East, Southeast, South, Southwest, None]
Figure 11: (a) Sub-goal distribution during co-training. The 20 most frequently occurring goal meta-states are filtered out and displayed. The vertical axis represents the probability of the state being output by the language model. (b) For a collected trajectory segment of length k = 200, we first estimate the basic value for the last k − j + 1 states (here j = 20) and select one state as the goal with probability proportional to their values.
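The relabeling step in panel (b), picking one tail state as the goal with probability proportional to its estimated value, can be sketched as follows; the `sample_goal` helper and the toy values are illustrative, not the paper's code.

```python
import random

def sample_goal(states, values, rng=None):
    # Draw one candidate goal state, weighted by its estimated value.
    rng = rng or random.Random(0)
    return rng.choices(states, weights=values, k=1)[0]

# Toy tail of a trajectory: the zero-valued states can never be selected.
tail = ["s_181", "s_182", "s_183"]
vals = [0.0, 0.0, 1.0]
goal = sample_goal(tail, vals)   # always "s_183" with these weights
```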
… the agent progressively reduces the distance between the initial state and the goal, scaling it by the magnitude of the initial state-goal difference:

$$R_{f_g} = \sum_{t=1}^{T} \left\| \frac{|g - \mathrm{Proj}(s_{t-1})| - |g - \mathrm{Proj}(s_t)|}{|g - \mathrm{Proj}(s_0)| + \epsilon} \right\|_1, \quad \text{where } \epsilon = 10^{-6} \qquad (11)$$

• $r_{f_{keep}}$ - Reward Indicating How Long the Goal Can Be Kept. As depicted by Equation (12), upon accomplishing the goal, the agent receives a reward proportional to the cumulative number of steps taken to sustain the goal state, scaled by the count of distinct sub-goals between the initial state $s_0$ and the goal $g$, i.e. $n(g \cap \mathrm{PROJ}(s_0))$:

$$R_{f_{keep}} = n(g \cap \mathrm{PROJ}(s_0)) \cdot \sum_{t=0}^{T} \mathbb{1}_{g \cap \mathrm{PROJ}(s_t) \neq \emptyset} \qquad (12)$$

• $r_{f_{rnd}}$ - Reward Indicating Whether the Generated Goal Is Reachable for the Current State. RND (Burda et al., 2018; Nikulin et al., 2023; Zhang et al., 2021; Du et al., 2023) is an effective method to measure the visiting frequency of states or transitions in RL, where the higher the RND score (reward), the more frequently a state is visited. Thus, we can leverage such a method to quantify how novel a state is:

$$R_{f_{rnd}} = -\sum_{t=0}^{T} \left\| \varphi(E(s_t, g)) - \varphi^\star(E(s_t, g)) \right\| \qquad (13)$$

where $\varphi^\star$ is a target network which shares the same architecture as the RND predictor but the network is non-trainable.
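A minimal numpy sketch of the goal-progress and goal-keeping rewards (in the spirit of Eqs. 11-12). It assumes Proj has already been applied to the states and reads n(g ∩ PROJ(s0)) as the count of sub-goals shared between the goal mask and the initial state mask; all names are illustrative, not the paper's code.

```python
import numpy as np

def r_goal_progress(goal, proj_states, eps=1e-6):
    # Eq. (11)-style: per-step reduction of the distance to the goal,
    # normalized by the initial state-goal gap.
    g = np.asarray(goal, dtype=float)
    s = np.asarray(proj_states, dtype=float)   # shape (T+1, d)
    d = np.abs(g - s)                          # |g - Proj(s_t)| per dim
    denom = np.abs(g - s[0]) + eps
    return float(np.sum(np.abs((d[:-1] - d[1:]) / denom)))

def r_goal_keep(goal_mask, traj_masks):
    # Eq. (12)-style: steps on which some goal sub-goal is met, scaled by
    # the number of sub-goals the goal already shares with s_0.
    n0 = int(np.sum(goal_mask & traj_masks[0]))
    kept = sum(int(np.any(goal_mask & m)) for m in traj_masks)
    return n0 * kept

# Toy rollout: the agent closes the full gap to a 1-D goal in two steps.
progress = r_goal_progress([1.0], [[0.0], [0.5], [1.0]])   # ~1.0
```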
Figure 1: Overview of co-training in OpenPAL. The LLM and the policy are pre-trained with multi-step fine-tuning and goal-conditioned RL, respectively. Then, co-training aligns them towards achieving instruction open-endedness.
Table 3: Comparison of goal generation. Cyan marks helpful, pink conflicting, and orange critical sub-goals. It is evident that co-training enables goal generation to avoid conflicts between sub-goals and improves reasonability by including helpful and critical sub-goals.
…ments mainly lies in 2 ≤ |g| ≤ 4, because |g| = 1 is too easy while |g| ≥ 5 is too hard to complete. Figure 3(b) shows a case with |g| = 3 where co-training indeed improves the completion ratio (the green curve). It is noteworthy that the performance suddenly downgrades at each reset. This phenomenon is attributed to the reset of Gllm breaking its adaptation with πg, which avoids being trapped in a local optimum. Meanwhile, the performance tends to converge, which indicates that the successor loops produce a better adaptation …
Figure 12: Implementation of the RND predictor network.
[]
/home/yz979/palmer_scratch/chengye/SciMolmo-LaTeX-Source-Processing-Toolkit/pdfparser/pdf2latex/2024_new/temp/figures_2401.00006_1762245707/figures/2401.00006-Figure12-1.png
{ "x1": 182.2519989013672, "x2": 414.6295166015625, "y1": 418.42352294921875, "y2": 424.4259948730469 }
200
8
13
Table
{ "x1": 274.68, "x2": 540, "y1": 408.59999999999997, "y2": 520.1999999999999 }
Table 8: The major changes in the training procedure.
[ "4/14/2023", "1", "1802702", "Experiment", "started", "4/27/2023", "1808552", "1802702", "Env-init:", "Random", "weapons", "5/8/2023", "2829170", "1803087", "Action:", "Add", "a", "fire", "action", "for", "long", "distance", "5/10/2023", "3034011", "1803087", "Env-init:Random", "safe", "area", "in", "the", "whole", "map", "5/11/2023", "3130353", "1803855", "Observation:", "Add", "number", "of", "remaining", "players", "in", "the", "game", "5/12/2023", "3198564", "2412975", "Observation:", "Add", "BEV", "feature", "5/16/2023", "3673506", "2418111", "Observation:", "Add", "history", "rotation", "feature", "5/22/2023", "4519567", "2418368", "Observation:", "Add", "rotation", "change", "feature", "5/29/2023", "5442025", "2418368", "Reward:", "Add", "rewards", "for", "teamwork", "6/2/2023", "5899503", "2418368", "Update", "new", "game", "version", "6/13/2023", "7306607", "3013409", "Network:", "Add", "obstacle", "avoidance", "reward", "and", "corresponding", "value", "head", "6/14/2023", "7404118", "3015457", "Observation:", "Add", "distance", "feature", "to", "nearby", "obstacles", "6/16/2023", "7628098", "3015457", "Env-init:", "Player", "numbers", "per", "team", "increased", "to", "4", "6/19/2023", "7974450", "3109267", "Action:", "Use", "attention", "to", "select", "target", "to", "attack", "Date", "Iteration", "#params", "Change" ]
/home/yz979/palmer_scratch/chengye/SciMolmo-LaTeX-Source-Processing-Toolkit/pdfparser/pdf2latex/2024_new/temp/figures_2401.00006_1762245707/figures/2401.00006-Table8-1.png
{ "x1": 300.52301025390625, "x2": 514.748779296875, "y1": 532.37255859375, "y2": 538.375 }
200
5
13
Table
{ "x1": 96.84, "x2": 500.03999999999996, "y1": 66.96, "y2": 364.32 }
Table 5: Definitions of the different rewards.
[ "Feature", "Weight", "Description", "enemy", "discovery", "0.02", "reward", "for", "see", "an", "enemy", "detected", "by", "enemy", "-0.002", "punishment", "for", "being", "seen", "by", "an", "enemy", "scout", "0.0001", "reward", "for", "search", "for", "an", "enemy", "no-op", "-0.0002", "punishment", "for", "stopping", "and", "doing", "nothing", "bullet", "0.015", "reward", "for", "using", "and", "refilling", "bullets", "health", "point", "0.03", "reward", "for", "health", "point", "changes", "be", "knocked", "down", "-2.5", "punishment", "for", "being", "knocked", "down", "dead", "-3.5", "punishment", "for", "being", "killed", "damage", "enemy", "0.1", "reward", "for", "damaging", "an", "enemy", "knock", "down", "enemy", "4.5", "reward", "for", "knocking", "down", "an", "enemy", "kill", "enemy", "3.5", "reward", "for", "killing", "an", "enemy", "approach", "a", "downed", "teammate", "0.001", "reward", "for", "approaching", "a", "downed", "teammate", "help", "a", "downed", "teammate", "up", "0.8", "reward", "for", "helping", "up", "a", "downed", "teammate", "not", "save", "a", "downed", "teammate", "-0.5", "punishment", "for", "not", "saving", "a", "downed", "teammate", "go", "to", "blue", "circle", "0.00015", "reward", "for", "going", "to", "blue", "circle", "be", "in", "white", "circle", "-0.00005", "small", "punishment", "for", "being", "outside", "of", "white", "circle", "outside", "blue", "circle", "-0.012", "punishment", "for", "being", "outside", "of", "blue", "circle", "teammate", "damage", "enemy", "0.03", "reward", "from", "teammate", "damaging", "enemies", "teammate", "get", "up", "0.6", "reward", "from", "teammate", "getting", "up", "I", "help", "teammate", "up", "4", "reward", "for", "helping", "teammate", "up", "interrupt", "helping", "teammate", "up", "-0.05", "punishment", "for", "the", "interruption", "to", "help", "teammate", "up", "obstacle", "avoidance", "0.012", "punishment", "for", "being", "stuck", "goal", "1", "reward", "of", "goal", 
"completion" ]
/home/yz979/palmer_scratch/chengye/SciMolmo-LaTeX-Source-Processing-Toolkit/pdfparser/pdf2latex/2024_new/temp/figures_2401.00006_1762245707/figures/2401.00006-Table5-1.png
{ "x1": 205.58399963378906, "x2": 390.98785400390625, "y1": 376.2755432128906, "y2": 382.27801513671875 }
200
7
17
Figure
{ "x1": 78.84, "x2": 518.04, "y1": 66.96, "y2": 244.07999999999998 }
Figure 7: The training system has four key parts: Actors, Learner, League, and LLM replicas. Actors are responsible for data collection, the Learner trains the policy model using this data, the League coordinates the overall training process and displays results, and the LLM replicas handle goal generation and distribute goals to downstream workers.
[ "Model", "parameters", "--+", "Environments", "Rollout", "outcomes", "Checkpoints", "\\", "Actors", "1", ",", "Actions", "I", "/", "---------", "1", "/;___", "--��====���", "\\", "\\", "\\", ";'", "State", "ab�tractions", ",;--", "l", "LLM", "Replica", "!,I", "V\\", "I", "/", "\\", "I", "'\"------------./", "I", "I", "I", "I", "I", "Goals", "/", "I", ",..)\"'-----------\"\\", "'I", ",.---'", "--------...,", "'I", "Model", "parameters", "--+", "Redis", "+-", "Learner", "League", "Training", "outcomes", "Learning", "tasks" ]
/home/yz979/palmer_scratch/chengye/SciMolmo-LaTeX-Source-Processing-Toolkit/pdfparser/pdf2latex/2024_new/temp/figures_2401.00006_1762245707/figures/2401.00006-Figure7-1.png
{ "x1": 55.439998626708984, "x2": 541.6138916015625, "y1": 255.96652221679688, "y2": 285.8800048828125 }
200
9
22
Table
{ "x1": 72.72, "x2": 524.16, "y1": 156.96, "y2": 603 }
Table 9: Chain-of-thought response from GPT-4.
[ "1.", "Analyze", "the", "verbal", "orders", "of", "teammates", "and", "players,", "what", "do", "you", "want", "to", "do?", "According", "to", "the", "command,", "also", "analysis", "the", "relevant", "states", "of", "teammates", "and", "enemies", "that", "need", "attention.", "The", "verbal", "command", "of", "the", "teammate", "player", "is", "‘You", "should", "lie", "in", "wait’,", "which", "means", "teammate", "player", "wants", "the", "agent", "to", "set", "up", "an", "ambush", "or", "take", "a", "hiding", "position.", "2.", "Analyze", "which", "states", "of", "the", "agents", "are", "most", "relevant", "to", "the", "verbal", "commands", "of", "teammate", "player.", "The", "agents", "in", "the", "unselected", "states", "will", "adjust", "themselves", "to", "complete", "your", "plan.", "According", "to", "the", "teammate‘S", "command:", "2.1.", "Need", "to", "hide:", "‘Whether", "prone", "position’,", "‘Whether", "crouch", "position’", "2.2.", "Reduce", "movement", "to", "stay", "unnoticed:", "‘Average", "velocity’,", "‘Length", "of", "distance", "moved’", "2.3.", "Ensure", "readiness", "for", "potential", "attack:", "‘Whether", "hold", "a", "gun’", "3.", "Plan", "how", "these", "key", "states", "need", "to", "be", "adjusted.", "According", "to", "the", "teammate‘S", "command:", "3.1.", "‘Whether", "prone", "position’:", "Need", "to", "hide:", "‘False’", "->", "‘True’", "3.2.", "‘Whether", "crouch", "position’:", "Alternative", "hiding", "posture", "if", "not", "prone:", "‘False’", "->", "‘True’", "3.3.", "‘Average", "velocity’:", "Reduce", "movement:", "‘Fast’", "->", "‘Static’", "3.4.", "‘Length", "of", "distance", "moved’:", "Reduce", "movement:", "‘Short’", "->", "‘No", "movement’", "3.5.", "‘Whether", "hold", "a", "gun’:", "Ensure", "readiness", "for", "potential", "attack:", "‘True’", "->", "‘True’", "4.", "Modify", "the", "adjustment", "that", "may", "be", "wrong,", "and", "refer", "to", "the", "Rules", "to", "analyze", "which", "state", "adjustments", 
"may", "conflict,", "repeat", "or", "be", "unnecessary,", "and", "output", "the", "modified", "adjustment", "plan:", "According", "to", "the", "states", "adjustments", "in", "3:", "4.1.", "‘Whether", "prone", "position’:", "‘False’", "->", "‘True’", "4.2.", "‘Average", "velocity’:", "‘Fast’", "->", "‘Static’", "4.3.", "‘Length", "of", "distance", "moved’:", "‘Short’", "->", "‘No", "movement’", "4.4.", "‘Whether", "hold", "a", "gun’:", "‘True’", "->", "‘True’", "(Removed", "‘Whether", "crouch", "position’", "as", "it", "conflicts", "with", "‘Whether", "prone", "position’", "and", "prone", "position", "gives", "better", "cover", "for", "an", "ambush.)", "5.", "According", "to", "the", "analyze", "and", "the", "planing", "of", "the", "verbal", "command,", "further", "analyze", "the", "behavior", "tendency", "required", "in", "the", "adjustment", "process.", "Mobile:", "Need", "to", "stop", "all", "movement:", "0%", "Offense:", "Must", "be", "ready", "to", "attack", "if", "necessary:", "20%", "Waiting:", "The", "main", "goal", "is", "to", "set", "up", "an", "ambush", "and", "wait:", "70%", "Supplies:", "No", "need", "for", "supplies", "during", "this", "phase:", "0%", "Scouting:", "Need", "to", "observe", "the", "surroundings,", "but", "not", "actively", "scout:", "10%", "6.", "Analyze", "how", "long", "the", "current", "command", "needs", "to", "be", "kept.", "According", "to", "the", "command", "of", "the", "teammate,", "taking", "a", "hiding", "position", "or", "setting", "up", "an", "ambush", "is", "usually", "done", "with", "the", "intention", "of", "waiting", "for", "an", "opportune", "moment", "to", "strike", "or", "observe", "the", "enemy‘S", "movement:", "The", "current", "command", "needs", "to", "be", "kept", "by", "‘long", "term’." ]
/home/yz979/palmer_scratch/chengye/SciMolmo-LaTeX-Source-Processing-Toolkit/pdfparser/pdf2latex/2024_new/temp/figures_2401.00006_1762245707/figures/2401.00006-Table9-1.png
{ "x1": 202.91900634765625, "x2": 393.6529541015625, "y1": 614.9955444335938, "y2": 620.9979858398438 }
200
6
12
Table
{ "x1": 394.91999999999996, "x2": 545.04, "y1": 335.88, "y2": 423 }
Table 6: Action space.
[ "Sub", "Action", "Space", "Dim", "Size", "movement", "direction", "16", "yaw", "direction", "16", "pitch", "direction", "3", "body", "action", "9", "basic", "action", "7", "switch", "weapon", "action", "3" ]
/home/yz979/palmer_scratch/chengye/SciMolmo-LaTeX-Source-Processing-Toolkit/pdfparser/pdf2latex/2024_new/temp/figures_2401.00006_1762245707/figures/2401.00006-Table6-1.png
{ "x1": 423.6629943847656, "x2": 513.1072387695312, "y1": 435.44952392578125, "y2": 441.4519958496094 }
200
5
12
Figure
{ "x1": 54.72, "x2": 542.16, "y1": 538.92, "y2": 694.0799999999999 }
Figure 5: Network structure of our proposed policy.
[]
/home/yz979/palmer_scratch/chengye/SciMolmo-LaTeX-Source-Processing-Toolkit/pdfparser/pdf2latex/2024_new/temp/figures_2401.00006_1762245707/figures/2401.00006-Figure5-1.png
{ "x1": 195.38299560546875, "x2": 401.49908447265625, "y1": 705.4715576171875, "y2": 711.4739990234375 }
200
3
7
Figure
{ "x1": 58.68, "x2": 533.16, "y1": 254.88, "y2": 383.4 }
Figure 3: (a) The completion ratio of goals with dimension sizes ranging from 1 to 7; (b) the completion ratio of goals with |g| = 3, where the trend curve reflects the improving completion ratio; (c) the sub-goal distribution changes along the training in one loop of co-training, where the description of each gi is included in Table 14.
[ "(c)", "Sub-goals", "distribution", "y", "ib", "ilit", "Po", "ss", "0.200", "0.175", "0.150", "0.125", "0.100", "0.075", "0.050", "0.025", "Sub", "goal", "g4", "g8", "g12", "g16", "g20", "Time", "in", "one", "loop", "(b)", "Goal", "completion", "rate", "(|g|", "=", "3)", "1", "2", "3", "4", "5", "6", "7", "G", "oa", "l", "C", "om", "pl", "et", "io", "n", "Ra", "ti", "o", "(%", ")", "Reset", "Loops", "Trend", "of", "policy", "ability", "Smoothed", "Completion", "Rate", "Original", "Completion", "Rate", "(a)", "Goal", "completion", "rate", "(1", "≤", "|g|", "≤", "7)" ]
/home/yz979/palmer_scratch/chengye/SciMolmo-LaTeX-Source-Processing-Toolkit/pdfparser/pdf2latex/2024_new/temp/figures_2401.00006_1762245707/figures/2401.00006-Figure3-1.png
{ "x1": 55.439998626708984, "x2": 541.4445190429688, "y1": 401.0495300292969, "y2": 430.9620056152344 }
200
2
7
Figure
{ "x1": 66.96, "x2": 533.16, "y1": 83.88, "y2": 199.44 }
Figure 2: (a) The goal completion rate on the training dataset; (b) the goal completion rate on unseen goals, i.e., the test dataset; (c) the evaluation of policy learning with and without the KL-divergence regularizer.
[ "(c)", "ics", "Mean", "basic", "reward", "per", "step", "Mean", "basic", "reward", "per", "step", "(No", "KL)", "#Enemies", "killed", "#Enemies", "knocked", "down", "#Enemies", "killed", "(No", "KL)", "#Enemies", "knocked", "down", "(No", "KL)", "at", "ist", "e", "St", "Ga", "m", "0.8", "0.7", "0.6", "0.5", "0.4", "0.3", "rd", "Re", "wa", "0.00", "0.02", "0.04", "0.06", "0.08", "0.10", "0", "0.1M", "0.2M", "0.3M", "0.4M", "0.5M", "0.6M", "0.7M", "0.8M", "0.9M", "1.0M", "Training", "Steps", "(b)", "(%", ")", "Ours", "HER", "Ra", "tio", "io", "n", "pl", "et", "C", "om", "Go", "al", "32.5", "30.0", "27.5", "25.0", "22.5", "20.0", "17.5", "15.0", "0", "0.1M", "0.2M", "0.3M", "0.4M", "0.5M", "0.6M", "0.7M", "0.8M", "0.9M", "1.0M", "Training", "Steps", "(a)", "(%", ")", "Ours", "HER", "Ra", "tio", "io", "n", "pl", "et", "C", "om", "Go", "al", "30", "29", "28", "27", "26", "25", "24", "0", "0.1M", "0.2M", "0.3M", "0.4M", "0.5M", "0.6M", "0.7M", "0.8M", "0.9M", "1.0M", "Training", "Steps" ]
/home/yz979/palmer_scratch/chengye/SciMolmo-LaTeX-Source-Processing-Toolkit/pdfparser/pdf2latex/2024_new/temp/figures_2401.00006_1762245707/figures/2401.00006-Figure2-1.png
{ "x1": 55.11199951171875, "x2": 542.2698974609375, "y1": 216.98654174804688, "y2": 234.94500732421875 }
200
11
18
Table
{ "x1": 347.76, "x2": 540, "y1": 379.8, "y2": 509.03999999999996 }
Table 11: Parameter settings for RL.
[ "PPO", "clip", "eps", "0.2", "Optimizer", "Adam", "Learning", "rate", "0.0001", "Batch", "size", "20480", "Number", "of", "CPUs", "5120", "(AMD", "EPYC", "7H12", "64-Core)", "Number", "of", "GPUs", "2", "(A100)", "γ", "(basic)", "0.995", "γ", "(oa)", "0.92", "γ", "(goal)", "0.993", "λ", "0.95", "Entropy", "coefficient", "0.025", "Unroll", "length", "20", "Sample", "max", "use", "times", "3", "Gradient", "clip", "threshold", "10" ]
/home/yz979/palmer_scratch/chengye/SciMolmo-LaTeX-Source-Processing-Toolkit/pdfparser/pdf2latex/2024_new/temp/figures_2401.00006_1762245707/figures/2401.00006-Table11-1.png
{ "x1": 371.489990234375, "x2": 516.6849365234375, "y1": 520.978515625, "y2": 526.98095703125 }
200
8
18
Figure
{ "x1": 64.44, "x2": 531, "y1": 72, "y2": 300.59999999999997 }
Figure 8: Overview of the training framework with the LLM. The framework has three kinds of LLM tuning approaches: CoFT (Chain-of-Thought-assisted Fine-Tuning), SFT (Supervised Fine-Tuning), and EFT (Ensemble Fine-Tuning), plus one LLM-RL co-training approach.
[ "State", "Co-Training", "selecting", "Format", "recognition", "PPO", "Tuning", "Complete", "status", "Reward", "Interaction", "Exam", "Set", "InstructionResponse", "Format", "Reward", "Examination", "Reward", "Goal", "Completion", "Reward", "Rewards", "State", "ENV", "ChatGLM", "LoRA-FT", "+", "LoRA-CTRL", "Model", "Instruction", "voting", "SFT", "generating", "Ensemble", "Response", "Response", "2", "Response", "N", "...", "...", "+", "LoRA-CK2", "ChatGLM", "LoRA-FT", "LoRA-CK1", "Response", "1", "EFT", "State", "tuning", "ChatGLM", "LoRA-FT+", "HI", "RI", "SI", "AI", "Instructions", "Goals", "States", "SFT", "tuning", "...", "...", "...", "...", "ChatGLM", "LoRA-FT+", "HI", "RI", "SI", "AI", "CoT", "Responses", "CoT", "Questions", "States", "...", "...", "...", "...", "CoFT" ]
/home/yz979/palmer_scratch/chengye/SciMolmo-LaTeX-Source-Processing-Toolkit/pdfparser/pdf2latex/2024_new/temp/figures_2401.00006_1762245707/figures/2401.00006-Figure8-1.png
{ "x1": 55.439998626708984, "x2": 542.8262939453125, "y1": 320.9515380859375, "y2": 350.864013671875 }
200
12
18
Table
{ "x1": 299.88, "x2": 540, "y1": 621.72, "y2": 682.1999999999999 }
Table 12: Evaluation on LoRA rank.
[ "8", "0.544", "0.672", "0.482", "0.608", "0.502", "0.629", "0.060", "0.124", "16", "0.550", "0.673", "0.487", "0.601", "0.507", "0.626", "0.070", "0.124", "32", "0.555", "0.685", "0.505", "0.621", "0.529", "0.652", "0.065", "0.159", "64", "0.547", "0.675", "0.501", "0.616", "0.519", "0.635", "0.070", "0.124", "128", "0.552", "0.684", "0.507", "0.626", "0.524", "0.645", "0.075", "0.134", "F1", "(Choice)", "Accurate", "Accurate", "(Choice)", "Rank", "Precision", "Precision", "(Choice)", "Recall", "Recall", "(Choice)", "F1" ]
/home/yz979/palmer_scratch/chengye/SciMolmo-LaTeX-Source-Processing-Toolkit/pdfparser/pdf2latex/2024_new/temp/figures_2401.00006_1762245707/figures/2401.00006-Table12-1.png
{ "x1": 351.6669921875, "x2": 487.905517578125, "y1": 694.26953125, "y2": 700.27197265625 }
200
4
11
Table
{ "x1": 202.67999999999998, "x2": 540, "y1": 153.72, "y2": 476.28 }
Table 4: The details of features in the observation space.
[ "BEV", "Region", "Altitude", "map", "and", "aerial", "view", "map", "3x64x64", "4.Spatial", "feature", "Scalar", "BEV", "12288", "Pose", "Position,", "rotation,", "camera", "position,", "camera", "rotation,", "etc.", "43", "Attribute", "Character", "ID,team", "ID,size,skills", "28", "Invisible", "enemies", "Status", "HP,oxygen,speed,", "peek", "type,", "alive", "state,", "body", "state,etc.", "33", "3.Invisible", "enemy", "feature", "Scalar", "Invisible", "(nearby)", "enemy", "feature", "only", "for", "value", "estimation", "104", "Status", "Type,size,quantity,etc.", "19", "Supply", "Position", "Position,relative", "position,distance", "7", "Attribute", "Type,state", "5", "Door", "Pose", "Position,relative", "position,rotation", "8", "Attribute", "Type,damage,elapsed", "time", "8", "Event", "Position", "Occurred", "position", "3", "Time", "existence", "time,", "rest", "time,", "total", "time,", "delay", "time,", "appear", "time", "5", "Status", "State,pain,radius", "4", "Circle", "Position", "Blue", "circle,", "white", "circle", "6", "2.Global", "feature", "Scalar", "Includes", "circle,", "event,", "door", "and", "supply", "65", "Pose", "Position,", "rotation,", "relative", "position,", "distance", "16", "Attribute", "Monster", "type,size", "8", "Monsters", "Status", "HP,", "max", "HP,", "HP", "percent,", "target", "type", "6", "Pose", "Position,", "rotation,", "camera", "position,", "camera", "rotation,", "etc.", "43", "Attribute", "Character", "ID,team", "ID,size,skills", "28", "Enemies", "Status", "HP,", "oxygen,speed,", "peek", "type,", "alive", "state,", "body", "state,etc.", "33", "Pose", "Position,", "rotation,", "camera", "position,", "camera", "rotation,", "etc.", "43", "Attribute", "Character", "ID,team", "ID,size,skills", "28", "Teammates", "Status", "HP,", "oxygen,speed,", "peek", "type,", "alive", "state,", "body", "state,etc.", "30", "Heroes", "Pose", "Position,", "rotation,", "camera", "position,", "camera", "rotation,", "etc.", "76", "Item", 
"Backpack,weapon", "144", "Attribute", "Character", "ID,team", "ID,size,skills", "28", "Status", "HP,", "oxygen,speed,", "peek", "type,", "alive", "state,", "body", "state,etc.", "44", "1.Unit", "feature", "Scalar", "Includes", "heroes,", "teammates,", "enemies,", "monster", "527", "Feature", "Class", "Field", "Description", "Dimension" ]
/home/yz979/palmer_scratch/chengye/SciMolmo-LaTeX-Source-Processing-Toolkit/pdfparser/pdf2latex/2024_new/temp/figures_2401.00006_1762245707/figures/2401.00006-Table4-1.png
{ "x1": 258.8039855957031, "x2": 483.5702209472656, "y1": 487.9355163574219, "y2": 493.93798828125 }
200
17
26
Table
{ "x1": 93.96, "x2": 503.28, "y1": 91.8, "y2": 668.16 }
Table 17: Example of prompt and response.
[ "Whether", "prone", "position:True", "Average", "velocity:Static", "Length", "of", "distance", "moved:No", "movement", "Whether", "hold", "a", "gun:True", "response", "goal", "meta-state", "prompt", "In", "order", "to", "complete", "the", "command", "‘You", "should", "lie", "in", "wait.’,", "how", "the", "agent’s", "game", "state", "should", "change?", "question", "prompt", "Teammate", "player", "gives", "you", "a", "verbal", "command", "based", "on", "the", "current", "game", "states:‘You", "should", "lie", "in", "wait.’", "command", "prompt", "The", "state", "of", "the", "agent", "can", "be", "described", "as", "follows:{‘Damage", "to", "enemy’:", "‘Zero’,", "‘Whether", "knock", "down", "enemy’:", "‘False’,", "‘Whether", "kill", "enemy’:", "‘False’,", "‘Whether", "seen", "enemy’:", "‘True’,", "‘Whether", "seen", "by", "enemy’:", "‘True’,", "‘Number", "of", "enemies", "have", "ever", "seen’:", "3,", "‘Length", "of", "distance", "moved’:", "‘Short’,", "‘Average", "velocity’:", "‘Fast’,", "‘Horizontal", "direction", "of", "movement’:", "‘West’,", "‘Horizontal", "direction", "of", "view’:", "‘NorthEast’,", "‘Pitch", "direction", "of", "view’:", "‘Medium’,", "‘Health", "level’:", "‘Full’,", "‘Whether", "to", "restore", "health’:", "‘False’,", "‘Whether", "the", "health", "is", "damaged’:", "‘False’,", "‘Whether", "rescued", "teammate’:", "‘False’,", "‘Whether", "be", "knocked", "down’:", "‘False’,", "‘Whether", "prone", "position’:", "‘False’,", "‘Whether", "have", "a", "gun’:", "‘True’,", "‘Whether", "have", "bullets’:", "‘True’,", "‘Whether", "have", "medical", "kits’:", "‘True’,", "‘Distance", "with", "nearest", "enemy’:", "‘Nearby’,", "‘Whether", "closer", "with", "nearest", "enemy’:", "‘True’,", "‘Whether", "crouch", "position’:", "‘False’,", "‘Whether", "hold", "a", "gun’:", "‘True’,", "‘Whether", "seen", "by", "teammate’:", "‘True’,", "‘Length", "of", "distance", "from", "agent", "to", "teammate’:", "‘Touch’,", "‘Teammate’s", "position", "relative", "to", 
"agent’:", "‘Southwest’,", "‘Whether", "follow", "with", "the", "views", "of", "teammate’:", "‘False’,", "‘Whether", "target", "the", "same", "enemy", "as", "teammate’:", "‘False’,", "‘Whether", "follow", "with", "the", "movement", "direction", "of", "teammate’:", "‘False’}", "self", "state", "prompt", "The", "state", "of", "the", "enemy", "can", "be", "described", "as", "follows:{‘Horizontal", "direction", "of", "movement", "of", "enemy’:", "‘Southwest’,", "‘Velocity", "of", "enemy’:", "‘Slow’,", "‘Enemy’s", "position", "relative", "to", "agent’:", "‘West’}", "enemy", "state", "prompt", "The", "state", "of", "the", "agent’s", "teammates", "can", "be", "described", "as", "follows:{‘", "Length", "of", "distance", "moved’:", "‘No", "movement’,", "‘Average", "velocity’:", "‘Slow’,", "‘Horizontal", "direction", "of", "movement’:", "‘Southeast’,", "‘Horizontal", "direction", "of", "view’:", "‘South’,", "‘Pitch", "direction", "of", "view’:", "‘Medium’,", "‘Health", "level’:", "‘Empty’,", "‘Whether", "to", "restore", "health’:", "‘False’,", "‘Whether", "the", "health", "is", "damaged’:", "‘False’,", "‘Whether", "rescued", "teammate’:", "‘False’,", "‘Whether", "prone", "position’:", "‘False’,", "‘Whether", "crouch", "position’:", "‘False’,", "‘Whether", "have", "a", "gun’:", "‘True’,", "‘Whether", "hold", "a", "gun’:", "‘False’,", "‘Whether", "have", "bullets’:", "‘True’,", "‘Whether", "have", "medical", "kits’:", "‘True’,", "‘Whether", "be", "knocked", "down’:", "‘False’,", "‘Damage", "to", "enemy’:", "‘Zero’,", "‘Whether", "knock", "down", "enemy’:", "‘False’,", "‘Whether", "seen", "enemy’:", "‘True’,", "‘Number", "of", "enemies", "have", "ever", "seen’:", "5,", "‘Whether", "seen", "by", "enemy’:", "‘True’,", "‘Distance", "with", "nearest", "enemy’:", "‘Nearby’,", "‘Whether", "closer", "with", "nearest", "enemy’:", "‘False’,", "‘ID", "of", "teammate", "player’:", "2}", "teammate", "state", "prompt", "system", "background", "prompt", "We", "have", "an", "agent", "and", 
"a", "player", "working", "together", "as", "a", "teammate", "in", "a", "PUBG", "game.", "We", "hope", "you", "can", "help", "the", "agent", "plan", "how", "the", "agent’s", "game", "state", "should", "change,", "so", "as", "to", "complete", "the", "player’s", "command", "and", "help", "the", "player", "win", "the", "game.", "prompt" ]
/home/yz979/palmer_scratch/chengye/SciMolmo-LaTeX-Source-Processing-Toolkit/pdfparser/pdf2latex/2024_new/temp/figures_2401.00006_1762245707/figures/2401.00006-Table17-1.png
{ "x1": 211.08900451660156, "x2": 385.48431396484375, "y1": 679.9525756835938, "y2": 685.9550170898438 }
200
13
23
Table
{ "x1": 129.96, "x2": 467.28, "y1": 315, "y2": 378 }
Table 13: Evaluation on LoRA target.
[ "All", "0.529", "0.642", "0.471", "0.581", "0.485", "0.596", "0.069", "0.119", "Attention", "0.555", "0.685", "0.505", "0.621", "0.529", "0.652", "0.065", "0.159", "Mlp", "0.549", "0.664", "0.482", "0.587", "0.514", "0.620", "0.065", "0.134", "F1", "(Choice)", "Accurate", "Accurate", "(Choice)", "Dataset", "Precision", "Precision(Choice)", "Recall", "Recall", "(Choice)", "F1" ]
/home/yz979/palmer_scratch/chengye/SciMolmo-LaTeX-Source-Processing-Toolkit/pdfparser/pdf2latex/2024_new/temp/figures_2401.00006_1762245707/figures/2401.00006-Table13-1.png
{ "x1": 223.0590057373047, "x2": 373.51416015625, "y1": 389.69354248046875, "y2": 395.6960144042969 }
200
10
23
Table
{ "x1": 72.72, "x2": 524.16, "y1": 74.88, "y2": 270 }
Table 10: Rule prompt for GPT-4.
[ "1.Only", "select", "the", "most", "relevant", "and", "necessary", "states", "for", "planning,", "and", "the", "unplanned", "states", "will", "be", "adjusted", "by", "the", "agent", "itself", "2.[Choose", "1,", "Choose", "2,", "...]", "indicates", "the", "values", "that", "can", "be", "selected", "for", "the", "state.", "When", "you", "plan,", "you", "can", "only", "choose", "the", "value", "of", "the", "state", "from", "it,", "and", "do", "not", "invent", "new", "value", "not", "listed", "in", "[Choice1,", "Choice2,", "...].", "3.The", "selected", "state", "can", "change", "the", "current", "value", "or", "maintain", "the", "current", "value.", "The", "agent", "will", "try", "to", "achieve", "and", "maintain", "the", "value", "of", "the", "state", "you", "choose", "after", "you", "give", "the", "plan.", "4.Agents", "don’t", "voluntarily", "discard", "items", "(for", "example", "guns,", "bullets,", "medical", "kits)", "unless", "items", "are", "reduced", "or", "set", "as", "False", "in", "your", "plan,", "so", "there", "is", "no", "need", "to", "keep", "them,", "only", "to", "choose", "when", "making", "changes.", "5.Do", "not", "plan", "and", "adjust", "the", "states", "of", "teammates", "and", "enemies,", "they", "can", "move", "freely", "and", "cannot", "be", "controlled.", "6.Avoid", "conflicts", "of", "states", "planing.", "For", "example,", "agent", "unable", "to", "move", "quickly", "when", "lying", "down,", "and", "unable", "to", "see", "enemies", "when", "length", "of", "distance", "from", "agent", "to", "enemy", "is", "far", "away.", "7.Avoid", "the", "repetition", "of", "states", "planing.", "For", "example,", "if", "the", "Average", "velocity", "has", "been", "adjusted", "to", "be", "Fast,", "there", "is", "no", "need", "to", "adjust", "the", "Whether", "prone", "position", "to", "False,", "because", "the", "agent", "can", "automatically", "adjust", "state", "to", "fit", "overlapping", "meanings.", "8.When", "it", "is", "necessary", "to", "refer", 
"to", "enemy", "or", "teammate", "information", "for", "planing,", "describe", "the", "specific", "state", "value", "during", "analysis." ]
/home/yz979/palmer_scratch/chengye/SciMolmo-LaTeX-Source-Processing-Toolkit/pdfparser/pdf2latex/2024_new/temp/figures_2401.00006_1762245707/figures/2401.00006-Table10-1.png
{ "x1": 231.6959991455078, "x2": 364.8760070800781, "y1": 282.1695251464844, "y2": 288.1719970703125 }
200
14
23
Table
{ "x1": 153.72, "x2": 443.15999999999997, "y1": 423, "y2": 685.0799999999999 }
Table 14: Top 20 sub-goals ranked by frequency.
[ "g1", "Average", "velocity", "g2", "Horizontal", "direction", "of", "movement", "g3", "Whether", "seen", "enemy", "g4", "Whether", "hold", "a", "gun", "g5", "Whether", "prone", "position", "g6", "Length", "of", "distance", "moved", "g7", "Length", "of", "distance", "from", "agent", "to", "teammate", "g8", "Distance", "with", "nearest", "enemy", "g9", "Whether", "seen", "by", "enemy", "g10", "Damage", "to", "enemy", "g11", "Whether", "have", "bullets", "g12", "Horizontal", "direction", "of", "view", "g13", "Whether", "follow", "with", "the", "movement", "direction", "of", "teammate", "g14", "Whether", "crouch", "position", "g15", "Whether", "have", "a", "gun", "g16", "Whether", "have", "medical", "kits", "g17", "Whether", "to", "restore", "health", "g18", "Health", "level", "g19", "Whether", "knock", "down", "enemy", "g20", "Whether", "target", "the", "same", "enemy", "as", "teammate", "Symbol", "Sub-goal", "Class" ]
/home/yz979/palmer_scratch/chengye/SciMolmo-LaTeX-Source-Processing-Toolkit/pdfparser/pdf2latex/2024_new/temp/figures_2401.00006_1762245707/figures/2401.00006-Table14-1.png
{ "x1": 201.01100158691406, "x2": 395.560546875, "y1": 697.5615844726562, "y2": 703.5640258789062 }
200
10
19
Figure
{ "x1": 83.52, "x2": 522, "y1": 283.68, "y2": 403.56 }
Figure 10: Distribution comparison between real goals (Oracles) and goals generated by Gop (Prediction). The illustration shows that Gop generates goals that follow the real distribution, indicating good generalization on open-ended goal generation.
[ "(c)", "Corresponding", "to", "∆V", "Oracles", "Prediction", "io", "n", "je", "ct", "P", "ro", "Go", "al", "100", "50", "0", "50", "100", "20", "15", "10", "5", "0", "5", "10", "15", "V", "(b)", "Corresponding", "to", "∆T", "Oracles", "Prediction", "io", "n", "je", "ct", "P", "ro", "Go", "al", "100", "50", "0", "50", "100", "50", "75", "100", "125", "150", "175", "200", "225", "250", "T", "(a)", "Corresponding", "to", "(∆T,∆V", ")", "Oracles", "Prediction", "100", "50", "0", "50", "100", "io", "n", "je", "ct", "P", "ro", "Go", "al", "250", "200", "225", "150", "175", "100", "125", "50", "75", "T", "10", "15", "0", "5", "10", "5", "20", "15", "V" ]
/home/yz979/palmer_scratch/chengye/SciMolmo-LaTeX-Source-Processing-Toolkit/pdfparser/pdf2latex/2024_new/temp/figures_2401.00006_1762245707/figures/2401.00006-Figure10-1.png
{ "x1": 55.439998626708984, "x2": 541.4405517578125, "y1": 420.9795227050781, "y2": 450.8919982910156 }
200
9
19
Figure
{ "x1": 58.68, "x2": 542.16, "y1": 66.96, "y2": 204.84 }
Figure 9: Illustration of BEV features in the observation space. (a) and (b) are altitude maps where bright areas are higher than dark areas. (c) is the aerial view map where the disconnected areas are windows or doors. One pixel in (a), (b), and (c) denotes 0.8 meters, 4 meters, and 0.4 meters, respectively. Small yellow blocks represent player positions and small blue blocks represent enemy positions.
[ "(a)", "Altitude", "map", "(0.8m)", "(b)", "Altitude", "map", "(4m)", "(c)", "Aerial", "view", "map", "(0.4m)" ]
/home/yz979/palmer_scratch/chengye/SciMolmo-LaTeX-Source-Processing-Toolkit/pdfparser/pdf2latex/2024_new/temp/figures_2401.00006_1762245707/figures/2401.00006-Figure9-1.png
{ "x1": 55.439998626708984, "x2": 542.1045532226562, "y1": 217.79751586914062, "y2": 259.66497802734375 }
200
6
15
Figure
{ "x1": 60.12, "x2": 537.12, "y1": 68.75999999999999, "y2": 223.92 }
Figure 6: The value changes during the training process.
[ "ue", "V", "al", "Ba", "sic", "5", "4", "3", "2", "1", "0", "-1", "Dates", "07/0", "7", "06/2", "3", "06/0", "9", "05/2", "6", "05/1", "2", "04/2", "8", "04/1", "4" ]
/home/yz979/palmer_scratch/chengye/SciMolmo-LaTeX-Source-Processing-Toolkit/pdfparser/pdf2latex/2024_new/temp/figures_2401.00006_1762245707/figures/2401.00006-Figure6-1.png
{ "x1": 186.0679931640625, "x2": 410.8142395019531, "y1": 239.76754760742188, "y2": 245.77001953125 }
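Each row above stores its regionBoundary and captionBoundary as small JSON objects, with coordinates in PDF points at the row's render DPI. As a minimal sketch of consuming one such record (the helper name `box_size` is ours, and in practice the full Parquet file would be loaded with a Parquet reader rather than parsed inline):

```python
import json

# The regionBoundary of Figure 6 (page 15), copied verbatim from the row above.
region = json.loads(
    '{ "x1": 60.12, "x2": 537.12, "y1": 68.75999999999999, "y2": 223.92 }'
)

def box_size(boundary):
    """Width and height of a figure bounding box, in PDF points."""
    return boundary["x2"] - boundary["x1"], boundary["y2"] - boundary["y1"]

w, h = box_size(region)
print(round(w, 2), round(h, 2))  # 477.0 155.16
```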
200
1
5
Table
{ "x1": 64.8, "x2": 544.3199999999999, "y1": 117.72, "y2": 320.03999999999996 }
Table 1: Performance comparison of all baselines and our models. The best and second-best results are shown in bold and underlined, respectively. All values are multiplied by 100.
[ "Hybrid", "AUC", "66.98", "±", "1.23", "67.27", "±", "2.15", "68.79", "±", "1.99", "68.81", "±", "1.84", "69.52", "±", "2.91", "69.39", "±", "3.90", "70.60", "±", "1.54", "72.27", "±", "1.64", "ACC", "65.93", "±", "1.57", "67.24", "±", "3.06", "68.46", "±", "2.75", "68.73", "±", "2.32", "69.33", "±", "3.46", "69.42", "±", "2.88", "69.71", "±", "3.98", "72.37", "±", "1.92", "F1", "64.62", "±", "1.84", "65.25", "±", "1.98", "65.35", "±", "2.47", "65.36", "±", "3.86", "64.80", "±", "3.82", "64.03", "±", "2.44", "65.92", "±", "1.91", "69.58", "±", "2.05", "AP", "60.52", "±", "1.67", "61.57", "±", "1.16", "62.31", "±", "1.72", "62.26", "±", "1.05", "63.59", "±", "3.69", "63.33", "±", "3.96", "64.02", "±", "3.28", "64.49", "±", "2.33", "Finance", "AUC", "64.18", "±", "1.71", "64.60", "±", "1.79", "65.81", "±", "2.81", "65.59", "±", "3.32", "66.24", "±", "2.90", "66.17", "±", "2.71", "68.43", "±", "3.87", "69.60", "±", "1.83", "ACC", "62.89", "±", "2.65", "63.80", "±", "2.29", "64.60", "±", "3.48", "64.28", "±", "3.24", "64.47", "±", "3.73", "67.87", "±", "3.98", "67.83", "±", "2.06", "69.69", "±", "3.23", "F1", "61.91", "±", "1.86", "62.78", "±", "1.57", "63.43", "±", "1.73", "63.41", "±", "1.81", "64.26", "±", "2.98", "64.56", "±", "3.96", "65.59", "±", "2.71", "67.49", "±", "1.84", "AP", "58.96", "±", "1.25", "59.37", "±", "2.23", "60.37", "±", "2.32", "60.14", "±", "3.27", "60.58", "±", "3.17", "60.38", "±", "3.31", "61.06", "±", "2.65", "62.70", "±", "3.35", "Tech", "AUC", "63.94", "±", "2.08", "64.40", "±", "1.85", "64.62", "±", "2.23", "64.59", "±", "2.02", "65.53", "±", "3.69", "66.02", "±", "3.52", "67.63", "±", "2.63", "68.57", "±", "3.46", "ACC", "63.38", "±", "3.21", "63.45", "±", "1.65", "63.71", "±", "2.91", "63.64", "±", "3.38", "64.37", "±", "4.26", "64.19", "±", "2.57", "66.81", "±", "2.21", "68.76", "±", "1.91", "F1", "61.73", "±", "1.38", "62.22", "±", "3.48", "62.51", "±", "3.19", "62.14", "±", "1.76", "63.96", "±", "3.44", "63.72", "±", "1.83", 
"64.97", "±", "1.54", "65.22", "±", "2.67", "AP", "57.38", "±", "1.79", "58.19", "±", "2.17", "58.38", "±", "2.46", "57.91", "±", "3.34", "58.73", "±", "3.55", "59.10", "±", "3.90", "60.99", "±", "2.85", "61.59", "±", "3.46", "LightGCN", "PJFNN", "BPJFNN", "APJFNN", "MV-CoN", "HAN", "HGT", "CSAGNN", "Dataset", "Metric", "Models" ]
/home/yz979/palmer_scratch/chengye/SciMolmo-LaTeX-Source-Processing-Toolkit/pdfparser/pdf2latex/2024_new/temp/figures_2401.00010_1762245724/figures/2401.00010-Table1-1.png
{ "x1": 53.50199890136719, "x2": 558.1998901367188, "y1": 85.34053802490234, "y2": 102.31597900390625 }
200
2
5
Table
{ "x1": 52.919999999999995, "x2": 295.92, "y1": 379.8, "y2": 607.68 }
Table 2: Statistics of datasets. M refers to Member, J refers to Job, S refers to Skill, CP refers to Candidate Pair, and PC refers to Professional Connection.
[ "job", "descriptions", "independently,", "and", "the", "matching", "degree", "is", "calculated", "by", "cosine", "similarity.", "•", "BPJFNN", "[28]", "leverages", "bidirectional", "LSTM", "to", "learn", "the", "representations", "of", "resumes", "and", "job", "descriptions.", "•", "APJFNN", "[28]", "leverages", "bidirectional", "LSTM", "and", "hierarchi-", "cal", "attention", "mechanism", "to", "learn", "the", "representations", "of", "resumes", "and", "job", "descriptions.", "•", "MV-CoN", "[3]", "combines", "text", "matching", "model", "and", "RGCN", "to", "learn", "representations", "of", "resumes", "and", "job", "descriptions.", "•", "HAN", "[33]", "uses", "a", "dual", "attention", "mechanism", "to", "aggregate", "neighbor", "information", "via", "different", "metapaths.", "•", "HGT", "[13]", "designs", "node-", "and", "edge-type", "dependent", "param-", "eters", "to", "characterize", "the", "heterogeneous", "attention", "over", "each", "edge.", "Tech", "33000+", "62000+", "27000+", "136000+", "1922000+", "Finance", "20000+", "27000+", "23000+", "36000+", "615000+", "Hybrid", "83000+", "120000+", "33000+", "200000+", "2768000+", "Industry", "#M", "#J", "#S", "#CP", "#PC" ]
/home/yz979/palmer_scratch/chengye/SciMolmo-LaTeX-Source-Processing-Toolkit/pdfparser/pdf2latex/2024_new/temp/figures_2401.00010_1762245724/figures/2401.00010-Table2-1.png
{ "x1": 53.50199890136719, "x2": 294.04852294921875, "y1": 336.924560546875, "y2": 364.8590087890625 }
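The extracted text in the record above describes baselines (e.g., PJFNN) that encode resumes and job descriptions independently and score the match by cosine similarity. A minimal sketch of that scoring step (the vectors below are illustrative placeholders, not the paper's actual embeddings):

```python
import math

def cosine_similarity(u, v):
    # Matching degree between a resume embedding and a job-description embedding.
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

# Identical directions score 1.0; orthogonal directions score 0.0.
resume_vec = [0.6, 0.8]
job_vec = [0.6, 0.8]
print(round(cosine_similarity(resume_vec, job_vec), 4))  # → 1.0
```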
200
1
1
Figure
{ "x1": 370.08, "x2": 508.32, "y1": 82.8, "y2": 199.07999999999998 }
Figure 1: The metagraph of Workplace Heterogeneous Information Network. It encompasses not only members (M) and jobs (J), which are crucial for Person-Job Fit, but also entities such as skills (S), companies (C), and schools (H).
[ "auxiliary", "entity", "direct", "path", "meta", "path", "recommended", "entities", "H", "S", "requireC", "educate", "post", "master", "work(ed)", "co-apply", "co-applied", "JM", "apply", "connect" ]
/home/yz979/palmer_scratch/chengye/SciMolmo-LaTeX-Source-Processing-Toolkit/pdfparser/pdf2latex/2024_new/temp/figures_2401.00010_1762245724/figures/2401.00010-Figure1-1.png
{ "x1": 317.9549865722656, "x2": 559.8109741210938, "y1": 215.28756713867188, "y2": 254.1810302734375 }
200
4
6
Figure
{ "x1": 353.52, "x2": 522.36, "y1": 89.64, "y2": 268.2 }
Figure 4: Hyperparameter tuning experiments to investigate the specific effects of social relations and the job-specific attention mechanism.
[ "(b)", "Model", "performance", "varying", "the", "number", "of", "CSAGNN", "layers", "while", "fixing", "the", "number", "of", "sampled", "skills", "to", "10.", "tech", "finance", "hybrid", "F1", "0.71", "0.70", "0.69", "0.68", "0.67", "0.66", "0.65", "0.64", "0", "1", "2", "#layers", "tech", "finance", "hybrid", "AU", "C", "0.73", "0.72", "0.71", "0.70", "0.69", "0.68", "0", "1", "2", "#layers", "(a)", "Model", "performance", "varying", "the", "number", "of", "sampled", "skills", "while", "fixing", "CSAGNN", "layers", "to", "1.", "tech", "finance", "hybrid", "F1", "0.71", "0.70", "0.69", "0.68", "0.67", "0.66", "0.65", "0.64", "0", "5", "10", "20", "#skills", "tech", "finance", "hybrid", "AU", "C", "0.73", "0.72", "0.71", "0.70", "0.69", "0.68", "0", "5", "10", "20", "#skills" ]
/home/yz979/palmer_scratch/chengye/SciMolmo-LaTeX-Source-Processing-Toolkit/pdfparser/pdf2latex/2024_new/temp/figures_2401.00010_1762245724/figures/2401.00010-Figure4-1.png
{ "x1": 317.9549865722656, "x2": 559.8074340820312, "y1": 287.14056396484375, "y2": 315.07501220703125 }
200
3
6
Table
{ "x1": 59.76, "x2": 287.28, "y1": 219.6, "y2": 461.15999999999997 }
Table 3: Ablation studies conducted on all datasets, with all values multiplied by 100.
[ "w/o", "CSA", "68.37", "69.39", "64.84", "62.65", "w/o", "CSA&H", "65.85", "64.32", "63.26", "59.49", "CSAGNN", "72.27", "72.37", "69.58", "64.49", "w/o", "S", "70.81", "71.67", "68.68", "64.67", "w/o", "A", "70.03", "71.47", "68.18", "64.06", "Hybrid", "w/o", "CSA", "68.98", "69.04", "66.15", "62.06", "w/o", "CSA&H", "63.75", "62.78", "61.49", "58.74", "CSAGNN", "69.60", "69.69", "67.55", "62.77", "w/o", "S", "69.39", "69.48", "66.68", "62.21", "w/o", "A", "69.22", "69.32", "66.44", "62.32", "Finance", "w/o", "CSA", "68.06", "68.22", "64.26", "61.21", "w/o", "CSA&H", "62.84", "62.78", "61.35", "57.53", "CSAGNN", "68.57", "68.76", "65.22", "61.59", "w/o", "S", "68.26", "68.42", "64.41", "61.34", "w/o", "A", "68.23", "68.40", "64.38", "61.29", "Tech", "Industry", "Model", "AUC", "ACC", "F1", "AP" ]
/home/yz979/palmer_scratch/chengye/SciMolmo-LaTeX-Source-Processing-Toolkit/pdfparser/pdf2latex/2024_new/temp/figures_2401.00010_1762245724/figures/2401.00010-Table3-1.png
{ "x1": 53.50199890136719, "x2": 294.04779052734375, "y1": 186.98855590820312, "y2": 203.9639892578125 }
200
2
2
Figure
{ "x1": 117, "x2": 494.28, "y1": 83.88, "y2": 176.04 }
Figure 2: Steps of WHIN pre-training. (a) Workplace heterogeneous graph with metapaths. (b) Subgraph sampling for mini-batch pre-training. (c) A pre-training model with an encoder-decoder architecture using a link-level pre-training task.
[ "pos", "path", "pos", "metapath", "neg", "path", "neg", "metapath", "company", "school", "skill", "job", "member", "(a)", "(b)", "(c)", "…", "loss", "…", "RGCN", "MLP", "layers" ]
/home/yz979/palmer_scratch/chengye/SciMolmo-LaTeX-Source-Processing-Toolkit/pdfparser/pdf2latex/2024_new/temp/figures_2401.00010_1762245724/figures/2401.00010-Figure2-1.png
{ "x1": 53.79800033569336, "x2": 558.2023315429688, "y1": 189.54055786132812, "y2": 206.5159912109375 }
200
5
7
Figure
{ "x1": 180, "x2": 440.28, "y1": 82.8, "y2": 236.16 }
Figure 5: A case where professional connections improve performance on Person-Job Fit. CSAGNN can improve the performance of Person-Job Fit by filtering and aggregating information from professional networks. For privacy protection reasons, we have rewritten the statement in the example while ensuring that the semantic information remains unchanged.
[ "Education", "require", "No", "job–related", "description", "Rich", "job–related", "description", "connect", "Mathematics", "Microsoft", "Azure", "…", "…", "connect", "score:", "0.6115", "score:", "0.3613", "score:", "0.3215", "CSAGNN(ours)", "CF-based", "model(LightGCN)", "Context-based", "model(PJFNN)", "Description", "Profile", "Profile", "Profile", "#3513", "neighbor", "•", "I", "specialize", "in", "writing", "and", "supporting", "CI/CD", "pipelines", "in", "Jenkins/Azure", "•", "DevOps", "and", "deploying", "to", "multi-site", "environment", "•", "……", "#17323", "neighbor", "•", "……", "mathematical", "statistics.", "My", "goal", "upon", "graduation", "is", "to", "secure", "a", "fellowship", "in", "data", "analysis,", "business", "analysis,", "software", "development", "•", "I", "am", "a", "graduate", "from", "XX", "University", "in", "BSc", "Computer", "and", "mathematical", "science", "•", "I", "have", "completed", "core", "computer", "science", "courses", "as", "well", "as", "mathematics,", "and", "•", "I", "have", "assisted", "clients", "with:", "Performing", "computer", "forensics", "imaging,", "data", "extraction,", "processing,", "and", "reporting.", "•", "……", "#3241", "member", "#137054", "job", "•", "Create", "&", "Maintain", "Sophisticated", "CI/CD", "Pipelines", "•", "Coach", "&", "mentor", "other", "engineers", "•", "Identify", "technical", "risks", "and", "mitigate", "these", "(pre,", "during", "&", "post-deployment)", "•", "……" ]
/home/yz979/palmer_scratch/chengye/SciMolmo-LaTeX-Source-Processing-Toolkit/pdfparser/pdf2latex/2024_new/temp/figures_2401.00010_1762245724/figures/2401.00010-Figure5-1.png
{ "x1": 53.79800033569336, "x2": 558.2024536132812, "y1": 248.77053833007812, "y2": 276.70501708984375 }
200
6
7
Figure
{ "x1": 118.8, "x2": 228.23999999999998, "y1": 293.76, "y2": 391.32 }
Figure 6: Visualization of WHIN Pre-trained skill embeddings showing clear distinction between Technology and Health-Related Skills.
[]
/home/yz979/palmer_scratch/chengye/SciMolmo-LaTeX-Source-Processing-Toolkit/pdfparser/pdf2latex/2024_new/temp/figures_2401.00010_1762245724/figures/2401.00010-Figure6-1.png
{ "x1": 53.79800033569336, "x2": 295.6476135253906, "y1": 403.7155456542969, "y2": 431.6499938964844 }
200
3
3
Figure
{ "x1": 116.64, "x2": 489.96, "y1": 82.8, "y2": 203.04 }
Figure 3: Architecture of CSAGNN. When determining the match between job 𝑗0 and member 𝑚0, the information from 𝑚0’s professional connections, 𝑚1 and 𝑚2, is simultaneously acquired. The initial representations of both the member and the job are formed by concatenating the WHIN pre-training embedding with the representation obtained after processing the text information through BERT. All member text information is re-weighted through an attention mechanism based on the skills 𝑠𝑖 required by 𝑗0. A neighbor sampler module, operating based on the similarity between different members and the required skills, can filter out professional connections that are not relevant to the job.
[ "Average", "…", "…", "…", "BERT", "Pre-trained", "Embeddings", "ℎ!!", "ℎ!!", "×(𝑙", "−", "1)", "ℎ!!", "𝑦#!!,#!", "ℎ!#", "(%)", "ℎ!\"", "(%)", "ℎ!!", "(%)", "ℎ!#", "(#)", "ℎ!\"", "(#)", "ℎ!!", "(#)", "ℎ!#", "(%)", "ℎ!!", "(%)", "𝑠#", "𝑠\"", "𝑠!", "𝑗!", "𝑚#", "𝑚\"", "𝑚!", "ℎ!!", "(%)", "ℎ!#", "(%)", "ℎ!\"", "(%)", "Ground-truth", "value", "Predict", "value", "loss", "layers", "…", "MLP", "𝑦!!,#!", "Skill", "based", "cross", "attention", "layer", "N", "eighbor", "Sam", "pler", "Aggregation", "…", "…", "…", "N", "eighbor", "Sam", "pler", "Aggregation", "…", "…" ]
/home/yz979/palmer_scratch/chengye/SciMolmo-LaTeX-Source-Processing-Toolkit/pdfparser/pdf2latex/2024_new/temp/figures_2401.00010_1762245724/figures/2401.00010-Figure3-1.png
{ "x1": 53.79800033569336, "x2": 558.20458984375, "y1": 216.61257934570312, "y2": 277.42401123046875 }
200
2
8
Table
{ "x1": 174.95999999999998, "x2": 420.12, "y1": 267.84, "y2": 366.12 }
Table 2: The result of the Hartree-Fock computation of HeH+ by the symbolic numeric method. We used four normalized right eigenvectors |φ_1), ..., |φ_4) of m_y^T that have real eigenvalues (given as Eig in the table) and computed (φ_i| m_j^T |φ_i) for j = x, y, e. The third and the fourth solutions give the ground state.
[ "1", "-1.114772", "0.604062", "-1.114772", "-0.537546", "2", "1.114772", "-0.604062", "1.114772", "-0.537546", "3", "-0.337484", "-0.801308", "-0.337484", "-1.600455", "4", "0.337484", "0.801308", "0.337484", "-1.600455", "i", "Eig", "(φi|x|φi)", "(φi|y|φi)", "(φi|e|φi)" ]
/home/yz979/palmer_scratch/chengye/SciMolmo-LaTeX-Source-Processing-Toolkit/pdfparser/pdf2latex/2024_new/temp/figures_2401.00019_1762245733/figures/2401.00019-Table2-1.png
{ "x1": 107.99967956542969, "x2": 487.3741455078125, "y1": 391.7254333496094, "y2": 464.8599853515625 }
200
3
11
Table
{ "x1": 106.92, "x2": 440.28, "y1": 149.76, "y2": 282.24 }
Table 3: The expectation values of the unitary operators (φ_i| exp(−√−1 A) |φ_i) for A = m_x^T, m_y^T, and m_e^T in the simple toy model. The table contains the results for two different solutions, distinguished by two different eigenvectors (φ_1 and φ_2), which correspond to the solutions for e = −1 and e = 1, respectively. The eigenvectors |φ_i) are computed from the analytic formula given in the previous section. The symbol j in the table denotes the imaginary unit √−1.
[ "1", "mx", "0.707107", "0.760245-0.649637j", "0.760245-0.649637j", "1", "my", "0.707107", "0.760245-0.649637j", "0.760245-0.649637j", "1", "me", "-1.000000", "0.540302+0.841471j", "0.540302+0.841471j", "2", "mx", "0.707107", "0.760245-0.649637j", "0.760245-0.649637j", "2", "my", "-0.707107", "0.760245+0.649637j", "0.760245+0.649637j", "2", "me", "1.000000", "0.540302-0.841471j", "0.540302-0.841471j", "i", "M", "(φi|M", "|φi)", "(φi|", "exp(−", "√", "−1MT", ")|φi)", "(φ|Oexp|φi)" ]
/home/yz979/palmer_scratch/chengye/SciMolmo-LaTeX-Source-Processing-Toolkit/pdfparser/pdf2latex/2024_new/temp/figures_2401.00019_1762245733/figures/2401.00019-Table3-1.png
{ "x1": 108, "x2": 487.36444091796875, "y1": 296.6854553222656, "y2": 386.8599853515625 }
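A quick numerical sanity check on the record above: the tabulated unitary expectation values (φ_i| exp(−√−1 M^T)|φ_i) coincide with exp(−√−1 ⟨M⟩) evaluated at the tabulated expectation values, so they can be reproduced directly. The row values below are copied from the i = 1 rows of the table; this is only a consistency check, not the paper's computation:

```python
import cmath

# (expectation value of M, reported expectation of exp(-i*M^T)) for the
# i = 1 rows of Table 3: m_x, m_y, m_e.
rows = [
    (0.707107, complex(0.760245, -0.649637)),
    (0.707107, complex(0.760245, -0.649637)),
    (-1.000000, complex(0.540302, 0.841471)),
]
for mean, reported in rows:
    # exp(-i * <M>) reproduces the tabulated unitary expectation value.
    assert abs(cmath.exp(-1j * mean) - reported) < 1e-5
print("all rows consistent")
```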
200
4
11
Table
{ "x1": 106.92, "x2": 450, "y1": 412.56, "y2": 647.28 }
Table 4: The expectation values of the unitary operators (φ_i| exp(−√−1 A) |φ_i) for A = m_x^T, m_y^T, and m_e^T in the Hartree-Fock computation. For the computation of the expectation values, we used four eigenvectors ({φ_i | i = 1, ..., 4}) of m_y^T, which have real eigenvalues.
[ "1", "mx", "0.604062", "0.823035-0.567990j", "0.823035-0.567990j", "1", "my", "-1.114772", "0.440383+0.897810j", "0.440383+0.897810j", "1", "me", "-0.537546", "0.858968+0.512030j", "0.858968+0.512030j", "2", "mx", "-0.604062", "0.823035+0.567990j", "0.823035+0.567990j", "2", "my", "1.114772", "0.440383-0.897810j", "0.440383-0.897810j", "2", "me", "-0.537546", "0.858968+0.512030j", "0.858968+0.512030j", "3", "mx", "-0.801308", "0.695768+0.718267j", "0.695768+0.718267j", "3", "my", "-0.337484", "0.943591+0.331114j", "0.943591+0.331114j", "3", "me", "-1.600455", "-0.029654+0.999560j", "-0.029654+0.999560j", "4", "mx", "0.801308", "0.695768-0.718267j", "0.695768-0.718267j", "4", "my", "0.337484", "0.943591-0.331114j", "0.943591-0.331114j", "4", "me", "-1.600455", "-0.029654+0.999560j", "-0.029654+0.999560j", "i", "M", "(φi|MT", "|φi)", "(φi|", "exp(−", "√", "−1MT", ")|φi)", "(φi|Oexp|φi)" ]
/home/yz979/palmer_scratch/chengye/SciMolmo-LaTeX-Source-Processing-Toolkit/pdfparser/pdf2latex/2024_new/temp/figures_2401.00019_1762245733/figures/2401.00019-Table4-1.png
{ "x1": 108, "x2": 487.255126953125, "y1": 661.12548828125, "y2": 717.4600219726562 }
200
1
7
Table
{ "x1": 189.72, "x2": 406.08, "y1": 621.72, "y2": 669.24 }
Table 1: The result of the Hartree-Fock computation of HeH+ by the standard self-consistent method with the STO-3G basis set, at the interatomic distance R = 1.4632.
[ "STO-3G", "0.801918", "0.336800", "-1.597448", "x", "y", "e" ]
/home/yz979/palmer_scratch/chengye/SciMolmo-LaTeX-Source-Processing-Toolkit/pdfparser/pdf2latex/2024_new/temp/figures_2401.00019_1762245733/figures/2401.00019-Table1-1.png
{ "x1": 107.99972534179688, "x2": 487.2760925292969, "y1": 694.60546875, "y2": 734.0200805664062 }
200
4
5
Figure
{ "x1": 56.879999999999995, "x2": 557.28, "y1": 62.64, "y2": 445.32 }
Fig. 4: We compare with state-of-the-art video pre-training methods on language-conditioned manipulation tasks in the LIBERO benchmark [27]. (a) Visualization of the LIBERO tasks separated into four suites, focusing on different aspects of the manipulation policies in spatial reasoning, object reasoning, task understanding, and performing long-horizon tasks. (b) Quantitative comparisons on different suites. We additionally compare baselines with fast computation on a task suite containing 90 tasks (i.e. LIBERO-90). ATM outperforms the baselines in all tasks and excels in LIBERO-Goal and LIBERO-Long.
[ "put", "the", "bowl", "on", "top", "of", "the", "cabinet", "put", "the", "yellow", "and", "white", "mug", "in", "the", "microwave", "and", "close", "it", "pick", "up", "the", "black", "bowl", "in", "the", "drawer", "and", "place", "it", "on", "the", "plate", "pick", "up", "the", "butter", "and", "place", "it", "in", "the", "basket", "(b)", "Comparison", "on", "the", "performance", "(a)", "Visualization", "of", "the", "tasks", "in", "the", "LIBERO", "benchmark", "LIBERO-Goal", "LIBERO-Long", "LIBERO-Spatial", "LIBERO-Object" ]
/home/yz979/palmer_scratch/chengye/SciMolmo-LaTeX-Source-Processing-Toolkit/pdfparser/pdf2latex/2024_new/temp/figures_2401.00025_1762245738/figures/2401.00025-Figure4-1.png
{ "x1": 48.95899963378906, "x2": 563.0291137695312, "y1": 457.01153564453125, "y2": 510.8349609375 }
200
V
14
Table
{ "x1": 103.67999999999999, "x2": 505.08, "y1": 293.76, "y2": 382.32 }
TABLE V: Average success rate on LIBERO benchmark. Our method performs consistently better than all the baselines across all suites. UniPi-Replan is only evaluated for a single seed due to the computation cost.
[ "UniPi-Replan", "[5]", "-", "31.00", "3.00", "-", "-", "ATM", "(Ours)", "68.50±", "1.78", "68.00±", "6.18", "77.83±", "0.82", "39.33±", "15.80", "48.41±", "2.09", "VPT", "[3]", "37.83±", "4.29", "19.50±", "0.82", "3.33±", "2.36", "3.83±", "1.65", "-", "UniPi", "[12,", "23]", "69.17±", "3.75", "59.83±", "3.01", "11.83±", "2.02", "5.83±", "2.08", "-", "BC", "39.00±", "8.20", "51.83±", "13.54", "42.50±", "4.95", "16.67±", "3.66", "29.78±", "1.14", "R3M-finetune", "[33]", "49.17±", "3.79", "52.83±", "2.25", "5.33±", "1.43", "9.17±", "2.66", "9.59±", "0.27", "BC-Full-Trainset", "(Oracle)", "71.83±", "3.70", "71.00±", "7.97", "76.33±", "1.31", "24.17±", "2.59", "-", "Method", "Libero-Spatial", "Libero-Object", "Libero-Goal", "Libero-Long", "Libero-90" ]
/home/yz979/palmer_scratch/chengye/SciMolmo-LaTeX-Source-Processing-Toolkit/pdfparser/pdf2latex/2024_new/temp/figures_2401.00025_1762245738/figures/2401.00025-TableV-1.png
{ "x1": 48.95899963378906, "x2": 563.029052734375, "y1": 266.3186340332031, "y2": 284.2760925292969 }
200
VI
14
Table
{ "x1": 124.92, "x2": 484.2, "y1": 423.71999999999997, "y2": 463.32 }
TABLE VI: Detailed results of the Diffusion policy on LIBERO. The Diffusion policy can be further improved by our method, suggesting that our Any-point Trajectory Modeling framework is a building block that can be applied to any policy model.
[ "Diffusion", "Policy", "67.67±", "1.25", "78.00±", "2.45", "35.00±", "3.74", "37.33±", "2.05", "33.85±", "1.71", "ATM", "Diffusion", "Policy", "79.00±", "3.74", "81.00±", "2.45", "58.67±", "4.64", "44.00±", "6.38", "62.89±", "1.10", "Method", "Libero-Spatial", "Libero-Object", "Libero-Goal", "Libero-Long", "Libero-90" ]
/home/yz979/palmer_scratch/chengye/SciMolmo-LaTeX-Source-Processing-Toolkit/pdfparser/pdf2latex/2024_new/temp/figures_2401.00025_1762245738/figures/2401.00025-TableVI-1.png
{ "x1": 48.95899963378906, "x2": 563.0291748046875, "y1": 396.6045227050781, "y2": 414.5619812011719 }
200
11
14
Figure
{ "x1": 66.24, "x2": 551.16, "y1": 59.76, "y2": 186.12 }
Fig. 11: The attention maps of BC and Ours in the spatial transformer. We extract the attention weights between spatial CLS tokens and RGB tokens, highlighting the policy’s focus on specific spatial regions during decision-making. The heatmaps reveal our policy’s targeted attention on task-relevant areas, in contrast to BC’s tendency to focus on irrelevant backgrounds. This underscores the effectiveness of input tracks in the spatial transformer as good task prompts, guiding the CLS token to attend to appropriate areas.
[ "BC", "Ours", "put", "the", "wine", "bottle", "on", "the", "rack", "put", "the", "bowl", "on", "top", "of", "the", "cabinet", "put", "the", "cream", "cheese", "in", "the", "bowl" ]
/home/yz979/palmer_scratch/chengye/SciMolmo-LaTeX-Source-Processing-Toolkit/pdfparser/pdf2latex/2024_new/temp/figures_2401.00025_1762245738/figures/2401.00025-Figure11-1.png
{ "x1": 48.95899963378906, "x2": 563.0291137695312, "y1": 194.58755493164062, "y2": 248.41009521484375 }
200
5
6
Figure
{ "x1": 49.68, "x2": 563.04, "y1": 54, "y2": 237.23999999999998 }
Fig. 5: Real robot experiments on a dining table setup consisting of five tasks. The left figure shows our real-world setup and the tasks. The top right figure shows an example of the predicted particle trajectories and the policy execution, which closely follows the predicted trajectories. From the quantitative results, we can see that ATM shows significant improvements over state-of-the-art video pre-training baselines on average.
[ "time", "Policy", "rollout:", "Put", "the", "tomato", "into", "the", "bowl", "wrist", "camera", "base", "camera", "Task", "1:", "Squeeze", "the", "mustard", "on", "the", "carrot", "Task", "2:", "Put", "the", "carrot", "into", "the", "basket", "Task", "3:", "Pour", "the", "cup", "into", "the", "bin", "Task", "4:", "Put", "the", "spoon", "into", "the", "bowl", "Task", "5:", "Put", "the", "tomato", "into", "the", "bowl" ]
/home/yz979/palmer_scratch/chengye/SciMolmo-LaTeX-Source-Processing-Toolkit/pdfparser/pdf2latex/2024_new/temp/figures_2401.00025_1762245738/figures/2401.00025-Figure5-1.png
{ "x1": 48.95899963378906, "x2": 563.029296875, "y1": 244.41555786132812, "y2": 286.2840270996094 }
200
I
6
Table
{ "x1": 310.68, "x2": 566.28, "y1": 589.68, "y2": 652.3199999999999 }
TABLE I: Average success rates of human-to-robot experiments. ATM trained with human videos significantly outperforms BC and ATM trained with only 10 robot videos, demonstrating the cross-embodiment capability of ATM.
[ "BC", "\"", "%", "0%", "10%", "30%", "ATM", "\"", "%", "0%", "0%", "13%", "ATM", "\"", "\"", "63%", "63%", "60%", "Method", "Teleoperationdemos", "Human", "videos", "fold", "cloth", "put", "tomato", "sweep", "toys" ]
/home/yz979/palmer_scratch/chengye/SciMolmo-LaTeX-Source-Processing-Toolkit/pdfparser/pdf2latex/2024_new/temp/figures_2401.00025_1762245738/figures/2401.00025-TableI-1.png
{ "x1": 311.9729919433594, "x2": 563.0306396484375, "y1": 539.1205444335938, "y2": 580.9889526367188 }
200
6
6
Figure
{ "x1": 313.2, "x2": 562.3199999999999, "y1": 310.68, "y2": 468 }
Fig. 6: We implement ATM Diffusion Policy by adding the predicted future trajectories as additional conditioning and show consistent improvement over the base diffusion policies across the benchmark suites.
[ "Diffusion", "Policy", "ATM", "Diffusion", "Policy", "(Ours)", "80", "70", "60", "50", "40", "30", "20", "10", "Spatial", "Object", "Goal", "Long", "90", "Overall0" ]
/home/yz979/palmer_scratch/chengye/SciMolmo-LaTeX-Source-Processing-Toolkit/pdfparser/pdf2latex/2024_new/temp/figures_2401.00025_1762245738/figures/2401.00025-Figure6-1.png
{ "x1": 311.9729919433594, "x2": 563.0305786132812, "y1": 479.2445373535156, "y2": 521.1119384765625 }
200
II
9
Table
{ "x1": 144, "x2": 466.2, "y1": 95.75999999999999, "y2": 134.28 }
TABLE II: Ablation study on image masking in the track transformer, where “w/o image masking” means we do not mask out image patches during track transformer training, and “w/ image masking” means we randomly mask 50% of the patches. We can see that masked image modeling in the track transformer improves policy performance.
[ "w/o", "image", "masking", "69.17±", "6.38", "65.00±", "3.89", "74.33±", "3.66", "30.83±", "11.43", "w/", "image", "masking", "(default)", "68.50±", "1.78", "68.00±", "6.18", "77.83±", "0.82", "39.33±", "15.80", "Image", "Mask", "Ratio", "Spatial", "Object", "Goal", "Long" ]
/home/yz979/palmer_scratch/chengye/SciMolmo-LaTeX-Source-Processing-Toolkit/pdfparser/pdf2latex/2024_new/temp/figures_2401.00025_1762245738/figures/2401.00025-TableII-1.png
{ "x1": 48.95899963378906, "x2": 563.029296875, "y1": 56.36655044555664, "y2": 86.279052734375 }
200
III
9
Table
{ "x1": 145.79999999999998, "x2": 463.32, "y1": 175.68, "y2": 234 }
TABLE III: Ablation study on the policy architecture. We explore the effect of the tracks fed into the policy in two positions: transformer input (early fusion) and MLP head (late fusion), as illustrated in Figure 3.
[ "\"", "\"", "68.50±", "1.78", "68.00±", "6.18", "77.83±", "0.82", "39.33±", "15.80", "\"", "%", "44.67±", "1.84", "56.67±", "3.09", "5.33±", "0.24", "22.33±", "4.94", "%", "\"", "65.50±", "3.89", "60.00±", "1.47", "72.83±", "4.73", "42.76±", "14.62", "early", "fusion", "late", "fusion", "Spatial", "Object", "Goal", "Long" ]
/home/yz979/palmer_scratch/chengye/SciMolmo-LaTeX-Source-Processing-Toolkit/pdfparser/pdf2latex/2024_new/temp/figures_2401.00025_1762245738/figures/2401.00025-TableIII-1.png
{ "x1": 48.95899963378906, "x2": 563.0289306640625, "y1": 148.57052612304688, "y2": 166.52801513671875 }
200
IV
13
Table
{ "x1": 322.92, "x2": 549, "y1": 178.92, "y2": 227.16 }
TABLE IV: Computation cost and inference time for different methods on a V100 GPU. ATM performs trajectory generation instead of predicting high-dimensional frames, making it the most computationally efficient and feasible for closed-loop control. UniPi employs a video diffusion model for open-loop future goal generation at the beginning, demanding significantly higher computational resources. UniPi-Replan generates fewer frames than UniPi using a smaller model, resulting in marginally faster generation. However, its use in closed-loop control remains computationally prohibitive.
[ "TFLOPS", "per", "generation", "1.56", "13.09", "39.29", "Time", "per", "generation", "(s)", "0.015", "4.51", "8.14", "Computation", "Close-Loop", "Open-LoopATM", "UniPi-Replan", "UniPi" ]
/home/yz979/palmer_scratch/chengye/SciMolmo-LaTeX-Source-Processing-Toolkit/pdfparser/pdf2latex/2024_new/temp/figures_2401.00025_1762245738/figures/2401.00025-TableIV-1.png
{ "x1": 311.9729919433594, "x2": 563.0305786132812, "y1": 56.36655044555664, "y2": 169.96612548828125 }
200
15
17
Figure
{ "x1": 56.16, "x2": 557.28, "y1": 84.96, "y2": 646.1999999999999 }
Fig. 15: The visualizations of human demos and rollout videos of ATM policies trained with and without human data. We can see that ATM is able to take advantage of out-of-domain videos, i.e., human videos, to generate more precise tracks, resulting in better policy performance.
[ "use", "the", "broom", "to", "sweep", "the", "toys", "into", "the", "dustpan", "and", "put", "it", "in", "front", "of", "the", "dustpan", "ATM", "trained", "w/", "human", "videos", "ATM", "trained", "w/o", "human", "videos", "human", "videos", "put", "the", "tomato", "into", "the", "pan", "and", "close", "the", "cabinet", "door", "ATM", "trained", "w/", "human", "videos", "ATM", "trained", "w/o", "human", "videos", "human", "videos", "fold", "the", "cloth", "and", "pull", "it", "to", "the", "right", "ATM", "trained", "w/", "human", "videos", "ATM", "trained", "w/o", "human", "videos", "human", "videos" ]
/home/yz979/palmer_scratch/chengye/SciMolmo-LaTeX-Source-Processing-Toolkit/pdfparser/pdf2latex/2024_new/temp/figures_2401.00025_1762245738/figures/2401.00025-Figure15-1.png
{ "x1": 48.95899963378906, "x2": 563.0291137695312, "y1": 663.6385498046875, "y2": 693.552001953125 }
200
8
7
Figure
{ "x1": 51.48, "x2": 559.0799999999999, "y1": 304.92, "y2": 486 }
Fig. 8: Cross-morphology skill transfer for a pick-and-place task. Here, we collect 160 action-free videos of a Franka arm and 10 action-labeled demonstrations from a UR arm, with the final goal of learning a UR policy. We compare a vanilla BC baseline with ATM trained using three types of data: only the 10 UR videos, only the 160 Franka videos, and both Franka and UR videos (Franka ⇒ UR). In the right plot, we observe that the additional cross-embodiment data leads to significantly better results. Surprisingly, even when the trajectory model is trained only on Franka videos, it exhibits much better performance than BC without the Franka videos.
[ "UR5", "policy", "learning", "UR5", "Pick-Place", "Can", "ATM", "–", "Franka", "onlyATM", "–", "UR", "only", "ATM", "-", "Franka", "⟹", "UR160", "Franka", "Videos", "Franka", "videos" ]
/home/yz979/palmer_scratch/chengye/SciMolmo-LaTeX-Source-Processing-Toolkit/pdfparser/pdf2latex/2024_new/temp/figures_2401.00025_1762245738/figures/2401.00025-Figure8-1.png
{ "x1": 48.95899963378906, "x2": 563.032958984375, "y1": 495.1395263671875, "y2": 560.9169921875 }
200
7
7
Figure
{ "x1": 53.64, "x2": 553.3199999999999, "y1": 59.04, "y2": 231.12 }
Fig. 7: Learning robotic skills from human videos for three tasks. We collect 100 videos of a human performing the tasks directly and 10 teleoperation demonstration trajectories. Each row from the top to the bottom shows three snapshots from the human videos, ATM trained without the human videos, and ATM trained with the human videos. By comparison, we can see that human videos are critical in learning accurate trajectory prediction and enable the policy to successfully perform the task.
[ "step", "0", "step", "1", "step", "2", "step", "0", "step", "1", "step", "2", "step", "0", "step", "1", "step", "2", "step", "0", "step", "1", "step", "2", "step", "0", "step", "1", "step", "2", "step", "0", "step", "1", "step", "2", "step", "0", "step", "1", "step", "2", "step", "0", "step", "1", "step", "2", "step", "1", "step", "2", "(b)", "Task:", "Put", "the", "tomato", "into", "the", "pan", "and", "close", "the", "door", "(c)", "Task:", "Put", "the", "tomato", "into", "the", "pan", "and", "close", "the", "door", "step", "0", "(a)", "Task:", "Fold", "the", "cloth", "and", "pull", "it", "to", "the", "right", "s", "AT", "M", "tr", "ai", "ne", "d", "w", "/", "hu", "m", "an", "v", "id", "eo", "s", "id", "eo", "s", "AT", "M", "tr", "ai", "ne", "d", "w", "/o", "hu", "m", "an", "v", "id", "eo", "an", "v", "hu", "m" ]
/home/yz979/palmer_scratch/chengye/SciMolmo-LaTeX-Source-Processing-Toolkit/pdfparser/pdf2latex/2024_new/temp/figures_2401.00025_1762245738/figures/2401.00025-Figure7-1.png
{ "x1": 48.95899963378906, "x2": 563.0292358398438, "y1": 245.24954223632812, "y2": 287.1180114746094 }
200
2
3
Figure
{ "x1": 52.919999999999995, "x2": 555.84, "y1": 59.04, "y2": 282.24 }
Fig. 2: Overview of our framework. (a) In the first stage, given an action-free video dataset, we first sample 2D points on one video frame and track their trajectories throughout the video using a pre-trained tracker. We then train a track transformer to predict future point trajectories given the current image observation, the language instruction, and the initial positions of the points. For the transformer input, we replace the future point positions with masked values. (b) In the second stage, we learn a track-guided policy to predict the control actions. Guidance from the predicted tracks enables us to learn robust policies from only a few action-labeled demonstrations.
[ "Language", "Instruction", "Off-the-shelf", "Tracker", "Track", "Transformer", "Track-guided", "Policy", "𝜋", "action", "(b)", "Stage", "2:", "Track-guided", "Policy", "Learning(a)", "Stage", "1:", "Any-point", "Trajectory", "Modeling", "Action-labeled", "Demos", ".", ".", ".", "track", "token", ".", ".", ".", "masked", "pointlanguage", "token", "image", "token", "Action-free", "Video", "Dataset", "Language", "Instruction", ".", ".", ".", "𝑡!", "𝑡⋯", "𝑡#", "𝑡!", "𝑡⋯", "𝑡#", "𝑡!", "𝑡⋯", "𝑡#", "Track", "Transformer", ".", ".", ".", "𝑡!", "𝑡⋯", "𝑡#", "𝑡!", "𝑡⋯", "𝑡#", "𝑡!", "𝑡⋯", "𝑡#" ]
/home/yz979/palmer_scratch/chengye/SciMolmo-LaTeX-Source-Processing-Toolkit/pdfparser/pdf2latex/2024_new/temp/figures_2401.00025_1762245738/figures/2401.00025-Figure2-1.png
{ "x1": 48.95899963378906, "x2": 563.0291748046875, "y1": 296.5075378417969, "y2": 362.28594970703125 }
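The Fig. 2 caption above notes that, for the transformer input, future point positions are replaced with masked values. A minimal NumPy sketch of that masking step; the array shapes, the `num_context` split, and the mask constant are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def mask_future_positions(tracks, num_context, mask_value=-1.0):
    """tracks: (num_points, num_steps, 2) array of 2D point positions.
    Keep the first `num_context` timesteps; mask all later positions."""
    masked = tracks.copy()
    masked[:, num_context:, :] = mask_value
    return masked

# Two points tracked over four timesteps (toy values).
tracks = np.arange(2 * 4 * 2, dtype=float).reshape(2, 4, 2)
out = mask_future_positions(tracks, num_context=1)
# Only the first timestep of each point keeps its true position;
# the track transformer is trained to recover the masked future.
```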
renderDpi: 200 | name: 14 | page: 16 | figType: Figure
{ "x1": 68.39999999999999, "x2": 544.3199999999999, "y1": 57.96, "y2": 409.32 }
Fig. 14: Attention maps of the Track Transformer trained with and without human videos. Including large-scale human video data leads to much clearer attention maps that focus on the object and the robot arm, while the model trained without human videos attends to incorrect areas such as background walls.
[ "use", "the", "broom", "to", "sweep", "the", "toys", "into", "the", "dustpan", "and", "put", "it", "in", "front", "of", "the", "dustpan", "ATM", "trained", "w/", "human", "videos", "ATM", "trained", "w/o", "human", "videos", "human", "videos", "fold", "the", "cloth", "and", "pull", "it", "to", "the", "right", "ATM", "trained", "w/", "human", "videos", "ATM", "trained", "w/o", "human", "videos", "human", "videos" ]
/home/yz979/palmer_scratch/chengye/SciMolmo-LaTeX-Source-Processing-Toolkit/pdfparser/pdf2latex/2024_new/temp/figures_2401.00025_1762245738/figures/2401.00025-Figure14-1.png
{ "x1": 48.95899963378906, "x2": 563.0291137695312, "y1": 422.7765197753906, "y2": 452.68896484375 }
renderDpi: 200 | name: 10 | page: 8 | figType: Figure
{ "x1": 326.88, "x2": 548.28, "y1": 55.8, "y2": 196.2 }
Fig. 10: We plot the success rates of policies learned with predicted trajectories of different lengths. Longer trajectories generally improve performance, but the benefit tends to plateau beyond a length of 16.
[ "LIBERO-Goal", "LIBERO-Long", "LIBERO-Spatial", "LIBERO-Object", "(%", ")", "at", "e", "es", "s", "R", "Su", "cc", "80", "70", "60", "50", "40", "30", "20", "10", "4", "8", "16", "32", "64", "track", "length" ]
/home/yz979/palmer_scratch/chengye/SciMolmo-LaTeX-Source-Processing-Toolkit/pdfparser/pdf2latex/2024_new/temp/figures_2401.00025_1762245738/figures/2401.00025-Figure10-1.png
{ "x1": 311.9729919433594, "x2": 563.0305786132812, "y1": 207.46353149414062, "y2": 249.3310546875 }
renderDpi: 200 | name: 9 | page: 8 | figType: Figure
{ "x1": 50.4, "x2": 299.15999999999997, "y1": 55.44, "y2": 267.12 }
Fig. 9: Success rate of our policy trained with 4%, 10%, and 20% of the action-labeled demos. Our policy trained with only 4% of the demos performs comparably to the BC baseline with 20% of the demos on LIBERO-Object and LIBERO-Goal, and even better on LIBERO-Spatial. When trained on 20% of the demos, our performance approaches BC with all training data.
[ "BC", "w/", "20%", "BC", "w/", "100%", "ATM", "40", "LIBERO-Long", "35", "30", "25", "20", "15", "4", "10", "20", "training", "demos", "(%)", "LIBERO-Goal", "40", "45", "50", "55", "60", "65", "70", "75", "Su", "cc", "es", "s", "R", "at", "e", "(%", ")", "4", "10", "20", "training", "demos", "(%)", "70", "LIBERO-Object", "65", "60", "55", "4", "10", "20", "50", "LIBERO-Spatial", "4", "10", "20", "40", "45", "50", "55", "60", "65", "70", "Su", "cc", "es", "s", "R", "at", "e", "(%", ")" ]
/home/yz979/palmer_scratch/chengye/SciMolmo-LaTeX-Source-Processing-Toolkit/pdfparser/pdf2latex/2024_new/temp/figures_2401.00025_1762245738/figures/2401.00025-Figure9-1.png
{ "x1": 48.95899963378906, "x2": 300.0165100097656, "y1": 278.5915222167969, "y2": 344.36993408203125 }
renderDpi: 200 | name: 3 | page: 4 | figType: Figure
{ "x1": 79.92, "x2": 264.24, "y1": 54, "y2": 217.79999999999998 }
Fig. 3: A visual illustration of the architecture of the track-guided policy. Given the current observation and the predicted tracks from the frozen pre-trained track transformer, we train a track-guided policy from a limited demonstration dataset.
[ "action", "early", "fusion", "late", "fusion", "Transformer", "MLP", "predicted", "track", "tokens", "image", "patch", "tokens", "CLS", "token" ]
/home/yz979/palmer_scratch/chengye/SciMolmo-LaTeX-Source-Processing-Toolkit/pdfparser/pdf2latex/2024_new/temp/figures_2401.00025_1762245738/figures/2401.00025-Figure3-1.png
{ "x1": 48.95899963378906, "x2": 300.0165710449219, "y1": 234.43551635742188, "y2": 276.30303955078125 }
renderDpi: 200 | name: VIII | page: 15 | figType: Table
{ "x1": 346.68, "x2": 494.28, "y1": 75.96, "y2": 214.2 }
TABLE VIII: Hyperparameters of policy training.
[ "augmentation", "ColorJitter,RandomShift", "track", "length", "16", "frame", "stack", "10", "point", "sampling", "grid", "number", "of", "points", "32", "learning", "rate", "5e-4", "weight", "decay", "1e-4", "lr", "scheduler", "Cosine", "lr", "warm", "up", "0", "clip", "grad", "100", "epoch", "100", "batch", "size", "512", "optimizer", "AdamW", "Hyperparameters", "Policy" ]
/home/yz979/palmer_scratch/chengye/SciMolmo-LaTeX-Source-Processing-Toolkit/pdfparser/pdf2latex/2024_new/temp/figures_2401.00025_1762245738/figures/2401.00025-TableVIII-1.png
{ "x1": 320.0450134277344, "x2": 523.7103271484375, "y1": 60.84952163696289, "y2": 66.85198974609375 }
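The hyperparameters from TABLE VIII, restated as a plain Python config dict for readability; key names are lightly normalized, and the dict itself is an illustrative restatement rather than the authors' code.

```python
# Policy-training hyperparameters as listed in TABLE VIII.
policy_config = {
    "augmentation": ["ColorJitter", "RandomShift"],
    "track_length": 16,
    "frame_stack": 10,
    "point_sampling": "grid",
    "num_points": 32,
    "learning_rate": 5e-4,
    "weight_decay": 1e-4,
    "lr_scheduler": "Cosine",
    "lr_warmup": 0,
    "clip_grad": 100,
    "epochs": 100,
    "batch_size": 512,
    "optimizer": "AdamW",
}
```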
renderDpi: 200 | name: 13 | page: 15 | figType: Figure
{ "x1": 319.68, "x2": 555.12, "y1": 241.2, "y2": 331.2 }
Fig. 13: Given a video (left), we query 1000 randomly sampled points using an off-the-shelf TAP model (middle), where each colored dot represents the starting position of a track. We then filter the tracks using a heuristic based on their position displacement across the video and re-sample around the retained points (right). The extracted tracks concentrate around informative objects, such as the robot's gripper and the manipulation targets.
[ "“pick", "up", "the", "milk", "and", "place", "it", "in", "the", "basket”", "1.", "random", "tracking", "2.", "filter", "&", "retrackvideo" ]
/home/yz979/palmer_scratch/chengye/SciMolmo-LaTeX-Source-Processing-Toolkit/pdfparser/pdf2latex/2024_new/temp/figures_2401.00025_1762245738/figures/2401.00025-Figure13-1.png
{ "x1": 311.9729919433594, "x2": 563.0305786132812, "y1": 339.8705139160156, "y2": 429.55889892578125 }
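The Fig. 13 caption describes keeping tracks whose position displacement across the video is large, so sampling concentrates on moving, informative regions. A minimal sketch of such a displacement filter; the threshold, shapes, and function name are assumptions for illustration, not the paper's exact heuristic.

```python
import numpy as np

def filter_tracks(tracks, min_displacement=5.0):
    """tracks: (num_tracks, num_frames, 2) array of 2D positions.
    Return indices of tracks whose end-to-start displacement is at
    least min_displacement (i.e. points that actually moved)."""
    disp = np.linalg.norm(tracks[:, -1] - tracks[:, 0], axis=-1)
    return np.nonzero(disp >= min_displacement)[0]

tracks = np.array([
    [[0.0, 0.0], [0.1, 0.0], [0.0, 0.1]],  # nearly static point
    [[0.0, 0.0], [4.0, 0.0], [9.0, 0.0]],  # moving point
])
keep = filter_tracks(tracks)  # keeps only the moving track
```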
renderDpi: 200 | name: VII | page: 15 | figType: Table
{ "x1": 96.84, "x2": 245.16, "y1": 72, "y2": 218.16 }
TABLE VII: Hyperparameters of track transformer training.
[ "image", "mask", "ratio", "0.5", "augmentation", "ColorJitter,RandomShift", "track", "length", "16", "track", "patch", "size", "4", "point", "sampling", "variance", "filtering", "number", "of", "points", "32", "learning", "rate", "1e-4", "weight", "decay", "1e-4", "lr", "scheduler", "Cosine", "lr", "warm", "up", "5", "clip", "grad", "10", "epoch", "100", "batch", "size", "1024", "optimizer", "AdamW", "Hyperparameters", "Track", "Transformer" ]
/home/yz979/palmer_scratch/chengye/SciMolmo-LaTeX-Source-Processing-Toolkit/pdfparser/pdf2latex/2024_new/temp/figures_2401.00025_1762245738/figures/2401.00025-TableVII-1.png
{ "x1": 49.60200119018555, "x2": 295.07049560546875, "y1": 56.36655044555664, "y2": 62.3690185546875 }
renderDpi: 200 | name: 12 | page: 15 | figType: Figure
{ "x1": 63.72, "x2": 283.32, "y1": 237.95999999999998, "y2": 407.88 }
Fig. 12: To summarize spatial information, we perform self-attention over a sequence consisting of all views' track and image patches and a CLS token. To integrate information across time, we perform causal self-attention between the spatial CLS token, proprioception, and a per-timestep action CLS token. To regress actions, we concatenate each timestep's action CLS token with the proposed tracks.
[ "early", "fusion", "late", "fusion", "𝑎!", "MLP", "Head", "action", "cls", "timetime", "timespatial", "state", "joint,", "gripper", "state", "Temporal", "Transformer", "Spatial", "Transformer", "wrist", "view", "predicted", "tracks", "base", "view" ]
/home/yz979/palmer_scratch/chengye/SciMolmo-LaTeX-Source-Processing-Toolkit/pdfparser/pdf2latex/2024_new/temp/figures_2401.00025_1762245738/figures/2401.00025-Figure12-1.png
{ "x1": 48.95899963378906, "x2": 300.0165710449219, "y1": 419.21453857421875, "y2": 496.94793701171875 }
renderDpi: 200 | name: 1 | page: 0 | figType: Figure
{ "x1": 307.8, "x2": 546.12, "y1": 250.92, "y2": 425.15999999999997 }
Figure 1. Performance comparison on the RealBlur-J [23] test dataset in terms of PSNR and GMACs. Our proposed MLWNet outperforms other state-of-the-art methods.
[]
/home/yz979/palmer_scratch/chengye/SciMolmo-LaTeX-Source-Processing-Toolkit/pdfparser/pdf2latex/2024_new/temp/figures_2401.00027_1762245750/figures/2401.00027-Figure1-1.png
{ "x1": 308.86199951171875, "x2": 545.1087646484375, "y1": 428.3247375488281, "y2": 455.64495849609375 }
renderDpi: 200 | name: 1 | page: 5 | figType: Table
{ "x1": 49.68, "x2": 288, "y1": 267.84, "y2": 511.2 }
Table 1. Quantitative evaluations on the RealBlur dataset [23]. All models were trained on the corresponding datasets, and average runtime is measured on 256×256 patches.
[ "MLWNet-S", "33.02", "0.933", "-", "-", "0.04s", "MLWNet-B", "33.84", "0.941", "40.69", "0.976", "0.05s", "DeblurGAN-v2", "[12]", "29.69", "0.870", "36.44", "0.935", "0.04s", "SRN", "[27]", "31.38", "0.909", "38.65", "0.965", "0.07s", "MPRNet", "[37]", "31.76", "0.922", "39.31", "0.972", "0.09s", "SDWNet", "[42]", "30.73", "0.896", "38.21", "0.963", "0.04s", "MIMO-UNet+", "[3]", "31.92", "0.919", "-", "-", "0.02s", "MIMO-UNet++", "[3]", "32.05", "0.921", "-", "-", "-", "DeepRFT+", "[19]", "32.19", "0.931", "39.84", "0.972", "0.09s", "BANet", "[29]", "32.00", "0.923", "39.55", "0.971", "0.06s", "BANet+", "[29]", "32.42", "0.929", "39.90", "0.972", "0.12s", "Stripformer", "[28]", "32.48", "0.929", "39.84", "0.974", "0.04s", "MSSNet", "[9]", "32.10", "0.928", "39.76", "0.972", "0.06s", "MSDI-Net", "[13]", "32.35", "0.923", "-", "-", "0.06s", "MAXIM-3S", "[30]", "32.84", "0.935", "39.45", "0.962", "-", "FFTformer", "[10]", "32.62", "0.933", "40.11", "0.973", "0.13s", "GRL-B", "[14]", "32.82", "0.932", "40.20", "0.974", "1.28s", "Method", "RealBlur-J", "RealBlur-R", "Avg.runtimePSNR", "SSIM", "PSNR", "SSIM" ]
/home/yz979/palmer_scratch/chengye/SciMolmo-LaTeX-Source-Processing-Toolkit/pdfparser/pdf2latex/2024_new/temp/figures_2401.00027_1762245750/figures/2401.00027-Table1-1.png
{ "x1": 50.11199951171875, "x2": 286.35870361328125, "y1": 516.1867065429688, "y2": 543.5069580078125 }
renderDpi: 200 | name: 2 | page: 5 | figType: Table
{ "x1": 309.96, "x2": 544.3199999999999, "y1": 267.84, "y2": 404.28 }
Table 2. Quantitative evaluations of models trained on the RSBlur dataset [24]; the RealBlur-J dataset was used for testing only.
[ "MLWNet-B", "34.94", "0.880", "30.53", "0.905", "SRN", "[27]", "32.53", "0.840", "29.86", "0.886", "MIMO-Unet", "[3]", "32.73", "0.846", "29.53", "0.876", "MIMO-Unet+", "[3]", "33.37", "0.856", "29.99", "0.889", "MPRNet", "[37]", "33.61", "0.861", "30.46", "0.899", "Restormer", "[38]", "33.69", "0.863", "30.48", "0.891", "Uformer-B", "[33]", "33.98", "0.866", "30.37", "0.899", "SFNet", "[4]", "34.35", "0.872", "30.26", "0.897", "Method", "RSBlur", "RealBlur-JPSNR", "SSIM", "PSNR", "SSIM" ]
/home/yz979/palmer_scratch/chengye/SciMolmo-LaTeX-Source-Processing-Toolkit/pdfparser/pdf2latex/2024_new/temp/figures_2401.00027_1762245750/figures/2401.00027-Table2-1.png
{ "x1": 308.86199951171875, "x2": 545.1087646484375, "y1": 408.59075927734375, "y2": 424.9520263671875 }
renderDpi: 200 | name: 4 | page: 5 | figType: Figure
{ "x1": 49.68, "x2": 546.12, "y1": 72, "y2": 244.07999999999998 }
Figure 4. Visual comparisons on the RealBlur-J dataset [23]. The proposed method generates an image with clearer characters.
[]
/home/yz979/palmer_scratch/chengye/SciMolmo-LaTeX-Source-Processing-Toolkit/pdfparser/pdf2latex/2024_new/temp/figures_2401.00027_1762245750/figures/2401.00027-Figure4-1.png
{ "x1": 70.08799743652344, "x2": 525.1326904296875, "y1": 247.72177124023438, "y2": 253.1240234375 }
renderDpi: 200 | name: 3 | page: 5 | figType: Table
{ "x1": 311.76, "x2": 542.16, "y1": 436.68, "y2": 573.12 }
Table 3. Quantitative evaluation of generalizability: results of models trained on the RealBlur-J dataset and tested on the RSBlur dataset; MACs are measured on 256×256 patches.
[ "MLWNet-B", "30.91", "0.818", "108.2", "Method", "[23]→", "[24]", "MACs(G)PSNR", "SSIM", "DeblurGAN-v2", "[12]", "30.15", "0.766", "42.0", "MPRNet", "[37]", "29.56", "0.785", "760.8", "MIMO-UNet+", "[3]", "29.69", "0.792", "154.4", "BANet", "[29]", "30.19", "0.806", "263.9", "BANet+", "[29]", "30.24", "0.809", "588.7", "MSSNet", "[9]", "29.86", "0.806", "154.0", "FFTformer", "[10]", "29.70", "0.787", "131.8" ]
/home/yz979/palmer_scratch/chengye/SciMolmo-LaTeX-Source-Processing-Toolkit/pdfparser/pdf2latex/2024_new/temp/figures_2401.00027_1762245750/figures/2401.00027-Table3-1.png
{ "x1": 308.86199951171875, "x2": 545.1088256835938, "y1": 577.65771484375, "y2": 604.97802734375 }
renderDpi: 200 | name: 4 | page: 6 | figType: Table
{ "x1": 308.88, "x2": 545.04, "y1": 442.8, "y2": 663.12 }
Table 4. Quantitative evaluations trained and tested on the GoPro dataset [20]. Our proposed MLWNet obtains competitive results with a combination of time efficiency and accuracy.
[ "MLWNet-B", "33.83", "0.968", "0.05s", "DeblurGAN-v2", "[12]", "29.55", "0.934", "0.04s", "SRN", "[27]", "30.26", "0.934", "0.07s", "DMPHN", "[39]", "31.20", "0.945", "0.21s", "SDWNet", "[42]", "31.26", "0.966", "0.04s", "MPRNet", "[37]", "32.66", "0.959", "0.09s", "MIMO-UNet+", "[3]", "32.45", "0.957", "0.02s", "DeepRFT+", "[19]", "33.23", "0.963", "0.09s", "MAXIM-3S", "[30]", "32.86", "0.961", "-", "Stripformer", "[28]", "33.08", "0.962", "0.04s", "MSDI-net", "[13]", "33.28", "0.964", "0.06s", "Restormer", "[38]", "33.57", "0.966", "0.08s", "NAFNet", "[2]", "33.69", "0.967", "0.04s", "FFTformer", "[10]", "34.21", "0.969", "0.13s", "GRL-B", "[14]", "33.93", "0.968", "1.28s", "Method", "GOPRO", "Avg.runtimePSNR", "SSIM" ]
/home/yz979/palmer_scratch/chengye/SciMolmo-LaTeX-Source-Processing-Toolkit/pdfparser/pdf2latex/2024_new/temp/figures_2401.00027_1762245750/figures/2401.00027-Table4-1.png
{ "x1": 308.86199951171875, "x2": 545.1087646484375, "y1": 667.521728515625, "y2": 694.8419799804688 }
renderDpi: 200 | name: 5 | page: 6 | figType: Figure
{ "x1": 49.68, "x2": 546.12, "y1": 70.92, "y2": 249.12 }
Figure 5. Visual comparisons on the RSBlur dataset [24]. The deblurring performance of the proposed method in low light is impressive; its recovery of characters and texture structures far exceeds other advanced methods.
[]
/home/yz979/palmer_scratch/chengye/SciMolmo-LaTeX-Source-Processing-Toolkit/pdfparser/pdf2latex/2024_new/temp/figures_2401.00027_1762245750/figures/2401.00027-Figure5-1.png
{ "x1": 50.11199951171875, "x2": 545.1110229492188, "y1": 252.10073852539062, "y2": 268.46197509765625 }
renderDpi: 200 | name: 6 | page: 6 | figType: Figure
{ "x1": 49.68, "x2": 546.12, "y1": 269.28, "y2": 424.08 }
Figure 6. Visual comparisons on the GoPro dataset [20]. Our method better preserves texture information without sharpening.
[]
/home/yz979/palmer_scratch/chengye/SciMolmo-LaTeX-Source-Processing-Toolkit/pdfparser/pdf2latex/2024_new/temp/figures_2401.00027_1762245750/figures/2401.00027-Figure6-1.png
{ "x1": 71.73300170898438, "x2": 523.4960327148438, "y1": 427.21875, "y2": 432.6210021972656 }
renderDpi: 200 | name: 2 | page: 2 | figType: Figure
{ "x1": 49.68, "x2": 546.12, "y1": 70.92, "y2": 287.28 }
Figure 2. The overall architecture of the proposed MLWNet. The SEB is a simple module designed following [2]; the WFB and WHB apply the LWN, which implements the learnable 2D-DWT. In the training phase, supervised learning is performed using Lmulti and a self-supervised constraint on the wavelet kernel is applied using Lwavelet. In the testing phase, only the highest-scale restored image is output.
[]
/home/yz979/palmer_scratch/chengye/SciMolmo-LaTeX-Source-Processing-Toolkit/pdfparser/pdf2latex/2024_new/temp/figures_2401.00027_1762245750/figures/2401.00027-Figure2-1.png
{ "x1": 50.11199951171875, "x2": 545.1110229492188, "y1": 292.7017517089844, "y2": 321.083984375 }
renderDpi: 200 | name: 6 | page: 7 | figType: Table
{ "x1": 49.68, "x2": 295.2, "y1": 425.88, "y2": 509.03999999999996 }
Table 6. Ablation study on components of the proposed MLWNet. The baseline network uses SEB throughout, and models without SIMO use SISO to represent a single scale.
[ "✓", "32.37", "0.929", "19.29", "✓", "✓", "✓", "32.40", "0.928", "28.21", "✓", "✓", "✓", "32.49", "0.928", "25.28", "✓", "✓", "✓", "32.57", "0.929", "22.22", "✓", "✓", "✓", "✓", "32.62", "0.931", "28.21", "SIMO", "WFB", "WHB", "Lwavelet", "PSNR", "SSIM", "MACs(G)" ]
/home/yz979/palmer_scratch/chengye/SciMolmo-LaTeX-Source-Processing-Toolkit/pdfparser/pdf2latex/2024_new/temp/figures_2401.00027_1762245750/figures/2401.00027-Table6-1.png
{ "x1": 50.11199951171875, "x2": 286.3587341308594, "y1": 513.9717407226562, "y2": 541.2919921875 }
renderDpi: 200 | name: 7 | page: 7 | figType: Table
{ "x1": 310.68, "x2": 543.24, "y1": 324.71999999999997, "y2": 372.24 }
Table 7. Performance comparison at different noise-difference levels, where L3 corresponds to the mean noise difference.
[ "GoPro", "34.81", "34.66", "33.76", "33.19", "32.63", "RealBlur-J", "33.92", "33.81", "33.97", "33.93", "33.54", "Noise", "Level", "L1", "L2", "L3", "L4", "L5" ]
/home/yz979/palmer_scratch/chengye/SciMolmo-LaTeX-Source-Processing-Toolkit/pdfparser/pdf2latex/2024_new/temp/figures_2401.00027_1762245750/figures/2401.00027-Table7-1.png
{ "x1": 308.86199951171875, "x2": 545.1087036132812, "y1": 376.8697509765625, "y2": 393.23101806640625 }
renderDpi: 200 | name: 8 | page: 7 | figType: Figure
{ "x1": 307.8, "x2": 546.12, "y1": 163.79999999999998, "y2": 278.28 }
Figure 8. The difference between realistic blur (a) and synthetic blur (b). In the green box, the synthetic blur shows color averaging that confuses high- and low-frequency content; in the blue box, it shows unnatural, discontinuous trajectories.
[]
/home/yz979/palmer_scratch/chengye/SciMolmo-LaTeX-Source-Processing-Toolkit/pdfparser/pdf2latex/2024_new/temp/figures_2401.00027_1762245750/figures/2401.00027-Figure8-1.png
{ "x1": 308.86199951171875, "x2": 545.108642578125, "y1": 281.5787353515625, "y2": 319.85699462890625 }
renderDpi: 200 | name: 5 | page: 7 | figType: Table
{ "x1": 49.68, "x2": 288, "y1": 374.76, "y2": 411.12 }
Table 5. Comparison in various input and output modes.
[ "Method", "SISO", "MIMO", "SIMO", "PSNR/SSIM", "32.29/0.924", "32.19/0.928", "32.37/0.929", "MACs(G)", "19.24", "21.83", "19.29" ]
/home/yz979/palmer_scratch/chengye/SciMolmo-LaTeX-Source-Processing-Toolkit/pdfparser/pdf2latex/2024_new/temp/figures_2401.00027_1762245750/figures/2401.00027-Table5-1.png
{ "x1": 67.32099914550781, "x2": 269.1546630859375, "y1": 415.708740234375, "y2": 421.1109924316406 }
renderDpi: 200 | name: 7 | page: 7 | figType: Figure
{ "x1": 49.68, "x2": 287.28, "y1": 546.84, "y2": 606.24 }
Figure 7. Feature maps representing high- and low-frequency components generated after the learnable wavelet convolution. Zoom in on the screen for the best view.
[ "input" ]
/home/yz979/palmer_scratch/chengye/SciMolmo-LaTeX-Source-Processing-Toolkit/pdfparser/pdf2latex/2024_new/temp/figures_2401.00027_1762245750/figures/2401.00027-Figure7-1.png
{ "x1": 50.11199951171875, "x2": 286.358642578125, "y1": 610.8157348632812, "y2": 638.135986328125 }
renderDpi: 200 | name: 3 | page: 3 | figType: Figure
{ "x1": 49.68, "x2": 287.28, "y1": 349.91999999999996, "y2": 562.3199999999999 }
Figure 3. (a) The process of learnable 2D-wavelet convolution. (b) The construction process of the N×N 2D-wavelet kernel.
[]
/home/yz979/palmer_scratch/chengye/SciMolmo-LaTeX-Source-Processing-Toolkit/pdfparser/pdf2latex/2024_new/temp/figures_2401.00027_1762245750/figures/2401.00027-Figure3-1.png
{ "x1": 50.11199951171875, "x2": 286.3586730957031, "y1": 569.6597290039062, "y2": 586.02001953125 }
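Figure 3 above describes constructing an N×N 2D-wavelet kernel. As background, a standard (non-learnable) 2D-DWT builds its four subband kernels from 1D low-pass and high-pass filters via outer products; a sketch with fixed Haar filters follows — the paper's learnable construction is more involved, so this is background only.

```python
import numpy as np

# 1D Haar analysis filters (orthonormal).
low = np.array([1.0, 1.0]) / np.sqrt(2)    # low-pass
high = np.array([1.0, -1.0]) / np.sqrt(2)  # high-pass

# The four 2D subband kernels as outer products of the 1D filters.
kernels = {
    "LL": np.outer(low, low),    # approximation
    "LH": np.outer(low, high),   # horizontal detail
    "HL": np.outer(high, low),   # vertical detail
    "HH": np.outer(high, high),  # diagonal detail
}
```

Applying each 2×2 kernel with stride 2 yields the familiar four-subband decomposition; a learnable variant would parameterize `low`/`high` and constrain them (e.g. via a wavelet loss) to remain valid filters.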
renderDpi: 200 | name: 1 | page: 0 | figType: Figure
{ "x1": 329.4, "x2": 511.2, "y1": 252.72, "y2": 428.03999999999996 }
Figure 1. Mean word accuracy vs. parameters on the 6 common test benchmarks. P-Ti, P-S, and P-B refer to PARSeq-Ti, PARSeq-S, and PARSeq-B, respectively. * indicates training with REBU-Syn.
[ "MaskOCR", "SRN", "ABINet", "MAERec", "parseq", "parseq*", "CLIP4STR", "CLIP4STR*", "TrOCR", "ABINet", "MAERec", "SRN", "MaskOCR-B", "MaskOCR-L", "TrOCR-L", "TrOCR-B", "CLIP4STR-L", "CLIP4STR-B*", "CLIP4STR-L*", "CLIP4STR-B", "P-S*", "P-B*", "P-S", "P-B", "P-Ti", "[%", "]", "cu", "ra", "cy", "e", "W", "or", "d", "Ac", "Av", "er", "ag", "97", "96", "95", "94", "93", "92", "91", "0", "100", "200", "300", "400", "500", "Parameters(M)" ]
/home/yz979/palmer_scratch/chengye/SciMolmo-LaTeX-Source-Processing-Toolkit/pdfparser/pdf2latex/2024_new/temp/figures_2401.00028_1762245767/figures/2401.00028-Figure1-1.png
{ "x1": 308.86199951171875, "x2": 545.1087036132812, "y1": 442.6107482910156, "y2": 480.88995361328125 }
renderDpi: 200 | name: 4 | page: 5 | figType: Figure
{ "x1": 325.44, "x2": 511.2, "y1": 319.68, "y2": 442.44 }
Figure 4. Average word error rate on the 6 common test benchmarks, with respect to images seen (batch size times number of steps) during the PARSeq training stage for different model sizes.
[ "parseq_l", "parseq_b", "parseq_s", "5.0", "5.5", "Av", "er", "ag", "e", "Er", "ro", "r", "R", "at", "e[", "%", "]", "4.5", "4.0", "3.5", "102", "2×102", "Images", "Seen(M)" ]
/home/yz979/palmer_scratch/chengye/SciMolmo-LaTeX-Source-Processing-Toolkit/pdfparser/pdf2latex/2024_new/temp/figures_2401.00028_1762245767/figures/2401.00028-Figure4-1.png
{ "x1": 308.86199951171875, "x2": 545.1087646484375, "y1": 457.1807556152344, "y2": 484.5009765625 }
renderDpi: 200 | name: 3 | page: 5 | figType: Figure
{ "x1": 75.24, "x2": 242.28, "y1": 318.24, "y2": 435.24 }
Figure 3. The average word error rate on the 6 common test benchmarks plotted against PARSeq model size. The solid line represents the fitted power law E(·), and the points on the dotted line correspond to the power-law equation.
[ "E=", "(6.316", "⋅", "10−74/N)0.018", "]", "at", "e", "[%", "e", "W", "or", "d", "Er", "ro", "r", "R", "Av", "er", "ag", "3.15", "3.1", "3.05", "3.0", "2.95", "108", "2.9" ]
/home/yz979/palmer_scratch/chengye/SciMolmo-LaTeX-Source-Processing-Toolkit/pdfparser/pdf2latex/2024_new/temp/figures_2401.00028_1762245767/figures/2401.00028-Figure3-1.png
{ "x1": 50.11199951171875, "x2": 286.35870361328125, "y1": 447.01776123046875, "y2": 485.2969970703125 }
renderDpi: 200 | name: 2 | page: 5 | figType: Figure
{ "x1": 59.04, "x2": 528.12, "y1": 85.67999999999999, "y2": 204.12 }
Figure 2. Improvement in TrOCR model performance with increasing model size, data volume, and training computation. Model performance is measured by the average word error rate on 6 common test benchmarks. Left: model performance as model size changes. Center: model performance as data volume varies. Right: performance with different data sizes under varying computational resources; the x-axis represents the model's training time, measured in 8-GPU hours. For optimal performance, all three factors must be scaled up in tandem. Empirical performance exhibits a power-law relationship with each individual factor when it is not constrained by the other two.
[ "(c)", "Computation", "(training", "hours)", "E=", "(4.45", "⋅", "104/C)−0.764", "3.75M", "7.5M", "15M", "]", "at", "e", "[%", "e", "W", "or", "d", "Er", "ro", "r", "R", "Av", "er", "ag", "6×101", "4×101", "3×101", "2×101", "1032×102", "3×102", "4×102", "6×102", "(b)", "Data", "volume", "(M)", "E=", "(1.84", "⋅", "105/D)−0.3271", "]", "at", "e", "[%", "e", "W", "or", "d", "Er", "ro", "r", "R", "Av", "er", "ag", "60", "50", "40", "30", "20", "106", "107", "(a)", "Model", "size", "(Params)", "E=", "(1.97", "⋅", "104/N)0.223", "]", "at", "e", "[%", "e", "W", "or", "d", "Er", "ro", "r", "R", "Av", "er", "ag", "18", "16", "14", "12", "10", "108", "109" ]
/home/yz979/palmer_scratch/chengye/SciMolmo-LaTeX-Source-Processing-Toolkit/pdfparser/pdf2latex/2024_new/temp/figures_2401.00028_1762245767/figures/2401.00028-Figure2-1.png
{ "x1": 50.11199951171875, "x2": 545.1143798828125, "y1": 219.78976440429688, "y2": 279.9869384765625 }
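The captions in this row and the neighboring ones report power-law fits of error rate against model size, data volume, and compute. Such an exponent is typically recovered by linear regression in log-log space; a minimal sketch on synthetic data (the constants below are made up for illustration, not the paper's fitted values).

```python
import numpy as np

def fit_power_law(x, e):
    """Fit log e = a * log x + b by least squares; returns (a, b).
    For e = K * x**a this recovers the exponent a and b = log K."""
    a, b = np.polyfit(np.log(x), np.log(e), 1)
    return a, b

x = np.array([1e6, 1e7, 1e8])   # e.g. model sizes
e = 100.0 * x ** -0.2           # synthetic error rates, exponent -0.2
a, b = fit_power_law(x, e)      # a recovers -0.2 exactly on clean data
```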
renderDpi: 200 | name: 1 | page: 10 | figType: Figure
{ "x1": 123.83999999999999, "x2": 471.24, "y1": 72, "y2": 649.0799999999999 }
Figure 1. Error analysis on the Union14M benchmark. We select three representative models and show their prediction results (black text indicates correct predictions; red text indicates incorrect ones).
[]
/home/yz979/palmer_scratch/chengye/SciMolmo-LaTeX-Source-Processing-Toolkit/pdfparser/pdf2latex/2024_new/temp/figures_2401.00028_1762245767/figures/2401.00028-Figure1-1.png
{ "x1": 50.11199951171875, "x2": 545.1109619140625, "y1": 673.1716918945312, "y2": 689.5330200195312 }
renderDpi: 200 | name: 10 | page: 6 | figType: Table
{ "x1": 333.71999999999997, "x2": 519.12, "y1": 191.88, "y2": 244.07999999999998 }
Table 10. Average accuracy achieved with visual-task pre-training and OCR-task pre-training on the 6 common test benchmarks.
[ "ImageNet-21k", "R+E+B+U+Syn", "ViT-L", "96.74", "R+E+B+U+Syn", "R+E+B+U", "ViT-S", "96.96", "Scratch", "R+E+B+U+Syn", "ViT-L", "97.03", "Pretrain", "Dataset", "Backbone", "Word", "Acc", "Scratch", "R+E+B+U", "ViT-S", "96.12", "Scratch", "R+E+B+U+Syn", "ViT-S", "96.85" ]
/home/yz979/palmer_scratch/chengye/SciMolmo-LaTeX-Source-Processing-Toolkit/pdfparser/pdf2latex/2024_new/temp/figures_2401.00028_1762245767/figures/2401.00028-Table10-1.png
{ "x1": 308.86199951171875, "x2": 545.10888671875, "y1": 256.0837707519531, "y2": 272.44500732421875 }
renderDpi: 200 | name: 8 | page: 6 | figType: Table
{ "x1": 81.72, "x2": 255.23999999999998, "y1": 371.88, "y2": 407.15999999999997 }
Table 8. PARSeq-S average accuracy when integrating diverse synthetic and real data types.
[ "Real", "DataSet", "Syn", "DataSet", "Data", "Ratio", "Word", "Acc", "R+E+B+U", "Syn", "1:0.5", "96.19", "R+E+B+U", "MJ+ST", "1:2.5", "96.24", "R+E+B+U", "MJ+ST+Syn", "1:3", "96.85" ]
/home/yz979/palmer_scratch/chengye/SciMolmo-LaTeX-Source-Processing-Toolkit/pdfparser/pdf2latex/2024_new/temp/figures_2401.00028_1762245767/figures/2401.00028-Table8-1.png
{ "x1": 50.11199951171875, "x2": 286.3586730957031, "y1": 418.74273681640625, "y2": 435.1029968261719 }
renderDpi: 200 | name: 9 | page: 6 | figType: Table
{ "x1": 115.92, "x2": 220.32, "y1": 580.68, "y2": 650.16 }
Table 9. PARSeq-S average accuracy on 6 common test benchmarks with varying ratios of synthetic and real data.
[ "Data", "Ratio", "Word", "Acc", "Real:Syn=1:0.5", "96.32", "Real:Syn=1:1", "96.50", "Real:Syn=1:2", "96.59", "Real:Syn=1:3", "96.85", "Real:Syn=1:4", "96.76", "Real:Syn=1:5", "95.70" ]
/home/yz979/palmer_scratch/chengye/SciMolmo-LaTeX-Source-Processing-Toolkit/pdfparser/pdf2latex/2024_new/temp/figures_2401.00028_1762245767/figures/2401.00028-Table9-1.png
{ "x1": 50.11199951171875, "x2": 286.3587341308594, "y1": 656.4867553710938, "y2": 672.8480224609375 }
renderDpi: 200 | name: 4 | page: 9 | figType: Table
{ "x1": 92.88, "x2": 245.16, "y1": 460.79999999999995, "y2": 504 }
Table 4. Average accuracy with language-specific pre-training on the benchmark test set; models are trained on the real REB dataset.
[ "Arabic", "PARSeq", "REB", "95.62", "Cn-En", "PARSeq", "REB", "95.81", "Latin", "PARSeq", "REB", "95.82", "Pretrain", "Model", "Datasets", "Word", "Acc", "From", "Scratch", "PARSeq", "REB", "95.60" ]
/home/yz979/palmer_scratch/chengye/SciMolmo-LaTeX-Source-Processing-Toolkit/pdfparser/pdf2latex/2024_new/temp/figures_2401.00028_1762245767/figures/2401.00028-Table4-1.png
{ "x1": 50.11199951171875, "x2": 286.358642578125, "y1": 526.24072265625, "y2": 542.6019897460938 }
renderDpi: 200 | name: 3 | page: 9 | figType: Table
{ "x1": 54, "x2": 541.0799999999999, "y1": 72, "y2": 429.12 }
Table 3. Word accuracy on Union14M benchmark, * indicates training with REBU-Syn.
[ "PARSeq-S*", "REBU-Syn", "85.2", "89.4", "94.0", "88.0", "93.1", "89.9", "89.8", "89.9", "CLIP4STR-B*", "REBU-Syn", "88.6", "90.1", "96.4", "89.1", "96.3", "92.2", "91.9", "92.1", "CLIP4STR-L*", "REBU-Syn", "88.6", "90.4", "96.4", "89.3", "97.2", "90.7", "92.7", "92.2", "PARSeq-S", "[8]", "R", "81.7", "86.5", "91.1", "86.5", "89.3", "85.3", "84.6", "86.5", "CLIP4STR-B", "[97]", "R", "86.5", "92.2", "96.3", "89.9", "96.1", "88.9", "91.2", "91.6", "CLIP4STR-L", "[97]", "R", "87.2", "91.0", "97.0", "90.3", "96.6", "89.9", "91.5", "91.9", "SRN", "[92]", "Union14M", "47.6", "57.9", "48.7", "60.7", "20.0", "27.9", "41.6", "43.5", "ABINet", "[21]", "Union14M", "62.2", "66.3", "73.0", "75.6", "59.6", "43.1", "69.5", "64.2", "VisionLAN", "[87]", "Union14M", "54.4", "60.1", "68.8", "72.1", "55.2", "37.9", "64.7", "59.0", "MATRN", "[56]", "Union14M", "67.3", "71.0", "79.3", "78.4", "66.0", "53.8", "74.9", "70.0", "MAERec-S", "[31]", "Union14M-L", "68.9", "77.8", "79.3", "80.4", "69.5", "51.9", "75.1", "71.8", "MAERec-B", "[31]", "Union14M-L", "75.9", "80.7", "86.6", "83.8", "82.1", "56.2", "82.2", "78.2", "SATRN", "[39]", "Union14M", "64.3", "71.1", "73.0", "78.8", "64.7", "47.4", "69.2", "66.9", "RobustScanner", "[94]", "Union14M", "58.7", "72.7", "64.2", "73.5", "52.8", "47.8", "56.9", "60.9", "MORAN", "[51]", "Union14M", "44.3", "51.1", "42.4", "42.9", "12.4", "36.8", "41.0", "38.7", "ASTER", "[72]", "Union14M", "39.2", "47.9", "37.4", "64.4", "12.5", "34.5", "30.2", "38.0", "NRTR", "[69]", "Union14M", "51.8", "65.1", "47.9", "72.9", "39.1", "51.4", "40.1", "52.6", "SAR", "[42]", "Union14M", "58.0", "69.0", "66.9", "73.7", "54.7", "51.2", "57.0", "61.5", "DAN", "[85]", "Union14M", "47.0", "56.6", "44.6", "66.7", "22.1", "39.8", "41.5", "45.5", "SRN", "[92]", "MJ+ST", "34.1", "28.7", "63.4", "46.3", "25.3", "26.7", "56.5", "40.1", "ABINet", "[21]", "MJ+ST", "43.3", "38.3", "59.5", "55.6", "12.7", "50.8", "62.0", "46.0", "VisionLAN", "[87]", "MJ+ST", "47.8", "48.0", 
"57.7", "52.1", "14.2", "47.9", "64.0", "47.4", "MATRN", "[56]", "MJ+ST", "43.8", "41.9", "63.1", "57.0", "13.4", "53.2", "66.4", "48.4", "CRNN", "[70]", "Union14M", "31.9", "39.3", "18.9", "58.1", "4.3", "21.5", "15.1", "27.0", "SVTR", "[20]", "Union14M", "50.2", "63.0", "70.5", "74.7", "66.6", "42.6", "71.4", "62.7", "SATRN", "[39]", "MJ+ST", "48.0", "45.3", "51.1", "58.5", "15.8", "52.5", "62.7", "47.7", "RobustScanner", "[94]", "MJ+ST", "41.2", "42.6", "43.6", "39.5", "7.9", "46.9", "44.9", "38.1", "Method", "Training", "data", "Artistic", "Contextless", "Curve", "General", "Multi-Oriented", "Multi-Words", "Salient", "Avg", "CRNN", "[70]", "MJ+ST", "20.7", "25.6", "7.5", "32.0", "0.9", "25.6", "13.9", "18.0", "SVTR", "[20]", "MJ+ST", "37.9", "44.2", "63.0", "52.8", "32.1", "49.1", "67.5", "49.5", "MORAN", "[51]", "MJ+ST", "29.4", "20.7", "8.9", "35.2", "0.7", "23.8", "17.9", "19.5", "ASTER", "[72]", "MJ+ST", "27.7", "33.0", "34.0", "39.8", "10.2", "27.6", "48.2", "31.5", "NRTR", "[69]", "MJ+ST", "36.6", "37.3", "31.7", "48.0", "4.4", "54.9", "30.6", "34.8", "SAR", "[42]", "MJ+ST", "42.6", "44.2", "44.3", "50.5", "7.7", "51.2", "44.0", "40.6", "DAN", "[85]", "MJ+ST", "35.0", "40.3", "26.7", "42.1", "1.5", "42.2", "36.5", "32.0" ]
/home/yz979/palmer_scratch/chengye/SciMolmo-LaTeX-Source-Processing-Toolkit/pdfparser/pdf2latex/2024_new/temp/figures_2401.00028_1762245767/figures/2401.00028-Table3-1.png
{ "x1": 139.29400634765625, "x2": 455.9335632324219, "y1": 441.2557373046875, "y2": 446.6579895019531 }
renderDpi: 200 | name: 6 | page: 13 | figType: Table
{ "x1": 49.68, "x2": 287.28, "y1": 382.68, "y2": 437.03999999999996 }
Table 6. Word accuracy with different model sizes of PARSeq. Test data: Union14M.
[ "Method", "Param", "(M)", "Avg", "PARSeq-S", "22.5", "89.89", "PARSeq-B", "104.0", "90.37", "PARSeq-L", "335.9", "90.81" ]
/home/yz979/palmer_scratch/chengye/SciMolmo-LaTeX-Source-Processing-Toolkit/pdfparser/pdf2latex/2024_new/temp/figures_2401.00028_1762245767/figures/2401.00028-Table6-1.png
{ "x1": 50.11199951171875, "x2": 286.3587341308594, "y1": 449.23675537109375, "y2": 465.5980224609375 }
renderDpi: 200 | name: 7 | page: 13 | figType: Table
{ "x1": 109.8, "x2": 226.07999999999998, "y1": 469.79999999999995, "y2": 505.08 }
Table 7. Word accuracy with different model sizes of CLIP4STR. Test data: Union14M.
[ "Method", "Param", "(M)", "Avg", "CLIP4STR-S", "43.6", "91.90", "CLIP4STR-B", "86.7", "92.08", "CLIP4STR-L", "268.2", "92.19" ]
/home/yz979/palmer_scratch/chengye/SciMolmo-LaTeX-Source-Processing-Toolkit/pdfparser/pdf2latex/2024_new/temp/figures_2401.00028_1762245767/figures/2401.00028-Table7-1.png
{ "x1": 50.11199951171875, "x2": 286.3587341308594, "y1": 516.7887573242188, "y2": 533.1500244140625 }
renderDpi: 200 | name: 8 | page: 13 | figType: Table
{ "x1": 123.83999999999999, "x2": 213.12, "y1": 657, "y2": 683.28 }
Table 8. Accuracy for CLIP4STR-L on FUNSD.
[ "Model", "Word", "Acc", "CLIP4STR-L", "96.02", "CLIP4STR-L*", "96.50" ]
/home/yz979/palmer_scratch/chengye/SciMolmo-LaTeX-Source-Processing-Toolkit/pdfparser/pdf2latex/2024_new/temp/figures_2401.00028_1762245767/figures/2401.00028-Table8-1.png
{ "x1": 80.98600006103516, "x2": 255.49005126953125, "y1": 695.2807006835938, "y2": 700.6829833984375 }
renderDpi: 200 | name: 1 | page: 2 | figType: Table
{ "x1": 58.68, "x2": 278.28, "y1": 498.96, "y2": 551.16 }
Table 1. Architecture specifications of TrOCR variants.
[ "Model", "Encoder", "FLOPs", "(G)", "Params", "(M)layers", "hidden", "sizes", "heads", "TROCR-S", "12", "384", "6", "13.31", "43.09", "TROCR-B", "12", "768", "12", "62.01", "281.87", "TROCR-L", "24", "1024", "16", "191.00", "505.50", "TROCR-H", "48", "1200", "16", "497.91", "1037.61" ]
/home/yz979/palmer_scratch/chengye/SciMolmo-LaTeX-Source-Processing-Toolkit/pdfparser/pdf2latex/2024_new/temp/figures_2401.00028_1762245767/figures/2401.00028-Table1-1.png
{ "x1": 68.98899841308594, "x2": 267.4870910644531, "y1": 559.5577392578125, "y2": 564.9600219726562 }
renderDpi: 200 | name: 2 | page: 2 | figType: Table
{ "x1": 316.8, "x2": 537.12, "y1": 200.88, "y2": 253.07999999999998 }
Table 2. Architecture specifications of PARSeq variants.
[ "Model", "Encoder", "FLOPs", "(G)", "Params", "(M)layers", "hidden", "sizes", "heads", "PARSeq-S", "12", "384", "6", "2.76", "22.51", "PARSeq-B", "12", "768", "12", "17.20", "104.01", "PARSeq-L", "24", "1024", "16", "49.90", "335.92", "PARSeq-H", "32", "1280", "16", "98.10", "682.14" ]
/home/yz979/palmer_scratch/chengye/SciMolmo-LaTeX-Source-Processing-Toolkit/pdfparser/pdf2latex/2024_new/temp/figures_2401.00028_1762245767/figures/2401.00028-Table2-1.png
{ "x1": 326, "x2": 527.97705078125, "y1": 260.5127258300781, "y2": 265.91497802734375 }
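The preview rows above follow the column schema renderDpi, name, page, figType, regionBoundary, caption, imageText, renderURL, captionBoundary. A minimal sketch of working with such rows in Python; the two sample rows below reuse values visible in the preview, and loading the full split would typically go through `pandas.read_parquet` or the Hugging Face `datasets` library rather than hand-written dicts.

```python
# Two abbreviated rows copied from the preview (boundaries in PDF points).
rows = [
    {"renderDpi": 200, "name": "10", "page": 8, "figType": "Figure",
     "regionBoundary": {"x1": 326.88, "x2": 548.28, "y1": 55.8, "y2": 196.2},
     "captionBoundary": {"x1": 311.97, "x2": 563.03, "y1": 207.46, "y2": 249.33}},
    {"renderDpi": 200, "name": "VIII", "page": 15, "figType": "Table",
     "regionBoundary": {"x1": 346.68, "x2": 494.28, "y1": 75.96, "y2": 214.2},
     "captionBoundary": {"x1": 320.05, "x2": 523.71, "y1": 60.85, "y2": 66.85}},
]

def box_height(box):
    """Height of a bounding box in PDF points (y grows downward)."""
    return box["y2"] - box["y1"]

# Split the preview rows by figType.
figures = [r for r in rows if r["figType"] == "Figure"]
tables = [r for r in rows if r["figType"] == "Table"]
```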