yilunzhao committed on
Commit b0cc66d · verified · 1 Parent(s): b901b4f

Add files using upload-large-folder tool

This view is limited to 50 files because it contains too many changes. See the raw diff for the full list.

Files changed (50)
  1. 20240101/2106.11469v3.json +321 -0
  2. 20240101/2205.00442v2.json +77 -0
  3. 20240101/2208.09709v2.json +0 -0
  4. 20240101/2208.09894v3.json +0 -0
  5. 20240101/2211.01229v2.json +277 -0
  6. 20240101/2212.10772v5.json +0 -0
  7. 20240101/2302.14420v2.json +577 -0
  8. 20240101/2304.08842v3.json +0 -0
  9. 20240101/2304.14274v4.json +0 -0
  10. 20240101/2305.09126v3.json +0 -0
  11. 20240101/2305.14669v3.json +394 -0
  12. 20240101/2305.17760v6.json +560 -0
  13. 20240101/2306.00613v2.json +319 -0
  14. 20240101/2306.11250v2.json +0 -0
  15. 20240101/2306.13746v2.json +169 -0
  16. 20240101/2306.16846v3.json +552 -0
  17. 20240101/2307.12083v3.json +618 -0
  18. 20240101/2308.04102v3.json +599 -0
  19. 20240101/2308.12682v2.json +0 -0
  20. 20240101/2309.12269v4.json +0 -0
  21. 20240101/2309.14181v3.json +0 -0
  22. 20240101/2310.02128v2.json +0 -0
  23. 20240101/2310.15790v2.json +0 -0
  24. 20240101/2311.00912v3.json +96 -0
  25. 20240101/2311.04014v3.json +133 -0
  26. 20240101/2312.01324v2.json +369 -0
  27. 20240101/2312.09086v2.json +0 -0
  28. 20240101/2312.10661v2.json +583 -0
  29. 20240101/2312.10841v2.json +490 -0
  30. 20240101/2312.11706v3.json +250 -0
  31. 20240101/2312.13108v2.json +613 -0
  32. 20240101/2312.14557v2.json +0 -0
  33. 20240101/2312.16767v2.json +552 -0
  34. 20240101/2312.17046v2.json +829 -0
  35. 20240101/2312.17660v2.json +0 -0
  36. 20240101/2401.00617v1.json +0 -0
  37. 20240101/2401.00632v1.json +349 -0
  38. 20240101/2401.00633v1.json +544 -0
  39. 20240101/2401.00642v1.json +289 -0
  40. 20240101/2401.00644v1.json +0 -0
  41. 20240101/2401.00650v1.json +0 -0
  42. 20240101/2401.00652v1.json +808 -0
  43. 20240101/2401.00653v1.json +369 -0
  44. 20240101/2401.00657v1.json +384 -0
  45. 20240101/2401.00658v1.json +127 -0
  46. 20240101/2401.00661v1.json +0 -0
  47. 20240101/2401.00662v1.json +0 -0
  48. 20240101/2401.00663v1.json +261 -0
  49. 20240101/2401.00678v1.json +659 -0
  50. 20240101/2401.00682v1.json +110 -0
20240101/2106.11469v3.json ADDED
@@ -0,0 +1,321 @@
1
+ {
2
+ "title": "Real-Time XFEL Data Analysis at SLAC and NERSC: a Trial Run of Nascent Exascale Experimental Data Analysis",
3
+ "abstract": "X-ray scattering experiments using Free Electron Lasers\n(XFELs) are a powerful tool to determine the molecular structure and\nfunction of unknown samples (such as COVID-19 viral proteins). XFEL\nexperiments are a challenge to computing in two ways: i) due to the high\ncost of running XFELs, a fast turnaround time from data acquisition to data\nanalysis is essential to make informed decisions on experimental protocols;\nii) data-collection rates are growing exponentially, requiring new scalable\nalgorithms. Here we report our experiences analyzing data from two\nexperiments at the Linac Coherent Light Source (LCLS) during September\n2020. Raw data were analyzed on NERSC\u2019s Cori XC40 system, using the\nSuperfacility paradigm: our workflow automatically moves raw data between\nLCLS and NERSC, where it is analyzed using the software package CCTBX. We\nachieved real time data analysis with a turnaround time from data\nacquisition to full molecular reconstruction in as little as 10\nmin \u2013 sufficient time for the experiment\u2019s operators to make informed\ndecisions. By hosting the data analysis on Cori, and by automating\nLCLS-NERSC interoperability, we achieved a data analysis rate which matches\nthe data acquisition rate. Completing data analysis within 10 mins is a\nfirst for XFEL experiments and an important milestone if we are to keep up\nwith data-collection trends.",
4
+ "sections": [
5
+ {
6
+ "section_id": "1",
7
+ "parent_section_id": null,
8
+ "section_name": "Introduction",
9
+ "text": "###figure_1### X-ray scattering experiments using Free Electron Lasers (XFELs) are a powerful\ntool to determine the molecular structure and function of unknown samples, such\nas COVID-19 viral proteins. The X-ray light produced by XFELs is particularly\nuseful as a tool for probing microscopic samples as it is coherent and intense,\nallowing teams of scientists to probe structural details that leave only a weak\ntrace signal 1 ###reference_1###. However, all of this comes at a significant\ncost: XFEL facilities require specialized equipment and large teams to operate.\nTo operate efficiently, it is essential that the experimental investigators\nhave immediate feedback from data analysis in order to make informed decisions\nabout their experiments in real time. By 2025 the next generation of XFEL\nexperiments will more than double the detector resolution, and increase the\nrate at which measurements are taken by a factor of over 400 compared\nto existing facilities2 ###reference_2###, 3 ###reference_3###. This will require\ncomputational intensity levels to escalate from petascale to exascale, for data\nanalysis to keep pace with data collection.\nTo rise to these challenges, the Linac Coherent Light Source (LCLS) at SLAC has\npartnered with the National Energy Research Scientific Computing Center (NERSC) at LBNL\nusing a \u201cSuperfacility\u201d model 2 ###reference_2###, 4 ###reference_4###: data\ncollected at SLAC are immediately transferred to NERSC (via ESnet) where they\nare analyzed on the Cori XC40 supercomputer111https://docs.nersc.gov/systems/cori/ ###reference_###. The results are then reported back to the experiment\u2019s operators in real time.\nIn this paper, we demonstrate the usefulness of this approach by reporting our\nexperiences from two experiments in September 2020: LV95, which consisted of\nsmall molecules related to materials science5 ###reference_5###; and P175,\nwhich consisted of COVID-19 viral proteins and potential bound ligands\n6 ###reference_6###. These experiments needed to test many samples during\nlimited beam-time. In order to know when to move on to the next sample and to\nmake changes to experimental protocol, a complete (or near complete) analysis\nof the collected data needs to happen at the same rates at which the data are\ncollected."
10
+ },
11
+ {
12
+ "section_id": "2",
13
+ "parent_section_id": null,
14
+ "section_name": "Analysing LCLS data at NERSC",
15
+ "text": "Data were collected at a peak rate of 120 images/second (approx. 1/42 of the\ndata-collection rate expected after future light source upgrades), totalling 15\nTB/day. A total of 130 TB of raw data comprising 28 million images were\ncollected during the experiments described in this paper. This is too much data\nto manage manually, therefore we use the Superfacility paradigm: our workflow\nautomatically moves raw data between LCLS and NERSC, where it is analyzed using\nthe CCTBX software package222The scripts to build CCTBX at NERSC, and the Docker image used for the data\nprocessing jobs are available here:\nhttps://gitlab.com/NERSC/lcls-software/-/tree/beamtime-2020-09/cctbx-production ###reference_ree/beamtime-2020-09/cctbx-production###7 ###reference_7###, 8 ###reference_8###.\nBy running on 64 Haswell nodes333Each \u201cHaswell node\u201d is equipped with dual sockets. Each populated by an\nIntel Xeon 2.3 GHz 16-core E5-2698 v3 \u201cHaswell\u201d processor. Each node has\n128 GB DDR4 2133 MHz memory (four 16 GB DIMMs per socket). We note that\neven though NERSC\u2019s newest Supercomputer \u201cPerlmutter\u201d was not used during\nthe beamtimes reported here, cctbx.xfel has been successfully\ndeployed on Perlmutter4 ###reference_4###. The software deployment on\nPerlmutter has been successfully demonstrated during LCLS beamtimes.\nFurthermore, the docker images used here (cf.\n2 ###reference_e2###) are fully portable from Cori to Perlmutter\u2019s CPU\nnodes. , we achieved real time data analysis with a 10 min peak turnaround time from\ndata acquisition to full molecular reconstruction \u2013 sufficient time for the\nexperiment\u2019s operators to make informed decisions between data-collecting runs.\nAt this computational intensity, the data analysis rate matches the data\nacquisition rate. This demonstrates the usefulness of the Superfacility\napproach: by automating job submission and data management, we were able to\nanalyze critical measurements within 10 mins, and most data in under 20 mins, a\nfirst for XFEL experiments and an important milestone if we are to keep up with\ninstrument data-collection trends.\nIn this paper we give a detailed step-by-step description showing how our\nworkflow is deployed on NERSC\u2019s systems; how it coordinates data movement\n(between SLAC and NERSC, discussed in section 2.1 ###reference_###) and data\nanalysis (via batch jobs at NERSC, discussed in\nsection 2.3 ###reference_###); and how CCTBX enables interactive data\nanalysis with several human operators in the loop (discussed in\nsection 2.2 ###reference_###). CCTBX (specifically the\ncctbx.xfel sub package) is a fully-automatic pipeline management system\nthat: i) tracks new incoming data, and relates it to experimental parameters\n(\u201ctags\u201d) provided by the scientists; ii) automatically submits new analysis\njobs (using containerized workers) as new data come in; and iii) reports\nanalysis results via a database hosted on NERSC\u2019s \u201cSpin\u201d micro-services\nplatform in real time (discussed in section 2.4 ###reference_###).\nThis allows a team of scientists to work on the same data via one integrated\nGUI, while CCTBX coordinates a \u201cswarm\u201d of workers behind the scenes.\nFig. 1 ###reference_### illustrates this workflow."
16
+ },
17
+ {
18
+ "section_id": "2.1",
19
+ "parent_section_id": "2",
20
+ "section_name": "Transferring Data to NERSC",
21
+ "text": "###figure_2### The LCLS data movers are responsible for the transfer of the data between the\ndifferent storage resources used by LCLS. Within the LCLS systems the data\nare moved from the data acquisition storage to the high performance\nfast-feedback storage and the large long-term analysis storage. The data\nacquisition uses local SSD based storage on each of its nodes. The\nfast-feedback storage is a shared 560 TB NVMe-SSD based file system using WekaFS\nand the analysis storage is a 4 PB spinning-disk based Lustre file system. The\nmovers also perform the data transfer to the remote HPC sites, currently\nsupporting NERSC and the SLAC Shared Scientific Data Facility (SDF). At NERSC\nthe data mover copies data directly to the SCRATCH file system. Cori\nscratch was a Lustre file system designed for high performance temporary\nstorage of large files. It had 30 PB of disk space, an aggregate I/O bandwidth of\n>700 GB/sec, and was made up of 10000+ disks and 248 I/O servers.\n The data mover is a component of the LCLS data management systems and\ncommunicates with other components by publishing and subscribing to streams of\nevents using Kafka. The main events for the data mover are subscribing to\nnew-files-created events and publishing that files have been transferred to a\nparticular storage resource. For the remote transfers the XRootD data server is\nused. Each remote site exports its shared file system through XRootD, which runs\non multiple data transfer nodes that each site provides. All servers at a site\nare clustered into a single system using XRootD\u2019s clustering functionality. The\ndata mover uses the XRootD transfer tool xrdcp in third-party copy mode.\nThe data are directly transferred between an XRootD server at the source and\ndestination, without involving the node the mover is running on. In this\ninstance the destination pulls the data from the source.\nFig. 2 ###reference_### shows the NERSC and LCLS XRootD setup. The main\nentry point into each cluster is the redirector (aka cluster manager). It\nredirects the client to the data server that should be used for reading and\nwriting the data.\nThe data mover is a Python application whose main task is to perform many\ntransfers in parallel. It has two options to discover which files to transfer:\neither monitor the experiment folder for new files or subscribe to a Kafka\nstream (the LCLS data \u201clogbook\u201d service) which signals that new files have\nbeen created. As new files are created the mover adds them to its internal\npersistent queue. The files are sorted by run and the oldest runs are\ntransferred first. A run typically consists of 12-18 files. A third of these\nfiles contain the detector data and are between a few and 100 GB in size. For\neach of these files there are two index files that allow random access to the\ndetector data. The size of the index files is less than 1% of the detector\ndata files. The observed peak ESnet transfer speeds showed bursts of 2.6 GB/s\nwhenever runs were completed. This measurement is the effective disk-to-disk\ntransfer rate, including the time it takes to read data from the Lustre file\nsystem at LCLS, and to write the incoming data to the Lustre file system at NERSC. The bursts\nare due to the data only being transferred once the runs are \u201cconcluded\u201d in\nKafka.\nThe Kafka + data mover + XRootD pipeline is fully automated and scalable. 
Once\nan experimental run is concluded (a \u201crun\u201d is usually 5-30 mins worth of data\ncollection) the raw data files, as well as index files and calibration data,\nare automatically recorded in Kafka as \u201cready to be transferred\u201d and this\npipeline will begin the transfer to NERSC. The transfer is usually completed\nwithin 3 minutes and the status is updated in Kafka as \u201cavailable at NERSC\u201d.\nThe XRootD cluster is fully scalable, allowing us to transfer all the files\ngenerated in one run at once."
22
+ },
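
The transfer loop described in this section can be sketched as follows. This is a minimal illustration, not the production data mover: the Kafka topic, broker, and redirector hostnames are hypothetical, while xrdcp's --tpc option is XRootD's real third-party-copy switch.

    import json
    import subprocess
    from kafka import KafkaConsumer  # kafka-python

    # Subscribe to "file created" events; topic and broker names are illustrative.
    consumer = KafkaConsumer(
        "file-created",
        bootstrap_servers="kafka.lcls.example:9092",
        value_deserializer=lambda m: json.loads(m.decode()),
    )

    for event in consumer:
        path = event.value["path"]  # a raw-data, index, or calibration file
        src = f"xroot://lcls-redirector.example//{path}"
        dst = f"xroot://nersc-redirector.example//scratch/{path}"
        # --tpc only: the destination server pulls directly from the source,
        # so the data never passes through the node running this mover.
        subprocess.run(["xrdcp", "--tpc", "only", src, dst], check=True)
        # A production mover would now publish a "transferred" event to Kafka.
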
23
+ {
24
+ "section_id": "2.2",
25
+ "parent_section_id": "2",
26
+ "section_name": "Pipeline Management",
27
+ "text": "###figure_3### ###figure_4### Typical XFEL experiments involve collecting multiple datasets for the same or\nsimilar samples, potentially moving them through some reaction condition and\ncapturing the structural changes as the reaction proceeds, or screening\nproteins with a variety of ligands that are biologically or pharmacologically\nrelevant, such as in the case of the COVID-19 viral proteins from experiment\nP175. Samples therefore accrue a great deal of metadata, and each run needs to\nbe associated with this metadata so datasets can be produced from the right\nsubsets of diffraction images. Therefore, the first task the user completes in\nthe cctbx.xfel GUI is tagging runs with short, descriptive terms, such\nas \u201cbatch1\u201d, \u201creactionstate2\u201d, or \u201cligand3\u201d. Multiple tags can be added\nto a dataset.\nNext, the user needs to provide processing parameters for each dataset. These\nparameters include details needed to extract reflection data, the experimental\ngeometry such as the location of the detector in 3D space, and known crystal\nproperties. These parameters will need to be updated (with better estimates)\nas the experiment progresses, and so they are organized by trials, in which the\nuser can change the parameters and re-process the data. This organization into\ntrials is particularly helpful when keeping track of which parameters were used\nduring re-processing.\nFinally, the user specifies which tags will form a dataset, mixing and matching\nthem as needed. With these properties in place, the GUI will run through a\ncycle of determining which tasks need to be performed on which data, and\nsubmitting these tasks to the cluster to be processed. The GUI monitors the\nstate of each task and continues to submit new jobs as data arrive or as\nprocessing tasks finish, allowing downstream tasks to be submitted based on upstream\nresults.\nThe cctbx.xfel GUI therefore provides the experiment\u2019s operators with a\ncomplete pipeline management tool, which lets multiple users simultaneously\nspecify analysis parameters and view analysis results. When new data or\nanalysis parameters are detected, cctbx.xfel automatically builds Slurm\njob scripts and input files, and submits these to a set of reserved compute\nnodes. Please see the video444Video available here:\u2009\nhttps://doi.org/10.5281/zenodo.7439774 ###reference_###\nfor a run-through of the cctbx.xfel GUI. By acting as the interface\nwith the supercomputer, the cctbx.xfel pipeline management system allows\nscientists to treat HPC as a reactive element. Fig. 3 ###reference_###\nshows a time series of the CPU utilization during the P175 experiment. This\nusage pattern is typical of the cctbx.xfel workflow: whenever new data\nare available, they need to be analyzed as quickly as possible, resulting in a\nsudden need for up to 64 Cori Haswell nodes."
28
+ },
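
The submit-and-monitor cycle that the GUI automates follows a simple pattern: render a Slurm script for a new (run, trial) pair, submit it, and record the job id. A simplified sketch is below; the QOS, reservation name, node count, and the processing command line are placeholders, not the scripts cctbx.xfel actually generates.

    import subprocess
    from pathlib import Path

    def submit_run(run_id: int, trial: int, params: str) -> str:
        # Render a Slurm batch script for one (run, trial) pair.
        script = "\n".join([
            "#!/bin/bash",
            "#SBATCH --qos=realtime",           # placeholder QOS
            "#SBATCH --reservation=xfel_demo",  # placeholder reservation name
            "#SBATCH --nodes=28",
            "#SBATCH --time=00:30:00",
            # placeholder command standing in for the generated processing job
            f"srun process_run run={run_id} trial={trial} {params}",
        ])
        path = Path(f"job_r{run_id:04d}_t{trial:03d}.sh")
        path.write_text(script + "\n")
        out = subprocess.run(["sbatch", str(path)],
                             capture_output=True, text=True, check=True)
        return out.stdout.strip()  # e.g. "Submitted batch job 123456"
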
29
+ {
30
+ "section_id": "2.3",
31
+ "parent_section_id": "2",
32
+ "section_name": "Processing Data on Cori Compute Nodes",
33
+ "text": "###figure_5### Data were processed on up to 64 Haswell nodes on NERSC\u2019s Cori XC40 system. The\ncomputational workload is highly variable (cf. Fig. 3 ###reference_###) depending on the nature of the data being\ncollected. XFEL data analysis follows several sequential stages: i) Identifying\nBragg spots in a diffraction image (spot finding); ii) Associating each\nBragg spot with a Miller index (indexing); iii) Refining unknown model\nparameters (refinement); iv) Integrating the Bragg spot intensities and\nsubtracting background (integrating); and v) Scaling each image and\ncombining measurements of the same Miller indices collected over several images\n(merging). Table 1 ###reference_### shows that each stage is lossy: not every\nimage contains data of sufficient quality (i.e. not enough\nhigh-intensity Bragg spots) to conclusively analyze. This means that each\nsubsequent stage processes fewer data \u2013 and therefore needs fewer\ncomputational resources. Hence stage (v) requires much smaller jobs on Cori\nthan stages (i)\u2013(iv).\nThe computational motif is identical (except for the number of images) for each\nstage. cctbx.xfel uses MPI to distribute work over up to 64 Cori Haswell\nnodes. The work is distributed using a producer/consumer model, where each\nimage is processed largely independently. Fig. 4 ###reference_###\nsketches this computational motif. We use psana 9 ###reference_9### to read\nthe raw data files. In Fig. 4 ###reference_### we show an example\nconfiguration where rank 0 distributes work to available ranks. The producer\nrank uses MPI to distribute offsets into the raw data files (green \u201cbuckets\u201d\nin Fig. 4 ###reference_###). The worker ranks then process each image\nindependently, by accessing the data files (each run\u2019s data is stored across\nseveral files) using an offset and applying the detector calibration in memory.\nFrom here on stages (i) \u2013 (iv) are applied without communicating with any\nother ranks. Finally, results are stored to the DataWarp burst buffer (blue\nbuckets) and a MySQL database (pink database icon in\nFig. 4 ###reference_###).\n###figure_6### Fig. 5 ###reference_### shows the average time to process an image. We see\nthat cctbx.xfel achieves near-ideal weak scaling, regardless of whether\na partial data analysis (red and green symbols) or a complete data analysis\nis being performed. The performance variability in the wallclock per image does\nincrease with the number of MPI ranks. This is primarily due to shared resource\ncontention such as I/O and network latency. Fig. 6 ###reference_### shows\nthe probability density function of the wallclock time for each step. This\nvariability can have two sources: 1) algorithmic: e.g. the peaks in the\ngreen curve show different indexing algorithms being applied to the data (note\nthat to analyze LV95, a fast algorithm was used for small molecule\ndata5 ###reference_5###. For protein data, such as P175, indexing can take\nlonger.); and 2) resource contention. The distributions are strongly peaked,\nand therefore the vast majority of images are analyzed within 7 s. However, as it\nis not possible to predict exactly how long it will take to analyze a batch of\nimages, we use a producer/consumer workflow as it is automatically\nload-balancing.\n###figure_7### ###figure_8### A helpful tool to identify performance variability due to resource contention\nis the computational weather plot as shown in Fig. 7 ###reference_###. The\nMPI ranks are enumerated on the y-axis and wallclock time is plotted on the\nx-axis. Each worker is plotted as a collection of horizontal lines (a new\nline for each image). As different images are analyzed, the horizontal line is\ngiven different colors: initialization and I/O (red); spot finding (green);\nindexing (blue); model refinement (dark green); and integration (black). MPI\ncommunication happens only when images are assigned to a particular worker and\ntherefore those regions are not plotted (i.e. they are the white regions\nbetween images). Results are stored at the end of each processing step (raw\ndata files are only read during the initialization step).\nTo demonstrate this powerful diagnostic tool, the left and central panels shown\nin Fig. 7 ###reference_### show two different forms of contention. The left\npanel shows an MPI communication-bound job: most time was spent between images,\nwaiting for new work (which is distributed using MPI). Performance profiling\nrevealed a load imbalance, causing ranks to wait for MPI communication. The\ncentral panel shows an example of I/O contention: at the end of each processing\nstep data is written to the Lustre SCRATCH file system, which resulted\nin several nodes hanging while trying to open files simultaneously. The right\npanel shows the same setup where each rank caches results to the DataWarp burst\nbuffer instead of SCRATCH. Note that all data processing steps open\nfiles when saving intermediate results and for logging. We find that the use of\nthe burst buffer reduces this I/O contention. For a 32-node (1024 rank) job,\nusing the burst buffer therefore leads to a performance speedup on\naverage."
34
+ },
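
The producer/consumer motif of Fig. 4 reduces to a few lines of mpi4py. The sketch below uses integer offsets as stand-ins for the per-image file offsets that psana provides; the per-image work (stages i-iv) is elided.

    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    rank, size = comm.Get_rank(), comm.Get_size()
    STOP = None

    if rank == 0:                      # producer (the psana-reading root rank)
        offsets = list(range(10000))   # stand-in for per-image file offsets
        status = MPI.Status()
        for item in offsets + [STOP] * (size - 1):
            comm.recv(source=MPI.ANY_SOURCE, status=status)  # a worker is ready
            comm.send(item, dest=status.Get_source())
    else:                              # consumers (the cctbx.xfel workers)
        while True:
            comm.send("ready", dest=0)
            offset = comm.recv(source=0)
            if offset is STOP:
                break
            # read the image at `offset`, apply the calibration in memory, then
            # run spot finding, indexing, refinement, and integration locally

Because a slow image simply delays one worker's next request, this scheme load-balances automatically, which is why it copes with the variable per-image cost shown in Fig. 6.
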
35
+ {
36
+ "section_id": "2.4",
37
+ "parent_section_id": "2",
38
+ "section_name": "Workflow Orchestration",
39
+ "text": "###figure_9### We use a relational database implemented via MySQL to maintain the associations\nbetween data and results in various stages of processing. In addition, since\nprocessing results are logged to the database quickly, we can access those\nresults from the experiment control room and display them to on-site users for\nrapid feedback on the data they are collecting.\nWe selected Spin, a NERSC microservices platform for container-based services,\nto host our MySQL server reliably and scalably555The scripts to deploy a MySQL database using Spin are available here:\u2009\nhttps://gitlab.com/NERSC/lcls-software/-/tree/beamtime-2020-09/spin/mysql-p175 ###reference_ree/beamtime-2020-09/spin/mysql-p175###. Spin hosts services that can be accessed from the Cori compute and login nodes.\nHaving access to both kinds of node is essential because the cctbx.xfel\nGUI runs on the login nodes and needs to be able to query the database in order\nto display the progress of data processing jobs, as well as determine which new\njobs to submit. The workers do not query the database; instead, they commit the\nstatus of the images they are processing (e.g. number of spots found\nper image, the rate at which they are indexed, etc.). Hence, even though\nthousands of ranks will be committing status updates to the MySQL database,\nthese transactions are lightweight, with the MySQL service handling them well.\nWe found that database connections and transactions consumed between 1% and\n3% of total runtime. This includes latencies caused by accessing Spin via the\n(slower) TCP network. Furthermore, Spin is scalable, which enables us to\nflexibly increase the number of connections the database service can\nefficiently manage as we scale to ever larger data processing workloads.\nFig. 8 ###reference_### shows the effectiveness of this approach by\nplotting a histogram of the difference between the data-collection time and the\nprocessing time. We see that some images (a few thousand) were processed within\n3.5 minutes \u2013 this includes the transfer time of approx. 3 minutes. The\nmajority of images are processed between 10 and 20 minutes after data is collected.\nWhile this does not include reprocessing or interpreting the results, it does\ndemonstrate that cross-site automation is crucial for fast turnaround."
40
+ },
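
The commit pattern described above can be illustrated with a short sketch. Table, column, and credential values are hypothetical, not the cctbx.xfel schema; the point is that each flush uses one short-lived connection and one lightweight multi-row INSERT.

    import pymysql  # PyMySQL

    def flush_status(host: str, rows: list) -> None:
        # rows: (run_id, n_spots, indexed) tuples for a small batch of images
        conn = pymysql.connect(host=host, user="cctbx",
                               password="...", database="p175")  # placeholders
        try:
            with conn.cursor() as cur:
                cur.executemany(
                    "INSERT INTO image_status (run_id, n_spots, indexed) "
                    "VALUES (%s, %s, %s)",
                    rows,
                )
            conn.commit()
        finally:
            conn.close()  # one short-lived connection per flush
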
41
+ {
42
+ "section_id": "3",
43
+ "parent_section_id": null,
44
+ "section_name": "HPC Challenges",
45
+ "text": "While this is a relatively modest computing footprint compared to traditional\nHPC workloads, real-time data analysis requires the coordination of many moving\nparts ranging from traditional computing to networking and I/O. Data sets are\nexpected to grow at least 3000-fold with increasing detector resolution, beam\nintensity, and measurement rate2 ###reference_2###, 3 ###reference_3###, 10 ###reference_10###.\nTherefore, the performance profiling data we collected represents an important\nbenchmark, allowing us to extrapolate the overall performance of this\nSuperfacility workflow and predict future bottlenecks which would prevent\nscaling."
46
+ },
47
+ {
48
+ "section_id": "3.1",
49
+ "parent_section_id": "3",
50
+ "section_name": "Urgent and Real-time Computing",
51
+ "text": "XFEL data reduction challenges computing clusters in two ways: 1) unequal data\nprocessing needs per frame and 2) stochastic (cf.\nFig. 6 ###reference_###) and bursty (cf. Fig. 3 ###reference_###)\ncomputational needs. Together these result in a demand on the job scheduler\nwhere computational resources are urgently needed (the urgency is due to the\nneed for fast real-time data processing), with little advance warning (only\nafter all the data has been processed do we know how many images resulted from\n\u201cgood\u201d measurements).\nAt NERSC we have enabled time-sensitive computing by allowing nodes on Cori to\nbe reserved ahead of time. These nodes will then be kept clear of jobs\nnot explicitly submitted to this reservation."
52
+ },
53
+ {
54
+ "section_id": "3.1.1",
55
+ "parent_section_id": "3.1",
56
+ "section_name": "3.1.1 Unequal Data Processing Time",
57
+ "text": "###figure_10### In each run, thousands of image frames are recorded, but how far each frame\nmakes it through the processing pipeline varies widely. A frame could be a\ncomplete miss, without a crystal. A crystal may not be of sufficient quality\nto be processed, and even if it is, it may not be isomorphous with the rest of\nthe data. At each step, the image can be rejected for a variety of reasons.\nThis is illustrated in Table 1 ###reference_###: each processing stage (row)\nhas a finite \u201csuccess\u201d rate, and therefore only a fraction of images go on to\nthe next stage.\nAs described in section 2.3 ###reference_###, we solve this problem by\nsplitting the pipeline into tasks and using fewer cores for downstream tasks.\nFor example, during P175, for indexing and integration, we used 28 nodes per\njob, but for scaling and merging, which do not read the pixel data, we only\nused 1-2 nodes per job.\nFurther, for indexing and integration we use a producer/consumer approach,\nwhere a root MPI rank sends images to the other ranks. Each rank reports back\nwhen it finishes an image and receives a new one to process. In this way, all\nthe ranks are kept busy until the images have all been processed."
58
+ },
59
+ {
60
+ "section_id": "3.1.2",
61
+ "parent_section_id": "3.1",
62
+ "section_name": "3.1.2 Stochastic and Bursty Compute",
63
+ "text": "Ideal processing parameters are rarely known when an experiment begins, and\nmidway through data collection, new parameters can be discovered which obviate\nall previous processing results. In classical computing scheduling this leads\nto two inefficiencies (Fig. 9 ###reference_###). First, if the set of\nreserved compute nodes is big enough to accommodate processing needs plus an\nadditional safety margin, then when data is not being collected or when typical\nprocessing patterns are being observed, the cluster can be underutilized.\nSecond, when batch-reprocessing needs to occur due to the addition of new\nparameters, real-time processing can fall behind.\nThese problems necessitate different scheduling systems than reservations or\nfirst in-first out. Our experiences with real-time data processing for LV95 and\nP175 have shown that reservations are able to guarantee enough computational\nresources for time-sensitive data processing. However reservations alone can be\na wasteful solution: any time the reservation goes unused (e.g. between\nmeasurements) will result in idle compute nodes.\nA more efficient arrangement could include a mix of reservations plus real-time\npriority access to compute resources, which can be released to lower-priority\njobs when not being immediately used. Furthermore, preemption is a promising\nsolution to allow underutilized reservations to be filled by preemptible jobs\nuntil the compute nodes are needed for urgent computing tasks. Preemptible jobs\nare programs that listen for a system interrupt (e.g. SIGINT),\nand \u2013 upon receipt \u2013 gracefully save and quit. At NERSC, together with\nSchedMD, we have developed a reservation system by which preemptible jobs can\nenter a reservation3 ###reference_3###, 11 ###reference_11###. These will then be stopped if\nnew jobs are submitted directly to the reservation (after a warning period\nduring which SIGINT is used to request that the job saves and quits). This\ntechnology is in its early stages, and we describe our initial experiences with\npreemptible reservations in 11 ###reference_11###."
64
+ },
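
A minimal sketch of such a preemptible worker is shown below; the work loop and checkpoint function are stand-ins for the application's own logic.

    import signal
    import sys
    import time

    preempted = False

    def _on_sigint(signum, frame):
        # Don't abort mid-image: just stop taking new work.
        global preempted
        preempted = True

    signal.signal(signal.SIGINT, _on_sigint)

    def process(image):          # stand-in for the per-image analysis
        time.sleep(1)

    def save_checkpoint(done):   # stand-in for the application's checkpoint
        print(f"checkpointed after {done} images")

    for i in range(1000):        # stand-in for the pending-image queue
        process(i)
        if preempted:            # finish the current image, then exit cleanly
            save_checkpoint(i + 1)
            sys.exit(0)
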
65
+ {
66
+ "section_id": "3.2",
67
+ "parent_section_id": "3",
68
+ "section_name": "I/O and Network Performance",
69
+ "text": "###figure_11### The 100 Gb/s network connection (hosted by ESnet) between LCLS and NERSC made\nit possible to transfer most raw data files within 3 minutes after concluding\nthe run. Bandwidth on the ESnet link was reserved ahead of time using the SENSE\nAPI12 ###reference_12###. The XRootD clusters at LCLS and at NERSC performed well, and\ncould be scaled easily to accommodate more files if a backlog occurred.\nIn fact, the I/O speeds of the Lustre file systems at LCLS and at NERSC were\nthe rate-limiting factor. Fig. 10 ###reference_### shows the end-to-end\ndata transfer rate from LCLS to NERSC. The different lines are measurements\ntaken on two different days. This makes it clear that there are \u201cgood\u201d and\n\u201cbad\u201d days for file system utilization. The 5-6-fold difference is due to a\nbug in NERSC\u2019s SCRATCH file system, where some of Lustre\u2019s Object\nStorage Targets (OST) have a slow write speed. On a \u201cbad day\u201d the slow Lustre\nwrite speed can become the dominant bottleneck in the data processing pipeline,\nwhere the data for one run is not transferred before the next run commences. This\nhighlights that reliable high-performance I/O is crucial for experimental\nscience workflows."
70
+ },
71
+ {
72
+ "section_id": "3.3",
73
+ "parent_section_id": "3",
74
+ "section_name": "Workflow Orchestration",
75
+ "text": "###figure_12### Workflow orchestration at scale is always a challenge, as potentially hundreds\nof thousands of tasks need to be coordinated from a central place. In our\nworkflow manager, the database takes on the role of orchestrating the distributed\ndata processing. Therefore, database communication is a potential single point\nof failure and a bottleneck when experiments are scaled up to the kHz regime\nwith thousands of MPI ranks reporting results simultaneously.\nFig. 11 ###reference_### shows that a Spin-hosted MySQL database was able\nto accommodate the load of approx. 8000 transactions/sec.\nWhile the MySQL database server was selected with scaling in mind, some further\noptimizations became necessary when performing large-scale analysis runs.\nLimiting concurrent connections: In some configurations, the usable\nnumber of MPI ranks was limited by the concurrent connections that our\ndatabase could support. We refactored our database communication to\ncache all database queries for a small set of images before flushing\nthe cache via a single temporary database connection. This reduced the\npeak concurrent database connections to 1 per 10 MPI ranks.\nTransactions: For processing in the kHz regime during a different\nexperiment we encountered another bottleneck when many small queries\nhad to be executed sequentially, with later queries depending on\nearlier ones. Without access to the Spin system, we were overloading\nthe MySQL server, to the point where logging 50000 images could take\nover an hour. Using the MySQL statement LAST_INSERT_ID() we were\nable to combine many queries into a single transaction. With this\napproach, we could log these images using a single MySQL query\ncomprising 130K lines that takes 0.07 seconds.\nA related challenge to workflow orchestration is the variable processing time\nper image. Fig. 6 ###reference_### shows the variability due to\nalgorithmic differences between images (e.g. the peaks in the\ngreen line are due to the indexing algorithm \u201ctrying\u201d different approaches to\nfind a solution). Therefore we employ a producer/consumer model to distribute\nparallel tasks across MPI ranks while maintaining a balanced workload\n(cf. section 2.3 ###reference_###). As the data analysis for each\nimage can have a subtly different call tree, this can have a subtle impact on\noptimizing performance and diagnosing errors: we cannot expect each logical\ntask to take roughly the same amount of time. We observe that between 2% and\n3% of images take significantly longer than 2 s to process. Using the Hatchet tool 13 ###reference_13### we were able to compare the profiles for jobs\nwith different call trees. Hatchet allows us to analyze each job\u2019s call tree\nhierarchically, and compare common sub-graphs. We found that the slow jobs were\na result of I/O contention while reading data, saving results and logging\nprogress. This highlights an important difference from many simulation codes:\ndata analysis workflows often have branching source codes, and invoke many\nlibraries \u2013 it is therefore not always possible to optimize the overall run\ntime by merely focusing on a handful of subroutines that are called over and\nover."
76
+ },
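
The LAST_INSERT_ID() optimization can be sketched as follows. The schema is illustrative: assume each image logs a parent row plus a dependent child row; concatenating the pairs into one multi-statement transaction removes the per-statement round trips.

    def batched_log_sql(images):
        # images: (run_id, ts, n_spots) tuples; returns one multi-statement query
        stmts = ["START TRANSACTION;"]
        for run_id, ts, n_spots in images:
            stmts.append(
                f"INSERT INTO events (run_id, ts) VALUES ({run_id}, {ts});")
            # The child row references the row just inserted via
            # LAST_INSERT_ID(), so no SELECT round trip is needed in between.
            stmts.append(
                f"INSERT INTO spots (event_id, n_spots) "
                f"VALUES (LAST_INSERT_ID(), {n_spots});")
        stmts.append("COMMIT;")
        return "\n".join(stmts)  # sent to the server as a single query string
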
77
+ {
78
+ "section_id": "4",
79
+ "parent_section_id": null,
80
+ "section_name": "Superfacility API",
81
+ "text": "Over the years, NERSC staff have observed how many research workflow operations\nfall into natural patterns of recurring actions that are carried out when\nanalyzing data. The traditional approach for HPC centers is to provide\nhuman-readable interfaces and also to design the experience to meet the\ninteractive expectations of a human user. However, this design collapses with\nworkflows that need to run at larger scale or at faster rates, such as\nautomated, machine-driven workflows initiated at external facilities such as\nLCLS-II. We expect this mode of operation to become more prevalent in the\nfuture as more and more DOE facilities intend to link into ASCR computing\ninfrastructure to address their data and computing needs. Providing\nmachine-readable APIs for HPC resources is the logical prerequisite to make\nthis connection happen. It is also particularly fitting these days as the\nworkflows community comes together to discuss common needs which, in turn, can\ninform the development of such APIs\n14 ###reference_14###.\nProviding a modern API into NERSC is a central component of the Superfacility\nproject 2 ###reference_2### at Lawrence Berkeley National Laboratory (LBNL),\nwhich aims to lay the basis for a more unified, seamless environment that\ncombines hardware solutions, application software, and data management tools to\ndeliver breakthrough science. Automation is a key component of the\nSuperfacility concept, which envisions science teams at experiment facilities\norchestrating automated data analysis pipelines which move data from the\ninstrument to the computing site, perform analysis, and disseminate\nresults \u2013 all without any human in the loop.\nThe SF API provides RESTful API interfaces to resources and takes inspiration\nfrom work at various HPC centers15 ###reference_15###, 16 ###reference_16### as well as from NERSC\u2019s\nfirst API, the NERSC Web development Toolkit\n(NEWT)17 ###reference_17###. While NEWT was designed to serve primarily as a backend\nservice for web science gateways, the new SF API is more targeted at workflows\nand provides a modern, token-based authentication mechanism as well as\nasynchronous task execution. The SF API service itself is built as a set of\nDocker666https://www.docker.com ###reference_www.docker.com### containers and runs in\nSpin777https://www.nersc.gov/systems/spin/ ###reference_###, NERSC\u2019s\nContainers-as-a-Service platform. By and large, it orchestrates connections to\nbackend systems and databases, asynchronously manages any long-running tasks,\nhandles authentication and authorization, and hosts its own documentation.\nCurrently, the API provides the endpoints described in\ntable 2 ###reference_###. 
As the API is in active development, the most up-to-date documentation can be obtained online at the automatically generated\nSwagger page.888Superfacility API documentation generated using the Swagger toolset,\navailable at\u2009 https://api.nersc.gov/api/v1.2/ ###reference_api.nersc.gov/api/v1.2/###\n###table_1### /meta: information about the API installation at the HPC center\n/account: retrieve allocation info for a user or project\n/utilities: browse, upload, and download files or run a free-form command\n/storage: move data between sites with Globus, or between NERSC storage tiers\n/status: retrieve system health status, including planned outages\n/compute: submit and manage jobs, check job status\n/tasks: information about pending and completed tasks\nEnumerating all of the use cases for the API would be too much to cover in this\nmanuscript as NERSC envisions all of the common interactions with its\nsystems to become automatable. Instead, we close with describing two use cases,\nwhere one describes the abstract case of checking system health before a file\ntransfer and the other describes a current application of the SF API in the\nAutoSFX pipeline of LCLS-II (a pipeline similar to cctbx.xfel for serial\nfemtosecond crystallography data analysis)."
82
+ },
83
+ {
84
+ "section_id": "4.1",
85
+ "parent_section_id": "4",
86
+ "section_name": "Example: Checking system health before data transfer.",
87
+ "text": "Because the demand for compute capacity is driven by detector output that can\nvary cyclically, experiments often need HPC-scale computing at short notice.\nSome experiments may have even arranged for multiple compute sites to be\navailable to handle workloads in a given time period. To build a truly\nautomated and resilient workflow, scientists need to be able to query the\nhealth and status of a facility and make decisions based on the response; for\nexample, if a file system is unavailable, the workflow pipeline should choose\nnot to send data to it. To assess the status of a NERSC resource, the API\nprovides the /status/ endpoint. Keeping with the example of\nan imminent file transfer, the workflow could query\n/status/dtns and\n/status/community_filesystem in order to find out the\nhealth of NERSC\u2019s data transfer nodes and the community file system,\nrespectively. A JSON-formatted return of one of those queries reports, among\nother fields, the resource\u2019s current status. A status indicated as \"active\" would inform the workflow that the resource\nis operational and that it could start the data transfer. It could use its own\ntools for these transfers, but the API also provides the\n/storage endpoint to move data between\nGlobus-enabled999https://globus.org ###reference_globus.org### sites and between the NERSC\nstorage tiers. For planning further ahead, a query to\n/status/outages/planned would provide any scheduled outage\nin the future and would enable the workflow manager to choose an alternative\ndestination or date for the transfer."
88
+ },
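
The health check described above amounts to a single GET per resource. Below is a hedged sketch using the public base URL from the API documentation; beyond the status field named in the text, the response schema is illustrative.

    import requests

    API = "https://api.nersc.gov/api/v1.2"

    def is_active(resource: str) -> bool:
        # e.g. resource = "dtns" or "community_filesystem"
        r = requests.get(f"{API}/status/{resource}")
        r.raise_for_status()
        return r.json().get("status") == "active"

    if is_active("dtns") and is_active("community_filesystem"):
        print("resources healthy: start the transfer (e.g. via /storage)")
    else:
        print("degraded: pick an alternative destination or a later date")
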
89
+ {
90
+ "section_id": "4.2",
91
+ "parent_section_id": "4",
92
+ "section_name": "Example: Using the SF API in the LCLS AutoSFX pipeline.",
93
+ "text": "The LCLS data management system invokes the SF API to integrate its automation\nengine (ARP) with NERSC computing resources. Data management events (start/end\nruns, file transfers, etc.) automatically trigger analysis jobs, which are then\ninitiated, monitored, and managed at NERSC using\n/compute/jobs/cori calls. Runtime progress bar updates from\nthe jobs, in addition to job statuses from\n/compute/jobs/cori, are then pushed to the browser and\ndynamically update the web UI. The entire AutoSFX workflow, consisting of\nmultiple index/merge steps, is expressed as an Airflow Directed Acyclic Graph\n(DAG). Each node in the DAG is executed by the ARP by composing\n/utilities and /compute/jobs/cori calls\n(see table 2 ###reference_###). Summary results (for example, electron\ndensity maps) are copied back to the experiment folders using\n/utilities/download calls and displayed in the web UI. As\nmany of these calls target asynchronous endpoints (e.g.\n/compute/jobs) where each POST call generates a task, the\nworkflow frequently queries the /tasks endpoint to inquire about the status\nof those tasks in order to advance in the DAG."
94
+ },
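
The ARP pattern (POST a job, then poll the task it spawned) can be sketched as follows. The payload fields, token handling, and status value are illustrative assumptions; only the endpoint paths are taken from the text.

    import time
    import requests

    API = "https://api.nersc.gov/api/v1.2"
    HEADERS = {"Authorization": "Bearer ..."}  # placeholder token, obtained out of band

    # Submit one DAG node: the asynchronous POST returns a task id, not a job.
    r = requests.post(f"{API}/compute/jobs/cori", headers=HEADERS,
                      data={"job": "/path/to/index_step.sh"})  # illustrative payload
    task_id = r.json()["task_id"]  # illustrative field name

    # Poll /tasks until the submission task completes, then advance the DAG.
    while True:
        t = requests.get(f"{API}/tasks/{task_id}", headers=HEADERS).json()
        if t.get("status") == "completed":  # illustrative status value
            break
        time.sleep(5)
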
95
+ {
96
+ "section_id": "5",
97
+ "parent_section_id": null,
98
+ "section_name": "Conclusion",
99
+ "text": "In this paper we have demonstrated the power and possibility of using on-demand\nHPC to analyze data in real time for a running XFEL experiment at LCLS. This\nwill provide a new mode of sustainable operations for high data-rate\nexperiments (over 400\u00d7 the rate of today\u2019s experiments) expected to come\nonline in 2025. To achieve on-demand and real-time feedback for experiment\ncontrol, we have addressed scaling problems in the application, work\nscheduling, data management, and workflow management. We have identified areas\nfor future development based on a series of carefully profiled experiments\nperformed in late 2020, which achieved the goal of having the analysis keep up\nwith the experiment operation. Most importantly, the experiments described in\nthis paper were not one-off demonstrations, but the start of a regular mode of\njoint operations between an experimental user facility and an HPC user facility\nthat is both sustainable and scalable. HPC centers are increasingly being used\nfor this kind of experiment-driven workflow, and the tools and techniques\ndeveloped in this work were designed to be generalizable to other science\nareas."
100
+ },
101
+ {
102
+ "section_id": "6",
103
+ "parent_section_id": null,
104
+ "section_name": "Acknowledgements",
105
+ "text": "N.K.S. acknowledges support from National Institutes of Health grant GM117126.\nN.K.S., J.P.B., and D.B. acknowledge support from the Exascale Computing\nProject (grant 17-SC-20-SC), a collaborative effort of the Department of Energy\n(DOE) Office of Science and the National Nuclear Security Administration. Data\nwere collected at the Linac Coherent Light Source (LCLS) at the SLAC National\nAccelerator Laboratory, supported by the DOE Office of Science, OBES (contract\nNo. DE-AC02-76SF00515), and processed at the National Energy Research\nScientific Computing Center, supported by the DOE Office of Science under DOE\ncontract DE-AC02-05CH11231."
106
+ }
107
+ ],
108
+ "appendix": [],
109
+ "tables": {
110
+ "1": {
111
+ "table_html": "<figure class=\"ltx_table\" id=\"S2.T1\">\n<table class=\"ltx_tabular ltx_centering ltx_guessed_headers ltx_align_middle\" id=\"S2.T1.1\">\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S2.T1.1.1.1\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_tt\" id=\"S2.T1.1.1.1.1\">Experiment</th>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" colspan=\"2\" id=\"S2.T1.1.1.1.2\">LV95</td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" colspan=\"2\" id=\"S2.T1.1.1.1.3\">P175</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S2.T1.1.2.2\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_t\" id=\"S2.T1.1.2.2.1\">Spot finding</th>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S2.T1.1.2.2.2\">17M</td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S2.T1.1.2.2.3\">6%</td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S2.T1.1.2.2.4\">11M</td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S2.T1.1.2.2.5\">49%</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S2.T1.1.3.3\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"S2.T1.1.3.3.1\">Indexing</th>\n<td class=\"ltx_td ltx_align_left\" id=\"S2.T1.1.3.3.2\">2M</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S2.T1.1.3.3.3\">25%</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S2.T1.1.3.3.4\">582K</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S2.T1.1.3.3.5\">7%</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S2.T1.1.4.4\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"S2.T1.1.4.4.1\">Refinement</th>\n<td class=\"ltx_td ltx_align_left\" id=\"S2.T1.1.4.4.2\">564K</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S2.T1.1.4.4.3\">99%</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S2.T1.1.4.4.4\">46K</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S2.T1.1.4.4.5\">85%</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S2.T1.1.5.5\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"S2.T1.1.5.5.1\">Integrating</th>\n<td class=\"ltx_td ltx_align_left\" id=\"S2.T1.1.5.5.2\">559K</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S2.T1.1.5.5.3\">99%</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S2.T1.1.5.5.4\">33K</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S2.T1.1.5.5.5\">97%</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S2.T1.1.6.6\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_bb ltx_border_t\" id=\"S2.T1.1.6.6.1\">Total CPU utilization</th>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_t\" colspan=\"2\" id=\"S2.T1.1.6.6.2\">22663 core-hr</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_t\" colspan=\"2\" id=\"S2.T1.1.6.6.3\">31167 core-hr</td>\n</tr>\n</tbody>\n</table>\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\">Table 1: </span>Sizes of the data sets collected during two experiments at the LCLS\n(LV95, and P175) as well as the size of different data analysis stages\n(described in section-<a class=\"ltx_ref\" href=\"#S2.SS3\" title=\"2.3 Processing Data on Cori Compute Nodes \u2023 2 Analysing LCLS data at NERSC \u2023 Real-Time XFEL Data Analysis at SLAC and NERSC: a Trial Run of Nascent Exascale Experimental Data Analysis\"><span class=\"ltx_text ltx_ref_tag\">2.3</span></a>). The percentages show\nthe average \u201csuccess rate\u201d for each stage\u00a0\u2013\u00a0<em class=\"ltx_emph ltx_font_italic\" id=\"S2.T1.3.1\">i.e.</em> the\npercentage of images to which the algorithm could find valid solutions\n(and thus can be used as inputs to the next stage). 
We use \u201cM\u201d to\ndenote \u201cmillions\u201d and \u201cK\u201d to denote \u201cthousands\u201d of diffraction\nimages. Each image has a resolution of approx. 4 megapixels, requiring\napprox. 8 megabytes of storage. The LV95 data set is available at\n<a class=\"ltx_ref ltx_url ltx_font_typewriter\" href=\"https://dx.doi.org/10.11577/1839200\" title=\"\">https://dx.doi.org/10.11577/1839200</a></figcaption>\n</figure>",
112
+ "capture": "Table 1: Sizes of the data sets collected during two experiments at the LCLS\n(LV95, and P175) as well as the size of different data analysis stages\n(described in section-2.3). The percentages show\nthe average \u201csuccess rate\u201d for each stage\u00a0\u2013\u00a0i.e. the\npercentage of images to which the algorithm could find valid solutions\n(and thus can be used as inputs to the next stage). We use \u201cM\u201d to\ndenote \u201cmillions\u201d and \u201cK\u201d to denote \u201cthousands\u201d of diffraction\nimages. Each image has a resolution of approx. 4 megapixels, requiring\napprox. 8 megabytes of storage. The LV95 data set is available at\nhttps://dx.doi.org/10.11577/1839200"
113
+ },
114
+ "2": {
115
+ "table_html": "<figure class=\"ltx_table\" id=\"S4.T2\">\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\">Table 2: </span>API endpoints.</figcaption>\n<table class=\"ltx_tabular ltx_centering ltx_align_middle\" id=\"S4.T2.1\">\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S4.T2.1.1.1\">\n<td class=\"ltx_td ltx_align_justify ltx_align_middle ltx_border_t\" id=\"S4.T2.1.1.1.1\" style=\"width:43.4pt;padding-top:1.5pt;padding-bottom:1.5pt;\"><span class=\"ltx_text ltx_font_typewriter ltx_font_bold ltx_align_top\" id=\"S4.T2.1.1.1.1.1\">/meta</span></td>\n<td class=\"ltx_td ltx_align_justify ltx_align_middle ltx_border_t\" id=\"S4.T2.1.1.1.2\" style=\"width:130.1pt;padding-top:1.5pt;padding-bottom:1.5pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S4.T2.1.1.1.2.1\">information the API installation at the HPC center</p>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T2.1.2.2\">\n<td class=\"ltx_td ltx_align_justify ltx_align_middle\" id=\"S4.T2.1.2.2.1\" style=\"width:43.4pt;padding-top:1.5pt;padding-bottom:1.5pt;\"><span class=\"ltx_text ltx_font_typewriter ltx_font_bold ltx_align_top\" id=\"S4.T2.1.2.2.1.1\">/account</span></td>\n<td class=\"ltx_td ltx_align_justify ltx_align_middle\" id=\"S4.T2.1.2.2.2\" style=\"width:130.1pt;padding-top:1.5pt;padding-bottom:1.5pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S4.T2.1.2.2.2.1\">retrieve allocation info for a user or project</p>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T2.1.3.3\">\n<td class=\"ltx_td ltx_align_justify ltx_align_middle\" id=\"S4.T2.1.3.3.1\" style=\"width:43.4pt;padding-top:1.5pt;padding-bottom:1.5pt;\"><span class=\"ltx_text ltx_font_typewriter ltx_font_bold ltx_align_top\" id=\"S4.T2.1.3.3.1.1\">/utilities</span></td>\n<td class=\"ltx_td ltx_align_justify ltx_align_middle\" id=\"S4.T2.1.3.3.2\" style=\"width:130.1pt;padding-top:1.5pt;padding-bottom:1.5pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S4.T2.1.3.3.2.1\">browse, upload, and download files or a free form command</p>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T2.1.4.4\">\n<td class=\"ltx_td ltx_align_justify ltx_align_middle\" id=\"S4.T2.1.4.4.1\" style=\"width:43.4pt;padding-top:1.5pt;padding-bottom:1.5pt;\"><span class=\"ltx_text ltx_font_typewriter ltx_font_bold ltx_align_top\" id=\"S4.T2.1.4.4.1.1\">/storage</span></td>\n<td class=\"ltx_td ltx_align_justify ltx_align_middle\" id=\"S4.T2.1.4.4.2\" style=\"width:130.1pt;padding-top:1.5pt;padding-bottom:1.5pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S4.T2.1.4.4.2.1\">move data between sites with Globus, or between NERSC storage tiers</p>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T2.1.5.5\">\n<td class=\"ltx_td ltx_align_justify ltx_align_middle\" id=\"S4.T2.1.5.5.1\" style=\"width:43.4pt;padding-top:1.5pt;padding-bottom:1.5pt;\"><span class=\"ltx_text ltx_font_typewriter ltx_font_bold ltx_align_top\" id=\"S4.T2.1.5.5.1.1\">/status</span></td>\n<td class=\"ltx_td ltx_align_justify ltx_align_middle\" id=\"S4.T2.1.5.5.2\" style=\"width:130.1pt;padding-top:1.5pt;padding-bottom:1.5pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S4.T2.1.5.5.2.1\">retrieve system health status, including planned outages</p>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T2.1.6.6\">\n<td class=\"ltx_td ltx_align_justify ltx_align_middle\" id=\"S4.T2.1.6.6.1\" style=\"width:43.4pt;padding-top:1.5pt;padding-bottom:1.5pt;\"><span class=\"ltx_text ltx_font_typewriter ltx_font_bold ltx_align_top\" id=\"S4.T2.1.6.6.1.1\">/compute</span></td>\n<td class=\"ltx_td ltx_align_justify ltx_align_middle\" 
id=\"S4.T2.1.6.6.2\" style=\"width:130.1pt;padding-top:1.5pt;padding-bottom:1.5pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S4.T2.1.6.6.2.1\">submit and manage jobs, check job status</p>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T2.1.7.7\">\n<td class=\"ltx_td ltx_align_justify ltx_align_middle ltx_border_b\" id=\"S4.T2.1.7.7.1\" style=\"width:43.4pt;padding-top:1.5pt;padding-bottom:1.5pt;\"><span class=\"ltx_text ltx_font_typewriter ltx_font_bold ltx_align_top\" id=\"S4.T2.1.7.7.1.1\">/tasks</span></td>\n<td class=\"ltx_td ltx_align_justify ltx_align_middle ltx_border_b\" id=\"S4.T2.1.7.7.2\" style=\"width:130.1pt;padding-top:1.5pt;padding-bottom:1.5pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S4.T2.1.7.7.2.1\">information about pending and completed tasks</p>\n</td>\n</tr>\n</tbody>\n</table>\n</figure>",
116
+ "capture": "Table 2: API endpoints."
117
+ }
118
+ },
119
+ "image_paths": {
120
+ "1": {
121
+ "figure_path": "2106.11469v3_figure_1.png",
122
+ "caption": "Figure 1: Sketch of the Superfacility workflow: Top: Data are automatically\ntransferred from the LCLS spinning-disk storage system via XRootD to\nNERSC\u2019s Scratch file system (the orange and blue spikes show the data\ntransfer rate into and out of NERSC, respectively \u2013 spike height ranging\nfrom approx. 1.3 to 2.6 GB/s \u2013 over the ESNet network during the same time\nas the experiment, with each spike being a completed run). Bottom:\nAt NERSC the CCTBX workers (running in Shifter containers on the Cori\ncompute nodes) automatically analyze new data on Scratch, using the\nDataWarp burst buffer as a cache. Users at LCLS and NERSC connect to a\nMySQL database hosted at NERSC to orchestrate the workers, review the data\nanalysis and iterate analysis parameters.",
123
+ "url": "http://arxiv.org/html/2106.11469v3/x1.png"
124
+ },
125
+ "2": {
126
+ "figure_path": "2106.11469v3_figure_2.png",
127
+ "caption": "Figure 2: Schematic of how the data mover transfers data using the NERSC \u2013 LCLS\nXRootD clusters. Top: Kafka + data mover pipeline at LCLS\ntogether with the XRootD cluster used to send data (via ESNet) to the\ncorresponding cluster at NERSC. Bottom: XRootD cluster deployed\non two data transfer nodes at NERSC. Once a new file is created, and\nlogged as a file creation event in Kafka (the LCLS data \u201clogbook\u201d\nservice), the data mover initiates a data transfer using the XRootD\ncluster running at LCLS. The data is transferred via ESnet to its\ncounterpart at NERSC, where the data is deposited in the SCRATCH\nLustre file system. Once a file has been transferred, its status in\nKafka is recorded as \u201cavailable at NERSC\u201d \u2013 allowing\ncctbx.xfel to begin data analysis.",
128
+ "url": "http://arxiv.org/html/2106.11469v3/extracted/5324866/figures/xrootsetup_cug_paper.png"
129
+ },
130
+ "3(a)": {
131
+ "figure_path": "2106.11469v3_figure_3(a).png",
132
+ "caption": "Figure 3: CPU usage for the P175 experiment. Left: CPU usage on Cori\nHaswell for the whole duration of the experiment. Only the day shifts\ncollected data, therefore no data analysis was needed at night.\nRight: CPU usage for one day shift (on the second day of the\nexperiment). We see the \u201cbursty\u201d CPU utilization that results from\nurgent computing: whenever new data are available they need to be\nanalyzed as quickly as possible. Once data have been analyzed, the CPUs\non Cori go idle, while waiting for new data.",
133
+ "url": "http://arxiv.org/html/2106.11469v3/x2.png"
134
+ },
135
+ "3(b)": {
136
+ "figure_path": "2106.11469v3_figure_3(b).png",
137
+ "caption": "Figure 3: CPU usage for the P175 experiment. Left: CPU usage on Cori\nHaswell for the whole duration of the experiment. Only the day shifts\ncollected data, therefore no data analysis was needed at night.\nRight: CPU usage for one day shift (on the second day of the\nexperiment). We see the \u201cbursty\u201d CPU utilization that results from\nurgent computing: whenever new data are available they need to be\nanalyzed as quickly as possible. Once data have been analyzed, the CPUs\non Cori go idle, while waiting for new data.",
138
+ "url": "http://arxiv.org/html/2106.11469v3/x3.png"
139
+ },
140
+ "4": {
141
+ "figure_path": "2106.11469v3_figure_4.png",
142
+ "caption": "Figure 4: Structure of an analysis worker running on the Cori Haswell nodes. We\nrely on MPI parallelism to distribute work between nodes (OpenMP is\nalso available, but was not needed to achieve the desired throughput).\nWe employ a producer/consumer model to distribute work and achieve load\nbalancing. Data is provided by psana, which runs on the first\nMPI rank. psana reads an index file and distributes work to the\ncctbx.xfel workers. The resulting program is a flat tree of MPI\nranks with data analysis ranks located at leaves. Workers access data\ndirectly by reading the raw data files using offsets provided by the\n\u201cPSANA\u201d (root) tree node. Finally, the cctbx.xfel workers\nsave their results to disk (local to each MPI rank, using the DataWarp\nburst buffer) and report the analysis progress to a MySQL database\nhosted on NERSC\u2019s Spin micro-services platform. Arrows indicate the\noverall flow of data.",
143
+ "url": "http://arxiv.org/html/2106.11469v3/x4.png"
144
+ },
145
+ "5": {
146
+ "figure_path": "2106.11469v3_figure_5.png",
147
+ "caption": "Figure 5: The average time to process an image remains constant with the number\nof MPI ranks used. Colors show the different stages of the data\nanalysis pipeline. We also see that the variability grows with number\nof MPI ranks, in part due to increased resource contention. However,\nthe vast majority of images can be processed with near-constant time,\nachieving weak scaling on the Cori Haswell nodes.",
148
+ "url": "http://arxiv.org/html/2106.11469v3/x5.png"
149
+ },
150
+ "6": {
151
+ "figure_path": "2106.11469v3_figure_6.png",
152
+ "caption": "Figure 6: Probability distribution of the time taken to perform different data\nanalysis tasks. While most processing steps complete within a few seconds,\ndata analysis can occasionally take significantly longer. Due to this\nvariability, our workflow uses producer/consumer\nparallelism \u2013 which is automatically load-balanced. In LV95, a fast\nalgorithm was used for small molecule data5. For protein\ndata, such as P175, it can take longer to index.",
153
+ "url": "http://arxiv.org/html/2106.11469v3/x6.png"
154
+ },
155
+ "7": {
156
+ "figure_path": "2106.11469v3_figure_7.png",
157
+ "caption": "Figure 7: Computational weather plot illustrating two barriers to scaling (left\nand center), as well as near-optimal performance (right). Weather plots\nshow resource contention by plotting the data processing timeline of\neach rank. The colors represent different processing steps:\ninitialization and I/O (red); spot finding (green); indexing (blue);\nmodel refinement (dark green); and integration (black). MPI\ncommunication is not profiled and is included in the white areas. The\nleft plot shows an MPI communication-bound setup. Performance profiling\nrevealed a load-imbalance, causing ranks to wait for MPI communication.\nAfter optimizing the MPI work sharing code almost all white space\ndisappears. However this reveals I/O contention (note that all steps\nopen files when saving intermediate results and for logging) on the\nSCRATCH file system (central plot) as shown by some nodes\nworking normally, while others appear stuck. Switching to the DataWarp\nburst-buffer resolves this I/O contention resulting in near optimal\nperformance (right plot).",
158
+ "url": "http://arxiv.org/html/2106.11469v3/extracted/5324866/figures/weather.png"
159
+ },
160
+ "8": {
161
+ "figure_path": "2106.11469v3_figure_8.png",
162
+ "caption": "Figure 8: Delay time between recording an event and completion of the first data\nprocessing step for the P175 experiment. The graph shows the number of\nprocessed events (transactions), as a function of delay time. We find that\na few images (those at the end of a run) are processed within 3.5\nminutes \u2013 given that a data transfer usually takes approx 3 minutes, these\nimages were processed only a few seconds after arriving at NERSC. Most\nimages were processed within approx. 10-20 minutes. Tailing delay times\ngreater than 20 min are due to data reprocessing (cf.\nsection 3.1).",
163
+ "url": "http://arxiv.org/html/2106.11469v3/x7.png"
164
+ },
165
+ "9": {
166
+ "figure_path": "2106.11469v3_figure_9.png",
167
+ "caption": "Figure 9: An illustration of the XFEL urgent computing needs. The x\ud835\udc65xitalic_x-axis\nrepresents time, and y\ud835\udc66yitalic_y-axis represents the number of data sets\ncollected. To keep up with processing as data from runs arrive (green\nboxes), processing jobs are submitted as soon as possible (yellow\nboxes). For simplicity we assume that it takes roughly the same amount\nof time to process a data set as it takes to collect it (green and\nyellow boxes are the same size). Furthermore, to illustrate the problem\nof limited reservation sizes, we assume that our reservation has a\nmaximum size of 3 nodes (3 yellow boxes). When new parameters are\ndiscovered, all data must be re-processed in a batch, and on a limited\nreservation, this can lead to delays in live feedback (red boxes).\nFurthermore, the burden of reprocessing grows with the data set size.\nTherefore a reservation would potentially need to be as large as the\nfinal data set.",
168
+ "url": "http://arxiv.org/html/2106.11469v3/extracted/5324866/figures/batchreprocessing.png"
169
+ },
170
+ "10": {
171
+ "figure_path": "2106.11469v3_figure_10.png",
172
+ "caption": "Figure 10: Data transfer rate between the LCLS and NERSC (Lustre SCRATCH)\nfile systems. This rate includes disk read and write speeds, which\nultimately limited the rate at which data can be transferred to\nNERSC. Horizontal black lines show average transfer rates. The\norange line shows a representative \u201cgood\u201d data transfer speed.\nHowever, depending on contention in the Lustre file system at NERSC,\nthis transfer rate can be 5\u22126\u00d75-6\\times5 - 6 \u00d7 lower \u2013 shown by the blue line.",
173
+ "url": "http://arxiv.org/html/2106.11469v3/x8.png"
174
+ },
175
+ "11": {
176
+ "figure_path": "2106.11469v3_figure_11.png",
177
+ "caption": "Figure 11: Rate of database transactions during live data processing. The main\nplot shows the number (in thousands) of database transactions per\nminute during a 12 hour shift. The inset shows a 40-min snapshot of\nnumber (in thousands) per second. We see that the database receives up\nto 8000 commits/second, whenever data processing takes place (the\n\u201cbursts\u201d in the inset show individual data analysis jobs). Despite\nthis heavy load, the Spin microservices platform was capable of\nhandling this load level.",
178
+ "url": "http://arxiv.org/html/2106.11469v3/x9.png"
179
+ }
180
+ },
181
+ "validation": true,
182
+ "references": [
183
+ {
184
+ "1": {
185
+ "title": "doi:\n10.1107/S2059798320000418",
186
+ "author": "Sauter NK, Kern J, Yano J, Holton JM. Towards the spatial resolution of\nmetalloprotein charge states by detailed modeling of XFEL crystallographic\ndiffraction. Acta Crystallographica Section D: Structural Biology\n2020; 76(2): 176\u2013192.",
187
+ "venue": null,
188
+ "url": "http://dx.doi.org/10.1107/S2059798320000418"
189
+ }
190
+ },
191
+ {
192
+ "2": {
193
+ "title": "doi:\n10.1109/XLOOP51963.2020.00006",
194
+ "author": "Enders B, Bard D, Snavely C, et al. Cross-facility science with the\nSuperfacility Project at LBNL. 2020 IEEE/ACM 2nd Annual Workshop on\nExtreme-scale Experiment-in-the-Loop Computing (XLOOP) 2020: 1-7.",
195
+ "venue": null,
196
+ "url": "http://dx.doi.org/10.1109/XLOOP51963.2020.00006"
197
+ }
198
+ },
199
+ {
200
+ "3": {
201
+ "title": "doi:\n10.48550/ARXIV.2206.11992",
202
+ "author": "Bard D, Snavely C, Gerhardt L, et al. The LBNL Superfacility Project Report.\narXiv 2022; 2206.11992.",
203
+ "venue": null,
204
+ "url": "http://dx.doi.org/10.48550/ARXIV.2206.11992"
205
+ }
206
+ },
207
+ {
208
+ "4": {
209
+ "title": "doi:\n10.1080/08940886.2023.2245700",
210
+ "author": "Blaschke JP, Wittwer F, Enders B, Bard D. How a Lightsource Uses a\nSupercomputer for Live Interactive Analysis of Large Data Sets. Synchrotron Radiation News 2023; 0(0): 1-7.",
211
+ "venue": null,
212
+ "url": "http://dx.doi.org/10.1080/08940886.2023.2245700"
213
+ }
214
+ },
215
+ {
216
+ "5": {
217
+ "title": "doi:\n10.1038/s41586-021-04218-3",
218
+ "author": "Schriber EA, Paley DW, Bolotovsky R, et al. Chemical crystallography by serial\nfemtosecond X-ray diffraction. Nature 2022; 601(7893):\n360\u2013365.",
219
+ "venue": null,
220
+ "url": "http://dx.doi.org/10.1038/s41586-021-04218-3"
221
+ }
222
+ },
223
+ {
224
+ "6": {
225
+ "title": "doi:\n10.1038/s41598-021-00236-3",
226
+ "author": "Keable SM, K\u00f6lsch A, Simon PS, et al. Room temperature XFEL crystallography\nreveals asymmetry in the vicinity of the two phylloquinones in photosystem I.\nScientific Reports 2021; 11(1): 21787.",
227
+ "venue": null,
228
+ "url": "http://dx.doi.org/10.1038/s41598-021-00236-3"
229
+ }
230
+ },
231
+ {
232
+ "7": {
233
+ "title": "doi:\n10.1107/S0021889801017824",
234
+ "author": "Grosse-Kunstleve RW, Sauter NK, Moriarty NW, Adams PD. The Computational\nCrystallography Toolbox: Crystallographic algorithms in a reusable software\nframework. Journal of Applied Crystallography 2002;\n35(1): 126\u2013136.",
235
+ "venue": null,
236
+ "url": "http://dx.doi.org/10.1107/S0021889801017824"
237
+ }
238
+ },
239
+ {
240
+ "8": {
241
+ "title": "doi:\n10.1107/S0907444913000863",
242
+ "author": "Sauter NK, Hattne J, Grosse-Kunstleve RW, Echols N. New Python-based methods\nfor data processing. Acta Crystallographica Section D 2013;\n69(7): 1274\u20131282.",
243
+ "venue": null,
244
+ "url": "http://dx.doi.org/10.1107/S0907444913000863"
245
+ }
246
+ },
247
+ {
248
+ "9": {
249
+ "title": "doi:\n10.1107/S1600576716004349",
250
+ "author": "Damiani D, Dubrovin M, Gaponenko I, et al. Linac Coherent Light Source data\nanalysis using psana. Journal of Applied Crystallography 2016;\n49.",
251
+ "venue": null,
252
+ "url": "http://dx.doi.org/10.1107/S1600576716004349"
253
+ }
254
+ },
255
+ {
256
+ "10": {
257
+ "title": "doi:\n10.1109/BigData52589.2021.9671421",
258
+ "author": "Antypas KB, Bard DJ, Blaschke JP, et al. Enabling discovery data science\nthrough cross-facility workflows. 2021 IEEE International Conference on\nBig Data (Big Data) 2021: 3671-3680.",
259
+ "venue": null,
260
+ "url": "http://dx.doi.org/10.1109/BigData52589.2021.9671421"
261
+ }
262
+ },
263
+ {
264
+ "11": {
265
+ "title": "doi:\n10.1109/UrgentHPC54802.2021.00011",
266
+ "author": "Giannakou A, Blaschke JP, Bard D, Ramakrishnan L. Experiences with\nCross-Facility Real-Time Light Source Data Analysis Workflows. 2021\nIEEE/ACM HPC for Urgent Decision Making (UrgentHPC) 2021: 45-53.",
267
+ "venue": null,
268
+ "url": "http://dx.doi.org/10.1109/UrgentHPC54802.2021.00011"
269
+ }
270
+ },
271
+ {
272
+ "12": {
273
+ "title": "doi:\n10.1109/INDIS.2018.00007",
274
+ "author": "Monga I, Guok C, MacAuley J, et al. SDN for End-to-End Networked Science at the\nExascale (SENSE). 2018 IEEE/ACM Innovating the Network for\nData-Intensive Science (INDIS) 2018: 33-44.",
275
+ "venue": null,
276
+ "url": "http://dx.doi.org/10.1109/INDIS.2018.00007"
277
+ }
278
+ },
279
+ {
280
+ "13": {
281
+ "title": "doi:\n10.1145/3295500.3356219",
282
+ "author": "Bhatele A, Brink S, Gamblin T. Hatchet: Pruning the Overgrowth in Parallel\nProfiles. Proceedings of the International Conference for High\nPerformance Computing, Networking, Storage and Analysis 2019.",
283
+ "venue": null,
284
+ "url": "http://dx.doi.org/10.1145/3295500.3356219"
285
+ }
286
+ },
287
+ {
288
+ "14": {
289
+ "title": "doi: 10.5281/zenodo.4915801",
290
+ "author": "Silva F. dR, Casanova H, Chard K, et al. Workflows Community Summit: Advancing\nthe State- of-the-art of Scientific Workflows Management Systems Research and\nDevelopment. arXiv 2021; 2106.05177.",
291
+ "venue": null,
292
+ "url": "http://dx.doi.org/10.5281/zenodo.4915801"
293
+ }
294
+ },
295
+ {
296
+ "15": {
297
+ "title": "doi:\n10.1145/3219104.3219129",
298
+ "author": "Dooley R, Brandt SR, Fonner J. The Agave Platform: An Open,\nScience-as-a-Service Platform for Digital Science. Proceedings of the\nPractice and Experience on Advanced Research Computing 2018.",
299
+ "venue": null,
300
+ "url": "http://dx.doi.org/10.1145/3219104.3219129"
301
+ }
302
+ },
303
+ {
304
+ "16": {
305
+ "title": "doi:\n10.1109/SuperCompCloud51944.2020.00009",
306
+ "author": "Cruz FA, Dabin AJ, Dorsch JP, et al. FirecREST: a RESTful API to\nHPC systems. 2020 IEEE/ACM International Workshop on Interoperability\nof Supercomputing and Cloud Technologies (SuperCompCloud) 2020:\n21-26.",
307
+ "venue": null,
308
+ "url": "http://dx.doi.org/10.1109/SuperCompCloud51944.2020.00009"
309
+ }
310
+ },
311
+ {
312
+ "17": {
313
+ "title": "doi:\n10.1109/GCE.2010.5676125",
314
+ "author": "Cholia S, Skinner D, Boverhof J. NEWT: A RESTful service for building\nHigh Performance Computing web applications. 2010 Gateway Computing\nEnvironments Workshop (GCE) 2010: 1-11.",
315
+ "venue": null,
316
+ "url": "http://dx.doi.org/10.1109/GCE.2010.5676125"
317
+ }
318
+ }
319
+ ],
320
+ "url": "http://arxiv.org/html/2106.11469v3"
321
+ }
20240101/2205.00442v2.json ADDED
@@ -0,0 +1,77 @@
1
+ {
2
+ "title": "On Binary Networked Public Goods Game with Altruism",
3
+ "abstract": "In the classical Binary Networked Public Goods (BNPG) game, a player can either invest in a public project or decide not to invest. Based on the decisions of all the players, each player receives a reward as per his/her utility function. However, classical models of BNPG game do not consider altruism which players often exhibit and can significantly affect equilibrium behavior. Yu et al. [24] extended the classical BNPG game to capture the altruistic aspect of the players. We, in this paper, first study the problem of deciding the existence of a Pure Strategy Nash Equilibrium (PSNE) in a BNPG game with altruism. This problem is already known to be -complete. We complement this hardness result by showing that the problem admits efficient algorithms when the input network is either a tree or a complete graph. We further study the Altruistic Network Modification problem, where the task is to compute if a target strategy profile can be made a PSNE by adding or deleting a few edges. This problem is also known to be -complete. We strengthen this hardness result by exhibiting intractability results even for trees. A perhaps surprising finding of our work is that the above problem remains -hard even for bounded degree graphs when the altruism network is undirected but becomes polynomial-time solvable when the altruism network is directed. We also show some results on computing an MSNE and some parameterized complexity results. In summary, our results show that it is much easier to predict how the players in a BNPG game will behave compared to how the players in a BNPG game can be made to behave in a desirable way.",
4
+ "sections": [
5
+ {
6
+ "section_id": "1",
7
+ "parent_section_id": null,
8
+ "section_name": "Introduction",
9
+ "text": "In a binary networked public goods (in short BNPG) game, a player can either decide to invest in a public project or decide not to invest in it. Every player however incurs a cost for investing. Based on the decision of all the players, each player receives a reward as per his/her externality function. The net utility is decided based on the reward a player receives and the cost a player incurs. Usually, the externality function and cost of investing differ for every player, making the BNPG game heterogeneous. In some scenarios, the externality function and cost of investing can be the same for every player, making the BNPG game fully homogeneous. Many applications of public goods, for example, wearing a mask [12 ###reference_12###], getting vaccinated [3 ###reference_3###], practicing social distancing [5 ###reference_5###], reporting crimes etc., involve binary decisions. Such domains can be captured using BNPG game. A BNPG game is typically modeled using a network of players which is an undirected graph [11 ###reference_11###].\nWe also observe that there are some societies where few people wear masks and/or get themselves vaccinated, and there are some other societies where most people wear masks and get themselves vaccinated [4 ###reference_4###, 23 ###reference_23###]. This can be attributed to differences in altruistic behavior among various societies [2 ###reference_2###, 6 ###reference_6###]. In an altrusitic society, people consider their as well as their neighbors\u2019 benefit to take a decision. For example, young adults may wear a mask not only to protect themselves but also to protect their elderly parents and young children at home. Altruism can be modeled using an altruistic network which can be either an undirected graph or a directed graph [24 ###reference_24###]. Symmetric altruism (respectively asymmetric altruism) occurs when the altruistic network is undirected (respectively directed). The utility that a player receives depends on both the input network and the (incoming) incident edges in the altruistic network.\nWe study the BNPG game with altruism for two different problem settings. First, we look at the problem of deciding the existence of Pure Strategy Nash Equilibrium (PSNE) in the BNPG game with altruism. In any game, determining a PSNE is an important problem as it allows a social planner to predict the behaviour of players in a strategic setting and make appropriate decisions. It is known that deciding the existence of PSNE in a BNPG game (even without altruism) is NP-Complete [25 ###reference_25###]. This paper mainly focuses on deciding the existence of PSNE in special networks like trees, complete graphs, and graphs with bounded circuit rank. The circuit rank of an undirected graph is the minimum number of edges that must be removed from the graph to make it acyclic.\nIn the second problem setting, also known as Altruistic Network Modification (in short ANM), we can add or delete an edge from the altruistic network, and each such operation has a non-negative cost associated with it. The aim here is to decide if a target strategy profile can be made a PSNE by adding or deleting edges with certain budget constraints. This problem was first studied by [24 ###reference_24###] where they showed that ANM is an NP-Complete problem. This problem enables policymakers to strategically run campaigns to make a society more altruistic and achieve desirable outcomes like everyone wearing a mask and getting vaccinated. 
This paper mainly focuses on ANM in sparse input networks like trees and graphs with bounded degree."
10
+ },
11
+ {
12
+ "section_id": "1.1",
13
+ "parent_section_id": "1",
14
+ "section_name": "Contribution",
15
+ "text": "Input graph type\nPSNE existence\nANM symmetric altruism\nANM asymmetric altruism\nTree\nClique\n-hard ()\n-hard ()\nBounded degree\n-hard ()\nBounded circuit rank\nWe show that the problem of deciding the existence of PSNE in BNPG game with asymmetric altruism is polynomial-time solvable if the input network is either a tree [Theorem 3.1 ###reference_heorem1###], complete graph [Theorem 3.3 ###reference_heorem3###] or graph with bounded circuit rank [Theorem 3.2 ###reference_heorem2###]. Moreover, in Theorem 3.1 ###reference_heorem1###, we formulated a non-trivial ILP (not the ILP that follows immediately from the problem definition) and depicted a greedy polynomial time algorithm [Algorithm 1 ###reference_###] to solve it. This strengthens the tractable results for tree, complete graph and graph with bounded circuit rank in [25 ###reference_25###, 18 ###reference_18###] as the previous results were depicted for BNPG games without altruism. Hence, existence of a PSNE can be efficiently decided in an intimately connected society where everyone knows others and thus the underlying graph is connected, and for sparsely connected society where the circuit rank could be low. However, the problem is open for graphs with bounded treewidth, and it is known that the problem even without altruism is -hard for the parameter treewidth [18 ###reference_18###]. The problem of deciding the existence of PSNE in BNPG game even without altruism is known to be -complete [25 ###reference_25###]. A natural but often under-explored question here is if an MSNE can be computed efficiently. We show that computing an MSNE in BNPG game with symmetric altruism is -hard [Theorem 3.4 ###reference_heorem4###].\nANM with either asymmetric or symmetric altruism is known to be -complete when the input network is a clique [24 ###reference_24###]. We complement this by showing that ANM with either asymmetric or symmetric altruism is -complete even when the input network is a tree (the circuit rank of which is zero) and the BNPG game is fully homogeneous [Theorems 4.1 ###reference_heorem1### and 4.2 ###reference_heorem2###]. We also show that ANM with symmetric altruism is known to be para- for the parameter maximum degree of the input network even when the BNPG game is fully homogeneous and the available budget is infinite [Theorem 4.5 ###reference_heorem5###]. However, with asymmetric altruism, the problem is for the parameter maximum degree of the input network [Theorem 4.4 ###reference_heorem4###]. To show this result, we designed an time binary search based algorithm [Algorithm 2 ###reference_###] for Minimum Knapsack problem. We are the first to provide an algorithm better than time for Minimum Knapsack problem to the best of our knowledge.\nIn summary, our paper provides a more fine-grained complexity theoretic landscape for deciding if a PSNE exists in a BNPG game with altruism and the ANM problem which could be of theoretical as well as practical interest.\nWe summarize all the main results (including that of prior work) in Table 1 ###reference_###. There () denotes the results from [24 ###reference_24###] and () denotes the results from [18 ###reference_18###]. All the hardness results in the table except for complete graph hold even for fully homogeneous BNPG game. We observe that the PSNE existence problem admits efficient algorithm for many settings compared to ANM. This seems to indicate that enforcing a PSNE is computationally more difficult than finding a PSNE."
16
+ },
17
+ {
18
+ "section_id": "1.2",
19
+ "parent_section_id": "1",
20
+ "section_name": "Related Work",
21
+ "text": "Our work is related to [25 ###reference_25###] who initiated the study of computing a PSNE in BNPG games. Their results were strengthened by [18 ###reference_18###] who studied the parameterized complexity of deciding the existence of PSNE in BNPG game. Recently, [22 ###reference_22###] studied about public goods games in directed networks and showed intractibility for deciding the existence of PSNE and for finding MSNE. Our work is also related to [24 ###reference_24###] who intiated the study of Altruistic Network Modification in BNPG game. [16 ###reference_16###, 21 ###reference_21###] also discussed different ways to capture altruism. In the non-altruistic setting, [14 ###reference_14###] worked on modifying networks to induce certain classes of equilibria in BNPG game. Our work is part of graphical games where the fundamental question is to determine the complexity of finding an equilibrium [8 ###reference_8###, 9 ###reference_9###, 13 ###reference_13###]. Our model is also related to the best-shot games [7 ###reference_7###] as it is a special case of BNPG game. [11 ###reference_11###, 17 ###reference_17###, 20 ###reference_20###, 15 ###reference_15###] also discussed some important variations of graphical games."
22
+ },
23
+ {
24
+ "section_id": "2",
25
+ "parent_section_id": null,
26
+ "section_name": "Preliminaries",
27
+ "text": "Let denote the set . Let be an input network with vertices (each denoting a player). The input network is always an undirected graph. Let be an altruistic network on the same set of vertices. The altruistic network can be directed or undirected graph. An undirected edge between is represented by . Similarly a directed edge from to is represented by . is the set of all neighbours (resp. out-neighbours) of the vertex in an undirected (resp. directed) altrusitic network . Note that is a subset of neightbours of in . A Binary Networked Public Goods (BNPG) game with asymmetric (resp. symmetric) altruism can be defined on the input graph and the directed (resp. undirected) altruistic network as follows. We are given a set of players , and the strategy set of every player in is . For a strategy profile , let . In this paper, we will be using playing 1 (resp. 0), investing (resp. not investing) and strategy (resp. ) interchangeably. Now the utility of player is defined as follows.\nwhere is a non-decreasing externality function in and are constants. can also interpreted as the cost of investing for player . We denote a BNPG game with altruism by . We also define where .\nIn this paper, we study a general case of BNPG game called heterogeneous BNPG game where every player need not have the same externality function and constant . If nothing is mentioned, by BNPG game, we are referring to a heterogeneous BNPG game. In this paper, we also study a special case of BNPG game called fully homogeneous BNPG game where for all and for all .\nIn this paper, we mainly focus on pure-strategy Nash Equilibrium (PSNE). A strategy profile is said to be a PSNE of a BNPG game with altruism if the following holds true for all and for all\nwhere .\nIn this paper, we also look at -Nash Equilibrium. Let be a distribution over that strategy set . We define Supp to be the support of the distribution , that is, Supp where denotes the probability of choosing the strategy by player . Now is an -Nash Equilibrium if the following holds true for all , for all , for all :\nwhere ."
28
+ },
29
+ {
30
+ "section_id": "2.1",
31
+ "parent_section_id": "2",
32
+ "section_name": "Altruistic Network Modification",
33
+ "text": "In this paper we study a special case of Altruistic Network Modification (ANM) which was also studied by [24 ###reference_24###]. If nothing is mentioned, by ANM, we are referring to the special case which we will now discuss. We are given a target profile , BNPG game on an input graph , an initial altruistic network , a cost function and budget . In this setting, we can add or delete an edge from and each such operation has a non-negative cost associated with it. We denote an instance of ANM with altruism by . The aim of ANM with altruism is to add and delete edges in such that becomes a PSNE and the total cost for adding and deleting these edges is atmost . Note that if the altruism is asymmetric (resp. symmetric), then we can add or delete directed (resp. undirected) edges only. We are not allowed to add any edge between two nodes if ."
34
+ },
35
+ {
36
+ "section_id": "2.2",
37
+ "parent_section_id": "2",
38
+ "section_name": "Standard Definitions",
39
+ "text": "[18 ###reference_18###]\nLet the number of edges and number of vertices in a graph be and respectively. Then circuit rank is defined to be ( is the number of connected components in the graph). Note that circuit rank is not the same as feedback arc set.\n[19 ###reference_19###]\nA tuple , where k is the parameter, is an instance of a parameterized problem. Fixed parameter tractability (FPT) refers to solvability in time for a given instance , where is a polynomial in the input size and is an arbitrary computable function of .\n[19 ###reference_19###]\nWe say a parameterized problem is para- if it is -hard even for some constant values of the parameter."
40
+ },
41
+ {
42
+ "section_id": "3",
43
+ "parent_section_id": null,
44
+ "section_name": "Results for computing equilibrium",
45
+ "text": "In this section, we present the results for deciding the existence of PSNE and finding MSNE in BNPG game with altruism. We have omitted few proofs. They are marked by () and they are available in the appendix.\n[25 ###reference_25###] showed that the problem of checking the existence of PSNE in BNPG game without altruism is polynomial time solvable when the input network is a tree. We now provide a non-trivial algorithm to show that the problem of checking the existence of PSNE in BNPG game with asymmetric altruism is polynomial time solvable when the input network is a tree.\nThe problem of checking the existence of PSNE in BNPG game with asymmetric altruism is polynomial time solvable when the input network is a tree.\nFor each player , let denote the degree of . At each node with parent , we maintain a table of tuples of valid configurations. A tuple is said to be a valid configuration if there exists a strategy profile such that the following holds true:\n,\nThe number of neighbours of and playing in is and respectively\nNone of the players in the sub-tree rooted at deviate from their strategy in\nNote that the root node doesn\u2019t have a parent. Hence, we consider an imaginary parent with and for all . Hence if there is a tuple in the table of root node then we can conclude that there is a PSNE otherwise we can conclude that there is no PSNE.\nLeaf nodes: We add a tuple to the table if , does not deviate if it plays and . Table for the leaf node can be clearly constructed in polynomial time.\nNon-leaf nodes: For each tuple we do the following. If there is no child of having a tuple of type in its table, then we don\u2019t add to the table of . Similarly if or , then we don\u2019t add to the table of . Otherwise we do the following. Let be the set of children of which have tuples of the type in their table but don\u2019t have tuples of the type . Let be the set of children of which have tuples of the type in their table but don\u2019t have tuples of the type . Let be the set of children of which have tuples of the type and in their table. First let us consider the case when . Now for each , we find the tuple in its table so that is maximized and let this value be . Similarly for each , we find the tuple in the table so that is maximized and let this value be . Also , . If then otherwise . Now we include the tuple in the table if the optimal value of the following ILP is at least .\nThe above ILP can be solved in polynomial time as follows. First sort the values in non-increasing order breaking ties arbitrarily and order the vertices in as as per this order, that is, if . Then we traverse the list of values in non-increasing order and for the corresponding to the value, we choose if otherwise we choose . We do this until or where . Remaining values are chosen in a way such that is satisfied. For a more detailed description, please see the Algorithm 1 ###reference_###.\nNow we show the correctness. Consider an optimal solution . Let be the smallest number such that as per our algorithm and in optimal solution it is . Similarly let be the smallest number such that as per our algorithm and in optimal solution it is . We now swap the values of the variables and (resp. and ) in the optimal solution without decreasing the value of the objective function. Let us assume that . Then it must be the case that otherwise , we will have as per our algorithm. 
Hence by swapping in the optimal solution, the value of the objective function decreases by at least which is a non-negative quantity. Similarly when , it must be the case that otherwise , we will have as per our algorithm. Hence by swapping in the optimal solution, the value of the objective function decreases by at least which is a non-negative quantity. Repeatedly finding such indices and then swapping the values of and (resp. and ) in the optimal solution leads to our solution.\nAs mentioned earlier, if there is any tuple in the table of the root , then we conclude that there is a PSNE; otherwise we conclude that there is no such PSNE.\nGiven a BNPG game with asymmetric altruism on a tree , a set and a pair of tuples and , we can decide in polynomial time if there exists a PSNE for the BNPG game with asymmetric altruism such that for every and the number of neighbors of playing in is for every .\nIn the proof of Theorem 1, just discard those entries from the table of which don\u2019t have and .\n[18] showed that the problem of checking the existence of PSNE in BNPG game without altruism is polynomial time solvable when the input network is a graph with bounded circuit rank. By using the algorithm in Theorem 1 as a subroutine and extending the ideas of [18] to our setting, we show the following.\nThe problem of checking the existence of PSNE in BNPG game with asymmetric altruism is polynomial time solvable when the input network is a graph with bounded circuit rank.\n[25, 18] showed that the problem of checking the existence of PSNE in BNPG game without altruism is polynomial time solvable when the input network is a complete graph. 
By extending their ideas to our setting, we show the following.\nThe problem of checking the existence of PSNE in BNPG game with asymmetric altruism is polynomial time solvable when the input network is a complete graph.\nThe problem of deciding the existence of a PSNE in a BNPG game with altruism where the altruistic network is empty is known to be NP-Complete [25]. Therefore, we look at the complexity of finding an \u03b5-Nash equilibrium in the BNPG game with symmetric altruism. We show that it is PPAD-hard. Towards that, we reduce from an instance of the directed public goods game, which is known to be PPAD-hard [22]. In the directed public goods game, we are given a directed network of players and the utility of a player is . Here , is the number of in-neighbours of playing and if and if .\nFinding an \u03b5-Nash equilibrium of the BNPG game with symmetric altruism is PPAD-hard, for some constant \u03b5.\nLet be an input instance of the directed public goods game. Now we create an instance of BNPG game with symmetric altruism.\nLet the constant be . , and . Now define the functions as follows:\nNow we show that given any \u03b5-Nash equilibrium of the BNPG game with altruism , we can find an \u03b5-Nash equilibrium of the directed public goods game in polynomial time. Let be an \u03b5-Nash equilibrium of the BNPG game with symmetric altruism.\nFor all , we have the following:\nHence can\u2019t be in the support of . Therefore , .\nNow we show that is an \u03b5-Nash equilibrium of the directed public goods game where . Now consider a strategy profile such that , we have . Now let be a strategy profile such that we have . Then we have the following:\nUsing the above equality and the fact that , , we have where . For all , for all , for all , we have the following:\nHence, given any \u03b5-Nash equilibrium of the BNPG game with symmetric altruism , we can find an \u03b5-Nash equilibrium of the directed public goods game in polynomial time. This concludes the proof of this theorem."
46
+ },
47
+ {
48
+ "section_id": "4",
49
+ "parent_section_id": null,
50
+ "section_name": "Results for Altruistic Network Modification",
51
+ "text": "In this section, we present the results for Altruistic Network Modification. First let us call ANM with altruism as heterogeneous ANM with altruism whenever the BNPG game is heterogeneous. Similarly let us call ANM with altruism as fully homogeneous ANM whenever the BNPG game is fully homogeneous. [18 ###reference_18###] depicted a way to reduce heterogeneous BNPG game to fully homogeneous BNPG game. By extending their ideas to our setting, we show Lemmata 1 ###reference_a1###, 2 ###reference_a2### and 3 ###reference_a3### which will be helpful to prove the theorems on hardness in this section.\nGiven an instance of heterogeneous ANM with asymmetric altruism such that cost of investing is same for all players in the heterogeneous BNPG game, we can reduce the instance heterogeneous ANM with asymmetric altruism to an instance of fully homogeneous ANM with asymmetric altruism.\nGiven an instance of heterogeneous ANM with symmetric altruism such that cost of investing is same for all players in the heterogeneous BNPG game, we can reduce the instance heterogeneous ANM with symmetric altruism to an instance of fully homogeneous ANM with symmetric altruism.\nGiven an instance of heterogeneous ANM with symmetric altruism such that input network has maximum degree 3, cost of investing is same for all players in the heterogeneous BNPG game and there are three types of externality functions, we can reduce the instance heterogeneous ANM with symmetric altruism to an instance of fully homogeneous ANM with symmetric altruism such that the input network has maximum degree 13.\nANM with asymmetric altruism is known to be -complete when the input network is a clique [24 ###reference_24###]. We show a similar result for trees by reducing from Knapsack problem.\nFor the target profile where all players invest, ANM with asymmetric altruism is -complete when the input network is a tree and the BNPG game is fully homogeneous.\nANM with symmetric altruism is known to be -complete when the input network is a clique [24 ###reference_24###]. We show a similar result for trees by reducing from Knapsack problem.\nFor the target profile where all players invest, ANM with symmetric altruism is -complete when the input network is a tree and the BNPG game is fully homogeneous.\nWe now show that ANM with symmetric altruism is known to be para- for the parameter maximum degree of the input network even when the BNPG game is fully homogeneous. Towards that, we reduce from an instance of -SAT which is known to be -complete [1 ###reference_1###]. -SAT is the special case of 3-SAT where each variable occurs exactly twice as negative literal and twice as positive literal .\nFor the target profile where all players invest, ANM with symmetric altruism is known to be para- for the parameter maximum degree of the input network even when the BNPG game is fully homogeneous.\nWe complement the previous result by showing that ANM with asymmetric altruism is for the parameter maximum degree of the input graph.\nFor any target profile, ANM with asymmetric altruism can be solved in time where is the maximum degree of the input graph.\n[24 ###reference_24###] showed that solving an instance of asymmetric altruistic design is equivalent to solving different instances of Minimum Knapsack problem and each of these instances have at most items. In Minimum Knapsack problem, we are give a set of items with costs and weights . The aim is to find a subset of items minimizing subject to the constraint that . 
We assume that otherwise we don\u2019t have any feasible solution. Let us denote the optimal value by . Let . Now consider the following integer linear program, which we denote by ILP:\nThe above integer linear program can be solved in time [10]. Now observe that for all , the optimal value of the ILP is at least . Similarly, for all , the optimal value of the ILP is less than . Now by performing a binary search for on the range and then solving the above ILP repeatedly, we can compute in time . See Algorithm 2 for more details.\nAs discussed earlier, an instance of asymmetric altruistic design is equivalent to solving different instances of the Minimum Knapsack problem, and each of these instances has at most items. Hence ANM with asymmetric altruism can be solved in time where is the input instance of ANM with asymmetric altruism.\nWe conclude our work by discussing the approximability of ANM with symmetric altruism. [24] showed an approximation algorithm for ANM with symmetric altruism when the target profile has all players investing. However, for an arbitrary target profile, they showed that ANM with symmetric altruism is NP-complete when the input network is a complete graph and the budget is infinite. We show a similar result for graphs with bounded degree by reducing from -SAT.\nFor an arbitrary target profile, ANM with symmetric altruism is para-NP-hard for the parameter maximum degree of the input network even when the BNPG game is fully homogeneous and the budget is infinite."
52
+ },
53
+ {
54
+ "section_id": "5",
55
+ "parent_section_id": null,
56
+ "section_name": "Conclusion and Future Work",
57
+ "text": "In this paper, we first studied the problem of deciding the existence of PSNE in BNPG game with altruism. We depicted polynomial time algorithms to decide the existence of PSNE in trees, complete graphs and graphs with bounded We also that the problem of finding MSNE in BNPG game with altruism is -Hard. Next we studied Altruistic Network modification. We showed that ANM with either symmetric or asymmetric altruism is -complete for trees. We also showed that ANM with symmetric altruism is para- for the parameter maximum degree whereas ANM with asymetric altruism is for the parameter maximum degree. One important research direction in ANM is to maximize the social welfare while ensuring that the target profile remains a PSNE. Another research direction is to improve the approximation algorithms of [24 ###reference_24###] for ANM with asymmetric altruism for trees and graphs with bounded degree. Another interesting future work is to look at other graphical games by considering altruism."
58
+ },
59
+ {
60
+ "section_id": "6",
61
+ "parent_section_id": null,
62
+ "section_name": "Missing Proofs",
63
+ "text": "Let be an instance of BNPG game with altruism. Let the circuit rank of be . W.l.o.g. let the graph have connected component. First, let us compute the minimum spanning tree of . Let . Note that . Let be the set of endpoints of the edges in . Note that . Let . For all , let denote the set of out-neighbours of in . For every pair of tuples and , we do the following.\n, let be the number of neighbours of in who play in the tuple . Now below we define and for all .\nNext we decide if there exists a PSNE for the instance of BNPG game with altruism such that\nfor every\n, the number of neighbours of in whose strategies are in is .\nThis can be decided by a polynomial time algorithm due to Corollary 1 ###reference_llary1###. We return yes if such a PSNE exists.\nIf no such PSNE exists for every choice of pair of tuples and , then we return no. The running time of our algorithm is . Now we show that our algorithm returns correct output for every input instance.\n\nIn one direction, let there be a PSNE for the instance of BNPG game with altruism. For all , let be the number of neighbours of in whose strategies are in . We now show that is a PSNE for the instance of BNPG game with altruism where and . Let denote the number of neighbors of in whose strategies are in . Due to the definition of , we have for and for . Therefore, we have for and for . Consider a player . If , then and hence, we have . Therefore, does not deviate from its decision of playing in . Similarly if , then and hence, we have . Therefore, does not deviate from its decision of playing in . Hence, is a PSNE for the instance of BNPG game with altruism where and . This implies that our algorithm will return yes.\n\nIn the other direction, let our algorithm return yes. This means for a pair of tuples and , we have a PSNE for the instance of BNPG game with altruism. We now show that is a PSNE for the instance of BNPG game with altruism. Consider a player . If , then . Therefore, we have . Hence, does not deviate from its decision of playing in . Similarly, if , then . Therefore, we have . Hence, does not deviate from its decision of playing in . Hence, is a PSNE for the instance of BNPG game with altruism.\nLet be an integer. First observe that if players are playing , then for every player is . Let be the set of players which do not deviate from playing if their neighbours are playing . Let be the set of players which do not deviate from playing if their neighbours are playing . Now we claim that there is a PSNE where players are playing iff , and .\n\nIn one direction, suppose there is a PSNE with players playing , then there is a subset of at least players such that , . Therefore, . Now in we have players who are playing . It implies that there a subset of at least players such that , . Therefore, . Now observe that a player plays 0 in otherwise would deviate from its decision. This implies that as does not deviate from playing (recall, is a PSNE). And . Hence, must be equal to .\n\nIn the other direction, let it be the case that , and . Now let us construct a strategy profile as follows. First, , set . Now consider a subset such that . We always can choose such a subset due to the following:\nNow, , set . For the rest of the players who are not part of both and , we set their strategies as . Now we claim that is a PSNE. A player with won\u2019t deviate as they are part of . Similarly, a player with won\u2019t deviate as they are part of . 
Hence is a PSNE.\n\nHaving proved our claim, we now present the polynomial time algorithm. First check whether the strategy profile where all players do not invest is PSNE or not. This can checked in polynomial time. If it is not a PSNE, then check whether the strategy profile where all players invest is PSNE or not. This can also be checked in polynomial time. If it is not a PSNE, then check whether there is a such that , , and . If there is such a , then there exists a PSNE otherwise there is no PSNE.\nLet be an instance of heterogeneous ANM with asymmetric altruism. , let . Using this instance, we create an instance == of fully homogeneous ANM with asymmetric altruism. First we set and .\n\nNext we construct the graph as follows.\nLet and . We now recursively define a function as follows.\nNow for all , we set and . Now for all , set . For all , set . Now we construct the altruistic network as follows:\nNow we define the cost function . For all such that is allowed be added to , . For remaining edges which are allowed to be added to , .\n\nThis completes the description of fully homogeneous ANM with asymmetric altruism. We now claim that the instance of heterogeneous ANM with asymmetric altruism is a yes instance iff the instance of fully homogeneous ANM with asymmetric altruism is a yes instance.\nIn one direction, let heterogeneous ANM with asymmetric altruism be a yes instance. For all , if was added to , then add to . Similarly for all , if was removed from , then remove from . Now we show that becomes a PSNE. For all , let the number of neighbours of playing in be . Then the number of neighbouts of playing in is . Now for all such that , we have . Hence doesn\u2019t deviate from playing . Similarly for all such that , we have . Hence doesn\u2019t deviate from playing . Remaining nodes in don\u2019t deviate from playing as and is at least . Hence fully homogeneous ANM with asymmetric altruism is a yes instance.\nIn other direction, let fully homogeneous ANM with asymmetric altruism be a yes instance. First observe that an edge not having both the endpoints in the set can\u2019t be added to otherwise the total cost of the edges added would exceed . For all , if was added to , then add to . Similarly for all , if was removed from , then remove from . Now we show that becomes a PSNE. For all , let the number of neighbours of playing in be . Then the number of neighbouts of playing in is . Now for all such that , we have . Hence doesn\u2019t deviate from playing . Similarly for all such that , we have . Hence doesn\u2019t deviate from playing . Hence heterogeneous ANM with asymmetric altruism is a yes instance.\nLet be an instance of heterogeneous ANM with symmetric altruism. , let . Using this instance, we create an instance == of fully homogeneous ANM with symmetric altruism. First we set and .\n\nNext we construct the graph as follows.\nLet and . We now recursively define a function as follows.\nNow for all , we set and . Now for all , set . For all , set . Now we construct the altruistic network as follows:\nNow we define the cost function . For all such that is allowed be added to , . For remaining edges which are allowed to be added to , .\n\nThis completes the description of fully homogeneous ANM with symmetric altruism. We now claim that the instance of heterogeneous ANM with symmetric altruism is a yes instance iff the instance of fully homogeneous ANM with symmetric altruism is a yes instance.\nIn one direction, let heterogeneous ANM with symmetric altruism be a yes instance. 
For all , if was added to , then add to . Similarly for all , if was removed from , then remove from . Now we show that becomes a PSNE. For all , let the number of neighbours of playing in be . Then the number of neighbouts of playing in is . Now for all such that , we have . Hence doesn\u2019t deviate from playing . Similarly for all such that , we have . Hence doesn\u2019t deviate from playing . Remaining nodes in don\u2019t deviate from playing as and is at least . Hence fully homogeneous ANM with symmetric altruism is a yes instance.\nIn other direction, let fully homogeneous ANM with symmetric altruism be a yes instance. First observe that an edge not having both the endpoints in the set can\u2019t be added to otherwise the total cost of the edges added would exceed . For all , if was added to , then add to . Similarly for all , if was removed from , then remove from . Now we show that becomes a PSNE. For all , let the number of neighbours of playing in be . Then the number of neighbouts of playing in is . Now for all such that , we have . Hence doesn\u2019t deviate from playing . Similarly for all such that , we have . Hence doesn\u2019t deviate from playing . Hence heterogeneous ANM with symmetric altruism is a yes instance.\nLet be an instance of heterogeneous ANM with symmetric altruism. Let us partition into three sets , and such that and , we have and . Let be a function such that if .\nUsing this instance, we create an instance == of fully homogeneous ANM with symmetric altruism. First we set and .\n\nNext we construct the graph as follows.\nIt is easy to observe that the maximum degree of is 13.\nLet and . We now recursively define a function as follows.\nFor all , let . For all , . Now for all , we set . Now for all , set . For all , set . Now we construct the altruistic network as follows:\nNow we define the cost function . For all such that is allowed be added to , . For remaining edges which are allowed to be added to , .\n\nThis completes the description of fully homogeneous ANM with symmetric altruism. We now claim that the instance of heterogeneous ANM with symmetric altruism is a yes instance iff the instance of fully homogeneous ANM with symmetric altruism is a yes instance.\nIn one direction, let heterogeneous ANM with symmetric altruism be a yes instance. For all , if was added to , then add to . Similarly for all , if was removed from , then remove from . Now we show that becomes a PSNE. For all , let the number of neighbours of playing in be . Then the number of neighbouts of playing in is . Now for all such that , we have . Hence doesn\u2019t deviate from playing . Similarly for all such that , we have . Hence doesn\u2019t deviate from playing . Remaining nodes in don\u2019t deviate from playing as and is at least . Hence fully homogeneous ANM with symmetric altruism is a yes instance.\nIn other direction, let fully homogeneous ANM with symmetric altruism be a yes instance. First observe that an edge not having both the endpoints in the set can\u2019t be added to otherwise the total cost of the edges added would exceed . For all , if was added to , then add to . Similarly for all , if was removed from , then remove from . Now we show that becomes a PSNE. For all , let the number of neighbours of playing in be . Then the number of neighbouts of playing in is . Now for all such that , we have . Hence doesn\u2019t deviate from playing . Similarly for all such that , we have . Hence doesn\u2019t deviate from playing . 
Hence heterogeneous ANM with symmetric altruism is a yes instance.\nWe reduce from the decision version of the KNAPSACK Problem. In Knapsack problem, we are give a set of items with profits and weights . The aim is to check whether there exists a subset of items such that and . Now we create an instance of ANM with asymmetric altruism. We set . The input graph (,) is defined as follows\nLet the initial altruistic graph be empty. Let the target profile have all the players investing (i.e, playing 1). Now we define the functions for all . for all . For all , for all . For all , for all . For all , . Now we define the cost of the introducing an atruistic edge. The cost of introducing the edge is . For remaining edges which are allowed to be added to , cost of adding is . Let the total budget be . This completes the description of the instance of ANM with asymmetric altruism.\n\nNow we show that KNAPSACK Problem is a yes instance iff ANM with asymmetric altruism is a yes instance. In other direction, let KNAPSACK problem be a yes instance. Then there is a subset of items such that and . Now if we introduce the set of altruistic edges , then the target profile becomes a PSNE and the total of introducing these edges is at most . Hence ANM with asymmetric altruism is a yes instance.\n\nIn the other direction, let the ANM with asymmetric altruism be a yes instance. Then there is a set of altruistic edges of total cost at most such that when they are introduced the target profile becomes a PSNE. Let . Hence if the subset of items is chosen then we have and . Hence the KNAPSACK problem is a yes instance.\n\nApplying Lemma 1 ###reference_a1### concludes the proof of this theorem.\nWe reduce from the decision version of the KNAPSACK Problem. In Knapsack problem, we are give a set of items with profits and weights . The aim is to check whether there exists a subset of items such that and . Now we create an instance of ANM with symmetric altruism as follows. We set . The input graph (,) is defined as follows\nLet the initial altruistic graph be empty. Let the target profile have all the players investing (i.e, playing 1). Now we define the functions for all . for all . For all , for all . For all , for all . For all , . Now we define the cost of the introducing an atruistic edge. For all , the cost of introducing the edge is . For all , the cost of introducing the edge is . Let the total budget be . This completes the description of the instance of ANM with symmetric altruism.\nNow we show that KNAPSACK Problem is a yes instance iff ANM with symmetric altruism is a yes instance. In other direction, let KNAPSACK problem be a yes instance. Then there is a subset of items such that and . Now if we introduce the set of altruistic edges , then the target profile becomes a PSNE and the total of introducing these edges is at most . Hence ANM with symmetric altruism is a yes instance.\nIn the other direction, let the ANM with symmetric altruism be a yes instance. Then there is a set of altruistic edges of total cost at most such that when they are introduced the target profile becomes a PSNE. Let . Hence if the subset of items is chosen then we have and . Hence the KNAPSACK problem is a yes instance.\nApplying Lemma 2 ###reference_a2### concludes the proof of this theorem.\nTo show the -hardness, we reduce from an instance of -SAT which we denote by . We define a function as and for all . We now create an instance of ANM with symmetric altruism as follows. Let the initial altruistic network be empty and . 
Input graph =(,) for the input BNPG game is as follows:\nNow observe that the degree of every vertex in is at most . We now define and . . . . . Target profile is defined as follows. For all , . For all , and . Now define the cost of adding edges to . The cost of adding any edge is . The budget is infinite. This completes the construction of the instance of ANM with symmetric altruism.\nNow we prove that the instance of -SAT is a yes instance iff the instance of ANM with symmetric altruism is a yes instance. In one direction, let -SAT be a yes instance and its satisfying assignment be . For all , we now do the following:\nLet be part of the clauses and . Then add the edges to if , otherwise add the edge .\nLet be part of the clauses and . Then add the edges to if , otherwise add the edge .\nIt is easy to observe that after adding the above edges, becomes a PSNE. Hence the instance of ANM with symmetric altruism is a yes instance.\n\nIn the other direction, let ANM with symmetric altruism be a yes instance. First observe that for all , either or must have been added to . Let . Let . Now for all , there is no such that is added to , otherwise will deviate from playing . Now consider the following assignment of the -SAT instance. For all , if we have , otherwise we have . Now we show that is a satisfying assignment. If not, there is a clause which is not satisfied. Hence are not added to . But then would deviate from playing , which is a contradiction. Hence is a satisfying assignment. Hence the instance of -SAT is a yes instance.\nApplying Lemma 3 concludes the proof of this theorem.\nTo show the -hardness, we reduce from an instance of -SAT which we denote by . We define a function as and for all . We now create an instance of ANM with symmetric altruism as follows. Let the initial altruistic network be empty and . Input graph =(,) for the input BNPG game is as follows:\nNow observe that the degree of every vertex in is at most . We now define and . . . . . Target profile has all the players investing. Now define the cost of adding edges to . The cost of adding an edge from the set is . The cost of adding an edge from the set is . The budget is . This completes the construction of the instance of ANM with symmetric altruism.\n\nNow we prove that the instance of -SAT is a yes instance iff the instance of ANM with symmetric altruism is a yes instance. In one direction, let -SAT be a yes instance and its satisfying assignment be . For all , we now do the following:\nLet be part of the clauses and . Then add the edges to if , otherwise add the edge .\nLet be part of the clauses and . Then add the edges to if , otherwise add the edge .\nThe total cost of adding the above edges is . It is easy to observe that after adding the above edges, becomes a PSNE. Hence the instance of ANM with symmetric altruism is a yes instance.\n\nIn the other direction, let ANM with symmetric altruism be a yes instance. First observe that for all , either or must have been added to . Also for no , both and are added to , otherwise the total cost of edges added would exceed . Let . Let . Now for all , must have been added to , otherwise will deviate from playing . Here are part of the clauses and . No other edge can be added to , otherwise the total cost of edges added would exceed . Now consider the following assignment of the -SAT instance. For all we have and for all we have . Observe that there is no such that . Now we show that is a satisfying assignment. If not, there is a clause which is not satisfied. Hence are not added to . 
But then would deviate from playing , which is a contradiction. Hence is a satisfying assignment. Hence the instance of -SAT is a yes instance.\nApplying Lemma 3 concludes the proof of this theorem."
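Both hardness reductions above start from the decision version of KNAPSACK, so a concrete reference point for that problem may help. The following minimal Python sketch (function name, variable names, and example numbers are illustrative, not taken from the paper) decides via standard dynamic programming whether some subset of items reaches a profit target P within a weight capacity W:

```python
# Decision version of KNAPSACK: is there a subset S with
# sum of profits >= P and sum of weights <= W?
def knapsack_decision(profits, weights, P, W):
    # best[c] = maximum profit achievable with total weight at most c
    best = [0] * (W + 1)
    for p, w in zip(profits, weights):
        # iterate capacities downwards so each item is used at most once
        for c in range(W, w - 1, -1):
            best[c] = max(best[c], best[c - w] + p)
    return best[W] >= P

# Items (profit, weight): (6, 3), (5, 2), (4, 4); target profit 10, capacity 5.
print(knapsack_decision([6, 5, 4], [3, 2, 4], P=10, W=5))  # True: items 0 and 1
```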
64
+ }
65
+ ],
66
+ "appendix": [],
67
+ "tables": {
68
+ "1": {
69
+ "table_html": "<figure class=\"ltx_table ltx_align_center\" id=\"S1.T1\">\n<figcaption class=\"ltx_caption\"><span class=\"ltx_tag ltx_tag_table\"><span class=\"ltx_text\" id=\"S1.T1.2.1.1\" style=\"font-size:90%;\">Table 1</span>: </span><span class=\"ltx_text\" id=\"S1.T1.3.2\" style=\"font-size:90%;\">List of results (our results are in bold). PSNE existence results hold for both symmetric and asymmetric altruism.</span></figcaption>\n</figure>",
70
+ "capture": "Table 1: List of results (our results are in bold). PSNE existence results hold for both symmetric and asymmetric altruism."
71
+ }
72
+ },
73
+ "image_paths": {},
74
+ "validation": true,
75
+ "references": [],
76
+ "url": "http://arxiv.org/html/2205.00442v2"
77
+ }
20240101/2208.09709v2.json ADDED
The diff for this file is too large to render. See raw diff
 
20240101/2208.09894v3.json ADDED
The diff for this file is too large to render. See raw diff
 
20240101/2211.01229v2.json ADDED
@@ -0,0 +1,277 @@
1
+ {
2
+ "title": "Fast convergent PML method for scattering with periodic surfaces: the exceptional case",
3
+ "abstract": "In the author\u2019s previous paper [20], exponential convergence was proved for the perfectly matched layers (PML) approximation of scattering problems with periodic surfaces in 2D. However, due to the overlapping of singularities, an exceptional case, i.e., when the wave number is a half integer, has to be excluded in the proof. However, numerical results for these cases still have fast convergence rate and this motivates us to go deeper into these cases. In this paper, we focus on these cases and prove that the fast convergence result for the discretized form. Numerical examples are also presented to support our theoretical results.",
4
+ "sections": [
5
+ {
6
+ "section_id": "1",
7
+ "parent_section_id": null,
8
+ "section_name": "Introduction",
9
+ "text": "We focus on the scattering problem with a periodic surface in a two dimensional space. Let be a periodic surface in which is defined as the graph of a periodic and bounded function . is the half space above . Let be a straight line lying above .\nFor the visulization we refer to Figure 1 ###reference_###.\n###figure_1### The scattering problem is described by the following equations with a compact supported source term :\nwith the upward propagating radiation condition (UPRC):\nAs is proved in [8 ###reference_8###], the UPRC is equivalent to the following transparent boundary condition:\nIn [7 ###reference_7###], it is proved that the problem is uniquely solvable in the weighted Sobolev space for , where is defined by the norm\nWhen the source term is quasi-periodic, there is a well-established framework to study these problems and both integral equations (see [11 ###reference_11###, 2 ###reference_2###, 22 ###reference_22###]) and finite element methods ([3 ###reference_3###, 4 ###reference_4###, 5 ###reference_5###]) are applied to produce good numerical approximations. However, for non-periodic incident fields, this framework no longer works thus new methods are necessary. If we simply ignore the periodicity of the surface, then techniques for rough surfaces can be applied, see [17 ###reference_17###, 1 ###reference_1###] for integral equation methods, and\n[7 ###reference_7###] for finite section methods.\nFor periodic domains, a well-known tool, the Floquet-Bloch transform, is widely used to rewrite the original problem defined in an unbounded periodic domain into an equivalent new problem defined in a bounded domain in a higher dimensional space. A series of numerical methods have been proposed based on this transform, see [13 ###reference_13###, 12 ###reference_12###, 14 ###reference_14###, 16 ###reference_16###, 15 ###reference_15###, 19 ###reference_19###].\nExcept for the difficulty brought by the unbounded periodic domain, the proposed methods also suffer from the complexity of the non-local Dirichlet-to-Neumann map in (2 ###reference_###). To this end, the perfectly matched layers are adopted to avoid this difficult. We add an absorbing layer above the physical domain, then the propagating wave is absorbed and decays very fast within this layer. Then on the outer boundary of the layer, the Dirichlet or Neumann data is approximated by. In this case, since the boundary condition becomes local and standard, it is very convenient to be implemented by the finite element methods. Since the PML only provides approximated solutions, the convergence result is of essential importance. For periodic domains we refer to [10 ###reference_10###] and for rough surfaces we refer to [9 ###reference_9###] for the convergence analysis. Although only linear convergence was proved globally for rough surfaces, the authors made a conjecture that actually exponential convergence holds locally. In [21 ###reference_21###], the author proved the exponential convergence for periodic surfaces except for the cases when the wave numbers are half integers, although numerical results show that fast convergence also happens in these cases. This interesting phenomenon motivates us to go deeper into this topic.\nFollowing [20 ###reference_20###], we still apply the Floquet-Bloch transform to both the original and the PML problem, and then the solutions are written as the integral of a family of periodic problems, with respect to the Floquet parameter. 
With the help of the Gauss quadrature rule, the integral is discretized, and the convergence analysis is carried out at each node. Finally, we obtain the fast convergence of the discretized form.\nThe rest of the paper is organized as follows. In the second section, the Floquet-Bloch transform is applied. Then in the third section, we introduce the PML approximation. Convergence analysis is discussed in Section 4, and numerical examples are presented in the last section."
10
+ },
11
+ {
12
+ "section_id": "2",
13
+ "parent_section_id": null,
14
+ "section_name": "Application of the Floquet-Bloch transform",
15
+ "text": "Since the problem is formulated in the periodic strip , it is convenient to define the following\ndomains in one periodicity cell. Let , and . For the visulization of the domains we refer to Figure 1 ###reference_###. For simplicity, suppose that is compactly supported in .\nWith where is the Floquet-Bloch transform (see Appendix), then is periodic w.r.t. and satisfies the following equations\nwith the transparent boundary condition:\nwhere is the -th Fourier coefficient of .\nThe variational form for (3 ###reference_###)-(4 ###reference_###) is, find such that\nholds for any test function .\nThe original solution is given by the inverse Floquet-Bloch transform as:\nNote that from [11 ###reference_11###], the problem (5 ###reference_###) is always well-posed for any . Thus the inverse Floquet-Bloch representation of is well defined.\nFrom [20 ###reference_20###], there are two critical points in , such that at leaset one of the square roots equals to . Let , then there are two integers , such that\nthen are the two critical points. For any , the term for all . We define the following operators and functions as follows:\nwhere .\nThe problem (3 ###reference_###)-(4 ###reference_###) is written as the following operator equation:\nwhere and depend analytically on near . From [11 ###reference_11###], the problem is uniquely solvable in , and the solution depends continuously on and there is a constant such that\nRecall that when , exponential convergence of the method has already been proved in [20 ###reference_20###], thus we only need to focus on the case that in this paper. When is even, then ; when is odd, then . Thus we choose the interval to be when is even, and when is odd. For simplicity, we only focus on the case that is an integer, thus and .\nWhen is close to , the square roots have small absolute values. In this case, is a small perturbation of thus is uniformly invertible.\nThen\nFor simplicity, let and .\nFrom Neumann series,\nwhere depends analytically on and is the series of all the repetitions of times and times . The number of terms in this series is . Then\nwhere depends analytically on .\nLet for , and for . Then and\nNote that all the functions in this form depend analytically for . We also get the original solution by changing the variable:\nMoreover, the integrands are extended analytically to a small neighbourhood of ."
16
+ },
17
+ {
18
+ "section_id": "3",
19
+ "parent_section_id": null,
20
+ "section_name": "PML approximation",
21
+ "text": "We add a PML layer above with thickness , described by the complex valued function (see Figure 2 ###reference_###). Let , where be a parameter, be a sufficiently smooth function:\nand is fixed with positive real and imaginary parts, an integer. Let\nThen is the approximated solution by the PML, with the equation:\nSince when , then the problem is rewritten in with a modificed transparent boundary condition:\n###figure_2### Let , then from [20 ###reference_20###], we get the formulation for the solution:\nwith the transparent boundary condition:\nHere .\nSimilar to the operator , we define by:\nThe problem (9 ###reference_###)-(10 ###reference_###) is written as the following operator equation:\nwhere depends analytically on and is also uniformly invertible for near . Define and .\nFrom Neumann series,\nSince are analytic functions, is extended analytically to a neighbourhood of in the complex plane.\nTo be align with the integral representation of the original solution , we also change the variable in the same way. Then and\nWe also get the solution from the inverse Floquet-Bloch transform by changing the variable:\nSimilarly, the two integrands are extended analytically to small neighbourhoods of ."
22
+ },
23
+ {
24
+ "section_id": "4",
25
+ "parent_section_id": null,
26
+ "section_name": "Discretization and convergence analysis",
27
+ "text": ""
28
+ },
29
+ {
30
+ "section_id": "4.1",
31
+ "parent_section_id": "4",
32
+ "section_name": "Gauss-Legendre quadrature rule",
33
+ "text": "We apply the Gauss-Legendre quadrature rule to discretize the integral on the interval . Let the integrand be denoted by (we omit the variable here for simplicity), where is analytic in . Thus we apply the Gauss-Legendre quadrature rule in each of the interval. Let , then\nFor any positive integer , let be the nodes and be the weights of the Gauss-Legendre quadrature rule, where . Then\nThen and hav the discretized forms:\nand\nTo estimate the error of the quadrature rule, we need the following theorem.\nLet be the ellipse with foci at and sum of the half-axes .\nLet be real analytic with complex analytic extension to . Denote by the integral over with integrand and by its approximation by the -point Gauss-Legendre quadrature. Then\nFrom analytic extension, the functions and are extended analytically to small neighbourhoods of and in the complex plane. From the change of variable, the integrals are written as the integral on , and the complex neighbourhoods are transformed into small neighbourhoods of . We can easily find a small ellipse that lies inside the small neighbourhood with . In this case, there is a constant such that"
34
+ },
35
+ {
36
+ "section_id": "4.2",
37
+ "parent_section_id": "4",
38
+ "section_name": "Convergence of the PML approximation",
39
+ "text": "First, we need some basic properties for and . From the definition of the weights, we have the following property:\nWe also need the estimations for . For details we refer to [6 ###reference_6###].\nLet be the Legendre polynomial. Let be the zeros of in increasing order. Then\nTo estimate the locations of the nodes , we also need the following basic property of the cosine function.\nFor ,\nfor ,\nFor , from Taylor\u2019s series, . Let .\nFirst, . Moreover,\nThis implies that for . The second argument comes from the fact that .\n\u220e\nLet , from the properties of the cosine function, we have:\nMoreover, from Lemma 3 ###reference_rem3###, for where , since ,\nLet , , then\nFor , since , . This implies that , ,\nLet and , where .\nFrom the properties of\nThus for all integers .\nFor each fixed , the discretized forms\nTo compare the above formulas, we need to estimate the function\nThe key result is described in the following lemma.\nFor any and , there are two constants such that\nSince is real, is either purely real or purely imaginary.\ni) When , is purely real and\nThen\nThus\nFrom the mean value theorem, there is a such that\nThen\nii) When , is purely imaginary and\nThen\nThus\nWith similar arguments as i), we get\nSince for positive , we can easily find the constants and to finish the proof.\n\u220e\nFrom above result, we conclude immediately that\nwhen .\nIn particular, for any and (), since is a positive integer, holds uniformly for all . Thus by modifying the constant ,\nFor , since , there is a constant such that\nThe following lemma is proved by this result immediately.\nThe function converges to uniformly w.r.t. ,\nfor the fixed constant .\nBased on above results, we conclude the following estimation.\nFor sufficiently large and , the following estimation holds:\nfor the fixed constants .\nFrom the discretization of and , using Lemma 5 ###reference_rem5### and (13 ###reference_###), (14 ###reference_###)-(15 ###reference_###),\nThe proof is finished by choosing proper constants and .\n\u220e\nTogether with (12 ###reference_###), we get the final conclusion.\nFor sufficient large and , there are two constants such that\nFrom above results, we get the following inequality immediately:\nNote that for a , there is a constant such that , then . Let where (if it is not an integer, then it is the closest integer), then\nLet , then we have two constants such that\n\u220e"
40
+ },
41
+ {
42
+ "section_id": "5",
43
+ "parent_section_id": null,
44
+ "section_name": "Numerical examples",
45
+ "text": "In this section, we present four numerical examples to support our theoretical result. For all the examples, the periodic surface is given by\nThe PML lies in the strip and . The source term is compactly supported and defined as\nFor the visualization of the structures and sources we refer to Figure 3 ###reference_###.\n###figure_3### ###figure_4### We take , and let the numerical solution when as the reference. To solve the periodic problem (8 ###reference_###), we apply the finite element method with the mesh size . For each numerical result, we take the value at and compare the relative -norm on this line segment:\nThe relative errors for are listed in Table 1 ###reference_###, and the relative errors are shown in Figure 4 ###reference_###. Note that in the pictures, the -axis is , and the -axis is . The red dots are computational results and the the black dashed lines are the linear regressions.\n###figure_5### ###figure_6### ###figure_7### ###figure_8### From the pictures, the linear regressions fit well for the computational results for all the four wave numbers. This implies that the numerical results coincide with our theoretical approach in Theorem 7 ###reference_rem7###. Although the convergence is not exponential, it is still faster than any algebraic order thus it converges super algebraically. This implies that the PML also provides very efficient numerical approximations for the exceptional cases which are excluded in [21 ###reference_21###]."
46
+ }
47
+ ],
48
+ "appendix": [],
49
+ "tables": {
50
+ "1": {
51
+ "table_html": "<figure class=\"ltx_table\" id=\"S5.T1\">\n<table class=\"ltx_tabular ltx_centering ltx_guessed_headers ltx_align_middle\" id=\"S5.T1.37\">\n<thead class=\"ltx_thead\">\n<tr class=\"ltx_tr\" id=\"S5.T1.5.5\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_th_row ltx_border_l ltx_border_r ltx_border_t\" id=\"S5.T1.1.1.1\"></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r ltx_border_t\" id=\"S5.T1.2.2.2\"></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r ltx_border_t\" id=\"S5.T1.3.3.3\"></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r ltx_border_t\" id=\"S5.T1.4.4.4\"></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r ltx_border_t\" id=\"S5.T1.5.5.5\"></th>\n</tr>\n</thead>\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S5.T1.9.9\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_l ltx_border_r ltx_border_tt\" id=\"S5.T1.9.9.5\">2</th>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_tt\" id=\"S5.T1.6.6.1\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_tt\" id=\"S5.T1.7.7.2\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_tt\" id=\"S5.T1.8.8.3\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_tt\" id=\"S5.T1.9.9.4\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T1.13.13\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_l ltx_border_r\" id=\"S5.T1.13.13.5\">4</th>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S5.T1.10.10.1\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S5.T1.11.11.2\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S5.T1.12.12.3\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S5.T1.13.13.4\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T1.17.17\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_l ltx_border_r\" id=\"S5.T1.17.17.5\">6</th>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S5.T1.14.14.1\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S5.T1.15.15.2\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S5.T1.16.16.3\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S5.T1.17.17.4\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T1.21.21\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_l ltx_border_r\" id=\"S5.T1.21.21.5\">8</th>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S5.T1.18.18.1\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S5.T1.19.19.2\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S5.T1.20.20.3\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S5.T1.21.21.4\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T1.25.25\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_l ltx_border_r\" id=\"S5.T1.25.25.5\">10</th>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S5.T1.22.22.1\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S5.T1.23.23.2\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S5.T1.24.24.3\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S5.T1.25.25.4\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T1.29.29\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_l ltx_border_r\" id=\"S5.T1.29.29.5\">12</th>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S5.T1.26.26.1\"></td>\n<td class=\"ltx_td 
ltx_align_center ltx_border_r\" id=\"S5.T1.27.27.2\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S5.T1.28.28.3\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S5.T1.29.29.4\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T1.33.33\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_l ltx_border_r\" id=\"S5.T1.33.33.5\">14</th>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S5.T1.30.30.1\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S5.T1.31.31.2\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S5.T1.32.32.3\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S5.T1.33.33.4\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T1.37.37\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_b ltx_border_l ltx_border_r\" id=\"S5.T1.37.37.5\">16</th>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r\" id=\"S5.T1.34.34.1\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r\" id=\"S5.T1.35.35.2\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r\" id=\"S5.T1.36.36.3\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r\" id=\"S5.T1.37.37.4\"></td>\n</tr>\n</tbody>\n</table>\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\">Table 1: </span>Relative errors with different \u2019s.</figcaption>\n</figure>",
52
+ "capture": "Table 1: Relative errors with different \u2019s."
53
+ }
54
+ },
55
+ "image_paths": {
56
+ "1": {
57
+ "figure_path": "2211.01229v2_figure_1.png",
58
+ "caption": "Figure 1: Periodic structures: domains and notations.",
59
+ "url": "http://arxiv.org/html/2211.01229v2/extracted/5325195/sample_ne.png"
60
+ },
61
+ "2": {
62
+ "figure_path": "2211.01229v2_figure_2.png",
63
+ "caption": "Figure 2: Perfectly matched layers.",
64
+ "url": "http://arxiv.org/html/2211.01229v2/extracted/5325195/sample_pml.png"
65
+ },
66
+ "3(a)": {
67
+ "figure_path": "2211.01229v2_figure_3(a).png",
68
+ "caption": "Figure 3: Left: structure; right: source term.",
69
+ "url": "http://arxiv.org/html/2211.01229v2/extracted/5325195/ne.png"
70
+ },
71
+ "3(b)": {
72
+ "figure_path": "2211.01229v2_figure_3(b).png",
73
+ "caption": "Figure 3: Left: structure; right: source term.",
74
+ "url": "http://arxiv.org/html/2211.01229v2/extracted/5325195/source.png"
75
+ },
76
+ "4(a)": {
77
+ "figure_path": "2211.01229v2_figure_4(a).png",
78
+ "caption": "Figure 4: Semi-log plots for relative errors.",
79
+ "url": "http://arxiv.org/html/2211.01229v2/extracted/5325195/k_1.png"
80
+ },
81
+ "4(b)": {
82
+ "figure_path": "2211.01229v2_figure_4(b).png",
83
+ "caption": "Figure 4: Semi-log plots for relative errors.",
84
+ "url": "http://arxiv.org/html/2211.01229v2/extracted/5325195/k_15.png"
85
+ },
86
+ "4(c)": {
87
+ "figure_path": "2211.01229v2_figure_4(c).png",
88
+ "caption": "Figure 4: Semi-log plots for relative errors.",
89
+ "url": "http://arxiv.org/html/2211.01229v2/extracted/5325195/k_25.png"
90
+ },
91
+ "4(d)": {
92
+ "figure_path": "2211.01229v2_figure_4(d).png",
93
+ "caption": "Figure 4: Semi-log plots for relative errors.",
94
+ "url": "http://arxiv.org/html/2211.01229v2/extracted/5325195/k_5.png"
95
+ }
96
+ },
97
+ "validation": true,
98
+ "references": [
99
+ {
100
+ "1": {
101
+ "title": "Solvability and spectral properties of integral equations on the real\nline: II. -spaces and applications.",
102
+ "author": "T. Arens, K. Haseloh, and S. N. Chandler-Wilde.",
103
+ "venue": "J. Int. Equ. Appl., 15:1\u201335, 2003.",
104
+ "url": null
105
+ }
106
+ },
107
+ {
108
+ "2": {
109
+ "title": "Scattering by biperiodic layered media: The integral equation\napproach, 2010.",
110
+ "author": "Tilo Arens.",
111
+ "venue": "Habilitation Thesis, Universit\u00e4t Karlsruhe.",
112
+ "url": null
113
+ }
114
+ },
115
+ {
116
+ "3": {
117
+ "title": "Finite element approximation of time harmonic waves in periodic\nstructures.",
118
+ "author": "G. Bao.",
119
+ "venue": "SIAM Journal on Numerical Analysis, 32(4):1155\u20131169, 1995.",
120
+ "url": null
121
+ }
122
+ },
123
+ {
124
+ "4": {
125
+ "title": "Numerical analysis of diffraction by periodic structures: TM\npolarization.",
126
+ "author": "G. Bao.",
127
+ "venue": "Numer. Math., 75:1\u201316, 1996.",
128
+ "url": null
129
+ }
130
+ },
131
+ {
132
+ "5": {
133
+ "title": "Adaptive finite-element method for diffraction gratings.",
134
+ "author": "G. Bao, Z. Chen, and H. Wu.",
135
+ "venue": "J. Opt. Soc. Am. A, 22:1106\u20131114, 2005.",
136
+ "url": null
137
+ }
138
+ },
139
+ {
140
+ "6": {
141
+ "title": "Zur Theorie der Kugelfunktionen.",
142
+ "author": "H. Bruns.",
143
+ "venue": "Journal f\u00fcr Mathematik, 90:322\u2013328, 1881.",
144
+ "url": null
145
+ }
146
+ },
147
+ {
148
+ "7": {
149
+ "title": "Variational approach in weighted Sobolev spaces to scattering by\nunbounded rough surfaces.",
150
+ "author": "S. N. Chandler-Wilde and J. Elschner.",
151
+ "venue": "SIAM. J. Math. Anal., 42:2554\u20132580, 2010.",
152
+ "url": null
153
+ }
154
+ },
155
+ {
156
+ "8": {
157
+ "title": "Existence, uniqueness, and variational methods for scattering by\nunbounded rough surfaces.",
158
+ "author": "S. N. Chandler-Wilde and P. Monk.",
159
+ "venue": "SIAM. J. Math. Anal., 37:598\u2013618, 2005.",
160
+ "url": null
161
+ }
162
+ },
163
+ {
164
+ "9": {
165
+ "title": "The PML for rough surface scattering.",
166
+ "author": "S. N. Chandler-Wilde and P. Monk.",
167
+ "venue": "Applied Numerical Mathematics, 59:2131\u20132154, 2009.",
168
+ "url": null
169
+ }
170
+ },
171
+ {
172
+ "10": {
173
+ "title": "An adaptive finite element method with perfectly matched absorbing\nlayers for the wave scattering by periodic structures.",
174
+ "author": "Z. Chen and H. Wu.",
175
+ "venue": "SIAM Journal on Numerical Analysis, 41(3):799\u2013826, 2003.",
176
+ "url": null
177
+ }
178
+ },
179
+ {
180
+ "11": {
181
+ "title": "Diffraction by periodic structures.",
182
+ "author": "A. Kirsch.",
183
+ "venue": "In L. P\u00e4varinta and E. Somersalo, editors, Proc. Lapland\nConf. on Inverse Problems, pages 87\u2013102. Springer, 1993.",
184
+ "url": null
185
+ }
186
+ },
187
+ {
188
+ "12": {
189
+ "title": "The Floquet-Bloch transform and scattering from locally perturbed\nperiodic surfaces.",
190
+ "author": "A. Lechleiter.",
191
+ "venue": "J. Math. Anal. Appl., 446(1):605\u2013627, 2017.",
192
+ "url": null
193
+ }
194
+ },
195
+ {
196
+ "13": {
197
+ "title": "Scattering of Herglotz waves from periodic structures and mapping\nproperties of the Bloch transform.",
198
+ "author": "A. Lechleiter and D.-L. Nguyen.",
199
+ "venue": "Proc. Roy. Soc. Edinburgh Sect. A, 231:1283\u20131311, 2015.",
200
+ "url": null
201
+ }
202
+ },
203
+ {
204
+ "14": {
205
+ "title": "A convergent numerical scheme for scattering of aperiodic waves from\nperiodic surfaces based on the Floquet-Bloch transform.",
206
+ "author": "A. Lechleiter and R. Zhang.",
207
+ "venue": "SIAM J. Numer. Anal, 55(2):713\u2013736, 2017.",
208
+ "url": null
209
+ }
210
+ },
211
+ {
212
+ "15": {
213
+ "title": "A Floquet-Bloch transform based numerical method for scattering\nfrom locally perturbed periodic surfaces.",
214
+ "author": "A. Lechleiter and R. Zhang.",
215
+ "venue": "SIAM J. Sci. Comput., 39(5):B819\u2013B839, 2017.",
216
+ "url": null
217
+ }
218
+ },
219
+ {
220
+ "16": {
221
+ "title": "Non-periodic acoustic and electromagnetic scattering from periodic\nstructures in 3d.",
222
+ "author": "A. Lechleiter and R. Zhang.",
223
+ "venue": "Comput. Math. Appl., 74(11):2723\u20132738, 2017.",
224
+ "url": null
225
+ }
226
+ },
227
+ {
228
+ "17": {
229
+ "title": "A Nystr\u00f6m method for a class of integral equations on the real\nline with applications to scattering by diffraction gratings and rough\nsurfaces.",
230
+ "author": "A. Meier, T. Arens, S. N. Chandler-Wilde, and A. Kirsch.",
231
+ "venue": "J. Int. Equ. Appl., 12:281\u2013321, 2000.",
232
+ "url": null
233
+ }
234
+ },
235
+ {
236
+ "18": {
237
+ "title": "Boundary Element Methods.",
238
+ "author": "S. Sauter and C. Schwab.",
239
+ "venue": "Springer, Berlin-New York, 2007.",
240
+ "url": null
241
+ }
242
+ },
243
+ {
244
+ "19": {
245
+ "title": "A high order numerical method for scattering from locally perturbed\nperiodic surfaces.",
246
+ "author": "R. Zhang.",
247
+ "venue": "SIAM J. Sci. Comput., 40(4):A2286\u2013A2314, 2018.",
248
+ "url": null
249
+ }
250
+ },
251
+ {
252
+ "20": {
253
+ "title": "Exponential convergence of perfectly matched layers for scattering\nproblems with periodic surfaces.",
254
+ "author": "R. Zhang.",
255
+ "venue": "SIAM J. Numer. Math., 60(2):804\u2013823, 2022.",
256
+ "url": null
257
+ }
258
+ },
259
+ {
260
+ "21": {
261
+ "title": "High order complex contour discretization methods to simulate\nscattering problems in locally perturbed periodic waveguides.",
262
+ "author": "R. Zhang.",
263
+ "venue": "To appear in SIAM J. Sci. Comput., 2022.",
264
+ "url": null
265
+ }
266
+ },
267
+ {
268
+ "22": {
269
+ "title": "Near-field imaging of periodic inhomogeneous media.",
270
+ "author": "R. Zhang and B. Zhang.",
271
+ "venue": "Inverse Problems, 30(4):045004, 2014.",
272
+ "url": null
273
+ }
274
+ }
275
+ ],
276
+ "url": "http://arxiv.org/html/2211.01229v2"
277
+ }
20240101/2212.10772v5.json ADDED
The diff for this file is too large to render. See raw diff
 
20240101/2302.14420v2.json ADDED
@@ -0,0 +1,577 @@
1
+ {
2
+ "title": "Estimation-of-Distribution Algorithms for Multi-Valued Decision Variables",
3
+ "abstract": "The majority of research on estimation-of-distribution algorithms (EDAs) concentrates on pseudo-Boolean optimization and permutation problems, leaving the domain of EDAs for problems in which the decision variables can take more than two values, but which are not permutation problems, mostly unexplored.\nTo render this domain more accessible, we propose a natural way to extend the known univariate EDAs to this setting.\nDifferent from a na\u00efve reduction to the binary case, our approach avoids additional constraints.",
4
+ "sections": [
5
+ {
6
+ "section_id": "1",
7
+ "parent_section_id": null,
8
+ "section_name": "Introduction",
9
+ "text": "Estimation-of-distribution algorithms (EDAs [41 ###reference_41###]) are randomized search heuristics that evolve a probabilistic model of the search space (that is, a probability distribution over the search space). In contrast to solution-based algorithms such as classic evolutionary algorithms, which only have the choice between the two extreme decisions of keeping or discarding a solution, EDAs can take into account the information gained from a function evaluation also to a smaller degree. This less short-sighted way of reacting to new insights leads to several proven advantages, e.g., that EDAs can be very robust to noise [26 ###reference_26###, 34 ###reference_34###]. Since the evolved distributions often have a larger variance, EDAs can also be faster in exploring the search space, in particular, when it comes to leaving local optima, where they have been shown to significantly outperform simple evolutionary algorithms [28 ###reference_28###, 10 ###reference_10###, 52 ###reference_52###, 4 ###reference_4###, 16 ###reference_16###, 55 ###reference_55###].\nWhile EDAs have been employed in a variety of settings and to different types of decision variables [41 ###reference_41###, 32 ###reference_32###], the number of results in which they have been used for discrete optimization problems with decision variables taking more than two values, other than permutation problems, is scarce [43 ###reference_43###, 44 ###reference_44###, 46 ###reference_46###, 45 ###reference_45###, 38 ###reference_38###].\nAll of these results have in common that they propose specific EDAs to deal with multi-valued problems.\nTo the best of our knowledge, no systematic way to model EDAs for the multi-valued domain exists, even not for the easiest case of EDAs that do not model dependencies, so-called univariate EDAs (we note that multi-variate EDAs are much less understood, e.g., despite some theoretical works in this direction [33 ###reference_33###, 17 ###reference_17###], there are no proven runtime guarantees for these algorithms).\nSince this might be a lost opportunity, we undertake the first steps towards a framework of univariate EDAs for problems with decision variables taking more than two values (but different from permutation problems).\nWe first note that the strong dependencies that distinguish a permutation problem from just a problem defined on have led to very particular EDAs for permutation problems. We did not see how to gain insights from these results for general multi-valued problems.\nWe therefore define EDAs for multi-valued decision variables from scratch, that is, without building on any related existing work. We note that, in principle, one could transform a multi-valued problem into a binary one by having, for each variable taking different values, binary variables, each indicating that the variable has the corresponding value. This would lead to a constrained optimization problem with the additional constraints that exactly one of these variables can take the value . This might be a feasible approach, but since such constraints generally impose additional difficulties, we propose a way that does not need an additional treatment of constraints (in other words, we set up our EDAs in a way that these constraints are satisfied automatically).\nWe defer the details to Section 4.2 ###reference_### and only sketch the rough idea of our approach here. For each variable taking values, without loss of generality the values , we have sampling frequencies that always add up to . 
When sampling a value for the variable, we do this mutually exclusively, that is, the variable takes the value with probability . This mutual exclusion in the sampling immediately gives that the frequency update does not violate the property that the frequencies add up to . Consequently, this appears to be a convenient (and in fact very natural) set-up for a multi-valued EDA. We note that there are some non-trivial technical questions to be discussed when working with frequency borders, such as in the classical binary case, but we also come up with a simple and natural solution for this aspect.\nAs a first step towards understanding this multi-valued EDA framework, we study how prone it is to genetic drift. Genetic drift in EDAs means that sampling frequencies not only move because of a clear signal induced by the objective function, but also due to random fluctuations in the sampling process. This has the negative effect that even in the complete absence of a fitness signal, the EDA develops a preference for a particular value of this decision variable. From a long sequence of works, see Section 5 for the details, it is well understood how the time for this genetic-drift effect to become relevant depends on the parameters of the EDA [21]. Consequently, if one plans to run the EDA for a certain number of iterations, then this quantification tells the user how to set the parameters so as to avoid genetic drift within this time period.\nSince such a quantification is apparently helpful in the application of EDAs, we first extend this quantification to multi-valued EDAs. When looking at the relatively general tools used in [21], this appears straightforward, but it turns out that such a direct approach does not give the best possible result. The reason is that for multi-valued decision variables, the martingale describing a frequency of a neutral variable over time has a lower variance (in the relevant initial time interval). To profit from this, we use a fairly technical martingale concentration result of McDiarmid [37], which, to the best of our knowledge, has not been used before in the analysis of randomized search heuristics. Thanks to this result, we show that the time for genetic drift to become relevant is (only) by a factor of lower than in the case of binary decision variables (Theorem 3).\nWe use this result to conduct a mathematical runtime analysis of the multi-valued univariate marginal distribution algorithm (-UMDA) on the -valued LeadingOnes problem in the regime with low genetic drift. This problem is interesting since a typical optimization process optimizes the variables sequentially in a fixed order. Consequently, in a run of an EDA on LeadingOnes, there is typically always one variable with undecided sampling frequency that has a strong influence on the fitness. Hence, this problem is suitable to study how fast an EDA reacts to a strong fitness signal.\nOur runtime analysis shows that also in the multi-valued setting, EDAs can react fast to a strong fitness signal. Since now the frequencies start at the value , the time to move a frequency is a little longer, namely instead of constant when the sample size is by a sufficient constant factor larger than the selection size . This still appears to be a small price for having to deal with decision alternatives. 
This larger time also requires the model update to be chosen more conservatively so as to prevent genetic drift (for this, we profit from our analysis of genetic drift), leading to another factor in the runtime. In summary, we prove (Theorem 6) that the UMDA can optimize the -valued LeadingOnes problem in time , a bound that agrees with the one shown in [15] for the classical case . Our upper bound is tight apart from a factor logarithmic in , that is, we prove a lower bound of order in Theorem 10.\nOverall, our work shows that -valued EDAs can be effective problem solvers, suggesting to apply such EDAs more in practice.\nThis work extends our prior extended abstract [3] by adding a lower bound for the runtime of the -valued UMDA on the -valued LeadingOnes problem. Also, it contains all proofs that were omitted in the conference version for reasons of space. To avoid misunderstandings, we note that this work bears no similarity or overlap with the paper Generalized Univariate Estimation-of-Distribution Algorithms [12], which studies generalized update mechanisms for EDAs for binary decision variables.\nThis article is organized as follows. We describe previous works in the following section and set the notation in the subsequent section. In Section 4, we propose our multi-valued EDA framework. Our main technical results, the analysis of genetic drift and the runtime analysis for the LeadingOnes problem, can be found in Sections 5 and 6. The paper ends with a short conclusion."
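The mathematical definition of the r-valued LeadingOnes problem is lost in this extraction, so the following sketch encodes the natural generalization one would expect: the fitness of an individual is the length of its longest prefix in which every entry equals a fixed target value (here r - 1). Whether the paper uses exactly this target value is an assumption; for r = 2 the classic LeadingOnes is recovered.

```python
# Hedged sketch of an r-valued LeadingOnes: length of the longest prefix
# whose entries all equal the (assumed) target value r - 1.
def leading_ones_r(x, r):
    target = r - 1
    count = 0
    for value in x:
        if value != target:
            break
        count += 1
    return count

print(leading_ones_r([2, 2, 0, 2], r=3))  # 2: the first two entries hit the target
```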
10
+ },
11
+ {
12
+ "section_id": "2",
13
+ "parent_section_id": null,
14
+ "section_name": "Related Work",
15
+ "text": "Since the technical sections of this work contain three relatively independent topics\u2014the definition of multi-valued EDAs, genetic drift, and a runtime analysis on the LeadingOnes benchmark\u2014we present the previous works relevant to these topics in the respective sections. We hope that this eases the reading of this paper.\nThis being a theoretical work, we do not discuss in detail how EDAs have been successfully used to solve real-worlds optimization problems and refer to the surveys [32 ###reference_32###, 41 ###reference_41###].\nTheoretically oriented works have accompanied the development and use of EDAs for a long time, see, e.g., the early works on genetic drift described in Section 5 ###reference_###. The first mathematical runtime analysis of an EDA was conducted by Droste [23 ###reference_23###]. This seminal work, showing an asymptotically tight bound for the runtime of the compact genetic algorithm on the OneMax benchmark, already contains many ideas that are now frequently used in the runtime analysis of EDAs. It also observed that EDAs optimize problems in a very different manner, visible from the different runtimes shown on two linear functions, which contrasts the famous analysis of how the EA optimizes linear functions by Drose, Jansen, and Wegener [24 ###reference_24###]. Interestingly, apart from the works of one research group [6 ###reference_6###, 5 ###reference_5###, 7 ###reference_7###], Droste\u2019s ground-breaking work [23 ###reference_23###] was not followed up by other runtime analyses for around ten years. Since then, starting with works like [8 ###reference_8###, 25 ###reference_25###, 50 ###reference_50###, 31 ###reference_31###], the runtime analysis of EDAs has become very active and has, despite the technical challenges in analyzing such complex algorithms, produced many fundamental results and a good understanding of some of the working principles of EDAs. We refer to the recent survey [30 ###reference_30###] for more details."
16
+ },
17
+ {
18
+ "section_id": "3",
19
+ "parent_section_id": null,
20
+ "section_name": "Preliminaries",
21
+ "text": "We denote by the set of all natural numbers, including , and by the set of all real numbers.\nAdditionally, for , let , and let .\nWhen we say that a random process is a martingale and do not specify a filtration, then we mean that the process is a martingale with respect to its natural filtration.\nFurther, for all and , we denote the -norm of , that is, the sum of the entries of , by .\nLet and .\nWe consider the maximization of functions of the form , which we call r-valued fitness functions.\nWhenever we mention an -valued fitness function, we implicitly assume that its dimension and the cardinality of its domain are given.\nWe call each an individual, and we call the fitness of .\nWe say that a random variable stochastically dominates another random variable , not necessarily defined on the same probability space, denoted by , if and only if for all , we have ."
22
+ },
23
+ {
24
+ "section_id": "4",
25
+ "parent_section_id": null,
26
+ "section_name": "Multi-Valued EDAs",
27
+ "text": "In this section, we generalize the three common univariate EDAs for the binary decision variable to multi-valued decision variables.\nWe call these variants multi-valued EDAs.\nTo this end, we briefly discuss the binary case in Section 4.1 ###reference_### before presenting our framework in Section 4.2 ###reference_###. In our presentation, we concentrate on the UMDA [39 ###reference_39###] and then briefly present the generalizations of the other two common univariate EDAs.\nWe note that for classic evolutionary algorithms, multi-valued decision variables have been discussed to some extent [13 ###reference_13###, 19 ###reference_19###, 20 ###reference_20###, 29 ###reference_29###, 56 ###reference_56###, 36 ###reference_36###, 11 ###reference_11###]. Due to the very different working principles, we could not see how these results help in designing and analyzing multi-valued EDAs."
28
+ },
29
+ {
30
+ "section_id": "4.1",
31
+ "parent_section_id": "4",
32
+ "section_name": "Binary EDAs",
33
+ "text": "Binary EDAs refer to EDAs for pseudo-Boolean optimization, that is, the optimization of functions .\nThis setting is a special case of optimizing -valued fitness functions, for .\nThe probabilistic model of univariate EDAs in this domain is a length- vector of probabilities (the frequency vector), where the probability (the frequency) at position denotes the probability that a sample has a at position , independent of the other positions.\nFormally, for all , it holds that , where we assume that .\nBinary EDAs commonly take at least a parameter (the population size) as well as a pseudo-Boolean fitness function as input and optimize as follows:\nInitially, the frequency vector models the uniform distribution, that is, each frequency is .\nThen, in an iterative manner, the algorithm produces samples (the population) independently via , and it updates based on these samples and their fitness.\nThis process is repeated until a user-defined termination criterion is met.\nIn order to prevent frequencies from only producing a single value (which is the case if a frequency is or ), after the frequency vector is updated, it is typically restricted to the interval .\nThat is, if the frequency is less than , it is set to , and if it is greater than , it is set to .\nThe extreme values of this interval are referred to as the borders, and the value is called the margin of the algorithm.\nUMDA.\nAlgorithm 1 ###reference_hm1### shows the univariate marginal distribution algorithm (UMDA) [39 ###reference_39###], which is a well established binary EDA, both in the empirical [41 ###reference_41###] and the theoretical [18 ###reference_18###] domain.\nNext to the population size and a fitness function, the UMDA also utilizes a parameter , called the selection size.\nIn each iteration, the UMDA selects out of the samples that have the best fitness (breaking ties uniformly at random).\nEach frequency is then set to the relative frequency of s at the respective position (algorithm 1 ###reference_hm1###).\nAfterwards, the frequencies are restricted to lie within the frequency borders."
34
+ },
35
+ {
36
+ "section_id": "4.2",
37
+ "parent_section_id": "4",
38
+ "section_name": "The Multi-Valued EDA Framework",
39
+ "text": "We propose a framework for EDAs for optimizing -valued fitness functions.\nWe call the resulting EDAs -valued EDAs.\nOur framework closely follows the one presented in Section 4.1 ###reference_###.\nThat is, an -valued EDA starts with a probabilistic model initialized to represent the uniform distribution, and it then generates iteratively samples independently, based on its model.\nThis model is then updated and afterwards restricted such that it does not contain the extreme probabilities and .\nThe difference to the framework for binary EDAs lies in how the probabilistic model of -valued EDAs is represented and how it is restricted from containing extreme probabilities.\nThe probabilistic model.\nThe probabilistic model of an -valued EDA is an matrix (the frequency matrix), where each row forms a vector (the frequency vector at position ) of probabilities (the frequencies) that sum to .\nAs in the binary case, samples from are created independently for each position.\nWhen creating an individual , then, for all and all , the probability that has value is .\nFormally, for all , it holds that , where we assume that .\nThe frequency matrix is initialized such that each frequency is , representing the uniform distribution.\nWhen performing an update to , it is important to make sure that each row sums to .\nRestricting the probabilistic model.\n\nThe aim of restricting the frequency matrix is to clamp all frequencies, for some values (the lower and upper border, respectively) with , to .\nThat is, if a frequency is less than , it should be after the restriction, and if it is greater than , it should be afterwards.\nFor such a restriction, it is important for each row that the frequency vector sums to after the restriction.\nThis process is not straightforward.\nIf , and is updated to , then this creates a change in probability mass of .\nHence, simply updating to can result in all frequencies of summing to a value other than after the restriction.\nWe address the problem above as follows.\nTo this end, let be the lower and upper border, respectively, with and .\nFurther, let be a row of the frequency matrix we wish to restrict, let be the frequency vector after the update but before the restriction (with ), and let be the vector after clamping it to but before taking care that the frequencies sum to .\nWe define the restriction of to , denoted by , to be the vector where each frequency\u2019s share above is reduced by the surplus of the probability relatively to the share above .\nFormally, for all , it holds that\nNote that denotes how much probability mass should be in the frequency vector, above .\nThe resulting frequency vector sums to , since\nFurther, each frequency is at least , since this value is added at the end of eq. 
1 and since by definition of . Last, since each frequency is at least after restricting, the largest a frequency can be is .\nIn order to disallow the extreme frequencies and but to stay close to the binary case, we propose to choose the upper border as .\nFollowing our ideas above, this implies that the lower border is .\nThis is consistent with the binary case but generalizes to the -valued domain.\nWe say that an EDA is without margins if and only if the lower border is and the upper border is .\nThat is, the restriction of the frequencies does not take place.\n-UMDA.\nWe generalize the UMDA (Algorithm 1) to the -UMDA (Algorithm 2), utilizing our framework.\nThis leads to the same generalization mentioned by Santana et al. [46].\nLike the UMDA, the -UMDA has three parameters, namely the population size , the selection size , and the -valued fitness function .\nIt also updates its frequencies analogously to the UMDA by choosing the best individuals from the population of size and then setting each frequency at position for value to the relative frequency of value at position among the best individuals (Algorithm 2).\nWe note that this results in a valid frequency vector for each row , since\n-PBIL.\nAnother popular univariate EDA is population-based incremental learning (PBIL [2]).\nIt operates very similarly to the UMDA, with the only difference being in how it performs an update.\nIn contrast to the UMDA, the PBIL does not set a frequency to the relative frequency of respective values at a position but, instead, computes the convex combination of the relative frequency with the current frequency value in its frequency vector.\nTo this end, it utilizes a parameter , the scaling factor.\nWe generalize the PBIL to the -PBIL (Algorithm 3).\nEach frequency vector of the -PBIL sums to (before the restriction) because it is a convex combination of the -UMDA\u2019s update (which sums to ) and the current frequency vector (which also sums to ).\n-cGA.\nAnother popular univariate EDA is the compact genetic algorithm (cGA [27]).\nThe cGA only has a single parameter , the hypothetical population size, and it creates only two samples each iteration.\nIt ranks these two samples by fitness and then adjusts each frequency by such that the frequency of the value of the better sample is increased and that of the worse sample decreased.\nWe generalize the cGA to the -cGA (Algorithm 4).\nEach frequency vector of the -cGA sums to after the update (before the restriction) because exactly one entry is increased by and exactly one value is decreased by this amount (noting that this can be the same frequency, in which case no change is made overall)."
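The restriction step is the one non-routine part of this framework, and since the displayed formula of eq. (1) is lost in this extraction, the following NumPy sketch reconstructs it from the prose: clamp the updated row to the borders, then rescale the mass above the lower border so that the row sums to 1 again. The borders follow the proposal above, namely lower = 1/((r-1)n) and upper = 1 - 1/n; the formula may differ cosmetically from the paper's eq. (1).

```python
# Restriction of one frequency-matrix row, reconstructed from the prose.
import numpy as np

def restrict(q, r, n):
    lower = 1.0 / ((r - 1) * n)                # lower border
    upper = 1.0 - 1.0 / n                      # upper border
    clamped = np.clip(q, lower, upper)
    surplus = clamped.sum() - r * lower        # mass currently above the border
    # scale the share above the lower border so the row sums to 1 again
    return lower + (clamped - lower) * (1.0 - r * lower) / surplus

q = np.array([0.0, 0.7, 0.3])                  # an updated row that hit frequency 0
p = restrict(q, r=3, n=10)
print(p, p.sum())    # every entry >= 1/20, and the row sums to 1 (up to rounding)
```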
40
+ },
41
+ {
42
+ "section_id": "5",
43
+ "parent_section_id": null,
44
+ "section_name": "Genetic Drift",
45
+ "text": "We prove an upper bound on the effect of genetic drift for -valued EDAs (Theorem 3 ###reference_rem3###) in a similar fashion as Doerr and Zheng [21 ###reference_21###] for binary decision variables.\nThis allows us to determine parameter values for EDAs that avoid the usually unwanted effect of genetic drift.\nThe main novelty of our result over that by Doerr and Zheng [21 ###reference_21###] is that we use a slightly technical martingale concentration result due to McDiarmid [37 ###reference_37###] that allows one to profit from small variances.\nSuch an approach is necessary. If one directly applies the methods presented by Doerr and Zheng [21 ###reference_21###], one obtains estimates for the genetic drift times that are by a factor of lower than ours (that is, the genetic drift effect appears times stronger).\nIn Sections 5.1 ###reference_### and 5.2 ###reference_###, we first present a general introduction to the phenomenon of genetic drift.\nIn Section 5.3 ###reference_###, we then prove a concentration result on neutral positions (Theorem 3 ###reference_rem3###).\nLast, in Section 5.4 ###reference_###, we consider the setting of weak preference."
46
+ },
47
+ {
48
+ "section_id": "5.1",
49
+ "parent_section_id": "5",
50
+ "section_name": "Introduction to Genetic Drift",
51
+ "text": "In EDAs, genetic drift means that a frequency does not reach or approach one of the extreme values or because of a clear signal from the objective function but due to random fluctuations from the stochasticity of the process.\nWhile there is no proof that genetic drift is always problematic, the general opinion is that this effect should better be avoided. This is supported by the following observations and results: (i) When genetic drift is strong, many frequencies (in the binary case) approach the extreme values and and, consequently, the behavior of the EDA comes close to the one of a mutation-based EA, so the advantages of an EDA might be lost. (ii) The vast majority of the runtime results for EDAs, especially those for harder scenarios like noise [26 ###reference_26###] or multimodality [10 ###reference_10###], have only been shown in regimes with low genetic drift. (iii) For some particular situations, a drastic performance from genetic drift was proven. For example, the UMDA with standard selection pressure but small population size has a runtime exponential in on the DeceptiveLeadingBlocks problem [33 ###reference_33###]. In contrast, when the population size is large enough to prevent genetic drift, here , then the runtime drops to with high probability.\nGenetic drift in EDAs has been studied explicitly since the ground-breaking works of Shapiro [47 ###reference_47###, 48 ###reference_48###, 49 ###reference_49###], and it appears implicitly in many runtime analyses such as [22 ###reference_22###, 53 ###reference_53###, 54 ###reference_54###, 51 ###reference_51###, 35 ###reference_35###, 16 ###reference_16###]. Experimental evidences for the negative impact of genetic drift can further be found in [30 ###reference_30###, 21 ###reference_21###, 40 ###reference_40###].\nThe most final answer to the genetic-drift problem for univariate EDAs, including clear suggestions to choose the parameters as to avoid genetic drift, was given by Doerr and Zheng [21 ###reference_21###]. In the case of the UMDA (and binary decision variables, that is, the classic model), their work shows that a neutral frequency (defined in Section 5.2 ###reference_###) stays with high probability in the middle range for the first iterations if . This bound is tight. When regarding frequencies together, a value of with implicit constant computable from [21 ###reference_21###, Theorem ] ensures with high probability that all frequencies stay in the middle range for at least iterations. Hence these bounds give a clear indication how to choose the selection size when aiming to run the UMDA for a given number of iterations. We note that the quantification of genetic drift can also be used to design automated ways to choose parameters, see the work by Zheng and Doerr [57 ###reference_57###], when no a-priori estimate on is available.\nGiven the importance of a good understanding of genetic drift, we now analyze genetic drift for multi-valued EDAs, more specifically, for the /\u0304UMDA. We are optimistic that, analogous to the work by Doerr and Zheng [21 ###reference_21###], very similar arguments can be applied for other main univariate EDAs."
52
+ },
53
+ {
54
+ "section_id": "5.2",
55
+ "parent_section_id": "5",
56
+ "section_name": "Martingale Property of Neutral Positions",
57
+ "text": "Genetic drift is usually studied via neutral positions of a fitness function. Let be an -valued fitness function. We call a position (as well as, for an individual , its corresponding variable and the associated frequencies of an EDA) neutral (w.r.t. to ) if and only if, for all , the value has no influence on the value of , that is, if and only if for all individuals such that for all it holds that , we have .\nAn important property of neutral variables that we capitalize on in our analysis of genetic drift is that their frequencies in typical EDAs without margins form martingales [21 ###reference_21###]. This observation extends the corresponding one for EDAs for binary representations. We make this statement precise for the /\u0304UMDA.\nLet be an -valued position, and let be a neutral position of .\nConsider the /\u0304UMDA without margins optimizing .\nFor each , the frequencies are a martingale.\nLet .\nSince the algorithm has no margins, in each iteration , no restriction takes place, so it holds that . Since is neutral, the selection of the best individuals is not affected by the values at position of the samples.\nConsequently, for each , the value follows a Bernoulli distribution with success probability .\nHence, .\nFurther, by linearity of expectation, we get\nproving the claim.\n\u220e\nAs in previous works on genetic drift, the martingale property of neutral frequencies allows to use strong martingale concentration results. Since in our setting the frequencies start at a value of , we can only tolerate smaller deviations from this value, namely up to in either direction. With the methods of Doerr and Zheng [21 ###reference_21###], this reduces the genetic drift by a factor of . We therefore use a stronger martingale concentration result, namely [37 ###reference_37###, Theorem ], which allows to exploit the lower sampling variance present at frequencies in .\nWe note that we adjust the theorem by incorporating comments by McDiarmid, especially [37 ###reference_37###, eq. ()], mentioning that the absolute value in eq. should be around the sum, not around the maximum, as also observed by Doerr and Zheng [21 ###reference_21###].\nLet be a martingale with respect to a filtration .\nFurther, for all , denote the deviation by .\nIn addition, let , and assume that is finite.\nLast, for all , let .\nThen for all and all , it holds that"
58
+ },
59
+ {
60
+ "section_id": "5.3",
61
+ "parent_section_id": "5",
62
+ "section_name": "Upper Bound on the Genetic-Drift Effect of a Neutral Position",
63
+ "text": "By utilizing Theorem 2 ###reference_rem2###, we show for how long the frequencies of the /\u0304UMDA at neutral positions stay concentrated around their initial value of .\nLet be an -valued fitness function, and let be a neutral position of .\nConsider the /\u0304UMDA optimizing .\nLet and .\nThen\nWe apply the same proof strategy as in the proof of [21 ###reference_21###, Theorem ].\nThat is, we aim to apply Theorem 2 ###reference_rem2###.\nNaturally, one would apply the theorem to the sequence of frequencies .\nHowever, since the deviation of is very large, namely , we consider instead a more fine-grained process , which, roughly speaking, splits each iteration of the /\u0304UMDA into sections, each of which denotes that an additional sample is added to the update.\nFormally, for all and , let\nNote that, for all , it holds that .\nThus, the natural filtration of allows us to measure .\nIn order to apply Theorem 2 ###reference_rem2###, we check that its assumptions are met.\nTo this end, we first show that is a martingale.\nSince is neutral, the selection of the best individuals is not affected by the values at position of the samples. Consequently, for all , the random variable follows a Bernoulli distribution with success probability .\nThus, we get for all and that\nand further, by the definition of , that\nshowing that is a martingale.\nWe take an alternative view of the event , whose probability we aim to bound.\nNote that this event is equivalent to .\nA superset of this event is the event where we stop at the first iteration such that the inequality holds.\nTo this end, let be a stopping time (with respect to ).\nFrom now on, we consider the stopped process of with respect to .\nThat is, for all , it holds that .\nSince is a martingale, so is .\nLet , and let be a Bernoulli random variable with success probability that is -measurable.\nNote that by eqs. 2 ###reference_### and 3 ###reference_###, disregarding the expected values, by eq. 4 ###reference_###, it holds that\nThus, the maximum deviation of is .\nFurther, let denote the sum of variances, as defined in Theorem 2 ###reference_rem2###.\nThen, since and are -measurable and since, due to being stopped, it holds that , we get\nHence, .\nLet denote the stopped process of with respect to .\nApplying Theorem 2 ###reference_rem2### with and our estimates above, noting that , yields\nSince we only need to consider the stopped process, as explained above, and since is identical to until the process stops, the result follows.\n\u220e"
64
+ },
65
+ {
66
+ "section_id": "5.4",
67
+ "parent_section_id": "5",
68
+ "section_name": "Upper Bound for Positions with Weak Preference",
69
+ "text": "A position is rarely neutral for a given fitness function. However, we prove that the results on neutral positions translate to positions where one value is better than all other values. This is referred to as weak preference.\nFormally, we say that an -valued fitness function has a weak preference for a value at a position if and only if, for all , it holds that\nWe now adapt Lemma by Doerr and Zheng [21 ###reference_21###] to the /\u0304UMDA.\nConsider two r-valued fitness functions to optimize using the /\u0304UMDA, such that without loss of generality, the first position of f weakly prefers 0 and the first position of g is neutral.\nLet correspond to the frequency matrix of and to the frequency matrix of , both defined by the /\u0304UMDA. Then, for all , it holds that .\nWe prove our claim by induction on the number of iterations .\nFor the base case , all frequencies are .\nHence, .\nFor the induction step, let and let .\nFurther, let .\nSince is a neutral position of , the selection of the best individuals is not affected by the values at position of the samples.\nThus, .\nFurther, since weakly prefers s, defining , it holds that .\nAnalogously to Doerr and Zheng [21 ###reference_21###], we note that since stochastically dominates by induction hypothesis, there exists a coupling of the two probability spaces that describe the states of the two algorithms at iteration in such a way that for any point in the coupling probability space.\nFor such a , it then follows that , as the success probability of the former is bounded from above by that of the latter.\nHence, , which proves the claim.\n\u220e\nWe now apply Theorem 4 ###reference_rem4### and extend Theorem 3 ###reference_rem3### to positions with weak preference.\nLet be an -valued fitness function with a weak preference for at position .\nConsider the /\u0304UMDA optimizing .\nLet .\nThen\nLet be an -valued fitness function with neutral position .\nLet be the frequency matrix of the /\u0304UMDA optimizing .\nBy Theorem 4 ###reference_rem4###, it follows for all that stochastically dominates .\nApplying Theorem 3 ###reference_rem3### to for position , we have\nUsing the stochastic domination yields the tail bound for .\n\u220e"
70
+ },
71
+ {
72
+ "section_id": "6",
73
+ "parent_section_id": null,
74
+ "section_name": "Runtime Analysis of the /\u0304UMDA",
75
+ "text": "We analyze the runtime of the /\u0304UMDA (Algorithm 2 ###reference_hm2###) on an -valued variant of LeadingOnes. We start by describing the previous runtime results of EDAs on LeadingOnes (Section 6.1 ###reference_###), then define the /\u0304LeadingOnes problem formally (Section 6.2 ###reference_###), and finally state and prove our main result (Theorem 6 ###reference_rem6###, Section 6.3 ###reference_###)."
76
+ },
77
+ {
78
+ "section_id": "6.1",
79
+ "parent_section_id": "6",
80
+ "section_name": "Previous Runtime Analyses of EDAs on LeadingOnes",
81
+ "text": "In contrast to OneMax (another popular theory benchmark function), LeadingOnes is not that extensively studied for EDAs.\nThis is surprising, as LeadingOnes is interesting as a benchmark for univariate EDAs, since the function introduces dependencies among the different positions of a bit string, but the model of univariate EDAs assumes independence.\nHowever, since LeadingOnes only has a single local maximum, known runtime results are rather fast.\nIn a first mathematical runtime analysis of an EDA, however, using the unproven no-error-assumption (which essentially states that there is no genetic drift), it was shown that the UMDA optimizes the LeadingOnes benchmark in expected time .\nThis was made rigorous by Chen et al. [7 ###reference_7###] with a proof that the UMDA with population size optimizes LeadingOnes in time with high probability.\nHere the relatively large required population stems from the, then, incomplete understanding of genetic drift.\nIn a remarkable work [8 ###reference_8###], Dang and Lehre prove a runtime of , only assuming that the sample size is at least logarithmic.\nHence this result applies both to regimes without and with genetic drift.\nIn the regime with genetic drift, however, the dependence on is slightly worse than in the result by Chen et al. [7 ###reference_7###].\nThis was improved by Doerr and Krejca [15 ###reference_15###], where an upper bound was shown for the whole regime of low genetic drift.\nMore precisely, when and , both with sufficiently large implicit constants, then the runtime of the UMDA on LeadingOnes is with high probability.\nWe note that the analysis by Doerr and Krejca [15 ###reference_15###] is technically much simpler than the previous ones, in particular, it avoids the complicated level-based method used by Dang and Lehre [8 ###reference_8###].\nWe note that also lower bounds [34 ###reference_34###, 15 ###reference_15###] and runtimes in the presence of noise have been regarded.\nSince we have no such results, we refer to the original works.\nBesides the UMDA, LeadingOnes was considered in the analysis of newly introduced univariate EDAs.\nInterestingly, each of these algorithms optimizes LeadingOnes in with high probability.\nThis runtime is faster by a factor of when compared to classical EAs, and it suggests that LeadingOnes is a rather easy problem for EDAs.\nFriedrich, K\u00f6tzing, and Krejca [25 ###reference_25###] proved the first of these results for their stable compact genetic algorithm (scGA), which introduces an artificial bias into its update process that is overcome by the LeadingOnes function.\nHowever, it was later proven that the scGA fails on the typically easy OneMax function [14 ###reference_14###], highlighting that the scGA is not a good EDA in general.\nThe next result was proven by Doerr and Krejca [14 ###reference_14###], who introduce the significance-based compact genetic algorithm (sig-cGA).\nThe sig-cGA saves a history of good individuals and only updates a frequency when the number of bits in the history of that position significantly deviates from its expectation.\nThis algorithm also performs well on OneMax.\nThe last result was proven recently by Ajimakin and Devi [1 ###reference_1###], who introduce the competing genes evolutionary algorithm (cgEA).\nThe cgEA utilizes the Gauss\u2013Southwell score as a quality metric for the positions of its samples.\nIteratively, it picks the position with the best score and creates a new population by letting each individual of the previous 
population compete against a copy of it where the bit at position is flipped.\nBased on the best individuals created this way, the frequency at position is immediately set to either or , whichever value turns out to be better.\nThis approach works very well for a variety of theory benchmarks, as proven by the authors."
82
+ },
83
+ {
84
+ "section_id": "6.2",
85
+ "parent_section_id": "6",
86
+ "section_name": "The /\u0304LeadingOnes Benchmark",
87
+ "text": "The /\u0304LeadingOnes function (eq. 6 ###reference_###) is a generalization of the classical LeadingOnes benchmark [42 ###reference_42###] from the binary to the multi-valued domain.\nBefore we define the generalization, we briefly present the LeadingOnes function.\nLeadingOnes.\nLeadingOnes [42 ###reference_42###] is one of the most commonly mathematically analyzed benchmark functions, both in the general domain of evolutionary computation [18 ###reference_18###] as well as in the domain of EDAs [30 ###reference_30###].\nFor a bit string of length , it returns the number of consecutive s, starting from the leftmost position.\nFormally, is defined as .\nThe function has a single local maximum at the all-s string, which is also its global maximum.\n/\u0304LeadingOnes.\nInspired by LeadingOnes from the binary domain, we define as the function that returns the number of consecutive s, starting from the leftmost position.\nFormally,\nIn contrast to the binary case, the single local optimum of /\u0304LeadingOnes is the all-s string, which is also its global optimum."
88
+ },
89
+ {
90
+ "section_id": "6.3",
91
+ "parent_section_id": "6",
92
+ "section_name": "Runtime Results",
93
+ "text": "We analyze the runtime of the /\u0304UMDA (Algorithm 2 ###reference_hm2###) on the /\u0304LeadingOnes benchmark (eq. 6 ###reference_###) in the regime with low genetic drift.\nFor the upper bound (Theorem 6 ###reference_rem6###), compared to the binary case [15 ###reference_15###, Theorem ], we get an extra factor of order in the runtime.\nThe factor of is a result of the increased waiting time to see a certain position out of .\nThe factor of stems from the choice to stay in the regime with low genetic drift as well as for the time it takes a frequency to get to the upper border.\nFor the lower bound, (Theorem 10 ###reference_rem10###), compared to the binary case [15 ###reference_15###, Theorem ], we get an extra factor of order .\nOur two bounds differ by a factor in the order of (for polynomial population sizes).\nWe believe that our lower bound is missing a factor of , as we currently do not account for the time it takes a frequency to get from its starting value to for this bound.\nWe prove the upper bound in Section 6.3.1 ###reference_SSS1### and the lower bound in Section 6.3.2 ###reference_SSS2###.\nBoth bounds are a generalization of the binary case."
94
+ },
95
+ {
96
+ "section_id": "6.3.1",
97
+ "parent_section_id": "6.3",
98
+ "section_name": "6.3.1 Upper Bound",
99
+ "text": "Our upper bound shows that the number of iterations until an optimum is found for the first time is almost linear in and in , only adding a factor in the order of .\nLet .\nConsider the /\u0304UMDA optimizing /\u0304LeadingOnes with , , and . Then with a probability of at least , the frequency vector corresponding to the value converges to in iterations.\nThis implies that after fitness function evaluations, the /\u0304UMDA samples the optimum with the success probability above.\nThe basic premise for our proof is that for the entirety of the considered iterations, frequencies corresponding to the value remain above a given threshold since /\u0304LeadingOnes weakly prefers at all positions. We define this threshold as , and we show that in a sequential manner, position by position, the frequencies corresponding to are brought to within a given number of iterations until all positions are covered.\nFirst, we provide a guarantee on the concentration of all the probabilities during the entirety of the algorithm\u2019s runtime, in a way to avoid genetic drift and to remain above a minimal threshold for all frequencies.\nLet .\nConsider the /\u0304UMDA with optimizing a function that weakly prefers at every position. Then with a probability of at least , for each , the frequency remains above for the first iterations.\nBy Theorem 5 ###reference_rem5### with , we have for all that\nSince , we get\nHence, it follows that\nApplying a union bound over all positions yields the result.\n\u220e\nIn the proof of our next result, we apply the following Chernoff bound.\nWe apply it in order to quantify the number of iterations necessary to converge every position .\nLet , and let be the sum of independent random variables each taking values in . Then\nAn important concept for our analysis, following the approach by Doerr and Krejca [15 ###reference_15###], is that a position is critical.\nInformally, a position is critical if and only if the frequencies corresponding to value are for all smaller positions at the upper border.\nOur runtime proof relies on showing that the /\u0304UMDA quickly increases the frequency of a critical position to the upper border, thus making the next position critical.\nFormally, let .\nWe call a position critical for the /\u0304UMDA on /\u0304LeadingOnes in iteration , if and only if for all , it holds that , and that .\nWe now show that once a position becomes critical, with high probability, with being an appropriate value separating from (that is, defining the selection pressure), it takes less than iterations to bring the frequency of the value to the upper border .\nWe also prove that it remains there for a sufficient number of iterations until the convergence of the frequency matrix.\nLet .\nConsider the /\u0304UMDA optimizing /\u0304LeadingOnes with and .\nConsider an iteration such that position is critical, and let such that .\nThen with a probability of at least , it holds for all that .\nWe start by proving that, for all , the frequency multiplies by at least during an update, with high probability (and is then restricted).\nTo this end, let , and assume that , and that position or a position greater than is critical (where we assume, for convenience, that if all frequencies for value are , then position is critical).\nFurthermore, let denote the number of sampled individuals in iteration that have at least leading s.\nNote that by assumption as well as that is critical in iteration .\nWe discuss later via induction why these assumptions also hold for 
iteration .\nWe consider the process of sampling a single individual.\nSince position at least is critical, by definition, for all , we have .\nHence, the probability that all these positions are sampled as for this individual is .\nThis yields , and since , this yields .\nBy the Chernoff bound (Theorem 8 ###reference_rem8###) and by the assumption , we get\nWe consider as defined in Section 4.2 ###reference_###, which is the updated frequency before being restricted to .\nSince by the definition of the update of the /\u0304UMDA, we have\nIn order to update , the frequency vector is restricted to the interval , which entails that the updated frequency may reduce when compared to .\nHowever, since the restriction adds at most the lower border (that is, ) to a frequency, any restriction rule adds at most a probability mass of to the frequency vector.\nWe assume pessimistically that, in order for the frequencies to sum to , this mass is entirely subtracted from during the restriction (noting that this does not take place once , as this means that it is set to the upper border instead).\nFurther, the assumption yields that .\nHence, we get that\nBy induction on the iteration (starting at ), it follows that, with an additional failure probability of at most per iteration, the assumptions that and that position at least is critical are satisfied.\nStarting from iteration , a union bound over the next iterations yields that the frequency continues growing exponentially with a factor of for the next iterations with probability at least .\nSince, by assumption, , it reaches after at most iterations during that time, concluding the proof.\n\u220e\nWe now prove our main result.\nSince /\u0304LeadingOnes weakly prefers s at all positions , by Lemma 7 ###reference_rem7###, with a probability of at least , for all , the frequency remains above for the first iterations.\nFor each position , we apply Lemma 9 ###reference_rem9### with and , noting that the assumption is satisfied, since we assume .\nHence, for each , with a probability of at least , after at most iterations, the frequency is set to and remains there for at least iterations.\nFurther, by a union bound over all frequency vectors, the above holds for all frequency vectors, with probability at least .\nCombining everything, with probability at least , it holds by induction on position that once position is critical, the frequency reaches in at most iterations and remains there until at least iteration .\nSince position is critical in iteration , it follows that the frequencies for value are set, in increasing order of their position, to .\nAfter at most iterations, all such frequencies are at the upper border, which proves the first part of the claim.\nFor the second part, note that once , the population of the /\u0304UMDA in that iteration contains at least times the optimum.\nFurther, each iteration accounts for fitness function evaluations.\nThis proves the second claim.\n\u220e"
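The formula of the Chernoff bound invoked above (Theorem 8) did not survive extraction. For readability, the standard multiplicative lower-tail bound that matches its use here (lower-bounding the number of sufficiently fit offspring) is reproduced below; this is an assumed standard form (see, e.g., the survey [9]), not necessarily the exact statement of Theorem 8.

```latex
% Assumed standard multiplicative Chernoff lower-tail bound: X is a sum of
% independent random variables with values in [0, 1].
\Pr\bigl[X \le (1 - \delta)\,\mathbb{E}[X]\bigr]
  \;\le\; \exp\!\left(-\,\frac{\delta^{2}\,\mathbb{E}[X]}{2}\right),
  \qquad \delta \in [0, 1].
```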
100
+ },
101
+ {
102
+ "section_id": "6.3.2",
103
+ "parent_section_id": "6.3",
104
+ "section_name": "6.3.2 Lower Bound",
105
+ "text": "As the upper bound (Theorem 6 ###reference_rem6###), the lower bound shows an almost linear dependency of the number of iterations until the optimum is sampled for the first time with respect to and , only adding a factor of order .\nThe difference of to the upper bound stems from the bound on , which is larger by a factor of around in the upper bound.\nLet be a constant.\nConsider the /\u0304UMDA optimizing /\u0304LeadingOnes with .\nFurthermore, let and let .\nThen with probability at least , the /\u0304UMDA does not sample the optimum in iteration or earlier.\nThis corresponds to more than fitness function evaluations until the optimum is sampled for the first time.\nOur proof of Theorem 10 ###reference_rem10### follows closely the proof for a lower bound on the runtime of the UMDA on LeadingOnes in the binary case by Doerr and Krejca [15 ###reference_15###, Theorem ].\nThe proof mainly relies on the leftmost position in a population that never had at least samples with a so far.\nThis position increases each iteration with high probability by only about .\nBefore this position is sufficiently close to , it is very unlikely that the /\u0304UMDA samples the optimum of /\u0304LeadingOnes.\nHence, the runtime is with high probability in the order of .\nTo make this outline formal, we say that a position is selection-relevant in iteration (for /\u0304LeadingOnes) if and only if the population of the /\u0304UMDA optimizing /\u0304LeadingOnes has in iteration at least individuals with at least leading s.\nNote that multiple positions can be selection-relevant in the same iteration, and that position is always selection-relevant.\nFurthermore, for each iteration , we say that position is the maximum selection-relevant position if and only if is the largest value among all selection-relevant positions in iteration .\nThe following lemma shows that the frequency for value in positions that were not yet selection-relevant remain close to their starting value of , as they are neutral up to that point.\nLet .\nConsider the /\u0304UMDA optimizing /\u0304LeadingOnes with .\nFor all , let denote the first iteration such that position is selection-relevant, and let .\nThen with probability at least , it holds for each and each that .\nLet .\nWe show that the sequence remains in as long as by aiming to apply Theorem 3 ###reference_rem3###.\nWe then conclude the proof via a union bound of the failure probabilities (that is, the probabilities that a frequency does not remain in said interval) over all possible values for .\nConditional on , since only becomes selection-relevant the earliest in iteration , position is neutral up to (including) iteration .\nThat is, for all , position has no influence on the fitness of each individual in population (and thus on the updated frequency ).\nHence, by Theorem 3 ###reference_rem3###, by , and by the lower bound on , we get that\nBy the law of total probability, this bound also holds independently of the outcome of .\nTaking the union bound of the above bound over all values for yields that the overall failure probability is at most , concluding the proof.\n\u220e\nFor the next lemma, we make use of the following Chernoff bound, which we apply in order to show that new offspring does not extend the prefix of leading s by too much.\nIt is a non-trivial extension of the typical Chernoff bound to the case where we have an upper bound on the expected value of the sum of independent Bernoulli random variables.\nThis extension is non-trivial as the upper 
bound on the expectation also results in a stronger probability bound.\nLet , and let be the sum of independent random variables each taking values in .\nMoreover, let such that .\nThen\nIn the following lemma, we show that the maximum selection-relevant position increases each iteration with high probability by roughly .\nTo this end, we tie it to the concept of a critical position, as defined in Section 6.3.1 ###reference_SSS1###.\nThis proof is heavily inspired by the proof of Doerr and Krejca [15 ###reference_15###, Lemma ], but we fix a mistake in their proof, where the penultimate estimate of the application of the Chernoff bound bounds the exponent in the wrong direction.\nLet be a constant.\nConsider the /\u0304UMDA optimizing /\u0304LeadingOnes with .\nFurthermore, consider an iteration such that position is critical and that, for all positions , it holds that .\nLet .\nThen, with probability at least , the maximum selection-relevant position in iteration is at most .\nWe note that by the definition of the /\u0304UMDA and since , it holds that .\nFurthermore, we assume that , that is, it holds that .\nFor , we statement claims that the maximum selection-relevant position is at most , which is trivially the case, as all positions are in .\nFor a position to become the maximum selection-relevant position in iteration , by definition, it is necessary that at least individuals in population have at least leading s.\nWe show via Theorem 12 ###reference_rem12### that it is very unlikely that such a prefix of leading s extends by much.\nTo this end, let , and let denote the number of individuals from with at least leading s.\nSince we assume that each frequency of value at a position larger than is at most , as well as due to the independent sampling of the /\u0304UMDA and due to the definition of , it follows that\nHence, by applying Theorem 12 ###reference_rem12### with , recalling that , and by applying the bound on , we get that\nConsequently, with probability at least , the population contains fewer than offspring that have at least leading s.\nThat is, the largest position where at least offspring have at least leading s is at most , which is equivalent to the maximum selection-relevant position being at most .\n\u220e\nThe next lemma is the last one before we prove our lower bound.\nThe lemma shows that it is very unlikely for the /\u0304UMDA to sample the optimum of LeadingOnes while many frequencies for value are not high yet (which is measured by the critical position).\nConsider the /\u0304UMDA optimizing /\u0304LeadingOnes, and consider an iteration and a position such that, for all positions , it holds that .\nThen, with probability at least , the /\u0304UMDA does not sample the optimum in this iteration.\nWe bound the probability for sampling the optimum this iteration from above.\nThe probability for a single offspring to be the optimum is, due to the upper bound on the last frequencies, at most , as all positions need to be a .\nTaking a union bound over all samples of this iteration concludes the proof.\n\u220e\nLemmas 7 ###reference_rem7###, 13 ###reference_rem13### and 14 ###reference_rem14### are sufficient for proving Theorem 10 ###reference_rem10###.\nWe only show the bound on the number of iterations.\nSince we start counting iterations at and since the /\u0304UMDA creates exactly offspring each iteration, the bound on the number of fitness function evaluations follows immediately.\nFor the entirety of the proof, we assume that during the first iterations, all 
frequencies for value remain in as long as they did not become selection-relevant yet.\nBy Lemma 11 ###reference_rem11### with , noting that is sufficiently large, this occurs with probability at least .\nFurthermore, we assume that , as Theorem 10 ###reference_rem10### yields a trivial lower bound of otherwise.\nWe continue by proving via induction on that with probability at least it holds that each position is not relevant up to (including) iteration .\nFor the base case , by the definition of the /\u0304UMDA, for all positions , it holds that .\nThis especially means that position is critical this iteration.\nApplying Lemma 13 ###reference_rem13###, noting that the requirements for and are met, proves the base case, as, with probability at least , the maximum selection-relevant position in iteration is .\nFor the inductive step, assume that the inductive hypothesis holds up to (including) iteration .\nHence, with probability at least , the maximum selection relevant-position in iteration (and up to there) is at most .\nThis implies that the critical position in iteration is also at most .\nFurthermore, all frequencies for value at positions greater than have not been selection-relevant yet.\nThus, by our argument at the beginning of the proof, these frequencies are at most .\nOverall, by Lemma 13 ###reference_rem13###, in iteration , with probability at most , the maximum selection-relevant position in iteration is at least .\nVia a union bound with the failure probability of the inductive hypothesis, this proves the claim, that is, with probability at least , the maximum-selection relevant position in iteration is at most .\nThis claim shows that, for , with probability at least , each position greater than is never selection-relevant up to (including) iteration .\nHence, by our argument at the beginning of the proof, these frequencies are at most .\nApplying Lemma 14 ###reference_rem14### with then yields that the /\u0304UMDA does not sample the optimum in each iteration up to with a probability of at least per iteration.\nA union bound over at most iterations then shows that with probability at least , it holds that up to (including) iteration , the /\u0304UMDA does not sample the optimum.\nLast, a union bound over the three error probabilities of the three arguments above then shows that with probability at least , the /\u0304UMDA does not sample the optimum up to (including) iteration , concluding the proof.\n\u220e"
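Analogously, the statement of Theorem 12 lost its displayed formula. The standard Chernoff-type upper-tail bound that remains valid with only an upper bound on the expectation (see, e.g., [9]) reads as follows; again, this is an assumed reconstruction rather than the paper's verbatim statement.

```latex
% Assumed form of the Chernoff upper-tail bound with an upper bound mu^+ on
% the expectation: X is a sum of independent [0, 1]-valued variables.
\Pr\bigl[X \ge (1 + \delta)\,\mu^{+}\bigr]
  \;\le\; \exp\!\left(-\,\frac{\delta^{2}\,\mu^{+}}{3}\right),
  \qquad \delta \in (0, 1],\ \ \mu^{+} \ge \mathbb{E}[X].
```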
106
+ },
107
+ {
108
+ "section_id": "7",
109
+ "parent_section_id": null,
110
+ "section_name": "Conclusion",
111
+ "text": "We have proposed the first systematic framework of EDAs for problems with multi-valued decision variables. Our analysis of the genetic-drift effect and our runtime analysis on the multi-valued version of LeadingOnes have shown that the increase in decision values does not result in significant difficulties. Although there may be a slightly stronger genetic drift (requiring a more conservative model update, that is, a higher selection size for the UMDA) and slightly longer runtimes, these outcomes are to be expected given the increased complexity of the problem. We hope that our findings will inspire researchers and practitioners to embrace the benefits of EDAs for multi-valued decision problems, beyond the previously limited application to mostly permutations and binary decision variables."
112
+ }
113
+ ],
114
+ "appendix": [],
115
+ "tables": {},
116
+ "image_paths": {},
117
+ "validation": true,
118
+ "references": [
119
+ {
120
+ "1": {
121
+ "title": "The competing genes evolutionary algorithm: Avoiding genetic drift\nthrough competition, local search, and majority voting.",
122
+ "author": "Adetunji David Ajimakin and V. Susheela Devi.",
123
+ "venue": "IEEE Transactions on Evolutionary Computation, 27:1678\u20131689,\n2023.",
124
+ "url": null
125
+ }
126
+ },
127
+ {
128
+ "2": {
129
+ "title": "Population-based incremental learning: A method for integrating\ngenetic search based function optimization and competitive learning.",
130
+ "author": "Shumeet Baluja.",
131
+ "venue": "Technical report, Carnegie Mellon University, 1994.",
132
+ "url": null
133
+ }
134
+ },
135
+ {
136
+ "3": {
137
+ "title": "Estimation-of-distribution algorithms for multi-valued decision\nvariables.",
138
+ "author": "Firas Ben Jedidia, Benjamin Doerr, and Martin S. Krejca.",
139
+ "venue": "In Genetic and Evolutionary Computation Conference, GECCO 2023,\npages 230\u2013238. ACM, 2023.",
140
+ "url": null
141
+ }
142
+ },
143
+ {
144
+ "4": {
145
+ "title": "A rigorous runtime analysis of the 2-MMAS on jump\nfunctions: ant colony optimizers can cope well with local optima.",
146
+ "author": "Riade Benbaki, Ziyad Benomar, and Benjamin Doerr.",
147
+ "venue": "In Genetic and Evolutionary Computation Conference, GECCO 2021,\npages 4\u201313. ACM, 2021.",
148
+ "url": null
149
+ }
150
+ },
151
+ {
152
+ "5": {
153
+ "title": "When is an estimation of distribution algorithm better than an\nevolutionary algorithm?",
154
+ "author": "Tianshi Chen, Per Kristian Lehre, Ke Tang, and Xin Yao.",
155
+ "venue": "In Congress on Evolutionary Computation, CEC 2009, pages\n1470\u20131477. IEEE, 2009.",
156
+ "url": null
157
+ }
158
+ },
159
+ {
160
+ "6": {
161
+ "title": "On the analysis of average time complexity of estimation of\ndistribution algorithms.",
162
+ "author": "Tianshi Chen, Ke Tang, Guoliang Chen, and Xin Yao.",
163
+ "venue": "In Congress on Evolutionary Computation, CEC 2007, pages\n453\u2013460. IEEE, 2007.",
164
+ "url": null
165
+ }
166
+ },
167
+ {
168
+ "7": {
169
+ "title": "Analysis of computational time of simple estimation of distribution\nalgorithms.",
170
+ "author": "Tianshi Chen, Ke Tang, Guoliang Chen, and Xin Yao.",
171
+ "venue": "IEEE Transactions on Evolutionary Computation, 14:1\u201322,\n2010.",
172
+ "url": null
173
+ }
174
+ },
175
+ {
176
+ "8": {
177
+ "title": "Simplified runtime analysis of estimation of distribution algorithms.",
178
+ "author": "Duc-Cuong Dang and Per Kristian Lehre.",
179
+ "venue": "In Genetic and Evolutionary Computation Conference, GECCO 2015,\npages 513\u2013518. ACM, 2015.",
180
+ "url": null
181
+ }
182
+ },
183
+ {
184
+ "9": {
185
+ "title": "Probabilistic tools for the analysis of randomized optimization\nheuristics.",
186
+ "author": "Benjamin Doerr.",
187
+ "venue": "In Benjamin Doerr and Frank Neumann, editors, Theory of\nEvolutionary Computation: Recent Developments in Discrete Optimization,\npages 1\u201387. Springer, 2020.",
188
+ "url": null
189
+ }
190
+ },
191
+ {
192
+ "10": {
193
+ "title": "The runtime of the compact genetic algorithm on Jump functions.",
194
+ "author": "Benjamin Doerr.",
195
+ "venue": "Algorithmica, 83:3059\u20133107, 2021.",
196
+ "url": null
197
+ }
198
+ },
199
+ {
200
+ "11": {
201
+ "title": "Static and self-adjusting mutation strengths for multi-valued\ndecision variables.",
202
+ "author": "Benjamin Doerr, Carola Doerr, and Timo K\u00f6tzing.",
203
+ "venue": "Algorithmica, 80:1732\u20131768, 2018.",
204
+ "url": null
205
+ }
206
+ },
207
+ {
208
+ "12": {
209
+ "title": "General univariate estimation-of-distribution algorithms.",
210
+ "author": "Benjamin Doerr and Marc Dufay.",
211
+ "venue": "In Parallel Problem Solving From Nature, PPSN 2022, Part II,\npages 470\u2013484. Springer, 2022.",
212
+ "url": null
213
+ }
214
+ },
215
+ {
216
+ "13": {
217
+ "title": "Runtime analysis of the (1+1) evolutionary algorithm on strings over\nfinite alphabets.",
218
+ "author": "Benjamin Doerr, Daniel Johannsen, and Martin Schmidt.",
219
+ "venue": "In Foundations of Genetic Algorithms, FOGA 2011, pages\n119\u2013126. ACM, 2011.",
220
+ "url": null
221
+ }
222
+ },
223
+ {
224
+ "14": {
225
+ "title": "Significance-based estimation-of-distribution algorithms.",
226
+ "author": "Benjamin Doerr and Martin S. Krejca.",
227
+ "venue": "IEEE Transactions on Evolutionary Computation, 24:1025\u20131034,\n2020.",
228
+ "url": null
229
+ }
230
+ },
231
+ {
232
+ "15": {
233
+ "title": "A simplified run time analysis of the univariate marginal\ndistribution algorithm on LeadingOnes.",
234
+ "author": "Benjamin Doerr and Martin S. Krejca.",
235
+ "venue": "Theoretical Computer Science, 851:121\u2013128, 2021.",
236
+ "url": null
237
+ }
238
+ },
239
+ {
240
+ "16": {
241
+ "title": "The univariate marginal distribution algorithm copes well with\ndeception and epistasis.",
242
+ "author": "Benjamin Doerr and Martin S. Krejca.",
243
+ "venue": "Evolutionary Computation, 29:543\u2013563, 2021.",
244
+ "url": null
245
+ }
246
+ },
247
+ {
248
+ "17": {
249
+ "title": "Bivariate estimation-of-distribution algorithms can find an\nexponential number of optima.",
250
+ "author": "Benjamin Doerr and Martin S. Krejca.",
251
+ "venue": "Theoretical Computer Science, 971:114074, 2023.",
252
+ "url": null
253
+ }
254
+ },
255
+ {
256
+ "18": {
257
+ "title": "Theory of Evolutionary Computation\u2014Recent Developments in\nDiscrete Optimization.",
258
+ "author": "Benjamin Doerr and Frank Neumann, editors.",
259
+ "venue": "Springer, 2020.",
260
+ "url": null
261
+ }
262
+ },
263
+ {
264
+ "19": {
265
+ "title": "Run-time analysis of the (1+1) evolutionary algorithm optimizing\nlinear functions over a finite alphabet.",
266
+ "author": "Benjamin Doerr and Sebastian Pohl.",
267
+ "venue": "In Genetic and Evolutionary Computation Conference, GECCO 2012,\npages 1317\u20131324. ACM, 2012.",
268
+ "url": null
269
+ }
270
+ },
271
+ {
272
+ "20": {
273
+ "title": "When do evolutionary algorithms optimize separable functions in\nparallel?",
274
+ "author": "Benjamin Doerr, Dirk Sudholt, and Carsten Witt.",
275
+ "venue": "In Foundations of Genetic Algorithms, FOGA 2013, pages 48\u201359.\nACM, 2013.",
276
+ "url": null
277
+ }
278
+ },
279
+ {
280
+ "21": {
281
+ "title": "Sharp bounds for genetic drift in estimation-of-distribution\nalgorithms.",
282
+ "author": "Benjamin Doerr and Weijie Zheng.",
283
+ "venue": "IEEE Transactions on Evolutionary Computation, 24:1140\u20131149,\n2020.",
284
+ "url": null
285
+ }
286
+ },
287
+ {
288
+ "22": {
289
+ "title": "Not all linear functions are equally difficult for the compact\ngenetic algorithm.",
290
+ "author": "Stefan Droste.",
291
+ "venue": "In Genetic and Evolutionary Computation Conference, GECCO\n2005, pages 679\u2013686. ACM, 2005.",
292
+ "url": null
293
+ }
294
+ },
295
+ {
296
+ "23": {
297
+ "title": "A rigorous analysis of the compact genetic algorithm for linear\nfunctions.",
298
+ "author": "Stefan Droste.",
299
+ "venue": "Natural Computing, 5:257\u2013283, 2006.",
300
+ "url": null
301
+ }
302
+ },
303
+ {
304
+ "24": {
305
+ "title": "On the analysis of the (1+1) evolutionary algorithm.",
306
+ "author": "Stefan Droste, Thomas Jansen, and Ingo Wegener.",
307
+ "venue": "Theoretical Computer Science, 276:51\u201381, 2002.",
308
+ "url": null
309
+ }
310
+ },
311
+ {
312
+ "25": {
313
+ "title": "EDAs cannot be balanced and stable.",
314
+ "author": "Tobias Friedrich, Timo K\u00f6tzing, and Martin S. Krejca.",
315
+ "venue": "In Genetic and Evolutionary Computation Conference, GECCO 2016,\npages 1139\u20131146. ACM, 2016.",
316
+ "url": null
317
+ }
318
+ },
319
+ {
320
+ "26": {
321
+ "title": "The compact genetic algorithm is efficient under extreme Gaussian\nnoise.",
322
+ "author": "Tobias Friedrich, Timo K\u00f6tzing, Martin S. Krejca, and Andrew M. Sutton.",
323
+ "venue": "IEEE Transactions on Evolutionary Computation, 21:477\u2013490,\n2017.",
324
+ "url": null
325
+ }
326
+ },
327
+ {
328
+ "27": {
329
+ "title": "The compact genetic algorithm.",
330
+ "author": "Georges R. Harik, Fernando G. Lobo, and David E. Goldberg.",
331
+ "venue": "IEEE Transactions on Evolutionary Computation, 3:287\u2013297,\n1999.",
332
+ "url": null
333
+ }
334
+ },
335
+ {
336
+ "28": {
337
+ "title": "On the runtime dynamics of the compact genetic algorithm on jump\nfunctions.",
338
+ "author": "V\u00e1clav Hasen\u00f6hrl and Andrew M. Sutton.",
339
+ "venue": "In Genetic and Evolutionary Computation Conference, GECCO\n2018, pages 967\u2013974. ACM, 2018.",
340
+ "url": null
341
+ }
342
+ },
343
+ {
344
+ "29": {
345
+ "title": "(1+1) EA on generalized dynamic OneMax.",
346
+ "author": "Timo K\u00f6tzing, Andrei Lissovoi, and Carsten Witt.",
347
+ "venue": "In Foundations of Genetic Algorithms, FOGA 2015, pages 40\u201351.\nACM, 2015.",
348
+ "url": null
349
+ }
350
+ },
351
+ {
352
+ "30": {
353
+ "title": "Theory of estimation-of-distribution algorithms.",
354
+ "author": "Martin Krejca and Carsten Witt.",
355
+ "venue": "In Benjamin Doerr and Frank Neumann, editors, Theory of\nEvolutionary Computation: Recent Developments in Discrete Optimization,\npages 405\u2013442. Springer, 2020.",
356
+ "url": null
357
+ }
358
+ },
359
+ {
360
+ "31": {
361
+ "title": "Lower bounds on the run time of the univariate marginal distribution\nalgorithm on OneMax.",
362
+ "author": "Martin S. Krejca and Carsten Witt.",
363
+ "venue": "In Foundations of Genetic Algorithms, FOGA 2017, pages\n65\u201379. ACM, 2017.",
364
+ "url": null
365
+ }
366
+ },
367
+ {
368
+ "32": {
369
+ "title": "Estimation of Distribution Algorithms.",
370
+ "author": "Pedro Larra\u00f1aga and Jos\u00e9 Antonio Lozano, editors.",
371
+ "venue": "Springer, 2002.",
372
+ "url": null
373
+ }
374
+ },
375
+ {
376
+ "33": {
377
+ "title": "On the limitations of the univariate marginal distribution algorithm\nto deception and where bivariate EDAs might help.",
378
+ "author": "Per Kristian Lehre and Phan Trung Hai Nguyen.",
379
+ "venue": "In Foundations of Genetic Algorithms, FOGA 2019, pages\n154\u2013168. ACM, 2019.",
380
+ "url": null
381
+ }
382
+ },
383
+ {
384
+ "34": {
385
+ "title": "Runtime analysis of the univariate marginal distribution algorithm\nunder low selective pressure and prior noise.",
386
+ "author": "Per Kristian Lehre and Phan Trung Hai Nguyen.",
387
+ "venue": "In Genetic and Evolutionary Computation Conference, GECCO\n2019, pages 1497\u20131505. ACM, 2019.",
388
+ "url": null
389
+ }
390
+ },
391
+ {
392
+ "35": {
393
+ "title": "The complex parameter landscape of the compact genetic algorithm.",
394
+ "author": "Johannes Lengler, Dirk Sudholt, and Carsten Witt.",
395
+ "venue": "Algorithmica, 83:1096\u20131137, 2021.",
396
+ "url": null
397
+ }
398
+ },
399
+ {
400
+ "36": {
401
+ "title": "MMAS versus population-based EA on a family of dynamic fitness\nfunctions.",
402
+ "author": "Andrei Lissovoi and Carsten Witt.",
403
+ "venue": "Algorithmica, 75:554\u2013576, 2016.",
404
+ "url": null
405
+ }
406
+ },
407
+ {
408
+ "37": {
409
+ "title": "Concentration.",
410
+ "author": "Colin McDiarmid.",
411
+ "venue": "In Probabilistic Methods for Algorithmic Discrete Mathematics,\nvolume 16, pages 195\u2013248. Springer, Berlin, 1998.",
412
+ "url": null
413
+ }
414
+ },
415
+ {
416
+ "38": {
417
+ "title": "The equation for response to selection and its use for prediction.",
418
+ "author": "Heinz M\u00fchlenbein.",
419
+ "venue": "Evolutionary Computation, 5(3):303\u2013346, 1997.",
420
+ "url": null
421
+ }
422
+ },
423
+ {
424
+ "39": {
425
+ "title": "From recombination of genes to the estimation of distributions I.\nBinary parameters.",
426
+ "author": "Heinz M\u00fchlenbein and Gerhard Paass.",
427
+ "venue": "In Parallel Problem Solving from Nature, PPSN 1996, pages\n178\u2013187. Springer, 1996.",
428
+ "url": null
429
+ }
430
+ },
431
+ {
432
+ "40": {
433
+ "title": "The compact genetic algorithm struggles on Cliff functions.",
434
+ "author": "Frank Neumann, Dirk Sudholt, and Carsten Witt.",
435
+ "venue": "In Genetic and Evolutionary Computation Conference, GECCO 2022,\npages 1426\u20131433. ACM, 2022.",
436
+ "url": null
437
+ }
438
+ },
439
+ {
440
+ "41": {
441
+ "title": "Estimation of distribution algorithms.",
442
+ "author": "Martin Pelikan, Mark Hauschild, and Fernando G. Lobo.",
443
+ "venue": "In Janusz Kacprzyk and Witold Pedrycz, editors, Springer\nHandbook of Computational Intelligence, pages 899\u2013928. Springer, 2015.",
444
+ "url": null
445
+ }
446
+ },
447
+ {
448
+ "42": {
449
+ "title": "Convergence Properties of Evolutionary Algorithms.",
450
+ "author": "G\u00fcnter Rudolph.",
451
+ "venue": "Verlag Dr. Kov\u01cec, 1997.",
452
+ "url": null
453
+ }
454
+ },
455
+ {
456
+ "43": {
457
+ "title": "Protein folding in simplified models with estimation of distribution\nalgorithms.",
458
+ "author": "Roberto Santana, Pedro Larra\u00f1aga, and Jos\u00e9 Antonio Lozano.",
459
+ "venue": "IEEE Transactions on Evolutionary Computation, 12:418\u2013438,\n2008.",
460
+ "url": null
461
+ }
462
+ },
463
+ {
464
+ "44": {
465
+ "title": "Learning factorizations in estimation of distribution algorithms\nusing affinity propagation.",
466
+ "author": "Roberto Santana, Pedro Larra\u00f1aga, and Jos\u00e9 Antonio Lozano.",
467
+ "venue": "Evolutionary Computation, 18:515\u2013546, 2010.",
468
+ "url": null
469
+ }
470
+ },
471
+ {
472
+ "45": {
473
+ "title": "Model-based template-recombination in markov network estimation of\ndistribution algorithms for problems with discrete representation.",
474
+ "author": "Roberto Santana and Alexander Mendiburu.",
475
+ "venue": "In World Congress on Information and Communication Technologies,\nWICT 2013, pages 170\u2013175, 2013.",
476
+ "url": null
477
+ }
478
+ },
479
+ {
480
+ "46": {
481
+ "title": "Solving problems with integer representation using a tree based\nfactorized distribution algorithm.",
482
+ "author": "Roberto Santana, Alberto Ochoa-Rodriguez, and Marta Soto.",
483
+ "venue": "In International NAISO Congress on Neuro Fuzzy Technologies,\n2002.",
484
+ "url": null
485
+ }
486
+ },
487
+ {
488
+ "47": {
489
+ "title": "The sensitivity of PBIL to its learning rate, and how detailed\nbalance can remove it.",
490
+ "author": "Jonathan L. Shapiro.",
491
+ "venue": "In Foundations of Genetic Algorithms, FOGA 2002, pages\n115\u2013132. Morgan Kaufmann, 2002.",
492
+ "url": null
493
+ }
494
+ },
495
+ {
496
+ "48": {
497
+ "title": "Drift and scaling in estimation of distribution algorithms.",
498
+ "author": "Jonathan L. Shapiro.",
499
+ "venue": "Evolutionary Computing, 13:99\u2013123, 2005.",
500
+ "url": null
501
+ }
502
+ },
503
+ {
504
+ "49": {
505
+ "title": "Diversity loss in general estimation of distribution algorithms.",
506
+ "author": "Jonathan L. Shapiro.",
507
+ "venue": "In Parallel Problem Solving from Nature, PPSN 2006, pages\n92\u2013101. Springer, 2006.",
508
+ "url": null
509
+ }
510
+ },
511
+ {
512
+ "50": {
513
+ "title": "Update strength in EDAs and ACO: How to avoid genetic drift.",
514
+ "author": "Dirk Sudholt and Carsten Witt.",
515
+ "venue": "In Genetic and Evolutionary Computation Conference, GECCO 2016,\npages 61\u201368. ACM, 2016.",
516
+ "url": null
517
+ }
518
+ },
519
+ {
520
+ "51": {
521
+ "title": "On the choice of the update strength in estimation-of-distribution\nalgorithms and ant colony optimization.",
522
+ "author": "Dirk Sudholt and Carsten Witt.",
523
+ "venue": "Algorithmica, 81:1450\u20131489, 2019.",
524
+ "url": null
525
+ }
526
+ },
527
+ {
528
+ "52": {
529
+ "title": "Choosing the right algorithm with hints from complexity theory.",
530
+ "author": "Shouda Wang, Weijie Zheng, and Benjamin Doerr.",
531
+ "venue": "In International Joint Conference on Artificial Intelligence,\nIJCAI 2021, pages 1697\u20131703. ijcai.org, 2021.",
532
+ "url": null
533
+ }
534
+ },
535
+ {
536
+ "53": {
537
+ "title": "Domino convergence: why one should hill-climb on linear functions.",
538
+ "author": "Carsten Witt.",
539
+ "venue": "In Genetic and Evolutionary Computation Conference, GECCO\n2018, pages 1539\u20131546. ACM, 2018.",
540
+ "url": null
541
+ }
542
+ },
543
+ {
544
+ "54": {
545
+ "title": "Upper bounds on the running time of the univariate marginal\ndistribution algorithm on OneMax.",
546
+ "author": "Carsten Witt.",
547
+ "venue": "Algorithmica, 81:632\u2013667, 2019.",
548
+ "url": null
549
+ }
550
+ },
551
+ {
552
+ "55": {
553
+ "title": "How majority-vote crossover and estimation-of-distribution algorithms\ncope with fitness valleys.",
554
+ "author": "Carsten Witt.",
555
+ "venue": "Theoretical Computer Science, 940:18\u201342, 2023.",
556
+ "url": null
557
+ }
558
+ },
559
+ {
560
+ "56": {
561
+ "title": "Switch analysis for running time analysis of evolutionary algorithms.",
562
+ "author": "Yang Yu, Chao Qian, and Zhi-Hua Zhou.",
563
+ "venue": "IEEE Transactions on Evolutionary Computation, 19:777\u2013792,\n2015.",
564
+ "url": null
565
+ }
566
+ },
567
+ {
568
+ "57": {
569
+ "title": "From understanding genetic drift to a smart-restart mechanism for\nestimation-of-distribution algorithms.",
570
+ "author": "Weijie Zheng and Benjamin Doerr.",
571
+ "venue": "Journal of Machine Learning Research, 24:1\u201340, 2023.",
572
+ "url": null
573
+ }
574
+ }
575
+ ],
576
+ "url": "http://arxiv.org/html/2302.14420v2"
577
+ }
20240101/2304.08842v3.json ADDED
The diff for this file is too large to render. See raw diff
 
20240101/2304.14274v4.json ADDED
The diff for this file is too large to render. See raw diff
 
20240101/2305.09126v3.json ADDED
The diff for this file is too large to render. See raw diff
 
20240101/2305.14669v3.json ADDED
@@ -0,0 +1,394 @@
1
+ {
2
+ "title": "NegVSR: Augmenting Negatives for Generalized Noise Modeling in Real-world Video Super-Resolution",
3
+ "abstract": "The capability of video super-resolution (VSR) to synthesize high-resolution (HR) video from ideal datasets has been demonstrated in many works. However, applying the VSR model to real-world video with unknown and complex degradation remains a challenging task. First, existing degradation metrics in most VSR methods are not able to effectively simulate real-world noise and blur. On the contrary, simple combinations of classical degradation are used for real-world noise modeling, which led to the VSR model often being violated by out-of-distribution noise. Second, many SR models focus on noise simulation and transfer. Nevertheless, the sampled noise is monotonous and limited. To address the aforementioned problems, we propose a Negatives augmentation strategy for generalized noise modeling in Video Super-Resolution (NegVSR) task. Specifically, we first propose sequential noise generation toward real-world data to extract practical noise sequences. Then, the degeneration domain is widely expanded by negative augmentation to build up various yet challenging real-world noise sets. We further propose the augmented negative guidance loss to learn robust features among augmented negatives effectively. Extensive experiments on real-world datasets (e.g., VideoLQ and FLIR) show that our method outperforms state-of-the-art methods with clear margins, especially in visual quality. Project page is available at: https://negvsr.github.io/.",
4
+ "sections": [
5
+ {
6
+ "section_id": "1",
7
+ "parent_section_id": null,
8
+ "section_name": "Introduction",
9
+ "text": "Video super-resolution (VSR) is the process of changing from low-resolution (LR) video to high-resolution (HR) video. Currently, VSR is divided into traditional VSR and real-world VSR (Chan et al. 2022b ###reference_4###), depending on the existence of the HR labels. Nevertheless, the VSR model frequently suffers from overfitting to a specific dataset within a fixed domain, which leads to the test results are often violated by unknown degeneration (Ji et al. 2020 ###reference_12###). Due to the domain gap, traditional VSR methods often fail to reconstruct real-world images effectively. Thus, it is crucial to develop a more robust restoration system for VSR.\nThe primary objective in the real-world VSR task is to extract more representative spatial structures and reasonable texture details from images. Many works (Ji et al. 2020 ###reference_12###; Wang et al. 2021 ###reference_27###; Shi et al. 2020 ###reference_24###; Wei et al. 2020 ###reference_28###) have ensured that the real-world model can produce high-quality images across various domains. For instance, Real-ESRGAN (Wang et al. 2021 ###reference_27###) proposed a high-order degradation model that better simulates real-world degradation. They expand the degeneration domain by a second-order degeneration model composing various classical degeneration kernels. But the high-order degradation mode has a theoretical upper bound on the degradation domain, which means the permutations of all the classical degenerate kernels are included. However, this strategy solely deals with a limited portion of real-world scene degradation.\n###figure_1### Recently, many noise migration and simulation methods (Ji et al. 2020 ###reference_12###; Li et al. 2022 ###reference_14###; Dong et al. 2023 ###reference_9###; Pan et al. 2023 ###reference_22###) can extract the noise from the real-world dataset. They sample noise by calculating the feature from the real-world scene dataset. Estimating blur kernels and noise by modeling real-world noise effectively improves the quality of reconstructed images (Zhang et al. 2021 ###reference_34###). Furthermore, suppose the sampled noise is mixed with the VSR input during training. The high-level semantic information in the input image will be further degraded, which helps the discriminative model learn robust features. However, in the VSR task, the noise domain shows a different pattern with space-time structure in the same video sequence, leading to misaligned information in the space-time dimensions. It reveals that the concept of concurrently processing sequential frames and independent noise needs to be re-examined. As illustrated in Tab. 1 ###reference_###, \u2019Mixup Noise\u2019 comparisons with other mixing methods produce the worst result. Therefore, one of the primary challenges in real-world VSR is to investigate sequential noise sampling algorithms corresponding to the space-time dimension in the video sequence.\nIn this paper, we develop a sequential noise modeling approach for the real-world VSR. The proposed method consists of three main stages: noise sequence sampling, negative sample/noise augmentation, and recovery via augmented negative guidance. First, our approach samples noise sequences in an unsupervised manner from the out-of-distribution (OOD) video noise dataset and mixes the noise sequence with the training video. 
Meanwhile, the sampled noise sequence contains information in both the temporal and spatial dimensions, which will allow the VSR model to learn high-order degradation among real-world noise sequences. Second, we propose a negative augmentation for video frames and sequential noise. Specifically, we perform a patch-based center rotation operation on the video. The proposed negative augmentation operation preserves the semantic information of the local region but destroys the spatial connections between patches, reducing global semantic information, which creates a more challenging degradation metric. Finally, we propose the augmented negative guidance loss to effectively learn robust features among augmented negatives. To demonstrate the effectiveness of our proposed approach, we conduct experiments on two real-world video datasets: VideoLQ (Chan et al. 2022b ###reference_4###) and FLIR. In both datasets, our approach achieved superior performance in terms of quantitative and qualitative indexes. Additionally, we perform an ablation study to evaluate the effectiveness of each component in our method.\nIn summary, our overall contributions are summarized in four-fold:\nWe re-examine the traditional noise mixup strategy in the VSR task and introduce a video noise sampling method that can extract the noise sequence from a given video in an unsupervised manner while ensuring that the space-time information within the noise sequence is continuous.\nWe propose a negative augmentation for generalized no-\nise modeling. With the negative augmentation, NegVSR aims to create various yet challenging sets of real-world noise.\nWe employ an Augment Negative Guidance loss to learn robust features from augmented negatives and enhance model generalization ability.\nOur extensive experiments on two real-world datasets demonstrate that NegVSR outperformed not only other advanced methods but is also highly effective in noise reduction."
10
+ },
11
+ {
12
+ "section_id": "2",
13
+ "parent_section_id": null,
14
+ "section_name": "Related Work",
15
+ "text": "Video Super-Resolution. VSR is an extension of SISR (Single-Image Super-Resolution) (Dong et al. 2015 ###reference_8###). Unlike SISR, VSR necessitates the utilization of information contained in multiple frames. Existing VSR research (Wang et al. 2019 ###reference_26###) points out that effectively utilizing the information contained in frames can improve the performance of VSR. And the alignment module is commonly utilized to leverage inter-frame information. VSR methods using alignment module can be categorized into two groups: estimation and compensation (Chan et al. 2021 ###reference_2###; Chu et al. 2020 ###reference_6###) and dynamic convolution (DCN) (Tian et al. 2020 ###reference_25###; Chan et al. 2022a ###reference_3###). Recently, BasicVSR (Chan et al. 2021 ###reference_2###) introduced a bidirectional propagation module aggregating information from future and past frames. BasicVSR++ (Chan et al. 2022a ###reference_3###) builds upon BasicVSR by incorporating additional backward and forward propagation branches. Furthermore, BasicVSR++ introduces optical flow alignment and DCN alignment, where optical flow alignment assists DCN alignment in achieving better performance.\nReal-World Video Super-Resolution. Recent works in real-world VSR have focused on obtaining a larger unknown degeneration domain. RealVSR (Yang et al. 2021 ###reference_31###) utilizes a dual-lens phone camera to acquire LR-HR video pairs.\nReal-ESRGAN (Wang et al. 2021 ###reference_27###) incorporates a high-order degeneration model based on classic degeneration kernel combinations. AnimeSR (Wu et al. 2022 ###reference_29###) employs convolution layers between degradation kernels. Nonetheless, expanding the domain of degeneration gives rise to the challenge of restoring high-quality video from a more complex degradation space. To tackle this problem, RealBasicVSR (Chan et al. 2022b ###reference_4###) introduces a dynamic cleaning module that suppresses degradation. FastRealVSR (Xie et al. 2022 ###reference_30###) proposes manipulating the hidden states to reduce artifacts.\nNoise Modeling. Noise modeling has been utilized in many recent SR tasks. RealSR (Ji et al. 2020 ###reference_12###) extracts noise by calculating the variance and injects noise into the input. GCBD (Chen et al. 2018 ###reference_5###) trains a Generative Adversarial Network (GAN) to estimate the noise distribution of the input noise and generate noise samples. RWSR-EDL (Li et al. 2022 ###reference_14###) introduces a Noise-Guidance Data Collection method to address the time-consuming training required for optimizing multiple datasets. Our work presents the first proposal to utilize real-world noise sequence modeling in real-world VSR to enhance the network denoising capability."
16
+ },
17
+ {
18
+ "section_id": "3",
19
+ "parent_section_id": null,
20
+ "section_name": "Method",
21
+ "text": "In this section, we provide a detailed description of negative augmentation in NegVSR. First, we discuss the characteristics and challenges associated with the Mixup family. Second, we present a real-world noise sequence sampling and negative modeling method for VSR. The real-world noise sequence used for mixing is extracted unsupervised, but simple input-noise pair mixing methods can often lead to missing details. Finally, to address this problem, we propose a negative augmented noise-guided modeling approach. Through negative augmentation, VSR improves the ability to denoise robustly. During training, the LR video dimension is equal to the real-world noise sequence . represents the size of the training input."
22
+ },
23
+ {
24
+ "section_id": "3.1",
25
+ "parent_section_id": "3",
26
+ "section_name": "Preliminaries",
27
+ "text": "Mixup (Zhang et al. 2017 ###reference_33###) is a data augmentation methodology frequently employed in deep learning to enhance the model generalization capability. It produces novel training instances via a weighted amalgamation of pre-existing examples and their corresponding labels. Specifically, an additional sample is chosen randomly from the training dataset. And then, the two examples are combined convexly to construct a new example in both the input and label space. Mixup can be formulated as:\nwhere and represent the training samples, and denote their respective labels. and correspond to the new input and label. is the hyperparameter used in the Mixup.\nMixup has inspired a range of variants and derivatives, which are demonstrated comprehensively in Tab. 1 ###reference_###.\n###figure_2###"
28
+ },
29
+ {
30
+ "section_id": "3.2",
31
+ "parent_section_id": "3",
32
+ "section_name": "Negatives for Generalized Noise Modeling",
33
+ "text": "Real-world VSR differs from non-blind VSR due to the absence of labels. Specifically, non-blind VSR fails to resolve the various disruptions and changes in external video. This deficiency often results in worse performance in the OOD case.\nConsidering the deficiency of non-blind VSR, we attempt to investigate a practical noise sampling strategy for sequential real-world frames. Assuming that the LR image is degraded from the HR image , the single image degradation formula can be described as:\nwhere is the blur kernel, and is the noise.\nIn the video, most sequential data are taken from the same device and have a similar noise distribution. It implies that the noise domain has a strong connection across most frames within a sequence for video. Current noise modeling methods involve independent noise sampling within the input image and transfer those noise to more data for augmentation. However, it is only applicable to the single-image condition. It is essential for a video to ensure that the sampled noise remains consistent across frames within a sequence and keeps independent and identically distributed (i.i.d) across the different sequences. Therefore, we first propose a sequential noise sample and negative augmentation strategy for real-world video.\nSequential Real-world Noise Generation.\nBuilding up-\non the aforementioned observation, we present our proposed method for extracting sequential noise in video. As shown in Fig. 1 ###reference_### (a). Suppose the video contains frames and the image at moment is . We scan the entire video using the window sequence , each with a dimension of . The total number of window sequences in a is . The window of a window sequence at the moment is denoted by . Each window sequence contains windows . As is shown in Fig. 2 ###reference_###. We calculate the variance for . The scan window with high variance typically contains rich textures. These textures can impact the model to learn the noise distribution. High-variance window is commonly referred to as the noiseless region. Conversely, the noise in the window with low variance is perceptible. This window is referred to as the noise-prone region (Liu et al. 2022 ###reference_15###).\nTo ensure that the texture and margin in the extracted noise sequence are as uniform as possible (Chan et al. 2022b ###reference_4###), we need to calculate the variance for the mean and variance of each window in the sequence as follows:\nwhere and refer to the functions used to calculate variance and mean, respectively. and are the mean and variance of each window . and are the variance of the variance and mean of the window sequence . We consider the window sequence that satisfies the Equ. 4 ###reference_### and Equ. 5 ###reference_### as real-world noise sequence . Before training, we collect all the to create an offline noise dataset.\nVideo Negative Augment for Generalized Noise Generation.\n\nWe first extract from and then mix with to generate the new training sample as follows:\nwhere denotes the mixing noise weight. represents the new training input consisting entirely of .\nVSR can effectively learn to denoise by incorporating into training. However, this denoising ability may lack robustness due to the limited noise. To acquire a more extensive real-world noise set, we propose a patch-based negative augmentation to expand the noise domain.\nNegative Augment toward Video Frames. As illustrated in Fig. 1 ###reference_### (b). We divide into fixed-size patches. 
Video Negative Augment for Generalized Noise Generation.\nWe first extract N_sq from the OOD video dataset V_od and then mix it with V_lr to generate the new training sample V_lr^N, where M denotes the mixing noise weight and V_lr^N represents the new training input after mixing.\nVSR can effectively learn to denoise by incorporating N_sq into training. However, this denoising ability may lack robustness due to the limited noise. To acquire a more extensive real-world noise set, we propose a patch-based negative augmentation to expand the noise domain.\nNegative Augment toward Video Frames. As illustrated in Fig. 1 (b), we divide V_lr^N into fixed-size patches and apply negative augmentation in this patch-based scenario: with a scale factor s, each frame is split into patches of equal height and width while the number of channels is kept constant, and negative augmentation is then applied to each patch.\nA random central rotation operation R is performed on each patch. For patches under the same frame, patch-based rotation is applied with the same probability P, which is randomly selected from the array {0, 0.1, ..., 1} with an interval of 0.1. Likewise, each patch is associated with a practical rotation probability p drawn from the uniform distribution [0, 1], and R is only applied to a patch when p is less than or equal to P; patches with p greater than P are left without any augmentation. If P equals 1, R is applied to all patches. As illustrated in Fig. 3, less semantic information is preserved as P approaches 1: negative augmentation renders the semantic information unintelligible to humans and poses a significant challenge to the capacity of VSR to reconstruct it.\nNegative Augment toward Noise Sequence. N_sq extracted from V_od often consists of predominantly solid color blocks, which can negatively impact the generalization ability of VSR. To enhance the robustness of the denoising ability of VSR, we also utilize negative augmentation for N_sq. We first obtain N_sq using Sequential Real-world Noise Generation, divide it into patches, and apply a random central rotation to each patch, where P should remain consistent for each pair of video frames and noise. The weight of the noise in the mixed sequence is controlled by M. The overall operation, combining noise mixing with patch-based rotation of both the frames and the noise sequence, constitutes our NegMix. (The corresponding training procedure is summarized in Algorithm 1: given an HR video, a noise sequence, and a number of training iterations, it outputs the final model.)\nRecovering via Augmented Negative Guidance.\nGiven a clean video V_hr and a degradation bank D (Wang et al. 2021) consisting of various classic degradation kernels such as blur, resize, noise, and compression, the LR video V_lr is degraded from the HR video V_hr. We can apply NegMix to V_lr and then obtain the negative output through VSR.\nWe propose an Augmented Negative Guidance that encourages consistency between the augmented outputs (i.e., negatives and positives). As shown in Fig. 4, we reconstruct the video without NegMix, using only the degradation bank, to obtain the positive output Y through VSR. Next, we use NegMix on V_lr to get V_neg, and then feed V_neg into VSR to generate the corresponding negative output \u0176. Our approach minimizes the distance between the prediction and its corresponding negative augmentation output, averaged over the batch of size B; we denote this loss L_Aug-N, and it enables VSR to learn robust features under negative augmentation.\nTo promote the convergence of Y toward V_hr, various criteria (i.e., pixel loss, perceptual loss (Johnson, Alahi, and Fei-Fei 2016), and generative loss (Goodfellow et al. 2020)) are utilized as the Augmented Positive Guidance loss L_Aug-P.\nL_Aug-N promotes performance and robustness by learning discriminative representations from augmented noise and frames. This regularization term can be seamlessly integrated into the loss function of VSR. By including this additional term, VSR is motivated to acquire characteristics resistant to negative augmentation, consequently advancing its generalization and recovery capacity. To this end, the total loss in our framework combines L_Aug-P with L_Aug-N weighted by a negative augmentation coefficient."
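A compact sketch of NegMix and the negative guidance loss, as we understand them, is given below. The composition order of mixing and rotation, the 90-degree-multiple angle set, and the L1 form of the loss are assumptions, not the authors' verified implementation.

```python
import numpy as np

def neg_augment(frames, s=4, P=None):
    """Patch-based random central rotation on a clip of shape (T, H, W, C).
    Assumes square patches (H // s == W // s)."""
    T, H, W, C = frames.shape
    ph, pw = H // s, W // s
    assert ph == pw, "square patches assumed for 90-degree rotations"
    if P is None:
        P = np.random.choice(np.arange(0.0, 1.01, 0.1))  # rotation ratio P
    k = np.random.randint(1, 4)  # angle in {90, 180, 270} degrees (assumed set)
    out = frames.copy()
    for i in range(s):
        for j in range(s):
            if np.random.rand() <= P:  # practical probability p ~ U[0, 1]
                patch = out[:, i*ph:(i+1)*ph, j*pw:(j+1)*pw, :]
                out[:, i*ph:(i+1)*ph, j*pw:(j+1)*pw, :] = np.rot90(patch, k, axes=(1, 2))
    return out

def negmix(v_lr, n_sq, m=0.5):
    """NegMix sketch: negatively augment both the LR clip and the noise
    sequence with a shared rotation ratio P, then mix with weight M."""
    P = np.random.choice(np.arange(0.0, 1.01, 0.1))
    return neg_augment(v_lr, P=P) + m * neg_augment(n_sq, P=P)

def aug_n_loss(y, y_neg):
    """Augmented Negative Guidance loss: batch-mean distance between the
    positive output Y and the negative output (L1 form assumed)."""
    return np.abs(y - y_neg).mean()
```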
34
+ },
35
+ {
36
+ "section_id": "4",
37
+ "parent_section_id": null,
38
+ "section_name": "Experiment",
39
+ "text": ""
40
+ },
41
+ {
42
+ "section_id": "4.1",
43
+ "parent_section_id": "4",
44
+ "section_name": "Implementation Details",
45
+ "text": "Training Setting. We adopt the training setting of RealBasicVSR (Chan et al. 2022b ###reference_4###) and train our NegVSR using the REDS (Nah et al. 2019 ###reference_20###) dataset. Noise sequences are gathered from the FLIR training dataset (). We employ the high-order degradation bank (Wang et al. 2021 ###reference_27###) to synthesize the training input. The size of and are and , respectively. And we configure the patch size to be . Throughout the training process, the time length of the sequence fixes to 15. The flip inversion is employed to augment the sequence at each iteration. Setting batch size to 2. Optimizer adopts Adam. The SPyNet (Ranjan and Black 2017 ###reference_23###) model generates the optical flow estimation in the alignment module, and the SPyNet does not participate in the gradient backpropagation during the training process.\nThe training process comprises two distinct stages: pre-training and fine-tuning. During the pre-training phase, the model is trained for 100k iterations while maintaining the learning rate of and employing the . The fine-tuning stage consisted of 150k iterations, where the learning rate is set to . The loss function settings are consistent with Equ. 16 ###reference_###.\nNetwork Config. We configure the propagation module ResBlock to 10 layers and set the ResBlock in the clean module to 20 layers. Additionally, the convolution kernel size is fixed at , and the number of middle channels is set to 64."
46
+ },
47
+ {
48
+ "section_id": "4.2",
49
+ "parent_section_id": "4",
50
+ "section_name": "Comparation",
51
+ "text": "Evaluation Dataset. To comprehensively compare and validate our NegVSR, we employ the following two real-world VSR datasets, i.e., VideoLQ (Chan et al. 2022b ###reference_4###) and FLIR testing dataset \u2020\u2020https://www.flir.com/oem/adas/adas-dataset-form ###reference_-form###.\nTo keep consistent with the previous method (Chan et al. 2022b ###reference_4###), we calculate the image metrics for a portion of video included in both the VideoLQ and FLIR datasets to mitigate the computational overhead. Similarly, we select the first, middle, and last frames of each video. To FLIR frames, we divide the images into four equally sized copies to lower their resolution. Then the video is reorganized based on their segmented position. We select only the first 25 frames from each video.\nEvaluation Metris. Due to the unavailability of labels, we conduct a quantitative assessment of reconstructed images using reference-free image quality assessment metrics such as NIQE (Mittal, Soundararajan, and Bovik 2012 ###reference_19###), BRISQUE (Mittal, Moorthy, and Bovik 2011 ###reference_18###), NRQM (Ma et al. 2017 ###reference_17###), and PI (Blau et al. 2018 ###reference_1###).\nEvaluation Results. We compare our approach with other VSR methods: DAN (Luo et al. 2020 ###reference_16###), BSRGAN (Zhang et al. 2021 ###reference_34###), Real-ESRGAN (Wang et al. 2021 ###reference_27###), RealVSR\n (Yang et al. 2021 ###reference_31###), DBVSR (Pan et al. 2021 ###reference_21###), and RealBasicVSR (Chan et al. 2022b ###reference_4###). \u2019RealBasicVSR, original\u2019 refers to the RealBasicVSR officially released model. And \u2019RealBasicVSR, our impl\u2019 refers to the implementation of the RealBasicVSR with the same training settings as introduced in our paper.\nThe quantitative evaluation results of our experiments on VideoLQ are presented in Tab. 2 ###reference_###. Our method exhibits superior performance in VideoLQ when compared to the other methods. Specifically, in contrast to RealBasicVSR, our method demonstrates a more effective blur removal. Fig. 5 ###reference_### (1, 2 rows) exhibits the remarkable ability of NegVSR to remove blur and recover more details than other methods.\nAccording to Tab. 3 ###reference_###, we demonstrate the metrics and runtimes test on the FLIR testing dataset, in which NegVSR achieves the best results among all evaluation metrics. A comprehensive depiction of the image details on FLIR is presented within Fig. 5 ###reference_### (3, 4 rows). NegVSR shows a notably superior deblurring effect, enhancing the intricate texture of the road scene. And a satisfactory trade-off between computing speed and image quality is obtained."
52
+ },
53
+ {
54
+ "section_id": "4.3",
55
+ "parent_section_id": "4",
56
+ "section_name": "Ablations Study",
57
+ "text": "To evaluate the effectiveness of each component in NegVSR\n, we conducted an ablation comparison by separately analyzing each component. The baseline used in our ablation experiments represents RealBasicVSR. We performed a split on . indicates that only the loss function of Equ. 15 ###reference_### is utilized. In contrast, indicates the usage of both and . \u2019w/\u2019 indicates that we have incorporated additional components compared to the baseline. We employ VideoLQ as the test set.\nAnalysis of Noise Sequence. In Tab. 4 ###reference_###, the \u2019w/ Noise\u2019 denotes the noise mixed with the RealBasicVSR inputs during training. We employ the noise sampling method to extract from . Specifically, is scanned using sliding windows of uniform size, and the noise is obtained by filtering these windows based on the calculation of their mean and variance. \u2019w/ Noise Sequences\u2019 utilizes our Sequential Real-world Noise Generation to extract from the same . The distribution of \u2019w/ Noise\u2019 is independent for each noise, and the noise domain of each \u2019w/ Noise Sequences\u2019 is identical. As shown in the Tab. 4 ###reference_###, \u2019w/ Noise Sequences\u2019 outperforms both \u2019w/ Noise\u2019 and the baseline, suggesting that the proposed Sequential Real-world Noise Generation can effectively facilitate the utilization of this long-term noise in VSR.\nRecovering via Augmented Negative Guidance. \u2019w/ NegMix\u2019 refers to executing random center rotation for \u2019w/ Noise Sequences\u2019. If \u2019w/ NegMix\u2019 is used without correcting the corrupted video with , the texture of the resulting image from \u2019w/ NegMix\u2019 will be distorted, leading to a degradation in performance as demonstrated in the Tab. 4 ###reference_###. Utilizing NegMix with corresponds to our NegVSR. The closeness of positive and negative augmented outputs benefits VSR, enhancing its capacity to denoise robustly."
58
+ },
59
+ {
60
+ "section_id": "5",
61
+ "parent_section_id": null,
62
+ "section_name": "Conclusion",
63
+ "text": "In this paper, we emphasized the significance of noise sequence in real-world VSR. In our study, we find that independent yet separate noise is not suitable for VSR tasks. Conversely, sequential noise exhibits a better solution in the VSR task. Despite efforts to address noise in real-world VSR, the monotonicity and finiteness of noise have resulted in many limitations, rendering the insufficient number for the task demands. To create more robust noise types for real-world VSR, we propose a Negatives augmentation strategy for generalized noise modeling. With the proposed NegVSR, the degeneration\ndomain is widely expanded by negative augmentation to build up various yet challenging real-world noise sets. We additionally present experiments on real-world datasets to show the effectiveness and superiority of NegVSR.\nHowever, the proposed approach still has some limitations, especially the inference speed. In the following research, we are considering involving light-weight structures to facilitate real-time real-world VSR."
64
+ },
65
+ {
66
+ "section_id": "6",
67
+ "parent_section_id": null,
68
+ "section_name": "Acknowledgements",
69
+ "text": "This work is supported in part by National Key R&D Program of China (no.2021YFB2900900) and National Natural Science Foundation of China (NSFC) (no. 62002069)."
70
+ }
71
+ ],
72
+ "appendix": [],
73
+ "tables": {
74
+ "1": {
75
+ "table_html": "<figure class=\"ltx_table\" id=\"Sx1.T1\">\n<table class=\"ltx_tabular ltx_centering ltx_guessed_headers ltx_align_middle\" id=\"Sx1.T1.2\">\n<thead class=\"ltx_thead\">\n<tr class=\"ltx_tr\" id=\"Sx1.T1.1.1\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_th_row ltx_border_r\" id=\"Sx1.T1.1.1.2\"><span class=\"ltx_text ltx_font_bold\" id=\"Sx1.T1.1.1.2.1\">Methods</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column\" id=\"Sx1.T1.1.1.1\">\n<span class=\"ltx_text ltx_font_bold\" id=\"Sx1.T1.1.1.1.1\">NIQE</span> \n</th>\n</tr>\n</thead>\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"Sx1.T1.2.3.1\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_r ltx_border_t\" id=\"Sx1.T1.2.3.1.1\">Mixup\u00a0<cite class=\"ltx_cite ltx_citemacro_citep\">(Zhang et\u00a0al. <a class=\"ltx_ref\" href=\"#bib.bib33\" title=\"\">2017</a>)</cite>\n</th>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"Sx1.T1.2.3.1.2\">3.635</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"Sx1.T1.2.4.2\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_r ltx_border_t\" id=\"Sx1.T1.2.4.2.1\">CutOut\u00a0<cite class=\"ltx_cite ltx_citemacro_citep\">(DeVries and Taylor <a class=\"ltx_ref\" href=\"#bib.bib7\" title=\"\">2017</a>)</cite>\n</th>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"Sx1.T1.2.4.2.2\">3.563</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"Sx1.T1.2.5.3\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_r ltx_border_t\" id=\"Sx1.T1.2.5.3.1\">CutMix\u00a0<cite class=\"ltx_cite ltx_citemacro_citep\">(Yun et\u00a0al. <a class=\"ltx_ref\" href=\"#bib.bib32\" title=\"\">2019</a>)</cite>\n</th>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"Sx1.T1.2.5.3.2\">3.470</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"Sx1.T1.2.6.4\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_r ltx_border_t\" id=\"Sx1.T1.2.6.4.1\">FMix\u00a0<cite class=\"ltx_cite ltx_citemacro_citep\">(Harris et\u00a0al. <a class=\"ltx_ref\" href=\"#bib.bib11\" title=\"\">2020</a>)</cite>\n</th>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"Sx1.T1.2.6.4.2\">3.585</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"Sx1.T1.2.7.5\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_r ltx_border_t\" id=\"Sx1.T1.2.7.5.1\">Mixup Noise</th>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"Sx1.T1.2.7.5.2\">3.643</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"Sx1.T1.2.2\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_r ltx_border_t\" id=\"Sx1.T1.2.2.1\">NegMix, w/ (ours full)</th>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"Sx1.T1.2.2.2\"><span class=\"ltx_text ltx_font_bold\" id=\"Sx1.T1.2.2.2.1\">3.188</span></td>\n</tr>\n</tbody>\n</table>\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\">Table 1: </span>The quantitative comparison of our model with various data augmentation methods uses REDS and FLIR training dataset as the training video. \u2019Mixup Noise\u2019 denotes the mixing of VSR inputs with a solitary noise extracted from the FLIR training dataset. \u2019NegMix\u2019 is our negative augmentation method for VSR. The performance evaluation is performed on the VideoLQ dataset.</figcaption>\n</figure>",
76
+ "capture": "Table 1: The quantitative comparison of our model with various data augmentation methods uses REDS and FLIR training dataset as the training video. \u2019Mixup Noise\u2019 denotes the mixing of VSR inputs with a solitary noise extracted from the FLIR training dataset. \u2019NegMix\u2019 is our negative augmentation method for VSR. The performance evaluation is performed on the VideoLQ dataset."
77
+ },
78
+ "2": {
79
+ "table_html": "<figure class=\"ltx_table\" id=\"Sx4.T2\">\n<table class=\"ltx_tabular ltx_centering ltx_guessed_headers ltx_align_middle\" id=\"Sx4.T2.4\">\n<thead class=\"ltx_thead\">\n<tr class=\"ltx_tr\" id=\"Sx4.T2.4.5.1\">\n<th class=\"ltx_td ltx_th ltx_th_column ltx_th_row ltx_border_r\" id=\"Sx4.T2.4.5.1.1\"></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r\" id=\"Sx4.T2.4.5.1.2\">Bicubic</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r\" id=\"Sx4.T2.4.5.1.3\">DAN</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r\" id=\"Sx4.T2.4.5.1.4\">BSRGAN</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r\" id=\"Sx4.T2.4.5.1.5\">\n<table class=\"ltx_tabular ltx_align_middle\" id=\"Sx4.T2.4.5.1.5.1\">\n<tr class=\"ltx_tr\" id=\"Sx4.T2.4.5.1.5.1.1\">\n<td class=\"ltx_td ltx_align_center\" id=\"Sx4.T2.4.5.1.5.1.1.1\">Real-</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"Sx4.T2.4.5.1.5.1.2\">\n<td class=\"ltx_td ltx_align_center\" id=\"Sx4.T2.4.5.1.5.1.2.1\">ESRGAN</td>\n</tr>\n</table>\n</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r\" id=\"Sx4.T2.4.5.1.6\">RealVSR</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r\" id=\"Sx4.T2.4.5.1.7\">DBVSR</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r\" id=\"Sx4.T2.4.5.1.8\">\n<table class=\"ltx_tabular ltx_align_middle\" id=\"Sx4.T2.4.5.1.8.1\">\n<tr class=\"ltx_tr\" id=\"Sx4.T2.4.5.1.8.1.1\">\n<td class=\"ltx_td ltx_align_center\" id=\"Sx4.T2.4.5.1.8.1.1.1\">RealBasicVSR,</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"Sx4.T2.4.5.1.8.1.2\">\n<td class=\"ltx_td ltx_align_center\" id=\"Sx4.T2.4.5.1.8.1.2.1\">our impl.</td>\n</tr>\n</table>\n</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r\" id=\"Sx4.T2.4.5.1.9\">\n<table class=\"ltx_tabular ltx_align_middle\" id=\"Sx4.T2.4.5.1.9.1\">\n<tr class=\"ltx_tr\" id=\"Sx4.T2.4.5.1.9.1.1\">\n<td class=\"ltx_td ltx_align_center\" id=\"Sx4.T2.4.5.1.9.1.1.1\">RealBasicVSR,</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"Sx4.T2.4.5.1.9.1.2\">\n<td class=\"ltx_td ltx_align_center\" id=\"Sx4.T2.4.5.1.9.1.2.1\">original.</td>\n</tr>\n</table>\n</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column\" id=\"Sx4.T2.4.5.1.10\">NegVSR</th>\n</tr>\n</thead>\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"Sx4.T2.1.1\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_r ltx_border_t\" id=\"Sx4.T2.1.1.1\">NIQE \n</th>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"Sx4.T2.1.1.2\">7.987</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"Sx4.T2.1.1.3\">7.086</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"Sx4.T2.1.1.4\">4.204</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"Sx4.T2.1.1.5\">4.187</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"Sx4.T2.1.1.6\">7.810</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"Sx4.T2.1.1.7\">6.732</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"Sx4.T2.1.1.8\">3.936</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"Sx4.T2.1.1.9\">3.699</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"Sx4.T2.1.1.10\"><span class=\"ltx_text ltx_font_bold\" id=\"Sx4.T2.1.1.10.1\">3.188</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"Sx4.T2.2.2\">\n<th class=\"ltx_td ltx_align_left ltx_th 
ltx_th_row ltx_border_r\" id=\"Sx4.T2.2.2.1\">BRISQUE \n</th>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"Sx4.T2.2.2.2\">66.652</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"Sx4.T2.2.2.3\">63.360</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"Sx4.T2.2.2.4\">25.159</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"Sx4.T2.2.2.5\">29.844</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"Sx4.T2.2.2.6\">66.252</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"Sx4.T2.2.2.7\">61.163</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"Sx4.T2.2.2.8\">29.073</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"Sx4.T2.2.2.9\">24.700</td>\n<td class=\"ltx_td ltx_align_center\" id=\"Sx4.T2.2.2.10\"><span class=\"ltx_text ltx_font_bold\" id=\"Sx4.T2.2.2.10.1\">22.255</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"Sx4.T2.3.3\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_r\" id=\"Sx4.T2.3.3.1\">PI \n</th>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"Sx4.T2.3.3.2\">7.301</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"Sx4.T2.3.3.3\">6.707</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"Sx4.T2.3.3.4\">4.066</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"Sx4.T2.3.3.5\">4.131</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"Sx4.T2.3.3.6\">7.210</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"Sx4.T2.3.3.7\">6.501</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"Sx4.T2.3.3.8\">3.941</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"Sx4.T2.3.3.9\">3.755</td>\n<td class=\"ltx_td ltx_align_center\" id=\"Sx4.T2.3.3.10\"><span class=\"ltx_text ltx_font_bold\" id=\"Sx4.T2.3.3.10.1\">3.416</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"Sx4.T2.4.4\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_r\" id=\"Sx4.T2.4.4.1\">NRQM \n</th>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"Sx4.T2.4.4.2\">3.392</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"Sx4.T2.4.4.3\">3.740</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"Sx4.T2.4.4.4\">6.155</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"Sx4.T2.4.4.5\">6.053</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"Sx4.T2.4.4.6\">3.432</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"Sx4.T2.4.4.7\">3.796</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"Sx4.T2.4.4.8\">6.195</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"Sx4.T2.4.4.9\">6.313</td>\n<td class=\"ltx_td ltx_align_center\" id=\"Sx4.T2.4.4.10\"><span class=\"ltx_text ltx_font_bold\" id=\"Sx4.T2.4.4.10.1\">6.465</span></td>\n</tr>\n</tbody>\n</table>\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\">Table 2: </span>The quantitative comparison of our proposed method with other VSR methods. Our method (NegVSR) exhibits superior performance compared to all other methods on the VideoLQ dataset. The metric is calculated on the Y channel. </figcaption>\n</figure>",
80
+ "capture": "Table 2: The quantitative comparison of our proposed method with other VSR methods. Our method (NegVSR) exhibits superior performance compared to all other methods on the VideoLQ dataset. The metric is calculated on the Y channel. "
81
+ },
82
+ "3": {
83
+ "table_html": "<figure class=\"ltx_table\" id=\"Sx4.T3\">\n<table class=\"ltx_tabular ltx_centering ltx_guessed_headers ltx_align_middle\" id=\"Sx4.T3.4\">\n<thead class=\"ltx_thead\">\n<tr class=\"ltx_tr\" id=\"Sx4.T3.4.5.1\">\n<th class=\"ltx_td ltx_th ltx_th_column ltx_th_row ltx_border_r\" id=\"Sx4.T3.4.5.1.1\"></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r\" id=\"Sx4.T3.4.5.1.2\">Bicubic</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r\" id=\"Sx4.T3.4.5.1.3\">DAN</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r\" id=\"Sx4.T3.4.5.1.4\">BSRGAN</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r\" id=\"Sx4.T3.4.5.1.5\">\n<table class=\"ltx_tabular ltx_align_middle\" id=\"Sx4.T3.4.5.1.5.1\">\n<tr class=\"ltx_tr\" id=\"Sx4.T3.4.5.1.5.1.1\">\n<td class=\"ltx_td ltx_align_center\" id=\"Sx4.T3.4.5.1.5.1.1.1\">Real-</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"Sx4.T3.4.5.1.5.1.2\">\n<td class=\"ltx_td ltx_align_center\" id=\"Sx4.T3.4.5.1.5.1.2.1\">ESRGAN</td>\n</tr>\n</table>\n</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r\" id=\"Sx4.T3.4.5.1.6\">RealVSR</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r\" id=\"Sx4.T3.4.5.1.7\">DBVSR</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r\" id=\"Sx4.T3.4.5.1.8\">\n<table class=\"ltx_tabular ltx_align_middle\" id=\"Sx4.T3.4.5.1.8.1\">\n<tr class=\"ltx_tr\" id=\"Sx4.T3.4.5.1.8.1.1\">\n<td class=\"ltx_td ltx_align_center\" id=\"Sx4.T3.4.5.1.8.1.1.1\">RealBasicVSR,</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"Sx4.T3.4.5.1.8.1.2\">\n<td class=\"ltx_td ltx_align_center\" id=\"Sx4.T3.4.5.1.8.1.2.1\">our impl.</td>\n</tr>\n</table>\n</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r\" id=\"Sx4.T3.4.5.1.9\">\n<table class=\"ltx_tabular ltx_align_middle\" id=\"Sx4.T3.4.5.1.9.1\">\n<tr class=\"ltx_tr\" id=\"Sx4.T3.4.5.1.9.1.1\">\n<td class=\"ltx_td ltx_align_center\" id=\"Sx4.T3.4.5.1.9.1.1.1\">RealBasicVSR,</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"Sx4.T3.4.5.1.9.1.2\">\n<td class=\"ltx_td ltx_align_center\" id=\"Sx4.T3.4.5.1.9.1.2.1\">original.</td>\n</tr>\n</table>\n</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column\" id=\"Sx4.T3.4.5.1.10\">NegVSR</th>\n</tr>\n</thead>\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"Sx4.T3.4.6.1\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_r ltx_border_t\" id=\"Sx4.T3.4.6.1.1\">Params (M)</th>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"Sx4.T3.4.6.1.2\">-</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"Sx4.T3.4.6.1.3\">4.3</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"Sx4.T3.4.6.1.4\">16.7</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"Sx4.T3.4.6.1.5\">16.7</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"Sx4.T3.4.6.1.6\">2.7</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"Sx4.T3.4.6.1.7\">25.5</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"Sx4.T3.4.6.1.8\">6.3</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"Sx4.T3.4.6.1.9\">6.3</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"Sx4.T3.4.6.1.10\">4.8</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"Sx4.T3.4.7.2\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_r\" id=\"Sx4.T3.4.7.2.1\">\n<table 
class=\"ltx_tabular ltx_align_middle\" id=\"Sx4.T3.4.7.2.1.1\">\n<tr class=\"ltx_tr\" id=\"Sx4.T3.4.7.2.1.1.1\">\n<td class=\"ltx_td ltx_align_left\" id=\"Sx4.T3.4.7.2.1.1.1.1\">Runtimes</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"Sx4.T3.4.7.2.1.1.2\">\n<td class=\"ltx_td ltx_align_left\" id=\"Sx4.T3.4.7.2.1.1.2.1\">(ms/F)</td>\n</tr>\n</table>\n</th>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"Sx4.T3.4.7.2.2\">-</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"Sx4.T3.4.7.2.3\">295.2</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"Sx4.T3.4.7.2.4\">379.8</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"Sx4.T3.4.7.2.5\">613.1</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"Sx4.T3.4.7.2.6\">276.2</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"Sx4.T3.4.7.2.7\">1280.2</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"Sx4.T3.4.7.2.8\">387.5</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"Sx4.T3.4.7.2.9\">387.5</td>\n<td class=\"ltx_td ltx_align_center\" id=\"Sx4.T3.4.7.2.10\">315.0</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"Sx4.T3.1.1\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_r ltx_border_t\" id=\"Sx4.T3.1.1.1\">NIQE \n</th>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"Sx4.T3.1.1.2\">8.656</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"Sx4.T3.1.1.3\">8.253</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"Sx4.T3.1.1.4\">7.579</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"Sx4.T3.1.1.5\">7.507</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"Sx4.T3.1.1.6\">8.372</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"Sx4.T3.1.1.7\">8.348</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"Sx4.T3.1.1.8\">6.464</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"Sx4.T3.1.1.9\">6.096</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"Sx4.T3.1.1.10\"><span class=\"ltx_text ltx_font_bold\" id=\"Sx4.T3.1.1.10.1\">5.225</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"Sx4.T3.2.2\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_r\" id=\"Sx4.T3.2.2.1\">BRISQUE \n</th>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"Sx4.T3.2.2.2\">60.468</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"Sx4.T3.2.2.3\">60.822</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"Sx4.T3.2.2.4\">32.396</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"Sx4.T3.2.2.5\">37.905</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"Sx4.T3.2.2.6\">59.398</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"Sx4.T3.2.2.7\">59.451</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"Sx4.T3.2.2.8\">30.819</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"Sx4.T3.2.2.9\">28.428</td>\n<td class=\"ltx_td ltx_align_center\" id=\"Sx4.T3.2.2.10\"><span class=\"ltx_text ltx_font_bold\" id=\"Sx4.T3.2.2.10.1\">20.702</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"Sx4.T3.3.3\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_r\" id=\"Sx4.T3.3.3.1\">PI \n</th>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"Sx4.T3.3.3.2\">7.314</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"Sx4.T3.3.3.3\">6.912</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"Sx4.T3.3.3.4\">5.508</td>\n<td 
class=\"ltx_td ltx_align_center ltx_border_r\" id=\"Sx4.T3.3.3.5\">5.681</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"Sx4.T3.3.3.6\">7.153</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"Sx4.T3.3.3.7\">6.958</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"Sx4.T3.3.3.8\">5.018</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"Sx4.T3.3.3.9\">4.765</td>\n<td class=\"ltx_td ltx_align_center\" id=\"Sx4.T3.3.3.10\"><span class=\"ltx_text ltx_font_bold\" id=\"Sx4.T3.3.3.10.1\">4.201</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"Sx4.T3.4.4\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_r\" id=\"Sx4.T3.4.4.1\">NRQM \n</th>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"Sx4.T3.4.4.2\">3.915</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"Sx4.T3.4.4.3\">4.383</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"Sx4.T3.4.4.4\">6.665</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"Sx4.T3.4.4.5\">6.203</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"Sx4.T3.4.4.6\">4.046</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"Sx4.T3.4.4.7\">4.380</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"Sx4.T3.4.4.8\">6.695</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"Sx4.T3.4.4.9\">6.829</td>\n<td class=\"ltx_td ltx_align_center\" id=\"Sx4.T3.4.4.10\"><span class=\"ltx_text ltx_font_bold\" id=\"Sx4.T3.4.4.10.1\">6.973</span></td>\n</tr>\n</tbody>\n</table>\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\">Table 3: </span>Quantitative analysis of the FLIR testing dataset. The inference is performed on an NVIDIA 3090 24G with a fixed input frame size of , and the metric is calculated on the Y channel. </figcaption>\n</figure>",
84
+ "capture": "Table 3: Quantitative analysis of the FLIR testing dataset. The inference is performed on an NVIDIA 3090 24G with a fixed input frame size of , and the metric is calculated on the Y channel. "
85
+ },
86
+ "4": {
87
+ "table_html": "<figure class=\"ltx_table\" id=\"Sx4.T4\">\n<table class=\"ltx_tabular ltx_centering ltx_guessed_headers ltx_align_middle\" id=\"Sx4.T4.5\">\n<thead class=\"ltx_thead\">\n<tr class=\"ltx_tr\" id=\"Sx4.T4.3.3\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_th_row ltx_border_r\" id=\"Sx4.T4.3.3.4\"><span class=\"ltx_text ltx_font_bold\" id=\"Sx4.T4.3.3.4.1\">Methods</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r\" id=\"Sx4.T4.1.1.1\"></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r\" id=\"Sx4.T4.2.2.2\">\n<span class=\"ltx_text ltx_font_bold\" id=\"Sx4.T4.2.2.2.1\">NIQE</span> \n</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column\" id=\"Sx4.T4.3.3.3\">\n<span class=\"ltx_text ltx_font_bold\" id=\"Sx4.T4.3.3.3.1\">BRISQUE</span> \n</th>\n</tr>\n</thead>\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"Sx4.T4.4.4\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_r ltx_border_t\" id=\"Sx4.T4.4.4.2\">Baseline</th>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"Sx4.T4.4.4.1\" rowspan=\"4\"><span class=\"ltx_text\" id=\"Sx4.T4.4.4.1.1\"></span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"Sx4.T4.4.4.3\">3.936</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"Sx4.T4.4.4.4\">29.073</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"Sx4.T4.5.6.1\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_r ltx_border_t\" id=\"Sx4.T4.5.6.1.1\">w/ Noise</th>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"Sx4.T4.5.6.1.2\">3.643</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"Sx4.T4.5.6.1.3\">25.286</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"Sx4.T4.5.7.2\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_r ltx_border_t\" id=\"Sx4.T4.5.7.2.1\">w/ Noise Sequences</th>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"Sx4.T4.5.7.2.2\">3.215</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"Sx4.T4.5.7.2.3\">22.951</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"Sx4.T4.5.8.3\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_r ltx_border_t\" id=\"Sx4.T4.5.8.3.1\">w/ NegMix</th>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"Sx4.T4.5.8.3.2\">3.312</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"Sx4.T4.5.8.3.3\">22.969</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"Sx4.T4.5.5\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_r ltx_border_t\" id=\"Sx4.T4.5.5.2\">w/ NegMix</th>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"Sx4.T4.5.5.1\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"Sx4.T4.5.5.3\"><span class=\"ltx_text ltx_font_bold\" id=\"Sx4.T4.5.5.3.1\">3.188</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"Sx4.T4.5.5.4\"><span class=\"ltx_text ltx_font_bold\" id=\"Sx4.T4.5.5.4.1\">22.255</span></td>\n</tr>\n</tbody>\n</table>\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\">Table 4: </span>Ablation study of NegMix and . Each proposed component is analyzed independently.</figcaption>\n</figure>",
88
+ "capture": "Table 4: Ablation study of NegMix and . Each proposed component is analyzed independently."
89
+ }
90
+ },
91
+ "image_paths": {
92
+ "1": {
93
+ "figure_path": "2305.14669v3_figure_1.png",
94
+ "caption": "Figure 1: The overview of the proposed NegVSR. (a) Our approach initially extracts noise sequence Ns\u2062qsubscript\ud835\udc41\ud835\udc60\ud835\udc5eN_{sq}italic_N start_POSTSUBSCRIPT italic_s italic_q end_POSTSUBSCRIPT through window sequence C\ud835\udc36Citalic_C in an unsupervised manner. The motion of C\ud835\udc36Citalic_C occurs within the OOD video noise dataset Vo\u2062dsubscript\ud835\udc49\ud835\udc5c\ud835\udc51V_{od}italic_V start_POSTSUBSCRIPT italic_o italic_d end_POSTSUBSCRIPT. Subsequently, it mixes Ns\u2062qsubscript\ud835\udc41\ud835\udc60\ud835\udc5eN_{sq}italic_N start_POSTSUBSCRIPT italic_s italic_q end_POSTSUBSCRIPT and LR video Vl\u2062rsubscript\ud835\udc49\ud835\udc59\ud835\udc5fV_{lr}italic_V start_POSTSUBSCRIPT italic_l italic_r end_POSTSUBSCRIPT to create novel training input Vl\u2062rNsuperscriptsubscript\ud835\udc49\ud835\udc59\ud835\udc5f\ud835\udc41V_{lr}^{N}italic_V start_POSTSUBSCRIPT italic_l italic_r end_POSTSUBSCRIPT start_POSTSUPERSCRIPT italic_N end_POSTSUPERSCRIPT. (b) Vl\u2062rNsuperscriptsubscript\ud835\udc49\ud835\udc59\ud835\udc5f\ud835\udc41V_{lr}^{N}italic_V start_POSTSUBSCRIPT italic_l italic_r end_POSTSUBSCRIPT start_POSTSUPERSCRIPT italic_N end_POSTSUPERSCRIPT is applied with a patch-based random central rotation to derive Vn\u2062e\u2062gsubscript\ud835\udc49\ud835\udc5b\ud835\udc52\ud835\udc54V_{neg}italic_V start_POSTSUBSCRIPT italic_n italic_e italic_g end_POSTSUBSCRIPT. (c) Both Vn\u2062e\u2062gsubscript\ud835\udc49\ud835\udc5b\ud835\udc52\ud835\udc54V_{neg}italic_V start_POSTSUBSCRIPT italic_n italic_e italic_g end_POSTSUBSCRIPT and Vl\u2062rsubscript\ud835\udc49\ud835\udc59\ud835\udc5fV_{lr}italic_V start_POSTSUBSCRIPT italic_l italic_r end_POSTSUBSCRIPT are fed into the VSR model to generate Y^^\ud835\udc4c\\widehat{Y}over^ start_ARG italic_Y end_ARG and Y\ud835\udc4cYitalic_Y, respectively. And \u2112A\u2062u\u2062g\u2212Psubscript\u2112\ud835\udc34\ud835\udc62\ud835\udc54\ud835\udc43\\mathcal{L}_{Aug-P}caligraphic_L start_POSTSUBSCRIPT italic_A italic_u italic_g - italic_P end_POSTSUBSCRIPT enables the model to recover realistic pixels from the Vl\u2062rsubscript\ud835\udc49\ud835\udc59\ud835\udc5fV_{lr}italic_V start_POSTSUBSCRIPT italic_l italic_r end_POSTSUBSCRIPT. \u2112A\u2062u\u2062g\u2212Nsubscript\u2112\ud835\udc34\ud835\udc62\ud835\udc54\ud835\udc41\\mathcal{L}_{Aug-N}caligraphic_L start_POSTSUBSCRIPT italic_A italic_u italic_g - italic_N end_POSTSUBSCRIPT drives Y\ud835\udc4cYitalic_Y to learn the robust features present in the negative output Y^^\ud835\udc4c\\widehat{Y}over^ start_ARG italic_Y end_ARG.",
95
+ "url": "http://arxiv.org/html/2305.14669v3/x1.png"
96
+ },
97
+ "2": {
98
+ "figure_path": "2305.14669v3_figure_2.png",
99
+ "caption": "Figure 2: Two window sequences C1subscript\ud835\udc361C_{1}italic_C start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT and C2subscript\ud835\udc362C_{2}italic_C start_POSTSUBSCRIPT 2 end_POSTSUBSCRIPT originate from the same video, which comprises consecutive frames. Based on this sliding window sequence strategy, the sequential Noise-Prone Region (yellow box) that contains less texture and more noise is selected by the low variance feature for noise augmentation.",
100
+ "url": "http://arxiv.org/html/2305.14669v3/x2.png"
101
+ },
102
+ "3": {
103
+ "figure_path": "2305.14669v3_figure_3.png",
104
+ "caption": "Figure 3: A grid visualization of mixed images using the NegMix method by adjusting the noise weight (vertical) and rotation ratio (horizontal). We set M\ud835\udc40Mitalic_M to 0.5 and varied P\ud835\udc43Pitalic_P from 0 to 1 with an interval of 0.1 in our NegVSR setting. Zooming up for a better view.",
105
+ "url": "http://arxiv.org/html/2305.14669v3/x3.png"
106
+ },
107
+ "4": {
108
+ "figure_path": "2305.14669v3_figure_4.png",
109
+ "caption": "Figure 4: The figure depicts the process of our Augmented Negative Guidance approach. We obtain the positive output Y^^\ud835\udc4c\\widehat{Y}over^ start_ARG italic_Y end_ARG by passing Vh\u2062rsubscript\ud835\udc49\u210e\ud835\udc5fV_{hr}italic_V start_POSTSUBSCRIPT italic_h italic_r end_POSTSUBSCRIPT sequential through the degeneration model D\ud835\udc37Ditalic_D and VSR. Then we inject noise sequence Ns\u2062qsubscript\ud835\udc41\ud835\udc60\ud835\udc5eN_{sq}italic_N start_POSTSUBSCRIPT italic_s italic_q end_POSTSUBSCRIPT into the degraded video and apply the video with negative augmentation. Finally, we encourage the model to learn robust features from the augmented noise and video by \u2112A\u2062u\u2062g\u2212Nsubscript\u2112\ud835\udc34\ud835\udc62\ud835\udc54\ud835\udc41\\mathcal{L}_{Aug-N}caligraphic_L start_POSTSUBSCRIPT italic_A italic_u italic_g - italic_N end_POSTSUBSCRIPT and \u2112A\u2062u\u2062g\u2212Psubscript\u2112\ud835\udc34\ud835\udc62\ud835\udc54\ud835\udc43\\mathcal{L}_{Aug-P}caligraphic_L start_POSTSUBSCRIPT italic_A italic_u italic_g - italic_P end_POSTSUBSCRIPT.",
110
+ "url": "http://arxiv.org/html/2305.14669v3/x4.png"
111
+ },
112
+ "5": {
113
+ "figure_path": "2305.14669v3_figure_5.png",
114
+ "caption": "Figure 5: We conduct a visual comparison with recent state-of-the-art methods on real-world images from the VideoLQ (1, 2 rows) and FLIR testing dataset (3, 4 rows), with the upsampling scale factor of 4.",
115
+ "url": "http://arxiv.org/html/2305.14669v3/x5.png"
116
+ }
117
+ },
118
+ "validation": true,
119
+ "references": [
120
+ {
121
+ "1": {
122
+ "title": "The 2018 PIRM challenge on perceptual image super-resolution.",
123
+ "author": "Blau, Y.; Mechrez, R.; Timofte, R.; Michaeli, T.; and Zelnik-Manor, L. 2018.",
124
+ "venue": "In Proceedings of the European Conference on Computer Vision (ECCV) Workshops, 0\u20130.",
125
+ "url": null
126
+ }
127
+ },
128
+ {
129
+ "2": {
130
+ "title": "BasicVSR: The Search for Essential Components in Video Super-Resolution and Beyond.",
131
+ "author": "Chan, K. C.; Wang, X.; Yu, K.; Dong, C.; and Loy, C. C. 2021.",
132
+ "venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition(CVPR), 4947\u20134956.",
133
+ "url": null
134
+ }
135
+ },
136
+ {
137
+ "3": {
138
+ "title": "BasicVSR++: Improving video super-resolution with enhanced propagation and alignment.",
139
+ "author": "Chan, K. C.; Zhou, S.; Xu, X.; and Loy, C. C. 2022a.",
140
+ "venue": "In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, 5972\u20135981.",
141
+ "url": null
142
+ }
143
+ },
144
+ {
145
+ "4": {
146
+ "title": "Investigating tradeoffs in real-world video super-resolution.",
147
+ "author": "Chan, K. C.; Zhou, S.; Xu, X.; and Loy, C. C. 2022b.",
148
+ "venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 5962\u20135971.",
149
+ "url": null
150
+ }
151
+ },
152
+ {
153
+ "5": {
154
+ "title": "Image blind denoising with generative adversarial network based noise modeling.",
155
+ "author": "Chen, J.; Chen, J.; Chao, H.; and Yang, M. 2018.",
156
+ "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition, 3155\u20133164.",
157
+ "url": null
158
+ }
159
+ },
160
+ {
161
+ "6": {
162
+ "title": "Learning temporal coherence via self-supervision for GAN-based video generation.",
163
+ "author": "Chu, M.; Xie, Y.; Mayer, J.; Leal-Taix\u00e9, L.; and Thuerey, N. 2020.",
164
+ "venue": "ACM Transactions on Graphics (TOG), 39(4): 75\u20131.",
165
+ "url": null
166
+ }
167
+ },
168
+ {
169
+ "7": {
170
+ "title": "Improved regularization of convolutional neural networks with cutout.",
171
+ "author": "DeVries, T.; and Taylor, G. W. 2017.",
172
+ "venue": "arXiv preprint arXiv:1708.04552.",
173
+ "url": null
174
+ }
175
+ },
176
+ {
177
+ "8": {
178
+ "title": "Image super-resolution using deep convolutional networks.",
179
+ "author": "Dong, C.; Loy, C. C.; He, K.; and Tang, X. 2015.",
180
+ "venue": "IEEE transactions on pattern analysis and machine intelligence, 38(2): 295\u2013307.",
181
+ "url": null
182
+ }
183
+ },
184
+ {
185
+ "9": {
186
+ "title": "Deep Unpaired Blind Image Super-Resolution Using Self-supervised Learning and Exemplar Distillation.",
187
+ "author": "Dong, J.; Bai, H.; Tang, J.; and Pan, J. 2023.",
188
+ "venue": "International Journal of Computer Vision, 1\u201313.",
189
+ "url": null
190
+ }
191
+ },
192
+ {
193
+ "10": {
194
+ "title": "Generative adversarial networks.",
195
+ "author": "Goodfellow, I.; Pouget-Abadie, J.; Mirza, M.; Xu, B.; Warde-Farley, D.; Ozair, S.; Courville, A.; and Bengio, Y. 2020.",
196
+ "venue": "Communications of the ACM, 63(11): 139\u2013144.",
197
+ "url": null
198
+ }
199
+ },
200
+ {
201
+ "11": {
202
+ "title": "Fmix: Enhancing mixed sample data augmentation.",
203
+ "author": "Harris, E.; Marcu, A.; Painter, M.; Niranjan, M.; Pr\u00fcgel-Bennett, A.; and Hare, J. 2020.",
204
+ "venue": "arXiv preprint arXiv:2002.12047.",
205
+ "url": null
206
+ }
207
+ },
208
+ {
209
+ "12": {
210
+ "title": "Real-World Super-Resolution via Kernel Estimation and Noise Injection.",
211
+ "author": "Ji, X.; Cao, Y.; Tai, Y.; Wang, C.; Li, J.; and Huang, F. 2020.",
212
+ "venue": "In The IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops.",
213
+ "url": null
214
+ }
215
+ },
216
+ {
217
+ "13": {
218
+ "title": "Perceptual losses for real-time style transfer and super-resolution.",
219
+ "author": "Johnson, J.; Alahi, A.; and Fei-Fei, L. 2016.",
220
+ "venue": "In Computer Vision\u2013ECCV 2016: 14th European Conference, Amsterdam, The Netherlands, October 11-14, 2016, Proceedings, Part II 14, 694\u2013711. Springer.",
221
+ "url": null
222
+ }
223
+ },
224
+ {
225
+ "14": {
226
+ "title": "Real-world image super-resolution by exclusionary dual-learning.",
227
+ "author": "Li, H.; Qin, J.; Yang, Z.; Wei, P.; Pan, J.; Lin, L.; and Shi, Y. 2022.",
228
+ "venue": "IEEE Transactions on Multimedia.",
229
+ "url": null
230
+ }
231
+ },
232
+ {
233
+ "15": {
234
+ "title": "Decoupled Mixup for Generalized Visual Recognition.",
235
+ "author": "Liu, H.; Zhang, W.; Xie, J.; Wu, H.; Li, B.; Zhang, Z.; Li, Y.; Huang, Y.; Ghanem, B.; and Zheng, Y. 2022.",
236
+ "venue": "arXiv preprint arXiv:2210.14783.",
237
+ "url": null
238
+ }
239
+ },
240
+ {
241
+ "16": {
242
+ "title": "Unfolding the Alternating Optimization for Blind Super Resolution.",
243
+ "author": "Luo, Z.; Huang, Y.; Li, S.; Wang, L.; and Tan, T. 2020.",
244
+ "venue": "Advances in Neural Information Processing Systems (NeurIPS), 33.",
245
+ "url": null
246
+ }
247
+ },
248
+ {
249
+ "17": {
250
+ "title": "Learning a no-reference quality metric for single-image super-resolution.",
251
+ "author": "Ma, C.; Yang, C.-Y.; Yang, X.; and Yang, M.-H. 2017.",
252
+ "venue": "Computer Vision and Image Understanding, 158: 1\u201316.",
253
+ "url": null
254
+ }
255
+ },
256
+ {
257
+ "18": {
258
+ "title": "Blind/referenceless image spatial quality evaluator.",
259
+ "author": "Mittal, A.; Moorthy, A. K.; and Bovik, A. C. 2011.",
260
+ "venue": "In 2011 conference record of the forty fifth asilomar conference on signals, systems and computers (ASILOMAR), 723\u2013727. IEEE.",
261
+ "url": null
262
+ }
263
+ },
264
+ {
265
+ "19": {
266
+ "title": "Making a \u201ccompletely blind\u201d image quality analyzer.",
267
+ "author": "Mittal, A.; Soundararajan, R.; and Bovik, A. C. 2012.",
268
+ "venue": "IEEE Signal processing letters, 20(3): 209\u2013212.",
269
+ "url": null
270
+ }
271
+ },
272
+ {
273
+ "20": {
274
+ "title": "NTIRE 2019 Challenge on Video Deblurring and Super-Resolution: Dataset and Study.",
275
+ "author": "Nah, S.; Baik, S.; Hong, S.; Moon, G.; Son, S.; Timofte, R.; and Mu Lee, K. 2019.",
276
+ "venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops.",
277
+ "url": null
278
+ }
279
+ },
280
+ {
281
+ "21": {
282
+ "title": "Deep blind video super-resolution.",
283
+ "author": "Pan, J.; Bai, H.; Dong, J.; Zhang, J.; and Tang, J. 2021.",
284
+ "venue": "In Proceedings of the IEEE/CVF International Conference on Computer Vision, 4811\u20134820.",
285
+ "url": null
286
+ }
287
+ },
288
+ {
289
+ "22": {
290
+ "title": "Deep Discriminative Spatial and Temporal Network for Efficient Video Deblurring.",
291
+ "author": "Pan, J.; Xu, B.; Dong, J.; Ge, J.; and Tang, J. 2023.",
292
+ "venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 22191\u201322200.",
293
+ "url": null
294
+ }
295
+ },
296
+ {
297
+ "23": {
298
+ "title": "Optical Flow Estimation Using a Spatial Pyramid Network.",
299
+ "author": "Ranjan, A.; and Black, M. J. 2017.",
300
+ "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR).",
301
+ "url": null
302
+ }
303
+ },
304
+ {
305
+ "24": {
306
+ "title": "DDet: Dual-path dynamic enhancement network for real-world image super-resolution.",
307
+ "author": "Shi, Y.; Zhong, H.; Yang, Z.; Yang, X.; and Lin, L. 2020.",
308
+ "venue": "IEEE Signal Processing Letters, 27: 481\u2013485.",
309
+ "url": null
310
+ }
311
+ },
312
+ {
313
+ "25": {
314
+ "title": "Tdan: Temporally-deformable alignment network for video super-resolution.",
315
+ "author": "Tian, Y.; Zhang, Y.; Fu, Y.; and Xu, C. 2020.",
316
+ "venue": "In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, 3360\u20133369.",
317
+ "url": null
318
+ }
319
+ },
320
+ {
321
+ "26": {
322
+ "title": "EDVR: Video Restoration With Enhanced Deformable Convolutional Networks.",
323
+ "author": "Wang, X.; Chan, K. C.; Yu, K.; Dong, C.; and Change Loy, C. 2019.",
324
+ "venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition(CVPR) Workshops.",
325
+ "url": null
326
+ }
327
+ },
328
+ {
329
+ "27": {
330
+ "title": "Real-esrgan: Training real-world blind super-resolution with pure synthetic data.",
331
+ "author": "Wang, X.; Xie, L.; Dong, C.; and Shan, Y. 2021.",
332
+ "venue": "In Proceedings of the IEEE/CVF International Conference on Computer Vision, 1905\u20131914.",
333
+ "url": null
334
+ }
335
+ },
336
+ {
337
+ "28": {
338
+ "title": "Component divide-and-conquer for real-world image super-resolution.",
339
+ "author": "Wei, P.; Xie, Z.; Lu, H.; Zhan, Z.; Ye, Q.; Zuo, W.; and Lin, L. 2020.",
340
+ "venue": "In Computer Vision\u2013ECCV 2020: 16th European Conference, Glasgow, UK, August 23\u201328, 2020, Proceedings, Part VIII 16, 101\u2013117. Springer.",
341
+ "url": null
342
+ }
343
+ },
344
+ {
345
+ "29": {
346
+ "title": "AnimeSR: Learning Real-World Super-Resolution Models for Animation Videos.",
347
+ "author": "Wu, Y.; Wang, X.; Li, G.; and Shan, Y. 2022.",
348
+ "venue": "arXiv preprint arXiv:2206.07038.",
349
+ "url": null
350
+ }
351
+ },
352
+ {
353
+ "30": {
354
+ "title": "Mitigating Artifacts in Real-World Video Super-Resolution Models.",
355
+ "author": "Xie, L.; Wang, X.; Shi, S.; Gu, J.; Dong, C.; and Shan, Y. 2022.",
356
+ "venue": "arXiv preprint arXiv:2212.07339.",
357
+ "url": null
358
+ }
359
+ },
360
+ {
361
+ "31": {
362
+ "title": "Real-world video super-resolution: A benchmark dataset and a decomposition based learning scheme.",
363
+ "author": "Yang, X.; Xiang, W.; Zeng, H.; and Zhang, L. 2021.",
364
+ "venue": "In Proceedings of the IEEE/CVF International Conference on Computer Vision, 4781\u20134790.",
365
+ "url": null
366
+ }
367
+ },
368
+ {
369
+ "32": {
370
+ "title": "Cutmix: Regularization strategy to train strong classifiers with localizable features.",
371
+ "author": "Yun, S.; Han, D.; Oh, S. J.; Chun, S.; Choe, J.; and Yoo, Y. 2019.",
372
+ "venue": "In Proceedings of the IEEE/CVF international conference on computer vision, 6023\u20136032.",
373
+ "url": null
374
+ }
375
+ },
376
+ {
377
+ "33": {
378
+ "title": "mixup: Beyond empirical risk minimization.",
379
+ "author": "Zhang, H.; Cisse, M.; Dauphin, Y. N.; and Lopez-Paz, D. 2017.",
380
+ "venue": "arXiv preprint arXiv:1710.09412.",
381
+ "url": null
382
+ }
383
+ },
384
+ {
385
+ "34": {
386
+ "title": "Designing a Practical Degradation Model for Deep Blind Image Super-Resolution.",
387
+ "author": "Zhang, K.; Liang, J.; Van Gool, L.; and Timofte, R. 2021.",
388
+ "venue": "In IEEE International Conference on Computer Vision, 4791\u20134800.",
389
+ "url": null
390
+ }
391
+ }
392
+ ],
393
+ "url": "http://arxiv.org/html/2305.14669v3"
394
+ }
20240101/2305.17760v6.json ADDED
@@ -0,0 +1,560 @@
1
+ {
2
+ "title": "1 Introduction",
3
+ "abstract": "How do language models \u201cthink\u201d? This paper formulates a probabilistic cognitive model called the bounded pragmatic speaker, which can characterize the operation of different variations of language models.\nSpecifically, we demonstrate that large language models fine-tuned with reinforcement learning from human feedback Ouyang et al. (2022) embody a model of thought that conceptually resembles a fast-and-slow model Kahneman (2011), which psychologists have attributed to humans.\nWe discuss the limitations of reinforcement learning from human feedback as a fast-and-slow model of thought and propose avenues for expanding this framework.\nIn essence, our research highlights the value of adopting a cognitive probabilistic modeling approach to gain insights into the comprehension, evaluation, and advancement of language models.",
4
+ "sections": [
5
+ {
6
+ "section_id": "1",
7
+ "parent_section_id": null,
8
+ "section_name": "Introduction",
9
+ "text": "Large language models Brown et al. (2020 ###reference_6###); Chowdhery et al. (2022 ###reference_9###); Hoffmann et al. (2022 ###reference_22###); Zhang et al. (2022a ###reference_55###); Scao et al. (2022 ###reference_41###); Touvron et al. (2023 ###reference_46###) have emerged as a powerful form of intelligence.\nThese models demonstrate numerous traits associated with both human and superhuman intelligence.\nThey can engage in natural conversations with humans (OpenAI, 2022 ###reference_35###), learn from limited examples (Dong et al., 2022 ###reference_13###), solve complex reasoning problems (Wei et al., 2022 ###reference_50###), generate programs (Chen et al., 2021 ###reference_7###), and pass exams designed for human professionals (OpenAI, 2023 ###reference_36###).\nAlthough the capabilities of large language models have been extensively documented, our understanding of the underlying cognitive mechanisms that enable these capabilities remains limited.\nBy consuming a huge collection of records of human behavior and knowledge, have these models managed to think and reason like humans? Or are they merely copycats?\nIf neither is the case, what exactly is their \u201cmodel of thought\u201d?\nProviding a scientific answer to these questions is crucial for dispelling unfounded speculations about large language models and guiding their future development.\nIn this paper, we attempt to mathematically characterize the cognitive process of large language models.\nOur work is inspired by the work of Mahowald et al. (2023 ###reference_32###) who propose a distinction between formal competence (knowledge about linguistic rules and patterns) and functional competence (knowledge that enables pragmatic use of language) in evaluating large language models.\nTo formalize this intuition, we introduce a mathematical cognitive model called the bounded pragmatic speaker (Figure 1 ###reference_###), which is a generalized version of the Rational Speech Act model Frank & Goodman (2012 ###reference_14###).\nThe bounded pragmatic speaker represents an agent that strives to communicate pragmatically but is constrained by its computational capacity.\nConsequently, it develops a base speaker model to effectively narrow the space of utterances to consider, and a theory-of-mind listener model to select the utterance that would trigger the desired effect in the listener\u2019s mind.\nThe base speaker encapsulates the formal competency of the agent, whereas the theory-of-mind listener embodies its functional competency.\nTo efficiently generate utterances,\nthe bounded pragmatic speaker employs an approximate inference algorithm (e.g., Monte Carlo inference, variational inference, or a search algorithm).\nDespite its apparent simplicity, the bounded pragmatic speaker framework provides valuable insight and guiding principles for comprehending and improving large language models.\nIts potential lies in fostering interdisciplinary connections between cognitive science, reinforcement learning, and probabilistic programming to advance the development of next-generation models.\nOur vision encompasses the creation of modular probabilistic programs that draw inspiration from human cognition and incorporate enhanced reinforcement learning techniques to achieve efficient inference.\n###figure_1### The remainder of the paper is structured as follows.\nFirst, we formally define the bounded pragmatic speaker framework (\u00a7 2 ###reference_###).\nNext, we demonstrate that a language model can be viewed as a 
straightforward bounded pragmatic speaker that uses its own model to serve as both a base speaker and a theory-of-mind listener (\u00a7 3 ###reference_###).\nThis perspective on language models motivates three directions for improving them.\nIn \u00a7 4 ###reference_###, we revisit two recent extensions of large language models\u2014pragmatic inference Zhang et al. (2022b ###reference_56###) and reinforcement learning from human feedback (Ouyang et al., 2022 ###reference_37###)\u2014and show that they can be regarded as methods for boosting the functional competency of a bounded pragmatic speaker.\nIn particular, reinforcement learning can be framed as learning a variational approximation of a bounded pragmatic speaker\u2019s distribution to allow for efficient yet pragmatic inference.\nThis approach bears striking similarities with the dual model of thought proposed by Kahneman (2011 ###reference_24###), which is composed of a slow-thinking system that performs deep reasoning and a fast-thinking system that implements heuristics to react quickly to situations.\nIn the final section (\u00a7 5 ###reference_###), we argue that reinforcement learning from human feedback remains a rudimentary means of implementing a dual model of thought.\nWe explain the limitations of the reward function as a slow-thinking system and the inefficiency of using reinforcement learning to transfer knowledge and capabilities from the slow-thinking to the fast-thinking system.\nFinally, we discuss promising ideas for devising superior alternatives."
10
+ },
11
+ {
12
+ "section_id": "2",
13
+ "parent_section_id": null,
14
+ "section_name": "Bounded pragmatic speakers",
15
+ "text": "A language model can be viewed as a speaker that outputs a distribution over utterances to complete a task specified by a context and an intention .\nFor example, to ask a language model to generate a summary of an article, we input to the model a prompt specifying an article to be summarized (the context ), and a list of the desiderata of the output summary (the intention ), and let it execute an inference algorithm to generate a summary (the utterance ).\nGenerating satisfactory utterances can be formulated as solving a communication game (Lewis, 1969 ###reference_29###; Goodman & Frank, 2016 ###reference_18###; Lazaridou et al., 2016 ###reference_27###; Wang et al., 2021 ###reference_49###), where a speaker communicates with a listener to deliver a target intention .\nThe listener can use their judgment to infer the underlying intention of an utterance.\nThe objective of the speaker is to find an utterance that maximizes the probability of the listener inferring :\nA communication game can be solved by an unbounded pragmatic speaker, which has unlimited computing capacity and a perfect copy of the listener inference model. Its speaking distribution is\nWith unlimited computational power, this speaker is capable of finding the optimal utterance in a reasonable amount of time by running all possible utterances through its model and selecting the one with maximum probability.\nHuman and language models, however, have limited computing capacity and are better modeled as agents with bounded rationality (Simon, 1957 ###reference_42###).\nWe define a bounded pragmatic speaker (BPS) as a speaker with bounded rationality, who possesses two capabilities: the search capability and the pragmatic capability.\nIt leverages these capabilities to efficiently approximately compute the optimal solution for the communication game.\nThe search capability refers to the ability to effectively narrow the search space using prior knowledge.\nThis capability can be formalized as having a low support probability distribution on utterances , which we call the base speaker.\nThe pragmatic capability allows for the construction of an approximate model of the listener , which we call the theory-of-mind listener.\nHumans are widely known to possess these two capabilities.\nWe postulate the mental states of others to predict their behavior (Premack & Woodruff, 1978 ###reference_39###; Wimmer & Perner, 1983 ###reference_52###; Baron-Cohen et al., 1985 ###reference_5###; Gopnik & Astington, 1988 ###reference_19###).\nWe are also capable of quickly proposing effective candidate solutions of problems (Sanborn & Chater, 2016 ###reference_40###; Vul et al., 2014 ###reference_47###) and instantly crafting fluent and grammatically correct sentences.\nGiven the components and , the speaking distribution of a BPS is defined as\nwhich is essentially a Bayesian belief update with as the prior and as the likelihood function.\nPerforming exact Bayesian inference is still intractable for this speaker.\nHowever, the addition of the base speaker enables it to efficiently solve communication games via approximate inference.\nWe will discuss this approach in \u00a7 4 ###reference_###."
16
+ },
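To make the definition above concrete, here is a minimal numeric sketch of the BPS speaking distribution over a toy discrete domain. All utterances and probability tables are invented for illustration; only the update rule itself, base-speaker prior times ToM-listener likelihood with renormalization, follows the section text.

```python
# Toy BPS speaking distribution: prior (base speaker) times likelihood
# (ToM listener for the target intention), renormalized. Numbers are invented.
utterances = ["u1", "u2", "u3"]
base_speaker = {"u1": 0.6, "u2": 0.3, "u3": 0.1}  # S0(u | i, c): narrows the search
tom_listener = {"u1": 0.2, "u2": 0.9, "u3": 0.5}  # L(i* | u, c): imagined listener

unnorm = {u: base_speaker[u] * tom_listener[u] for u in utterances}
z = sum(unnorm.values())
bps = {u: p / z for u, p in unnorm.items()}

print(bps)                    # {'u1': 0.27..., 'u2': 0.61..., 'u3': 0.11...}
print(max(bps, key=bps.get))  # 'u2': best trade-off between prior and listener
```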
17
+ {
18
+ "section_id": "3",
19
+ "parent_section_id": null,
20
+ "section_name": "Language models are bounded pragmatic speakers",
21
+ "text": "In this section, we show that any language model can be viewed as a BPS and discuss the implications arising from this viewpoint."
22
+ },
23
+ {
24
+ "section_id": "3.1",
25
+ "parent_section_id": "3",
26
+ "section_name": "Formulation",
27
+ "text": "Let be a language model parameterized by .\nThis model is equivalent to a BPS that uses as both its base speaker and ToM listener.\nFormally, let the base speaker and the ToM listener .\nFor any task , the BPS constituted by and , and the language model agree on the optimal choice:\nIn other words, they exhibit identical behavior in every communication game.\nStudying this trivial BPS may not initially appear to be interesting.\nHowever, this perspective of a language model has conceptual value because it essentially transforms a monolithic model into a modular one.\nThe monolithic view provides limited insight into improving language models, as their internal operations under this view are largely nebulous.\nIn contrast, the BPS view establishes the connection between language models and a broader family of modular models, which offers greater interpretability as the modular structure of these models allows for independent dissection and upgrading of the modules.\nWithin the BPS family of models, a (vanilla) language model can be seen as the simplest instantiation, with its modules sharing the same model.\nTherefore, it is natural to enhance language models by developing them into more sophisticated BPSs that are composed of specialized modules.\nRecent developments on language models follow this principle."
28
+ },
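The equivalence claim lends itself to a quick numeric sanity check. The paper's exact construction of the trivial ToM listener uses symbols that did not survive extraction, so the sketch below substitutes the simplest listener that leaves a model's behavior unchanged, one whose likelihood is constant in the utterance: multiplying every candidate by the same constant cannot change the argmax.

```python
# If L(i | u, c) does not depend on u, the BPS argmax equals the LM argmax.
lm = {"u1": 0.5, "u2": 0.3, "u3": 0.2}       # p_theta(u | i, c), invented numbers
listener = {u: 1.0 / 3 for u in lm}          # likelihood constant in u (assumption)

bps_unnorm = {u: lm[u] * listener[u] for u in lm}
assert max(lm, key=lm.get) == max(bps_unnorm, key=bps_unnorm.get)
print("LM and trivial BPS agree on:", max(lm, key=lm.get))
```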
29
+ {
30
+ "section_id": "3.2",
31
+ "parent_section_id": "3",
32
+ "section_name": "Directions for improving a language model",
33
+ "text": "There are three main causes of a BPS\u2019s failure in a communication game:\nLimited search capability: the base speaker does not assign sufficiently large probability to the optimal utterance ;\nFlawed pragmatic capability: the ToM listener does not accurately emulate the actual listener ;\nInefficient or erroneous inference algorithm: In this case, even if both and are perfect, the speaker is unable to find the optimal utterance within a reasonable timeframe.\nThese causes point to three directions for augmenting a language model: (1) enhance its search capability (the base speaker), (2) elevate its pragmatic capability (the ToM listener), and (3) devise a more efficient and accurate inference algorithm.\nIn fact, many recent advancements in language models can be categorized within these directions.\nFor example, training language models on vast amounts of data (Brown et al., 2020 ###reference_6###) enables them to generate more relevant utterances, aligning with the objective of enhancing search capability.\nIncorporating a re-ranker Chiu & Chen (2021 ###reference_8###); Cobbe et al. (2021 ###reference_11###); Zhang et al. (2022b ###reference_56###) or a reward function learned from human feedback (Stiennon et al., 2020 ###reference_43###; Ouyang et al., 2022 ###reference_37###) extends a model with a better ToM listener and embodies the goal of improving pragmatic capability.\nLastly, research that introduces novel decoding algorithms Holtzman et al. (2019 ###reference_23###); Li et al. (2022 ###reference_30###); Lu et al. (2021 ###reference_31###) can be attributed to the direction of refining the inference algorithm.\nTo effectively utilize research resources, developers may want to prioritize specific directions instead of trying all of them simultaneously.\nFor instance, if a language model\u2019s search capability is already sufficient, it would be more beneficial to focus on enhancing its pragmatic capability rather than the inference algorithm.\nThis requires being able to diagnose the exact cause of a model\u2019s failure.\nZhao et al. (2023a ###reference_57###) propose a procedure for this purpose.\nTheir idea is quite simple: to evaluate a capability of a model, comparing the model\u2019s performance on a downstream task to that of an oracle model, which is equally proficient in the evaluated capability but attains human-level proficiency in other capabilities.\nFor example, to assess the pragmatic capability, one can sample a set of candidates from the model and have a human rank them, simulating an oracle model with equivalent search capability but human-level pragmatic capability.\nThe performance gap between the evaluated model and the oracle model on a downstream task is then computed, with a larger gain indicating a more pronounced deficiency in the former\u2019s pragmatic capability."
34
+ },
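The diagnostic procedure attributed to Zhao et al. (2023a) can be sketched in a few lines. Candidate sets and scores below are invented: the oracle shares the model's sampled candidates (equal search capability) but ranks them with simulated human judgment, and the downstream-quality gap between the two picks estimates the pragmatic-capability deficit.

```python
# Oracle-comparison diagnosis of pragmatic capability; all scores invented.
candidates = ["c1", "c2", "c3"]                  # sampled from the model
model_score = {"c1": 0.8, "c2": 0.5, "c3": 0.6}  # model's own ToM listener
human_score = {"c1": 0.3, "c2": 0.9, "c3": 0.6}  # simulated human ranking

model_pick = max(candidates, key=lambda c: model_score[c])
oracle_pick = max(candidates, key=lambda c: human_score[c])  # same candidates

gap = human_score[oracle_pick] - human_score[model_pick]
print(f"pragmatic-capability gap: {gap:.2f}")  # larger gap => bigger deficiency
```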
35
+ {
36
+ "section_id": "4",
37
+ "parent_section_id": null,
38
+ "section_name": "Improving the inference and pragmatic capability of bounded pragmatic speakers",
39
+ "text": "In this section, we discuss pragmatic inference (Andreas & Klein, 2016 ###reference_3###; Fried et al., 2017 ###reference_15###; Zhang et al., 2022b ###reference_56###) and reinforcement learning from human feedback (RLHF) (Christiano et al., 2017 ###reference_10###; Stiennon et al., 2020 ###reference_43###; Ouyang et al., 2022 ###reference_37###)\u2014two popular approaches for boosting the performance of language models.\nWe will show that, under the BPS framework, these two methods essentially follow the same recipe: extending a base speaker with a ToM listener and employing a probabilistic inference algorithm to enable efficient inference."
40
+ },
41
+ {
42
+ "section_id": "4.1",
43
+ "parent_section_id": "4",
44
+ "section_name": "Pragmatic inference",
45
+ "text": "In this approach, a score function is learned and then used to evaluate a set of candidate outputs sampled from a language model .\nThe approach can be seen as performing Monte-Carlo inference on a BPS whose base speaker is the language model and ToM listener is the score function.\nConcretely, let and , pragmatic inference selects the output utterance as follows\nwhere is the space over all possible utterances and is a small set of candidates sampled from .\nNote that the right-hand side in the last equation takes the form of a BPS (Eq 3 ###reference_###)."
46
+ },
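A hedged sketch of this sample-and-rerank recipe follows; `sample_from_lm` and `score` are placeholders standing in for a real language model and a learned score function, not an actual API.

```python
import random

random.seed(0)  # deterministic toy run

def sample_from_lm(context, intention, n):
    # Placeholder for drawing n candidates from the base speaker p(u | i, c).
    vocab = ["summary A", "summary B", "summary C", "summary D"]
    return [random.choice(vocab) for _ in range(n)]

def score(utterance, intention):
    # Placeholder for the learned score function / ToM listener.
    return {"summary A": 0.1, "summary B": 0.7,
            "summary C": 0.4, "summary D": 0.2}[utterance]

candidates = sample_from_lm("article ...", "good summary", n=8)
best = max(candidates, key=lambda u: score(u, "good summary"))
print(best)  # Monte Carlo approximation: argmax over the sampled set only
```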
47
+ {
48
+ "section_id": "4.2",
49
+ "parent_section_id": "4",
50
+ "section_name": "Reinforcement learning from human feedback",
51
+ "text": "Variational inference is an alternative approach to approximate inference for BPS.\nIt involves choosing a variational distribution that is efficient in inference.\nThe objective is to find a set of parameters that minimizes the KL-divergence between the variational and the approximated distributions\nwhere is a BPS\u2019s distribution (Eq 3 ###reference_###) and denotes the KL divergence between two conditional distributions and .\nRLHF is a fine-tuning approach that has been shown to effectively align large language models (LLMs) with human preference.\nThe method first learns a reward function from human ratings.\nStarting with an LLM that was pre-trained for language modeling, the method continues to train the model to maximize the learned reward function.\nA popular RLHF variant penalizes the new model for deviating too far from , yielding the following KL-regularized objective:\nRLHF is equivalent to applying variational inference to a BPS.\nThe learned reward function can be interpreted as a ToM listener because it predicts how the listener evaluates the alignment of an utterance with respect to an intention.\nOn the other hand, the pre-trained language model represents prior knowledge and can be considered as a base speaker.\nFormally, we define these components as follows: and .\nThen, the RLHF objective can be rewritten as\nwhich is exactly the variational inference\u2019s objective (Eq 6 ###reference_###).\nThe connection between RL and variational inference is not a new discovery (see Korbak et al. (2022 ###reference_25###); Sumers et al. (2022 ###reference_44###); White et al. (2020 ###reference_51###); Levine (2018 ###reference_28###)).\nBut in this context, the implication of this connection transcends the equivalence between two machine learning algorithms.\nOur finding suggests a similarity between the thinking processes of RLHF-tuned LLMs and humans, as the behaviors of both can be explained reasonably well under the BPS framework.\nThis connection is surprising because it is not planned: RLHF-tuned LLMs were supposedly not inspired by computational models of human cognition.\nIt can potentially bring new opportunities and perspectives to RL researchers and cognitive scientists.\nRL researchers can incorporate principles of human cognition and communication into the design of intelligent artificial agents.\nCognitive scientists can borrow mathematical and algorithmic tools from RL to simulate more complex human behaviors.\n###figure_2###"
52
+ },
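The claimed equivalence can be checked on a finite utterance set, where the KL-regularized objective has a closed-form maximizer: q_star(u) proportional to p0(u) * exp(r(u) / beta), which is exactly a BPS distribution with base speaker p0 and ToM listener exp(r / beta). All numbers below are invented; only that closed form reflects the section's argument.

```python
import math

utterances = ["u1", "u2", "u3"]
p0 = {"u1": 0.5, "u2": 0.3, "u3": 0.2}  # pre-trained LM = base speaker
r = {"u1": 0.0, "u2": 1.0, "u3": 2.0}   # learned reward = ToM listener
beta = 0.5                              # KL-penalty coefficient

# Closed-form maximizer of E_q[r] - beta * KL(q || p0).
unnorm = {u: p0[u] * math.exp(r[u] / beta) for u in utterances}
z = sum(unnorm.values())
q_star = {u: v / z for u, v in unnorm.items()}

def objective(q):
    return sum(q[u] * (r[u] - beta * math.log(q[u] / p0[u])) for u in utterances)

# Any perturbation of q_star scores no better than q_star itself.
eps = 0.01
q_alt = {"u1": q_star["u1"] + eps, "u2": q_star["u2"] - eps, "u3": q_star["u3"]}
assert objective(q_star) >= objective(q_alt)
print(q_star)  # mass shifts toward high-reward utterances, anchored to p0
```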
53
+ {
54
+ "section_id": "5",
55
+ "parent_section_id": null,
56
+ "section_name": "Towards bounded pragmatic speakers with a dual model of thought",
57
+ "text": "The variational inference approach is reminiscent of the fast-and-slow dual model of thought (DMoT) Kahneman (2011 ###reference_24###)\u2014a renowned theory in psychology that explains human cognition.\nA DMoT comprises of a slow-thinking system for deep reasoning and a fast-thinking system for fast inference.\nIn the case of BPS, the speaker itself is essentially a slow-thinking system because of the expensive cost of the Bayesian inference operator.\nPerforming variational inference on BPS amounts to distilling the knowledge and capabilities of the slow-thinking system into a more efficient fast-thinking system using a learning algorithm.\nIn the more specific case of RLHF-tuned LLMs, the slow-thinking system is constituted by the pre-trained model (the base speaker) and the reward function (the ToM listener).\nThis system is a BPS that pragmatically reasons about the real listener to make decisions.\nRL serves as the learning algorithm, constructing a fast-thinking system (the fine-tuned LLM) that agrees with the slow-thinking system in a set of situations.\nIf this fast thinking system robustly generalizes to new situations, it allows the LLM to communicate both efficiently and pragmatically.\nThis approach can be viewed as a form of amortized inference Gershman & Goodman (2014 ###reference_17###), wherein inferences are \u201ccached\u201d to reduce the asymptotic cost.\nWhile it may not be necessary to construct an explicit fast-thinking system111For example, a Monte Carlo approach only draws a set of samples from the slow-thinking system and considers it as an implicit fast-thinking system., implementing the system as an actual machine learning model can be powerful.\nHigh-capacity models like neural networks can potentially implement more complex algorithms than any human can design.\nMoreover, this algorithm can be continually improved by minimizing disagreement with slow-thinking system and optimizing for other intrinsic motivations (e.g., cognitive effort).\nConsequently, instead of having to manually design a complex inference algorithm, one can implement a highly general model and learning algorithm, and let the optimization process automatically discover an effective inference algorithm.\nDMoT is an abstract conceptualization that can manifest itself in various forms.\nA slow-thinking system can be implemented in many different ways: a probabilistic model Griffiths et al. (2010 ###reference_20###), a modular neural network Corona et al. (2020 ###reference_12###), a tree search algorithm (Anthony et al., 2017 ###reference_4###; Zhao et al., 2023b ###reference_58###), a causal graph (Geiger et al., 2021 ###reference_16###), a program (Wang et al., 2023 ###reference_48###), or a language model prompted to reason and construct plans (Wei et al., 2022 ###reference_50###; Ahn et al., 2022 ###reference_1###) or engineered to represent mental states (Andreas, 2022 ###reference_2###).\nA fast-thinking system can be a light-weight generative neural network.\nThe learning algorithm can be imitation learning, reinforcement learning, an advanced decoding algorithm Lu et al. (2021 ###reference_31###), or a learning algorithm that enables learning from rich feedback Nguyen et al. 
(2021 ###reference_33###).\nWhile we could attempt all combinations, it is more useful to think about general development directions.\nIn the remainder of the section, we discuss several potential directions motivated by our analysis of the fundamental limitations of RLHF as an approach to constructing a DMoT.\nOur proposals are summarized in Figure 2 ###reference_###."
58
+ },
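A deliberately trivial sketch of the "cached" inference reading of amortization follows. A real fast-thinking system would be a trained model rather than a memo table; the memoized function below merely stands in for it, and `slow_think` is a placeholder for expensive pragmatic reasoning.

```python
from functools import lru_cache

def slow_think(task):
    # Placeholder for expensive pragmatic reasoning (e.g., reranking candidates).
    answers = {"t1": "u2", "t2": "u1"}
    return answers[task]

@lru_cache(maxsize=None)
def fast_think(task):
    # After the first call per task, answers come from the cache ("amortized").
    return slow_think(task)

print(fast_think("t1"))  # slow path runs once
print(fast_think("t1"))  # served by the fast path thereafter
```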
59
+ {
60
+ "section_id": "5.1",
61
+ "parent_section_id": "5",
62
+ "section_name": "Beyond reward function: slow-thinking system with strong reasoning capability",
63
+ "text": "As shown in \u00a7 4.2 ###reference_###, an RLHF-tuned LLM defines a slow-thinking system based on a reward function , which is essentially a ToM listener\n.\nWe argue that this function offers very limited reasoning capability.\nFirst, the function lacks the capability of reasoning counterfactually, because it does not model the full distribution of the true listener.\nImagine the true listener\u2019s model as a matrix with rows corresponding to intentions and columns corresponding to utterances.\nThe RLHF\u2019s ToM listener captures only a single row of this matrix where .\nIt can only predict the likelihood of an utterance under the target intention , but it cannot describe exactly what intentions the listener could infer from .\nBeing able to reason counterfactually is important for a model to develop a deep understanding of the consequences of its behavior, which helps it effectively adjust its behavior to achieve goals.\nFor example, in a summarization task, suppose that a language model implements a ToM listener and employs it as an imaginary human judge to iteratively revise its summary before outputting a final one.\nIf the model simply reasons about how a human would numerically grade its summary, it provides itself with very vague clues about how to improve the summary.\nDoes a score of 6 out of 10 imply that a summary needs to be more concise or faithful, or both?\nIn contrast, if the model can imagine evaluation with more elaborate criteria (e.g., faithfulness, conciseness, toxicity), it can modify its summary more effectively to satisfy the real listener.\nIt is important to clarify that we do not claim that RLHF-tuned LLMs cannot perform counterfactual reasoning.\nIn fact, they can acquire this capability by imitating records of human thoughts (see (Lampinen et al., 2023 ###reference_26###) for a general explanation).\nOur argument is that the reward function does not offer counterfactual reasoning ability.\nHowever, RLHF-tuned LLMs can still acquire this capability through other mechanisms.\nSecond, a reward function does not capture the long-term effect of an utterance in the world because it is only trained to predict the immediate judgment of a human on the utterance.\nIn reality, an utterance does not simply influence human thoughts, but those thoughts would eventually be translated into actions that alter the world.\nA safe AI agent should implement a slow-thinking system that is capable of reasoning about the long-term impact of its actions.\nWhen offering life advice, the agent must anticipate the potential biases that could influence users\u2019 decisions, in order to avoid recommending harmful actions. Similarly, when providing cooking recipes, it is crucial that the agent envisions the end results and considers their impact on human health, ensuring that no unintentional poison recipes are created.\nThese capabilities necessitate rich knowledge about the world and how humans interact in it, which is currently severely lacking in reward functions trained purely on texts and human judgements.\nThese drawbacks suggest that a natural development for RLHF is to generalize the reward function into a world model which can postulate physical and social interactions (Ni et al., 2023 ###reference_34###; Hafner et al., 2023 ###reference_21###; Park et al., 2023 ###reference_38###; Yao et al., 2023 ###reference_54###; Wong et al., 2023 ###reference_53###) and to develop approximate inference algorithms to reduce the cost of planning with a world model."
64
+ },
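The "single row of the listener matrix" point can be rendered as a toy example. All numbers are invented: to the reward, which exposes only the row for the target intention, the two utterances look identical, while the full listener matrix reveals that one of them also risks conveying a harmful misreading.

```python
# Full listener P(intention | utterance) vs. the single row an RLHF reward sees.
utterances = ["u1", "u2"]

listener = {  # full matrix, columns normalized over intentions; invented numbers
    ("i_star", "u1"): 0.6, ("i_harmful", "u1"): 0.4, ("i_benign", "u1"): 0.0,
    ("i_star", "u2"): 0.6, ("i_harmful", "u2"): 0.0, ("i_benign", "u2"): 0.4,
}

reward = {u: listener[("i_star", u)] for u in utterances}  # one row only

print(reward)    # {'u1': 0.6, 'u2': 0.6}: indistinguishable to the reward
print(listener)  # but u1 also puts 0.4 mass on the harmful misreading
```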
65
+ {
66
+ "section_id": "5.2",
67
+ "parent_section_id": "5",
68
+ "section_name": "Beyond learning from rewards: transferring knowledge through rich communication",
69
+ "text": "A slow-thinking system should not only possess strong reasoning capability but also implement an algorithm for transferring its knowledge and capabilities quickly and accurately to a fast-thinking system.\nAs previously shown, RLHF-tuned LLMs employ variational inference as the learning algorithm.\nThis method optimizes the KL-divergence: between a variational distribution and an approximated distribution .\nTo augment this method, it is important to understand its basic assumptions.\nSpecifically, the method assumes an efficient evaluation capability of , i.e. it can swiftly and cheaply compute a score for any .\nThis minimal assumption makes variational inference applicable to a wide range of distributions but is also the root of its inefficiency.\nBecause communicates with only by scores, has to propose many samples to \u201cguess\u201d the shape of .\nTo improve this method, we need to untie the communication bottleneck between and , giving more information than just \u201chow likely your sample is under my distribution\u201d.\nSuppose that we can decompose into and such that , and that the space of is highly structured (e.g., the space of language utterances).\nWe can then compute .\nNow, for a sample drawn from , we can provide more information than just : (i) we can offer information about by sending a sample and (ii) we can disclose information about by sending a sample .222Note that if and has the same representation, i.e. , then or is an identity mapping and becomes . This approach is reduced to behavior cloning because or is essentially a demonstration.\nThese pieces of information allow for the estimation of and .\nWhen we have good approximations of these distributions, we can fully recover .\nThis approach yields advantages if the structure of the space of enables the learning of and to be much more sample-efficient than directly estimating via variational inference.\nFor example, if is expressed in a compositional language, we can hope that if and have overlapping phrases, improving the estimation of also refines the estimation of ; similarly, we can expect that changing to also shift .\nTranslated into the language of reinforcement learning, this idea basically suggests that learning can be accelerated by employing more informative and structured feedback.\nHere, the term \u201cfeedback\u201d refers to any piece of information about received after observing a sample .\nIn variational inference, feedback is a reward .\nIn the approach that we suggest, feedback is and , which can be of any form.\nNguyen et al. 
(2021 ###reference_33###) demonstrate a variant of this approach where feedback is a language description.\nThe authors present an algorithm with theoretical guarantees and empirically show that it is more sample-efficient than reinforcement learning baselines.\nWe refer the reader to the original paper for more details.\nThere is no free lunch: the primary challenge of this approach is to choose the type of feedback that supports fast learning and is yet inexpensive to obtain.\nThe superiority of natural language as a communication medium makes it a great choice for feedback conveyance.\nHowever, collecting language feedback directly from humans is notoriously costly.\nA promising future direction is to leverage powerful LLMs to cheaply generate language feedback.\nThese models possess remarkable language generation capabilities and vast common sense about the world.\nWith adequate fine-tuning and prompting, they can potentially be repurposed into high-quality feedback providers.\nAlthough it is desirable to perfectly simulate human behavior,\nbuilding models that are reliable enough to substantially reduce the amount of real human feedback would already bring immense economic values."
70
+ },
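An illustrative contrast between scalar-reward feedback and the richer, structured feedback this section argues for. Everything here is schematic: the "description" channel loosely stands in for the language feedback of Nguyen et al. (2021), and both feedback functions are hypothetical.

```python
# Scalar feedback (variational-inference style) vs. structured feedback.
def scalar_feedback(utterance):
    # One number per sample: "how likely your sample is under my distribution".
    return 0.4

def rich_feedback(utterance):
    # A structured description: far more information per interaction.
    return {"faithfulness": "low", "conciseness": "high",
            "suggestion": "keep the second sentence, drop the first"}

u = "candidate summary ..."
print(scalar_feedback(u))  # learner must guess what to change from many scores
print(rich_feedback(u))    # learner is told what to change after one sample
```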
71
+ {
72
+ "section_id": "6",
73
+ "parent_section_id": null,
74
+ "section_name": "Conclusion",
75
+ "text": "We believe that there are great opportunities for the fields of reinforcement learning, probabilistic programming, and socio-cognitive science to collaboratively contribute to the development of more capable and beneficial large language models.\nIn this work, we show that Bayesian models of human cognition can be used to explain the operation of large language models.\nOur proposed framework represents only a simple version of the models that computational cognitive scientists have developed.\nMore advanced proposals, such as hierarchical Bayesian models Tenenbaum et al. (2011 ###reference_45###), can potentially accommodate more complex reasoning and offer better explainability.\nIt has been challenging to scale up these models to real-world problems because of their expensive inference cost.\nHowever, as we have shown, large language models and its learning techniques like RLHF can offer themselves as useful tools for developing more scalable Bayesian probabilistic models.\nThe outcomes would yield models that not only advance the scientific pursuit of comprehending human cognition but also serve as pragmatic tools, enhancing the quality of our daily lives."
76
+ }
77
+ ],
78
+ "appendix": [],
79
+ "tables": {},
80
+ "image_paths": {
81
+ "1": {
82
+ "figure_path": "2305.17760v6_figure_1.png",
83
+ "caption": "Figure 1: An overview of our proposed framework. (a) a summarization task is illustrated as a communication game, where a speaker generates an utterance (the summary) to convey an intention (generating a good summary) given a context (the text to be summarized). The game is considered solved when the speaker presents an utterance that causes the listener to infer exactly the speaker\u2019s target intention. (b) a bounded pragmatic speaker efficiently finds a good utterance to output by implementing a base speaker to effectively restrict the search space, and a theory-of-mind listener to anticipate the intention inferred by the (real) listener.",
84
+ "url": "http://arxiv.org/html/2305.17760v6/x1.png"
85
+ },
86
+ "2": {
87
+ "figure_path": "2305.17760v6_figure_2.png",
88
+ "caption": "Figure 2: RLHF-tuned LLMs are instances of models that implement a dual model of thought (a), which consists of a deliberate, methodical thinking system for rigorous reasoning (the slow-thinking system) and a quick, intuitive system for rapid decision-making (the fast-thinking system). The efficacy of the fast-thinking system can be continually enhanced by learning from the slow-thinking system. However, we argue that RLHF-tuned LLMs are still a rudimentary dual model of thought (b). The reward function fails to capture the complete reasoning capabilities of the listener, and the slow-thinking system communicates knowledge through a limited-capacity channel. We advocate for the development of a more comprehensive dual model of thought, wherein the slow-thinking system possesses extensive knowledge and profound comprehension of the physical and social world. This system would employ effective reasoning algorithms (LLMs, search algorithms, probabilistic programs, etc.) to leverage such knowledge and understanding, while facilitating efficient distillation of knowledge and capabilities into the fast-thinking system.",
89
+ "url": "http://arxiv.org/html/2305.17760v6/x2.png"
90
+ }
91
+ },
92
+ "validation": true,
93
+ "references": [
94
+ {
95
+ "1": {
96
+ "title": "Do as i can, not as i say: Grounding language in robotic affordances.",
97
+ "author": "Ahn, M., Brohan, A., Brown, N., Chebotar, Y., Cortes, O., David, B., Finn, C., Gopalakrishnan, K., Hausman, K., Herzog, A., et al.",
98
+ "venue": "arXiv preprint arXiv:2204.01691, 2022.",
99
+ "url": null
100
+ }
101
+ },
102
+ {
103
+ "2": {
104
+ "title": "Language models as agent models.",
105
+ "author": "Andreas, J.",
106
+ "venue": "In Findings of the Association for Computational Linguistics: EMNLP 2022, pp. 5769\u20135779, Abu Dhabi, United Arab Emirates, December 2022. Association for Computational Linguistics.",
107
+ "url": null
108
+ }
109
+ },
110
+ {
111
+ "3": {
112
+ "title": "Reasoning about pragmatics with neural listeners and speakers.",
113
+ "author": "Andreas, J. and Klein, D.",
114
+ "venue": "arXiv preprint arXiv:1604.00562, 2016.",
115
+ "url": null
116
+ }
117
+ },
118
+ {
119
+ "4": {
120
+ "title": "Thinking fast and slow with deep learning and tree search.",
121
+ "author": "Anthony, T., Tian, Z., and Barber, D.",
122
+ "venue": "Advances in neural information processing systems, 30, 2017.",
123
+ "url": null
124
+ }
125
+ },
126
+ {
127
+ "5": {
128
+ "title": "Does the autistic child have a \u201ctheory of mind\u201d?",
129
+ "author": "Baron-Cohen, S., Leslie, A. M., and Frith, U.",
130
+ "venue": "Cognition, 21(1):37\u201346, 1985.",
131
+ "url": null
132
+ }
133
+ },
134
+ {
135
+ "6": {
136
+ "title": "Language models are few-shot learners.",
137
+ "author": "Brown, T., Mann, B., Ryder, N., Subbiah, M., Kaplan, J. D., Dhariwal, P., Neelakantan, A., Shyam, P., Sastry, G., Askell, A., et al.",
138
+ "venue": "Advances in neural information processing systems, 33:1877\u20131901, 2020.",
139
+ "url": null
140
+ }
141
+ },
142
+ {
143
+ "7": {
144
+ "title": "Evaluating large language models trained on code.",
145
+ "author": "Chen, M., Tworek, J., Jun, H., Yuan, Q., Pinto, H. P. d. O., Kaplan, J., Edwards, H., Burda, Y., Joseph, N., Brockman, G., et al.",
146
+ "venue": "arXiv preprint arXiv:2107.03374, 2021.",
147
+ "url": null
148
+ }
149
+ },
150
+ {
151
+ "8": {
152
+ "title": "Innovative bert-based reranking language models for speech recognition.",
153
+ "author": "Chiu, S.-H. and Chen, B.",
154
+ "venue": "In 2021 IEEE Spoken Language Technology Workshop (SLT), pp. 266\u2013271. IEEE, 2021.",
155
+ "url": null
156
+ }
157
+ },
158
+ {
159
+ "9": {
160
+ "title": "Palm: Scaling language modeling with pathways.",
161
+ "author": "Chowdhery, A., Narang, S., Devlin, J., Bosma, M., Mishra, G., Roberts, A., Barham, P., Chung, H. W., Sutton, C., Gehrmann, S., et al.",
162
+ "venue": "arXiv preprint arXiv:2204.02311, 2022.",
163
+ "url": null
164
+ }
165
+ },
166
+ {
167
+ "10": {
168
+ "title": "Deep reinforcement learning from human preferences.",
169
+ "author": "Christiano, P. F., Leike, J., Brown, T., Martic, M., Legg, S., and Amodei, D.",
170
+ "venue": "Advances in neural information processing systems, 30, 2017.",
171
+ "url": null
172
+ }
173
+ },
174
+ {
175
+ "11": {
176
+ "title": "Training verifiers to solve math word problems.",
177
+ "author": "Cobbe, K., Kosaraju, V., Bavarian, M., Chen, M., Jun, H., Kaiser, L., Plappert, M., Tworek, J., Hilton, J., Nakano, R., et al.",
178
+ "venue": "arXiv preprint arXiv:2110.14168, 2021.",
179
+ "url": null
180
+ }
181
+ },
182
+ {
183
+ "12": {
184
+ "title": "Modular networks for compositional instruction following.",
185
+ "author": "Corona, R., Fried, D., Devin, C., Klein, D., and Darrell, T.",
186
+ "venue": "arXiv preprint arXiv:2010.12764, 2020.",
187
+ "url": null
188
+ }
189
+ },
190
+ {
191
+ "13": {
192
+ "title": "A survey for in-context learning.",
193
+ "author": "Dong, Q., Li, L., Dai, D., Zheng, C., Wu, Z., Chang, B., Sun, X., Xu, J., and Sui, Z.",
194
+ "venue": "arXiv preprint arXiv:2301.00234, 2022.",
195
+ "url": null
196
+ }
197
+ },
198
+ {
199
+ "14": {
200
+ "title": "Predicting pragmatic reasoning in language games.",
201
+ "author": "Frank, M. C. and Goodman, N. D.",
202
+ "venue": "Science, 336(6084):998\u2013998, 2012.",
203
+ "url": null
204
+ }
205
+ },
206
+ {
207
+ "15": {
208
+ "title": "Unified pragmatic models for generating and following instructions.",
209
+ "author": "Fried, D., Andreas, J., and Klein, D.",
210
+ "venue": "arXiv preprint arXiv:1711.04987, 2017.",
211
+ "url": null
212
+ }
213
+ },
214
+ {
215
+ "16": {
216
+ "title": "Causal abstractions of neural networks.",
217
+ "author": "Geiger, A., Lu, H., Icard, T., and Potts, C.",
218
+ "venue": "Advances in Neural Information Processing Systems, 34:9574\u20139586, 2021.",
219
+ "url": null
220
+ }
221
+ },
222
+ {
223
+ "17": {
224
+ "title": "Amortized inference in probabilistic reasoning.",
225
+ "author": "Gershman, S. and Goodman, N.",
226
+ "venue": "In Proceedings of the annual meeting of the cognitive science society, volume 36, 2014.",
227
+ "url": null
228
+ }
229
+ },
230
+ {
231
+ "18": {
232
+ "title": "Pragmatic language interpretation as probabilistic inference.",
233
+ "author": "Goodman, N. D. and Frank, M. C.",
234
+ "venue": "Trends in cognitive sciences, 20(11):818\u2013829, 2016.",
235
+ "url": null
236
+ }
237
+ },
238
+ {
239
+ "19": {
240
+ "title": "Children\u2019s understanding of representational change and its relation to the understanding of false belief and the appearance-reality distinction.",
241
+ "author": "Gopnik, A. and Astington, J. W.",
242
+ "venue": "Child development, pp. 26\u201337, 1988.",
243
+ "url": null
244
+ }
245
+ },
246
+ {
247
+ "20": {
248
+ "title": "Probabilistic models of cognition: Exploring representations and inductive biases.",
249
+ "author": "Griffiths, T. L., Chater, N., Kemp, C., Perfors, A., and Tenenbaum, J. B.",
250
+ "venue": "Trends in cognitive sciences, 14(8):357\u2013364, 2010.",
251
+ "url": null
252
+ }
253
+ },
254
+ {
255
+ "21": {
256
+ "title": "Mastering diverse domains through world models.",
257
+ "author": "Hafner, D., Pasukonis, J., Ba, J., and Lillicrap, T.",
258
+ "venue": "arXiv preprint arXiv:2301.04104, 2023.",
259
+ "url": null
260
+ }
261
+ },
262
+ {
263
+ "22": {
264
+ "title": "Training compute-optimal large language models.",
265
+ "author": "Hoffmann, J., Borgeaud, S., Mensch, A., Buchatskaya, E., Cai, T., Rutherford, E., Casas, D. d. L., Hendricks, L. A., Welbl, J., Clark, A., et al.",
266
+ "venue": "arXiv preprint arXiv:2203.15556, 2022.",
267
+ "url": null
268
+ }
269
+ },
270
+ {
271
+ "23": {
272
+ "title": "The curious case of neural text degeneration.",
273
+ "author": "Holtzman, A., Buys, J., Du, L., Forbes, M., and Choi, Y.",
274
+ "venue": "arXiv preprint arXiv:1904.09751, 2019.",
275
+ "url": null
276
+ }
277
+ },
278
+ {
279
+ "24": {
280
+ "title": "Thinking, fast and slow.",
281
+ "author": "Kahneman, D.",
282
+ "venue": "macmillan, 2011.",
283
+ "url": null
284
+ }
285
+ },
286
+ {
287
+ "25": {
288
+ "title": "Rl with kl penalties is better viewed as bayesian inference.",
289
+ "author": "Korbak, T., Perez, E., and Buckley, C. L.",
290
+ "venue": "arXiv preprint arXiv:2205.11275, 2022.",
291
+ "url": null
292
+ }
293
+ },
294
+ {
295
+ "26": {
296
+ "title": "Passive learning of active causal strategies in agents and language models.",
297
+ "author": "Lampinen, A. K., Chan, S. C., Dasgupta, I., Nam, A. J., and Wang, J. X.",
298
+ "venue": "arXiv preprint arXiv:2305.16183, 2023.",
299
+ "url": null
300
+ }
301
+ },
302
+ {
303
+ "27": {
304
+ "title": "Multi-agent cooperation and the emergence of (natural) language.",
305
+ "author": "Lazaridou, A., Peysakhovich, A., and Baroni, M.",
306
+ "venue": "arXiv preprint arXiv:1612.07182, 2016.",
307
+ "url": null
308
+ }
309
+ },
310
+ {
311
+ "28": {
312
+ "title": "Reinforcement learning and control as probabilistic inference: Tutorial and review.",
313
+ "author": "Levine, S.",
314
+ "venue": "arXiv preprint arXiv:1805.00909, 2018.",
315
+ "url": null
316
+ }
317
+ },
318
+ {
319
+ "29": {
320
+ "title": "Convention: A Philosophical Study.",
321
+ "author": "Lewis, D. K.",
322
+ "venue": "Cambridge, MA, USA: Wiley-Blackwell, 1969.",
323
+ "url": null
324
+ }
325
+ },
326
+ {
327
+ "30": {
328
+ "title": "Contrastive decoding: Open-ended text generation as optimization.",
329
+ "author": "Li, X. L., Holtzman, A., Fried, D., Liang, P., Eisner, J., Hashimoto, T., Zettlemoyer, L., and Lewis, M.",
330
+ "venue": "arXiv preprint arXiv:2210.15097, 2022.",
331
+ "url": null
332
+ }
333
+ },
334
+ {
335
+ "31": {
336
+ "title": "Neurologic a* esque decoding: Constrained text generation with lookahead heuristics.",
337
+ "author": "Lu, X., Welleck, S., West, P., Jiang, L., Kasai, J., Khashabi, D., Bras, R. L., Qin, L., Yu, Y., Zellers, R., et al.",
338
+ "venue": "arXiv preprint arXiv:2112.08726, 2021.",
339
+ "url": null
340
+ }
341
+ },
342
+ {
343
+ "32": {
344
+ "title": "Dissociating language and thought in large language models: a cognitive perspective.",
345
+ "author": "Mahowald, K., Ivanova, A. A., Blank, I. A., Kanwisher, N., Tenenbaum, J. B., and Fedorenko, E.",
346
+ "venue": "arXiv preprint arXiv:2301.06627, 2023.",
347
+ "url": null
348
+ }
349
+ },
350
+ {
351
+ "33": {
352
+ "title": "Interactive learning from activity description.",
353
+ "author": "Nguyen, K. X., Misra, D., Schapire, R., Dud\u00edk, M., and Shafto, P.",
354
+ "venue": "In International Conference on Machine Learning, pp. 8096\u20138108. PMLR, 2021.",
355
+ "url": null
356
+ }
357
+ },
358
+ {
359
+ "34": {
360
+ "title": "Lever: Learning to verify language-to-code generation with execution.",
361
+ "author": "Ni, A., Iyer, S., Radev, D., Stoyanov, V., Yih, W.-t., Wang, S. I., and Lin, X. V.",
362
+ "venue": "arXiv preprint arXiv:2302.08468, 2023.",
363
+ "url": null
364
+ }
365
+ },
366
+ {
367
+ "35": {
368
+ "title": "Chatgpt.",
369
+ "author": "OpenAI.",
370
+ "venue": "https://openai.com/blog/chatgpt, 2022.",
371
+ "url": null
372
+ }
373
+ },
374
+ {
375
+ "36": {
376
+ "title": "Gpt-4 technical report.",
377
+ "author": "OpenAI.",
378
+ "venue": "2023.",
379
+ "url": null
380
+ }
381
+ },
382
+ {
383
+ "37": {
384
+ "title": "Training language models to follow instructions with human feedback.",
385
+ "author": "Ouyang, L., Wu, J., Jiang, X., Almeida, D., Wainwright, C., Mishkin, P., Zhang, C., Agarwal, S., Slama, K., Ray, A., et al.",
386
+ "venue": "Advances in Neural Information Processing Systems, 35:27730\u201327744, 2022.",
387
+ "url": null
388
+ }
389
+ },
390
+ {
391
+ "38": {
392
+ "title": "Generative agents: Interactive simulacra of human behavior.",
393
+ "author": "Park, J. S., O\u2019Brien, J. C., Cai, C. J., Morris, M. R., Liang, P., and Bernstein, M. S.",
394
+ "venue": "arXiv preprint arXiv:2304.03442, 2023.",
395
+ "url": null
396
+ }
397
+ },
398
+ {
399
+ "39": {
400
+ "title": "Does the chimpanzee have a theory of mind?",
401
+ "author": "Premack, D. and Woodruff, G.",
402
+ "venue": "Behavioral and brain sciences, 1(4):515\u2013526, 1978.",
403
+ "url": null
404
+ }
405
+ },
406
+ {
407
+ "40": {
408
+ "title": "Bayesian brains without probabilities.",
409
+ "author": "Sanborn, A. N. and Chater, N.",
410
+ "venue": "Trends in cognitive sciences, 20(12):883\u2013893, 2016.",
411
+ "url": null
412
+ }
413
+ },
414
+ {
415
+ "41": {
416
+ "title": "Bloom: A 176b-parameter open-access multilingual language model.",
417
+ "author": "Scao, T. L., Fan, A., Akiki, C., Pavlick, E., Ili\u0107, S., Hesslow, D., Castagn\u00e9, R., Luccioni, A. S., Yvon, F., Gall\u00e9, M., et al.",
418
+ "venue": "arXiv preprint arXiv:2211.05100, 2022.",
419
+ "url": null
420
+ }
421
+ },
422
+ {
423
+ "42": {
424
+ "title": "Models of man; social and rational.",
425
+ "author": "Simon, H. A.",
426
+ "venue": "1957.",
427
+ "url": null
428
+ }
429
+ },
430
+ {
431
+ "43": {
432
+ "title": "Learning to summarize with human feedback.",
433
+ "author": "Stiennon, N., Ouyang, L., Wu, J., Ziegler, D., Lowe, R., Voss, C., Radford, A., Amodei, D., and Christiano, P. F.",
434
+ "venue": "Advances in Neural Information Processing Systems, 33:3008\u20133021, 2020.",
435
+ "url": null
436
+ }
437
+ },
438
+ {
439
+ "44": {
440
+ "title": "How to talk so ai will learn: Instructions, descriptions, and autonomy.",
441
+ "author": "Sumers, T., Hawkins, R., Ho, M. K., Griffiths, T., and Hadfield-Menell, D.",
442
+ "venue": "Advances in Neural Information Processing Systems, 35:34762\u201334775, 2022.",
443
+ "url": null
444
+ }
445
+ },
446
+ {
447
+ "45": {
448
+ "title": "How to grow a mind: Statistics, structure, and abstraction.",
449
+ "author": "Tenenbaum, J. B., Kemp, C., Griffiths, T. L., and Goodman, N. D.",
450
+ "venue": "science, 331(6022):1279\u20131285, 2011.",
451
+ "url": null
452
+ }
453
+ },
454
+ {
455
+ "46": {
456
+ "title": "Llama: Open and efficient foundation language models.",
457
+ "author": "Touvron, H., Lavril, T., Izacard, G., Martinet, X., Lachaux, M.-A., Lacroix, T., Rozi\u00e8re, B., Goyal, N., Hambro, E., Azhar, F., et al.",
458
+ "venue": "arXiv preprint arXiv:2302.13971, 2023.",
459
+ "url": null
460
+ }
461
+ },
462
+ {
463
+ "47": {
464
+ "title": "One and done? optimal decisions from very few samples.",
465
+ "author": "Vul, E., Goodman, N., Griffiths, T. L., and Tenenbaum, J. B.",
466
+ "venue": "Cognitive science, 38(4):599\u2013637, 2014.",
467
+ "url": null
468
+ }
469
+ },
470
+ {
471
+ "48": {
472
+ "title": "Voyager: An open-ended embodied agent with large language models.",
473
+ "author": "Wang, G., Xie, Y., Jiang, Y., Mandlekar, A., Xiao, C., Zhu, Y., Fan, L., and Anandkumar, A.",
474
+ "venue": "2023.",
475
+ "url": null
476
+ }
477
+ },
478
+ {
479
+ "49": {
480
+ "title": "Calibrate your listeners! robust communication-based training for pragmatic speakers.",
481
+ "author": "Wang, R. E., White, J., Mu, J., and Goodman, N. D.",
482
+ "venue": "arXiv preprint arXiv:2110.05422, 2021.",
483
+ "url": null
484
+ }
485
+ },
486
+ {
487
+ "50": {
488
+ "title": "Chain of thought prompting elicits reasoning in large language models.",
489
+ "author": "Wei, J., Wang, X., Schuurmans, D., Bosma, M., Chi, E., Le, Q., and Zhou, D.",
490
+ "venue": "arXiv preprint arXiv:2201.11903, 2022.",
491
+ "url": null
492
+ }
493
+ },
494
+ {
495
+ "51": {
496
+ "title": "Learning to refer informatively by amortizing pragmatic reasoning.",
497
+ "author": "White, J., Mu, J., and Goodman, N. D.",
498
+ "venue": "arXiv preprint arXiv:2006.00418, 2020.",
499
+ "url": null
500
+ }
501
+ },
502
+ {
503
+ "52": {
504
+ "title": "Beliefs about beliefs: Representation and constraining function of wrong beliefs in young children\u2019s understanding of deception.",
505
+ "author": "Wimmer, H. and Perner, J.",
506
+ "venue": "Cognition, 13(1):103\u2013128, 1983.",
507
+ "url": null
508
+ }
509
+ },
510
+ {
511
+ "53": {
512
+ "title": "From word models to world models: Translating from natural language to the probabilistic language of thought.",
513
+ "author": "Wong, L. S., Grand, G., Lew, A. K., Goodman, N. D., Mansinghka, V. K., Andreas, J., and Tenenbaum, J. B.",
514
+ "venue": "2023.",
515
+ "url": null
516
+ }
517
+ },
518
+ {
519
+ "54": {
520
+ "title": "Tree of thoughts: Deliberate problem solving with large language models.",
521
+ "author": "Yao, S., Yu, D., Zhao, J., Shafran, I., Griffiths, T. L., Cao, Y., and Narasimhan, K.",
522
+ "venue": "arXiv preprint arXiv:2305.10601, 2023.",
523
+ "url": null
524
+ }
525
+ },
526
+ {
527
+ "55": {
528
+ "title": "Opt: Open pre-trained transformer language models.",
529
+ "author": "Zhang, S., Roller, S., Goyal, N., Artetxe, M., Chen, M., Chen, S., Dewan, C., Diab, M., Li, X., Lin, X. V., et al.",
530
+ "venue": "arXiv preprint arXiv:2205.01068, 2022a.",
531
+ "url": null
532
+ }
533
+ },
534
+ {
535
+ "56": {
536
+ "title": "Coder reviewer reranking for code generation.",
537
+ "author": "Zhang, T., Yu, T., Hashimoto, T. B., Lewis, M., Yih, W.-t., Fried, D., and Wang, S. I.",
538
+ "venue": "arXiv preprint arXiv:2211.16490, 2022b.",
539
+ "url": null
540
+ }
541
+ },
542
+ {
543
+ "57": {
544
+ "title": "Define, evaluate, and improve task-oriented cognitive capabilities for instruction generation models.",
545
+ "author": "Zhao, L., Nguyen, K., and Daum\u00e9 III, H.",
546
+ "venue": "arXiv preprint arXiv:2301.05149, 2023a.",
547
+ "url": null
548
+ }
549
+ },
550
+ {
551
+ "58": {
552
+ "title": "Large language models as commonsense knowledge for large-scale task planning.",
553
+ "author": "Zhao, Z., Lee, W. S., and Hsu, D.",
554
+ "venue": "arXiv preprint arXiv:2305.14078, 2023b.",
555
+ "url": null
556
+ }
557
+ }
558
+ ],
559
+ "url": "http://arxiv.org/html/2305.17760v6"
560
+ }
20240101/2306.00613v2.json ADDED
@@ -0,0 +1,319 @@
+ {
+ "title": "Algorithms Transcending the SAT-Symmetry Interface",
+ "abstract": "Dedicated treatment of symmetries in satisfiability problems (SAT) is indispensable for solving various classes of instances arising in practice. However, the exploitation of symmetries usually takes a black box approach. Typically, off-the-shelf external, general-purpose symmetry detection tools are invoked to compute symmetry groups of a formula. The groups thus generated are a set of permutations passed to a separate tool to perform further analyzes to understand the structure of the groups. The result of this second computation is in turn used for tasks such as static symmetry breaking or dynamic pruning of the search space. Within this pipeline of tools, the detection and analysis of symmetries typically incurs the majority of the time overhead for symmetry exploitation.",
+ "sections": [
+ {
+ "section_id": "1",
+ "parent_section_id": null,
+ "section_name": "Introduction",
+ "text": "Many SAT instances, especially of the hard combinatorial type, exhibit symmetries. When symmetries exhibited by these instances\nare not handled adequately, SAT solvers may repeatedly explore symmetric\nparts of the search space.\nThis can dramatically increase runtime, sometimes making it impossible for the solver to finish within reasonable time [6 ###reference_6###].\nOne common method to handle the symmetries is to add symmetry breaking\nformulas to the problem specification [9 ###reference_9###, 2 ###reference_2###]. This approach is called static symmetry breaking.\nAnother, competing, approach is to handle symmetries dynamically during the running of the SAT solver. There are a variety of such dynamic strategies, exploiting symmetry information during variable branching [20 ###reference_20###] and learning [27 ###reference_27###, 11 ###reference_11###].\nFor SAT, the tools Shatter [1 ###reference_1###] and BreakID [12 ###reference_12###, 7 ###reference_7###] take the static symmetry breaking approach, while SymChaff [27 ###reference_27###] and SMS [20 ###reference_20###] take the dynamic symmetry exploitation approach.\nWhile there is a growing number of competing approaches of how best to handle symmetries, there are also a number of common obstacles:\nsymmetries of the underlying formula have to be detected first, and the structure of symmetries has to be understood, at least to some degree.\nApproaches that handle symmetries can be typically divided into three distinct steps: (Step 1) symmetry detection, (Step 2) symmetry analysis, and (Step 3) symmetry breaking, or other ways of exploiting symmetry.\nIn the following, we discuss each of these steps, also illustrated on the left side of Figure 1 ###reference_###.\nStep 1. In practice, symmetries are detected by modeling a given SAT formula as a graph, and then applying an off-the-shelf symmetry detection tool, such as saucy [10 ###reference_10###],\nto the resulting graph.\nSince symmetries form a permutation group under composition, a symmetry detection tool does not return all the symmetries. Instead, it only returns a small set of generators, which, by composition, lead to all the symmetries of the formula.\nIndeed, returning only a small set of generators is crucial for efficiency, since the number of symmetries is often exponential in the size of the formula.\nStep 2. Symmetry exploitation algorithms apply heuristics to analyze the structure of the group described by the generators.\nThis is necessary to enable the best possible use of the symmetries to improve SAT solver performance.\nWe mention three examples for structural analyzes.\nFirstly, the disjoint direct decomposition splits a group into independent parts that can be handled separately.\nSecondly, so-called row interchangeability subgroups of the group [12 ###reference_12###, 13 ###reference_13###, 24 ###reference_24###] are of particular interest since they form a class of groups for which linear-sized, complete symmetry breaking constraints are known.\nThirdly, stabilizers are commonly used for various purposes among both static and dynamic approaches [26 ###reference_26###].\nStep 3. 
Lastly, the symmetries and structural insights are used to reduce the search space in SAT using one of the various static and dynamic symmetry exploitation approaches.\nDesigning symmetry exploitation algorithms typically involves delicately balancing computational overhead against how thoroughly symmetries are used.\nIn this trade-off, symmetry detection (Step 1) and analysis (Step 2) typically induce the majority of the overhead [12 ###reference_12###].\nThe main focus of this paper is improving the analysis of symmetries, i.e., Step 2.\nPractical implementations in use today that perform such structural analyses do so through heuristics. While using heuristics is not an issue per se, some heuristics currently in use strongly depend on properties that the generators returned by symmetry detection tools may or may not exhibit.\nFor example, BreakID and the MIP heuristic in [24 ###reference_24###] both rely on so-called separability of the generating set and a specific arrangement of transpositions being present. Neither of these properties is guaranteed by contemporary symmetry detection tools [8 ###reference_8###].\nIn fact, modern symmetry detection tools such as Traces [22 ###reference_22###] and dejavu [3 ###reference_3###] return randomly selected symmetries, since the use of randomization provides an asymptotic advantage in the symmetry detection process itself [4 ###reference_4###]. However, generating sets consisting of randomly selected symmetries are in a sense the exact opposite of what is desired for the heuristics, since with high probability random symmetries satisfy neither of the required conditions.\nThis is particularly unfortunate, as dejavu is currently the fastest symmetry detection tool available for graphs stemming from SAT instances [5 ###reference_5###].\nAnother downside of the use of practical heuristics for the structural analysis of the group is that they are often also computationally expensive\nand make up a large portion of the runtime of the overall symmetry exploitation process.\nFor example, the row interchangeability algorithm of BreakID performs multiple callbacks to the underlying symmetry detection tool, where each call can be expensive.\nAltogether, heuristics in use today sometimes cause significant overhead, while also posing an obstacle to speeding up symmetry detection itself.\nThis immediately raises the question: why are heuristics that cause such a loss of efficiency currently in place for computations within the SAT-symmetry interface?\nWe believe that the issue is that tools on either side of the interface treat each other as a black box. Indeed, when considered as an isolated task, algorithms for the analysis of permutation groups are well-researched in the area of computational group theory [29 ###reference_29###]. Not only is the theory well-understood, but there are also highly efficient implementations [14 ###reference_14###]. However, we can make two crucial observations regarding the available algorithms.\nFirst and foremost, for group theoretic algorithms from the literature that are deemed to have linear or nearly-linear runtime [29 ###reference_29###], the concrete runtime notions actually differ from the ones applicable in the overall context. In fact, the runtime is essentially measured in\nterms of a dense rather than a sparse input description. Therefore, in the context of SAT solving or graph algorithms, the runtime of these algorithms should rather be considered quadratic. 
Secondly, in computational group theory, algorithms assume that only generators for an input group are available.\nHowever, in the context of the SAT-symmetry interface, not only a group but also a graph (computed from the original formula) is available. It turns out as a key insight of our paper that lacking access to the graphs crucially limits the design space for efficient algorithms.\nContributions.\nAdvocating a holistic view of the SAT-symmetry interface, we develop algorithms that transcend both into the SAT domain and the symmetry domain at the same time.\nThis is illustrated in Figure 1 ###reference_### on the right side.\nFirstly, we provide a definition for the computational setting such as input, output, and runtime, under which these algorithms should operate (Section 3 ###reference_###).\nWe then extract precise formal problem definitions from heuristics implemented in state-of-the-art tools (Section 4 ###reference_###).\nLastly, we demonstrate the efficacy of our new approach by providing faster theoretical algorithms for commonly used heuristics, as is described below.\nComputational Setting.\nIn our new computational setting, algorithms take as input a joint graph/group pair, meaning a group and corresponding graph , whose symmetry group is precisely .\nWe define a precise notion of instance-linear time, meaning it is linear in the encoding size of the SAT formula, graph, and group.\nNew Algorithms. Given a joint graph/group pair, we develop and analyze the following algorithms:\nAn instance-linear algorithm for computing the finest direct disjoint decomposition of the symmetry group of a graph (Section 5 ###reference_###).\nWe also give a heuristic specific to SAT formulas, decomposing the symmetry group on the literals.\nAn algorithm to simultaneously detect natural symmetric group actions on all the orbits of a group (Section 6 ###reference_###). Here we exploit randomized techniques from computational group theory for the detection of \u201cgiant\u201d permutation groups.\nWe give instance-linear heuristics which are able to exploit properties of the SAT-symmetry interface.\nAn instance-quasi-linear algorithm to compute equivalent symmetric orbits, under some mild assumptions about the generating set (Section 7 ###reference_###).\nIn conjunction with (A2), this enables us to detect all elementary row interchangeability subgroups.\nBoth (A1) and (A3) improve the (at least) quadratic runtime of previous, general-purpose permutation group algorithms of [8 ###reference_8###] and [29 ###reference_29###], respectively."
+ },
+ {
+ "section_id": "2",
+ "parent_section_id": null,
+ "section_name": "Preliminaries and Related Work",
+ "text": "Graphs and Symmetries.\nA colored graph consists of a set of vertices , edges , and a vertex coloring which maps to some set of colors . We use , , and to refer to the vertices, edges, and coloring of , respectively.\nA symmetry, or automorphism, of a colored graph is a bijection such that as well as for all .\nIn other words, symmetries preserve the neighborhood relation of the graph, as well as the coloring of vertices.\nThe colors of vertices in the graph are solely used to ensure that distinctly colored vertices are not mapped onto each other using symmetries.\nTogether, all symmetries of a graph form a permutation group under composition, which we call the automorphism group .\nIn this paper, we call software tools computing the automorphism group of a graph symmetry detection tools [22 ###reference_22###, 10 ###reference_10###, 17 ###reference_17###, 22 ###reference_22###, 3 ###reference_3###].\nIn the literature, these tools are also often called practical graph isomorphism solvers.\nIn this paper, we avoid the use of this term in order not to confuse them with SAT solvers.\nColor refinement.\nA common algorithm applied when computing the symmetries of a graph is color refinement.\nGiven a colored graph , color refinement refines the coloring of into .\nCrucially, the automorphism group remains invariant under color refinement, i.e., .\nWe now describe the algorithm.\nIf two vertices in some color have a different number of neighbors in another color ,\nthen can be split by partitioning it according to neighbor counts in .\nAfter the split, two vertices have the same color precisely if they had the same color before the split, and they have the same number of neighbors in .\nWe repeatedly split classes with respect to other classes until no further splits are possible.\nFigure 2 ###reference_### shows an illustration of the color refinement procedure.\nA coloring which does not admit further splits is called equitable.\nFor a graph , color refinement can be computed in time [21 ###reference_21###, 25 ###reference_25###].\nLet us also recall a different definition for equitable colorings:\nA coloring of a graph is equitable if for all pairs of (not necessarily distinct) color classes , all vertices in have the same number of neighbors in (i.e., for all ).\nGiven a coloring , color refinement computes an equitable refinement , i.e., an equitable coloring for which implies .\nIn fact, it computes the coarsest equitable refinement.\nPermutation Groups.\nThe symmetric group is the permutation group consisting of all permutations of the set .\nA permutation group on domain is a group that is a subgroup of , denoted .\nFor a subset of the domain , the restriction of to is (where denotes restricting the domain of to ). 
The restriction is not necessarily a group since the images need not be in .\nThe pointwise stabilizer is the group , obtained by fixing all points of individually.\nWhenever we are dealing with groups, we use a specific, succinct encoding.\nInstead of explicitly representing each element of the group, we only store a subset that is sufficient to obtain any other element through composition.\nFormally, let be a subset of the group , i.e., .\nWe call a generating set of whenever we obtain precisely when exhaustively composing elements of .\nWe write .\nMoreover, each individual element can be referred to as a generator of .\nWe write for the support of a map, meaning points of not fixed by .\nThe support of a group is the union of all supports of elements of , i.e., .\nWe use the cycle notation for permutations .\nThe permutation of given by we write as .\nNote that, for example and denote the same permutation.\nAlgorithmically the cycle notation enables us to read and store a permutation in time .\nWhen considering two permutation groups and it is possible that the groups are isomorphic as abstract groups but not as permutation groups. For example, if we let the symmetric group act component-wise on pairs of elements of , we obtain a permutation group with domain that also has many elements. In fact this group is isomorphic to as an abstract group. We say a group is a symmetric group in natural action if the group is , where is the domain of .\nSAT and Symmetries. A Boolean satisfiability (SAT) instance is commonly given in conjunctive normal form (CNF), which we denote with\n, where each element of is called a clause. A clause itself consists of a set of literals. A literal is either a variable or its negation.\nWe use for the set of variables of and we use for its literals.\nA symmetry, or automorphism, of is a permutation of the literals satisfying the following two properties. First, it maps back to itself, i.e., , where is applied element-wise to the literals in each clause. Here clauses are equivalent, if they are the same when treated as unordered sets of literals, for example . Then if is obtained from by reordering the literals of within the clauses.\nSecond, for all it must hold that , i.e., induces a permutation of the variables.\nFor example the permutation mapping to and to , with indices taken modulo 4, is a symmetry of .\nThe permutation group of all symmetries of is .\nIt is well understood that the symmetries of a SAT formula can be captured by a graph.\nWe call this the model graph and denote it with .\nWhile there exists various constructions for the model graphs, we use the following common construction.\nEach literal is associated with a vertex .\nEach clause is associated with a vertex .\nAll pairs of literals and are connected by an edge.\nFor all literals of a clause , we connect vertices and .\nLastly, to distinguish clause vertices from literal vertices, we color all clauses with color and all literals with color .\nAs desired, for this graph, holds [28 ###reference_28###].\nConsider the formula .\nThroughout the paper, we use as our running example.\nFigure 3 ###reference_### shows its model graph.\nRegarding the symmetries of , note that there are symmetries interchanging all of , all of and of the .\nA generating set for is\nOrbits.\nGiven a permutation group , we denote with the orbit of a point .\nThat is, an element is in whenever there is a with .\nThe orbits of our example are shown in Figure 3 ###reference_###, e.g., the orbit is green."
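Since color refinement recurs throughout the paper, a minimal Python sketch may help fix ideas. This is the naive quadratic version of the procedure described above, not the O((n+m) log n) partition-refinement algorithm the text cites; all names are illustrative.

```python
def color_refinement(adj, colors):
    """Naive color refinement (coarsest equitable refinement).

    adj:    dict vertex -> iterable of neighbors
    colors: dict vertex -> initial color (e.g., small integers)
    This is the quadratic textbook version; the partition-refinement
    variant cited above achieves O((n + m) log n).
    """
    colors = dict(colors)
    while True:
        # New color: old color together with the multiset of neighbor colors.
        sig = {v: (colors[v], tuple(sorted(colors[w] for w in adj[v])))
               for v in adj}
        # Canonically rename signatures to small integers.
        palette = {s: i for i, s in enumerate(sorted(set(sig.values())))}
        new_colors = {v: palette[sig[v]] for v in adj}
        if len(set(new_colors.values())) == len(set(colors.values())):
            return new_colors   # no class split: the coloring is equitable
        colors = new_colors
```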
+ },
+ {
+ "section_id": "3",
+ "parent_section_id": null,
+ "section_name": "SAT-Symmetry Computational Setting",
+ "text": "Let us describe the computational setting in which our new algorithms operate. Since we want the theoretical runtimes to reflect more closely the runtimes in practice, there are two important differences compared to the traditional computational group theory setting.\nThese differences are in the measure of runtime as well as in the format of the input.\nJoint Graph/Group Pairs.\nTypically, algorithms in computational group theory dealing with permutation groups assume as their input a generating set of permutations of a group .\nWhile this is certainly a natural setting when discussing algorithms for groups in general,\nin our setting this input format disregards further information that is readily available. Therefore, we require that algorithms in the SAT-symmetry interface have access to more information about the input group.\nSpecifically, we may require that the input consists of both a generating set and a graph with .\nWe call this a joint graph/group pair .\nFor our SAT context, we may moreover assume that the SAT formula with is available, whenever necessary.\nInstance-Linear Runtime.\nIn computational group theory, given a generating set for a permutation group , a runtime of is typically considered linear time [29 ###reference_29###].\nThis is however only a very crude upper bound when seen in terms of the actual encoding size of a given generating set. In particular, when generators are sparse, as is common in SAT [28 ###reference_28###, 10 ###reference_10###],\nlinear time in this sense is not necessarily linear time in the encoding size, which is what we would use in a graph-theoretic or SAT context.\nSpecifically, we are interested in measuring the runtime of algorithms relative to the encoding size of a generating set given in a sparse format.\nTherefore, we define the encoding size of a generating set as .\nIn particular, given a SAT formula , graph , and generating set , the goal is to have algorithms that (ideally) run in time linear in .\nIn order to not confuse the \u201ctypes of linear time\u201d, we refer to such algorithms as instance-linear. Analogously, an algorithm has instance-quasi-linear time if it runs in time for some constant .\nIllustrative Examples.\nThe task of computing the orbits is an excellent example demonstrating the usefulness instance-quasi-linear time. As transitive closure, we can find the orbit of an element in time [29 ###reference_29###].\nHowever, with instance-quasi-linear time in mind, we quickly arrive at an algorithm to compute the entire orbit partition in time using a union-find data structure, where is the inverse Ackermann function. The inverse Ackermann function exhibits substantially slower growth than .\nFurthermore, having access to the graph of a graph/group pair gives a significant advantage in what is algorithmically possible. A good example of the difference\nis that testing membership is much easier for the graph/group pair:\ntesting (which is true if and only if ) can be done in instance-linear time.\nHowever, testing without access to the graph is much more involved.\nThe best known method for the latter involves computing a strong generating set, corresponding base and Schreier table [29 ###reference_29###], followed by an application of the fundamental sifting algorithm [29 ###reference_29###].\nEven performing only the last step of this process (sifting) is not guaranteed to be in instance-linear time."
+ },
+ {
+ "section_id": "4",
+ "parent_section_id": null,
+ "section_name": "Favorable Group Structures in SAT",
+ "text": "We now propose problems which should be solved within the SAT-symmetry computational setting.\nWe analyze heuristics used in advanced symmetry exploitation algorithms [12 ###reference_12###, 24 ###reference_24###, 16 ###reference_16###], extracting precise formal definitions.\nDisjoint Direct Decomposition.\nFollowing [8 ###reference_8###], we say a direct product of a permutation group is a disjoint direct decomposition of , whenever all have pairwise disjoint supports. We call a factor of the disjoint direct decomposition of .\nA disjoint direct decomposition is finest, if we cannot decompose any factor further into a non-trivial disjoint direct decomposition.\nA recent algorithm solves the problem of computing the finest disjoint direct decomposition\nfor permutation groups in polynomial-time [8 ###reference_8###].\nIn our running example, the finest disjoint direct decomposition of splits the group into a subgroup permuting only the and variables, and a subgroup permuting and .\nIndeed, setting , and\n, we have .\nComputing a disjoint direct decomposition is a typical routine in symmetry exploitation tools [12 ###reference_12###, 24 ###reference_24###, 16 ###reference_16###].\nIt allows for separate treatment of each factor of the decomposition.\nThe heuristics in use today do not guarantee that the decomposition is the finest disjoint direct decomposition:\nindeed, the heuristics of the tools mentioned above assume that the given generating sets are already separable [8 ###reference_8###].\nThis means it is assumed, that every generator only operates on one factor of the disjoint direct decomposition .\nFormally, this means for each there is one for which holds, and for all it holds that .\nFor example, the generating set we gave for is separable.\nIt is not known how often generating sets given for graphs of SAT formulas are separable for a given symmetry detection tool, or in particular after reducing the domain to the literals of the SAT formula.\nIt is however obvious that for the most advanced general-purpose symmetry detection tools, Traces and dejavu, generators are not separable with very high probability due to the use of randomly selected generators [8 ###reference_8###].\nRow Interchangeability.\nWe now discuss the concept of row interchangeability [12 ###reference_12###, 13 ###reference_13###, 24 ###reference_24###].\nLet be a SAT formula.\nLet be a variable matrix .\nWe denote the entries of with where and . We define the shorthand . The set denotes all the literals involved with the matrix .\nWe say exhibits row interchangeability if there exists a matrix such that for every permutation ,\nfor the induced literal permutation given by it holds that .\nIndeed, if this is the case, we can observe that the matrix describes a subgroup of consisting of . 
We denote this group by .\nA crucial fact is that for , linear-sized complete symmetry breaking is available [12 ###reference_12###, 13 ###reference_13###].\nAs is also in part discussed in [12 ###reference_12###, 13 ###reference_13###], we observe that the complete symmetry breaking for is most effective whenever is the only action on in , or more precisely, .\nIn this case, we call an elementary row interchangeability subgroup.\nOtherwise, there are non-trivial symmetries with or is not a union of orbits.\nIndeed, in this case, the complete symmetry breaking of might make it more difficult to break such overlapping symmetries : for example, if two row interchangeability subgroups and overlap, i.e., , complete symmetry breaking can only be guaranteed for one of them using the technique of [12 ###reference_12###].\nWhenever is an elementary row interchangeability subgroup, the situation is much clearer: we can produce a linear-sized complete symmetry breaking formula and this covers at least all symmetries on the literals .\nIn this paper, we therefore focus on computing elementary row interchangeability groups.\nLet us consider again: for the matrix\n\nthere is indeed a row interchangeability subgroup.\n(Recall that the group permutes positive and negative literals of variables appearing in .) For this example, is both an elementary row interchangeability group and a factor in the finest direct disjoint decomposition of .\nRow Interchangeability and Equivalent Orbits.\nWe now describe the matrix of elementary row interchangeability groups in more group-theoretic terms.\nWe first define the notion of equivalent orbits:\nTwo orbits are equivalent, if and only if there is a bijection such that for all and , .\nWe write to indicate orbits and are equivalent.\nIt is easy to see this indeed defines an equivalence relation on the orbits [29 ###reference_29###].\nWe observe that if a row interchangeability subgroup is elementary, each row of the matrix is an orbit of .\nSince all rows are moved simultaneously in the same way, we remark that rows of are precisely equivalent orbits with a natural symmetric action:\nLet be a row interchangeability subgroup of , and let denote a row of .\n is an elementary row interchangeability subgroup if and only if all of the following hold: (1) is an orbit with a natural symmetric action in . (2) For every other row of , it holds that . (3) For , is also an orbit with .\nThere is an exact algorithm which computes equivalent orbits in essentially quadratic runtime [29 ###reference_29###]. Again, runtimes are difficult to compare due to different pre-conditions in [29 ###reference_29###].\nIn any case, the algorithm for equivalent orbits depends on computing a base and strong generating set, which is too slow from our perspective.\nWe may split detecting elementary row interchangeability groups into detecting natural symmetric action on the orbits, followed by computing equivalent orbits.\nWe now turn to solving the problems defined above in the computational setting of the SAT-symmetry interface.\nSpecifically, we propose algorithms for the finest disjoint direct decomposition (Section 5 ###reference_###), natural symmetric action (Section 6 ###reference_###), and equivalent orbits (Section 7 ###reference_###)."
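The definition of equivalent orbits can be verified directly once a candidate bijection is in hand. Below is a small sketch (all names hypothetical, generators again as sparse dicts); checking the commutation condition on a generating set suffices, since it is preserved under composition of group elements.

```python
def orbits_equivalent_under(phi, generators, orbit_a):
    """Check whether bijection phi: A -> B (a dict) witnesses orbit
    equivalence, i.e. phi(g(x)) == g(phi(x)) for all generators g and
    all x in A. If this holds for every generator, it holds for every
    group element, since the condition is preserved under composition.
    """
    return all(phi[g.get(x, x)] == g.get(phi[x], phi[x])
               for g in generators
               for x in orbit_a)
```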
+ },
+ {
+ "section_id": "5",
+ "parent_section_id": null,
+ "section_name": "Finest Disjoint Direct Decomposition",
+ "text": "Having established the problems we want to address, we now turn to presenting suitable algorithms in the SAT-symmetry computational setting. In particular, recall that we want to make use of joint graph/group pairs in order to state algorithms that run in instance-quasi-linear time. We begin by computing the finest disjoint direct decomposition.\nSpecifically, given a joint graph/group pair , our aim is to compute the finest disjoint direct decomposition of the group .\nOur proposed algorithm, given the orbits, can do so in instance-linear time.\nThe disjoint direct decomposition of a group allows us to separately treat each factor of the decomposition in symmetry exploitation or other consecutive algorithms.\nTo simplify the discussion, we assume the graph to be undirected. However, the procedure generalizes to both directed and even edge-colored graphs.\nOrbit Graph.\nWe describe the orbit graph, which can be constructed from .\nWe are particularly interested in the connected components of the orbit graph, which turn out to correspond exactly to the factors of the finest disjoint direct decomposition.\nFirst, note that the orbit partition of can be viewed as a vertex coloring of the graph , assigning to every vertex its orbit.\nWe consider the graph , i.e., colored with its orbit partition (see Figure 4 ###reference_###, left).\nWe call two distinct orbits homogeneously connected, whenever either all vertices are adjacent to all vertices of , or there is no edge with endpoints both in and .\nIndeed, we could \u201cflip edges\u201d between homogeneously connected orbits such that they all become disconnected, without changing the automorphism group (see Figure 4 ###reference_###, middle).\nWe now give the formal definition of the orbit graph.\nThe orbit graph is essentially an adapted version of the so-called flipped quotient graph (see [19 ###reference_19###] for a discussion).\nThe vertex set of the orbit graph is the set of orbits of , i.e., .\nTwo orbits are adjacent in the orbit graph if and only if the orbits are not homogeneously connected in the original graph (see Figure 4 ###reference_###, right).\nDescription of Algorithm 1 ###reference_hm1###. Algorithm 1 ###reference_hm1### describes how to compute the orbit graph from . The algorithm first initializes the graph with a vertex set that contains exactly one vertex for each orbit of .\nIt then counts for each orbit , how many neighbors a vertex has in the other orbits. Since is an orbit, this number is the same for all vertices, so it suffices to compute this for one .\nFinally, the algorithm checks to which other colors the vertex and thus the orbit is not homogeneously connected (Line 1 ###reference_hm1###).\nIf and are not homogeneously connected, the edge is added to the orbit graph.\nRemark on the runtime of Algorithm 1 ###reference_hm1###. Using appropriate data structures for graphs (adjacency lists) and colorings (see [22 ###reference_22###], which in particular includes efficient ways to compute ), the algorithm can be implemented in instance-linear time.\nOrbit Graph to Decomposition.\nIndeed, the connected components of the orbit graph represent precisely the factors of the finest disjoint direct decomposition of the automorphism group of the graph:\n{restatable}lemmadecompositionlemma Let . 
The vertices represented by a connected component of the orbit graph of are all in the same factor of the finest direct disjoint decomposition of and vice versa.\nConsider two orbits of in different factors of any direct disjoint decomposition.\nTowards a contradiction, assume are not homogeneously connected.\nNote that naturally, the orbit coloring is equitable.\nSince the orbit coloring is equitable, the connection must be regular, i.e., each vertex of has neighbors in , and every vertex of has neighbors in for some integers .\nHowever, and hold.\nLet us now fix a point , i.e., consider the point stabilizer . If two orbits are in different factors of a direct disjoint decomposition, fixing a point of must not change the group action on .\nIn particular, must be an orbit of .\nHowever, is adjacent to some vertex and non-adjacent to some vertex (see Figure 5 ###reference_### for an illustration).\nHaving fixed , we can therefore not map to . This contradicts the assumption that and are in different factors of a direct disjoint decomposition.\nHence, orbits in different factors of a direct disjoint decomposition must be homogeneously connected in , i.e., non-adjacent in the orbit graph.\nNow assume and are in the same component in the orbit graph.\nThen, there must be a path of orbits where each is not homogeneously connected.\nIn this case, we know for each that and must be in the same factor of any disjoint direct decomposition.\nTherefore, and must be in the same factor of every disjoint direct decomposition.\nOn the other hand, if and are in different components in the orbit graph, they are in different factors of the finest disjoint direct decomposition.\n\u220e\nSince connected components can be computed in linear time in the size of a graph, and the size of the orbit graph is at most linear in the size of the original graph, we can therefore compute the finest direct disjoint decomposition in instance-linear time.\nIn a consecutive step, the generators could be split according to factors, producing a separable generating set, again in instance-linear time. This is done by separating each generator into the different factors.\nFinally,\ngiven the finest direct disjoint decomposition, we can again produce a joint graph/group pair for each factor, by outputting for a factor the induced subgraph . We summarize the above in a theorem:\nGiven a joint graph/group pair and orbit partition of , there is an instance-linear algorithm which computes the following:\nThe finest disjoint direct decomposition .\nA separable generating set with .\nFor all factors a joint graph/group pair with in instance-linear time.\nWe recall that if the orbit partition of is not yet available, we can compute it in instance-quasi-linear time.\nDomain Reduction to SAT Literals.\nFor a SAT formula , we can apply the above procedure to its model graph .\nHowever, as mentioned above, in SAT we are typically only interested in symmetries for a subset of vertices, namely the vertices that represent literals.\nTherefore, we are specifically interested in the finest direct disjoint decomposition of the automorphism group reduced to literal vertices .\nThe crucial point here is that when removing orbits that represent clauses, orbits of literal vertices can become independent and the disjoint direct decomposition can therefore become finer.\nWe cannot simply apply our algorithm for the induced group since this is not a joint graph/group pair. 
Of course we could apply the algorithm from [8 ###reference_8###] that computes finest disjoint direct decomposition for permutation groups in general.\nHowever, we can detect some forms of independence by simple means using the original joint graph/group pair. Indeed, we will describe an algorithm that checks in instance-linear time whether the parts in a given partition of the literals are independent. We can thus at least check whether a given partition induces a disjoint direct decomposition:\ntheoremsatdecomposition\nLet be a CNF-Formula and be a joint graph/group pair for the model graph of . Given a partition of the literals of , the pair , and its orbits, we can check in instance-linear time whether the partition induces a disjoint direct product (that is, whether ).\nFirst, we check whether each is a union of literal orbits. Otherwise, we do not have a disjoint direct product.\nWe now argue that we can treat each clause orbit independently. Indeed, in the construction of the model graph clause vertices are never adjacent to other clause vertices. Moreover, literal vertices can only be adjacent to their negation or clauses. Thus exactly if for every clause orbit this decomposition is a disjoint direct product when we remove all clauses not in .\nFor each clause orbit we will check this in time linear in , where denotes the total number of clauses of the orbit and is the number of literals in each clause. Thus, overall we will get an instance-linear time algorithm.\nWe therefore assume from now on that is the only clause orbit. We let denote the total number of clauses of the orbit.\nWe define for each part in the given partition of literals the following:\nwhenever there is a with , we call a joint occurrence of .\nWe denote by the number of joint occurrences of .\nNote that all joint occurrences of have the same size, since the symmetry group acts transitively on them.\nAlso note that we can assume that literals appear together with their negation in the same part . Indeed, to obtain a disjoint direct product, they could only appear in different parts if they are fixed by all elements of , in which case we can put them into a new part containing only the two literals.\nWe claim now that the partition induces a disjoint direct product (when it comes to ) exactly if\nfor all combined choices of occurrences of for each there is a clause containing all simultaneously. In other words, exactly all combinations of joint occurrences are encoded in .\nNote that this can be checked in instance-linear time by simply checking .\nIt remains to argue the claim. When , every permutation in the projection extends in to and to . In fact, it is even possible to extend the maps by fixing all points of . Conversely, suppose are occurrences with .\nChoose some clause in . Let be the occurrences in , with . Since is an orbit, this means for each some permutation maps the set of literals to . If this implies that there is a permutation that simultaneously maps to for all . Thus, some clause contains all simultaneously.\n\u220e"
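A compact sketch of the idea behind Algorithm 1 together with the component step, under the theorem's preconditions (adjacency sets and the orbit partition are given). This is an illustration, not the paper's implementation.

```python
from collections import Counter, defaultdict

def finest_disjoint_decomposition(adj, orbit_of, orbits):
    """Orbit graph plus connected components.

    adj:      dict vertex -> set of neighbors
    orbit_of: dict vertex -> orbit id
    orbits:   dict orbit id -> list of member vertices
    Returns the factors of the finest disjoint direct decomposition,
    each given as a set of orbit ids.
    """
    neighbors = defaultdict(set)
    for o, members in orbits.items():
        # One representative suffices: the orbit coloring is equitable,
        # so every member has the same neighbor count per orbit.
        counts = Counter(orbit_of[w] for w in adj[members[0]])
        for o2, c in counts.items():
            # Not homogeneously connected: some, but not all, possible
            # edges between the two orbits are present.
            if o2 != o and 0 < c < len(orbits[o2]):
                neighbors[o].add(o2)
                neighbors[o2].add(o)
    # Connected components of the orbit graph are the factors.
    factors, seen = [], set()
    for o in orbits:
        if o in seen:
            continue
        comp, stack = set(), [o]
        while stack:
            cur = stack.pop()
            if cur not in comp:
                comp.add(cur)
                stack.extend(neighbors[cur] - comp)
        seen |= comp
        factors.append(comp)
    return factors
```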
+ },
+ {
+ "section_id": "6",
+ "parent_section_id": null,
+ "section_name": "Natural Symmetric Action",
+ "text": "Before we can begin our discussion of the natural symmetric action, we need to discuss generating (nearly-)uniform random elements (see [29 ###reference_29###]) of a given permutation group .\nThere is no known algorithm which produces uniform random elements of in quasi linear time, even in computational group theory terminology [29 ###reference_29###].\nHowever, there are multiple ways to produce random elements,\nmost of which are proven to work well in practice, and can be implemented fairly easily [29 ###reference_29###, 14 ###reference_14###].\nIn this paper, we attempt to only make use of random elements sparingly.\nWhenever we do, as is common in computational group theory, we do not consider the particular method used to generate them and simply denote the runtime of the generation with .\nMoreover, we discuss potential synergies in the SAT-symmetry context which might help to avoid random elements in practice, whenever applicable.\nWe now explain how to efficiently test whether a permutation group is a symmetric group in natural action.\nThen, we describe more generally how to determine simultaneously for all orbits of a permutation group whether the induced action is symmetric in natural action.\nDetecting symmetric permutation groups in their natural action is a well-researched problem in computational group theory. State-of-the-art practical implementations are available in modern computer algebra systems (such as in [23 ###reference_23###], or as described by [30 ###reference_30###]).\nTypically, a natural symmetric action is detected using a so-called probabilistic giant test, followed by a test to ensure that the group is indeed symmetric. The tests work by computing (nearly) uniform random elements of the group and inspecting them for specific properties.\nA permutation group is called a giant if it is the symmetric group or the alternating group in natural action. In many computational contexts, giants are by far the largest groups that appear, hence their name.\nBecause of this, giants form bottleneck cases for various algorithms and therefore often need to be treated separately. 
To test whether a permutation group is a symmetric group in natural action, we first test whether the group is a giant.\nWe leverage the following facts:\nIf a transitive permutation group of degree contains an element with a cycle of length for some prime with then is a giant.\nThe proportion of elements in containing a cycle of length for a prime with is asymptotically .\nCollectively, these statements show that we only need to generate a few random elements of a group and inspect their cycle lengths to detect a giant.\nTo then distinguish between the alternating group and the symmetric group, we can check whether all generators belong to the alternating group.\nThis can be achieved using basic routines, such as examining the so-called parity of a generator (see [29 ###reference_29###] for more details).\nAlgorithm 2 ###reference_hm2### generalizes the probabilistic test for a transitive group [29 ###reference_29###] to a test that simultaneously checks for a natural symmetric action on all the orbits of a group.\nDescription of Algorithm 2 ###reference_hm2###.\nOverall, the algorithm samples uniform random elements of the group and checks whether the random elements exhibit long prime cycles (see Fact 1 ###reference_1###).\nMore precisely, the algorithm first distinguishes between potential alternating and symmetric groups on each orbit.\nThen, it computes random elements.\nFor each random element and each orbit, we then apply the giant test (Fact 1 ###reference_1###) to check whether the element certifies that the orbit induces a natural symmetric action.\nRuntime of Algorithm 2 ###reference_hm2###.\nLet us assume access to random elements of the joint graph/group pair with in time .\nAssuming a random element can be produced in time , the algorithm runs in worst-case time .\nCorrectness of Algorithm 2 ###reference_hm2###. Regarding the correctness of the algorithm, the interesting aspect is to discuss the error probability. We argue that the error probability is at most if is chosen to be larger than . Practical implementations use in similar contexts [23 ###reference_23###].\nIf an orbit does not induce a symmetric action, no error can be made. If an orbit induces a symmetric action, by Fact 2 ###reference_2###, the probability that one iteration does not produce a long prime cycle for is at most . 
Thus, the probability that none of the iterations produces a long prime cycle for is bounded by since .\nSince there can be at most orbits, using the union bound, we get that the probability that the test fails for at least one of the orbits is at most .\nWhen trying to compute a natural symmetric action on a graph/group pair, the following heuristics can be implemented in instance-linear time.\nThe first and most straightforward heuristic is that most of the time, it is fairly clear that the generators describe a natural symmetric action.\nIn particular, symmetry detection based on depth-first search seem to often produce generators that are transpositions.\nFrom these symmetric actions can be detected immediately.\nThis fact is implicitly used by column interchangeability heuristics in use today.\nThere are however many more ways to detect a natural symmetric action, many of which are implemented in modern computer algebra systems such as [14 ###reference_14###, 23 ###reference_23###].\nNext, the symmetry detection preprocessor sassy [5 ###reference_5###] as well as the preprocessing used by Traces sometimes detect a natural symmetric action on an orbit, by detecting certain structures of a graph.\nIn these cases, the result should be immediately communicated to consecutive algorithms.\nWe can also use the graph structure to immediately discard orbits from the test of Algorithm 2 ###reference_hm2###.\nIn particular, all orbits where is neither the empty graph nor the complete graph cannot have a natural symmetric action.\nFurthermore, the generators produced by dejavu and Traces are fairly random (for some parts even uniformly random [3 ###reference_3###]).\nThis means they should presumably work well with the probabilistic tests above.\nLastly, internally, symmetry detection tools often produce so-called Schreier-Sims tables [29 ###reference_29###], which can be used to produce random elements effectively.\nIndeed, for our running example , the natural symmetric action can be detected quite easily: let us consider the generators reduced to the orbit of .\nWe observe that there is a generator and .\nWhile this is not a set of generators detected by current symmetry exploitation algorithms [12 ###reference_12###, 24 ###reference_24###], this is indeed also an arguably obvious encoding of a natural symmetric action:\nfor an orbit of size , an -cycle in conjunction with a transposition encodes a symmetric action."
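A sketch of the simultaneous giant test in the spirit of Algorithm 2. The prime-cycle criterion below is our reading of Fact 1 (the classical Jordan-style condition: a cycle of prime length p with n/2 < p < n - 2 in a transitive action of degree n); random_element is treated as a black box, and all names are illustrative.

```python
def is_prime(n):
    """Trial division; sufficient for the small cycle lengths tested here."""
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

def cycle_lengths_on(perm, orbit):
    """Cycle lengths of sparse permutation `perm` restricted to `orbit`
    (well-defined, since group elements map each orbit to itself)."""
    lengths, seen = [], set()
    for start in orbit:
        if start in seen:
            continue
        length, x = 0, start
        while x not in seen:
            seen.add(x)
            x = perm.get(x, x)
            length += 1
        lengths.append(length)
    return lengths

def certifies_giant(perm, orbit):
    """Jordan-style witness: a cycle of prime length p with
    n/2 < p < n - 2 inside a transitive action of degree n forces the
    induced group to be the alternating or symmetric group."""
    n = len(orbit)
    return any(is_prime(l) and n / 2 < l < n - 2
               for l in cycle_lengths_on(perm, orbit))

def probably_symmetric_orbits(orbits, random_element, rounds):
    """All-orbits-at-once giant test, in the spirit of Algorithm 2."""
    undecided = set(range(len(orbits)))
    giants = set()
    for _ in range(rounds):
        g = random_element()  # black box: (nearly) uniform random element
        for i in list(undecided):
            if certifies_giant(g, orbits[i]):
                giants.add(i)
                undecided.discard(i)
    # Distinguishing Sym from Alt on an orbit: check generator parities.
    return giants
```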
+ },
+ {
+ "section_id": "7",
+ "parent_section_id": null,
+ "section_name": "Equivalent Orbits",
+ "text": "Towards our overall goal to compute row interchangeability subgroups, we can now already determine which orbits induce a natural symmetric action.\nBy Lemma 2 ###reference_a2###, to detect elementary row interchangeability subgroups, we only miss a procedure for orbit equivalence.\nWe describe now how to compute equivalent orbits as the automorphism group of a special, purpose-built graph.\nThen, we give a faster algorithm computing equivalent orbits with natural symmetric action in instance-quasi-linear time, under mild assumptions.\nIn particular, we can find all classes of equivalent orbits described by Lemma 2 ###reference_a2###."
+ },
+ {
+ "section_id": "7.1",
+ "parent_section_id": "7",
+ "section_name": "Cycle Type Graph",
+ "text": "If two orbits are equivalent, they appear in every permutation in \u201cthe same manner\u201d: for example, if orbits and are equivalent then for every generator , the cycle types induces on are the same as the cycle types induces on .\nMore generally, equivalent orbits must be equivalent with respect to every generating set of the group. We introduce the cycle type graph whose symmetries capture orbit equivalence.\nThis means we can use a symmetry detection tool to detect equivalent orbits.\nFor a group and generating set , we define the cycle type graph as follows.\nFirstly, the vertex set of is the disjoint union . In other words, there is a vertex for each element of and there are separate elements for all the points moved by the generators. In particular if a point is moved by several generators there are several copies of the point.\nSecondly, the edges of are added as follows: each corresponding vertex for is adjacent to the corresponding vertex for element via an undirected edge. Furthermore, there are directed edges . In other words, for each generator , we add directed cycles for each cycle of the generator, as shown in Figure 5(a) ###reference_f1###.\nIn the following, we refer to directed cycles added in this manner as cycle gadgets.\nThirdly, we define a vertex coloring for . For this we enumerate the generators, i.e., . We then color the vertices in with color and an is colored with , where is the length of the cycle in containing . With this, the cycle type graph is constructed in such a way that its automorphism structure captures equivalence of orbits, as is described in more detail below.\nWe record several observations on automorphisms of the cycle type graph.\n{restatable}lemmacyclecentralizer\nIf are orbits of then there is some for which , if and only if .\nLet us first make some general remarks about elements . Due to the nature of the graph, for every generator we have . In other words conjugation with leaves the generating set invariant.\nGenerally, conjugation is a group homomorphism: for all group elements we have . Thus, by induction, since all elements can be generated from the generators, we have .\nNow assume there is some for which .\nThen, by the previous discussion, for all elements of the group we have and thus .\nHence, for any , we have .\nThus, the orbits are equivalent.\nOn the other hand, assume two orbits and are equivalent, and let denote the bijection . Define to be the permutation for which , and .\nThen is an automorphism of with .\n\u220e\nWe may formulate the observations in group theoretic terms, giving the following lemma.\nGiven a group , its centralizer in the symmetric group and the cycle type graph are a joint graph/group pair, i.e., .\nSince the centralizer of in is trivial for , we get the following corollary.\nIf with , the cycle type graph of is asymmetric.\nIt follows from the corollary that for two equivalent orbits with a natural symmetric action the bijection commuting with the generators and interchanging the orbits is in fact unique.\nWhile the cycle type graph and the centralizer in the symmetric group\n\nis a joint graph/group pair, we still have to compute the group: so far, we only have access to .\nOne option is a symmetry detection tool. 
However, this goes against our goal of not invoking symmetry detection tools more often than necessary, and against our goal of finding instance-linear algorithms.\nHence, instead of computing the entire automorphism group, our approach is to make do with less:\nin the following, we enhance the cycle type graph so that it becomes \u201ceasy\u201d for color refinement.\nColor refinement is usually applied as a heuristic approximating the orbit partition of a graph.\nHowever, on the enhanced graphs, we prove that color refinement is guaranteed to compute the orbit partition.\nThen, we show that the orbit partition suffices to determine equivalent orbits.\nOverall, these methods are only guaranteed to work for orbits with a natural symmetric action, as is the case in elementary row interchangeability groups.
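A sketch of the cycle type graph construction as we read it; in particular, coloring each copy by (generator index, cycle length) is one plausible reading of the coloring described above. Generators are again assumed sparse, and all names are illustrative.

```python
def cycle_type_graph(n, generators):
    """Sketch of the cycle type graph.

    One vertex per domain point, plus one fresh copy of every point
    moved by every generator; undirected edges tie copies to their
    points, directed edges trace each generator's cycles, and copies
    are colored by (generator index, cycle length).
    """
    undirected, directed, color = [], [], {}
    for v in range(n):
        color[("pt", v)] = "point"
    for i, perm in enumerate(generators):   # perm: sparse {point: image}
        seen = set()
        for start in perm:
            if start in seen:
                continue
            cycle, x = [], start            # collect the cycle through start
            while x not in seen:
                seen.add(x)
                cycle.append(x)
                x = perm[x]
            for j, v in enumerate(cycle):
                copy = ("gen", i, v)
                color[copy] = ("cycle", i, len(cycle))
                undirected.append((("pt", v), copy))
                directed.append((copy, ("gen", i, cycle[(j + 1) % len(cycle)])))
    return undirected, directed, color
```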
+ },
+ {
+ "section_id": "7.2",
+ "parent_section_id": "7",
+ "section_name": "Symmetries of Cycle Type Graph with Unique Cycles",
+ "text": "Our goal is now to enhance the cycle type graph such that color refinement is able to compute its orbit partition.\nThis in turn enables us to detect equivalent orbits, and in turn elementary row interchangeability groups.\nTowards this goal, we first discuss an algorithm to compute unique cycles on orbits.\nUnique cycles are a key ingredient for our enhanced cycle type graph.\nThese cycles should be invariant with respect to an ordered generating set, a concept we explain first.\nGiven an ordered generating set for a permutation group , i.e., , a\npermutation \nfixes the ordered generating set (point-wise under conjugation) if for all we have .\nA permutation is invariant with respect to the ordered generating set if all permutations that fix under conjugation also fix under conjugation111In group theoretic terms, is in .. Note that all group elements in are invariant. However, there can be further invariant permutations.\nWe say a permutation has a unique cycle if for some length , the permutation contains exactly one cycle of . We now describe a two-step process. Step one is to compute an invariant permutation with a unique cycle. Step two is to use this to compute an invariant permutation with a cyclic order.\nUnique Cycle from Generators.\nAs a first step we now need an invariant unique cycle to proceed.\nWe argue how to compute such a cycle for orbits on which our group induces a natural symmetric action.\nWe may use random elements to find a unique cycle. In fact, if we perform the giant test of Section 6 ###reference_###, we get access to a unique cycle.\nHowever, in that section we needed a prime length cycle. If we are only interested in unique cycles, not necessarily of prime length, this process terminates much more quickly:\nGolomb\u2019s constant [15 ###reference_15###] measures, as , the probability that a random element has a cycle of length greater than . The limit is greater than .\nIn practice the existence of a unique cycle is a mild assumption: on the one hand some practical heuristics only apply if specific combinations of transpositions are present in the generators [12 ###reference_12###, 24 ###reference_24###]. Each transposition is a unique cycle.\nOn the other hand, randomly distributed automorphisms, as returned by Traces and dejavu, satisfy having a unique cycle with high probability, as argued above.\nUnique Cycle To Cyclic Order.\nWe now assume we are given an invariant unique cycle .\nThe idea is now to extend using the generators to a cycle which encompasses the entire orbit.\nCrucially, the extension ensures that the result is still invariant (i.e. if we do this for all orbits simultaneously, the final permutation will be invariant).\nWe now describe the cycle overlap algorithm, which gets as input a directed cycle , as well as a collection of cycles which must be pair-wise disjoint.\nFurthermore, each must have one vertex in common with .\nThe result is a cycle that contains all vertices of all the cycles.\nA formal description can be found in Algorithm 3 ###reference_hm3###.\nDescription of Algorithm 3 ###reference_hm3###. 
The algorithm first checks which vertices of appear in , and records them into the set .\nThen, for each in the overlap of and , the algorithm walks along the respective cycle containing , and records all vertices it observes into .\nIt walks along the cycle until another is reached (it may record the entire cycle , i.e., may hold).\nFinally, is inserted as a path into .\nThe output of the process is invariant under the cyclic orders involved. This means no matter in which order the cycles from are processed, the algorithm always returns the same cyclic order.\nFigure 7 ###reference_### illustrates the algorithm.\nRuntime of Algorithm 3 ###reference_hm3###. We may use a doubly-linked list structure for directed cycles and , and an array to link vertices to their position in in time . Assuming these data structures, inserting a into can be performed in time .\nIndeed, with these data structures, we can implement the entire algorithm in time .\nWe may also update the array to include the new vertices of added from .\nTo get a unique cyclic order,\nwe repeatedly combine with cycles appearing in generators that intersect . Every cycle in a generator only has to be processed once. Eventually contains the entire orbit. With careful management of usage-lists of vertices in cycles of generators, the overall algorithm can be implemented in instance-linear time."
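A minimal sketch of the cycle-overlap step, assuming the stated preconditions (the cycles in `others` are pairwise disjoint and each shares at least one vertex with C). With the dictionaries below, the walk-and-splice runs in time linear in the total cycle length; this mirrors the idea of Algorithm 3, not its exact data structures.

```python
def overlap_cycles(C, others):
    """Splice the cycles in `others` into the cyclic order C.

    C:      list of points in cyclic order.
    others: pairwise disjoint cycles (lists), each meeting C.
    Returns one cycle covering all points; the result is independent
    of the order in which the cycles of `others` are processed.
    """
    in_C = set(C)
    succ = {}                       # successor map of all cycles in `others`
    for cyc in others:
        for i, v in enumerate(cyc):
            succ[v] = cyc[(i + 1) % len(cyc)]
    result = []
    for v in C:                     # walk C in its cyclic order
        result.append(v)
        w = succ.get(v)
        # Insert the path of the other cycle through v, stopping at the
        # next vertex that already lies on C (another overlap vertex).
        while w is not None and w not in in_C:
            result.append(w)
            w = succ.get(w)
    return result

# Example: overlap_cycles([1, 2, 3], [[2, 5, 6]]) returns [1, 2, 5, 6, 3].
```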
+ },
+ {
+ "section_id": "7.3",
+ "parent_section_id": "7",
+ "section_name": "Cyclic Order to Equivalent Orbits",
+ "text": "We finally describe how to find equivalent orbits, assuming invariant cyclic orders are given on the orbits.\nAn invariant cyclic order for the vertices of each orbit moves us one step closer to the orbits of the cycle type graph.\nThere are however still many potential bijections between orbits: indeed, we do not know how each cyclic order should be rotated.\nWe therefore describe a procedure to refine the cyclic order further.\nWe introduce the enhanced cycle type graph .\nWe are provided an invariant cyclic order for each orbit of , which we denote by .\nFirst, we add to the cycle type graph (Subsection 7.1 ###reference_###) a cycle gadget for each .\nAs before, we color the cycle gadget according to its cycle length.\nNext, we enhance all other cycle gadgets using distance information of :\nin every cycle gadget we mark each directed edge with the length of the (directed) path from to in (see Figure 5(b) ###reference_f2###).\nWe write whenever the path from to in has length .\nNote that while we use edge-labels for clarity, these can be encoded back into vertex colors (see [18 ###reference_18###, Proof of Lemma 15]).\nJust like with the cycle type graph, the automorphism group of the enhanced cycle type graph is the centralizer of and is a joint graph/group pair (see Lemma 4 ###reference_a4###).\nHowever, it is easier to compute the orbit partition of .\nIn fact, our method of obtaining the orbits of is rather straightforward: we apply the color refinement procedure to the enhanced cycle type graph .\n{restatable}lemmacyclegraphcolorref Color refinement computes the orbit partition of the enhanced cycle type graph.\nLet us make some observations on any equitable coloring of the enhanced cycle type graph.\nConsider the canonical cycle of an orbit.\nFor a directed cycle of size to be color-stable, it must be partitioned into equally sized colors of, say, size , where .\nThese must always appear in-order in the cyclic order.\nThis is also true for every cycle gadget, in every generator.\nLet us now assume we have two vertices of a single orbit with equal color.\nWe can conclude the following:\nIn every generator , either both and hold, or neither.\nIn every generator , and are in a cycle of equal size.\nIn every generator where and , . Furthermore, their distance in the canonical cycle must be equal, i.e., .\nIn every generator , if and are contained in cycles and , the vector of colors and distances starting from in , and in must be identical. 
In other words, the previous property must hold transitively.\nWe call a sequence of vertices evenly spaced in a cycle gadget , whenever for each pair with the sum of distances in when going from to , i.e., where , is the same.\nWe call the spacing.\nA sequence of vertices is evenly spaced in the canonical cycle, whenever the above holds assuming edge weights of .\nLet be the permutation which rotates any canonical cycle by to the right, such that equally colored vertices are mapped onto each other.\nIntuitively, this means the canonical cycle is rotated by the least possible amount (see Figure 8 ###reference_###).\nWe extend such that all vertices which correspond to a vertex of the canonical cycle are mapped accordingly.\nWe show that must be an automorphism of the enhanced cycle type graph.\nBy definition, the canonical cycle is mapped back to itself.\nIt remains to be shown that every generator , as well as its connections to the canonical cycle, are mapped back to themselves.\nLet be a generator.\nLet denote the cycle gadgets of .\nFirst of all, note that if a color of vertices is represented in , all of its respective vertices must be in (1).\nFurthermore, all cycles in which vertices of a color are present, must have the same size (2), and contain the same number of vertices of a color, in the same order (3) and (4).\nIndeed, vertices of a color must be evenly spaced in each cycle gadget due to (4).\nThis immediately implies that they are also evenly spaced in the canonical cycle.\nIn fact, vertices of color in other cycle gadgets must also be evenly spaced with the same spacing as the first cycle, again due to (4).\nWe now read vertices of in the cyclic order of the canonical cycle, say, .\nUsing this, we also get a cyclic order of the cycle gadgets, in which cycle gadgets may appear multiple times.\nWe denote this with .\nWe argue that if cycle gadgets do repeat, they must always repeat in the same order:\nhowever, since each cycle gadget contains the same number of vertices of , and all of them are evenly spaced in the canonical cycle, this immediately follows.\nFor example, vertices may lead to , but not .\nIndeed, such an ordering would contradict the even spacing with respect to the canonical cycle. 
In the example, in and in would not be equidistant, hence, these vertices could be distinguished.\nNaturally, the ordering must also respect the ordering of each cycle gadget individually, since in each cycle gadget vertices of a color are evenly spaced.\nTherefore, looking at color , since maps the canonical cycle \u201cone to the right\u201d, it maps the vertices of cycle to cycle .\nMoreover, as mentioned above, when looking at vertices of , we know it also respects the order of each cycle gadget.\nLet us now consider the next color according to the cycle gadgets of , i.e., for in .\nImmediately, we get that for all, .\nThis means that is also ordered correctly according to the canonical cycle.\nIndeed, if then immediately follows.\nThe automorphism therefore maps the cycle gadgets back to themselves, and therefore also the generator back to itself.\nHence, the automorphism maps all gadgets related to some orbit back to themselves.\nLet us now consider the case where vertices of two different orbits and are equally colored, i.e., .\nNote that this can only be the case when and are indeed equally sized.\nIndeed, then the arguments above hold true as well, and there must be an automorphism interchanging and :\nin particular, the automorphism maps to (and vice versa), precisely mapping the canonical cycles onto each other to , starting at and respectively.\n\u220e\nGiven our high-level procedure in Algorithm 4 ###reference_hm4###, and given that color refinement can be computed in quasi-linear time as previously discussed, this leads to the following theorem:\nGiven access to a unique cycle per orbit, there is an instance-quasi-linear algorithm which computes for a joint graph/group pair a partition of equivalent orbits.\nGiven two equivalent orbits , there is an algorithm which computes from a corresponding matching such that for all and , in time ."
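To make the refinement step above concrete, the following is a minimal Python sketch of the classic color refinement (1-dimensional Weisfeiler-Leman) procedure on a vertex-colored digraph with labeled arcs, in the spirit of its use on the enhanced cycle type graph. It is an illustration, not the paper's implementation: the graph encoding and all names are our own, for brevity it refines only over outgoing arcs (a full digraph version would also aggregate incoming arcs), and it omits the partition-refinement machinery a quasi-linear-time implementation would use.

```python
# Minimal color refinement (1-dim Weisfeiler-Leman) on a vertex-colored
# digraph with labeled arcs.  Encoding and names are our own choices.

def color_refinement(vertices, out_edges, init_color):
    """vertices: iterable of hashable vertex ids.
    out_edges: dict v -> list of (label, w) pairs, one per arc v -> w.
    init_color: dict v -> initial integer color (e.g. cycle length).
    Returns the coarsest stable (equitable) coloring as dict v -> int."""
    color = dict(init_color)
    while True:
        # New signature of v: old color plus the sorted multiset of
        # (arc label, out-neighbour color) pairs.
        sig = {v: (color[v],
                   tuple(sorted((lab, color[w])
                                for lab, w in out_edges.get(v, ()))))
               for v in vertices}
        # Canonically rename the signatures to small integers.
        palette = {s: i for i, s in enumerate(sorted(set(sig.values())))}
        new_color = {v: palette[sig[v]] for v in vertices}
        if new_color == color:   # stable: no color class was split
            return color
        color = new_color

# Toy check: a directed 4-cycle with identical arc labels stays monochromatic.
verts = [0, 1, 2, 3]
arcs = {v: [(1, (v + 1) % 4)] for v in verts}
print(color_refinement(verts, arcs, {v: 0 for v in verts}))  # all vertices color 0
```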
64
+ },
65
+ {
66
+ "section_id": "8",
67
+ "parent_section_id": null,
68
+ "section_name": "Conclusion and Future Work",
69
+ "text": "Exploiting our concept of joint graph/group pairs, we proposed new, asymptotically faster algorithms for the SAT-symmetry interface. However, most of the new concepts and approaches of this paper do not only apply to the domain of SAT, but also for example to MIP [24 ###reference_24###] and CSP [13 ###reference_13###].\nMore computational tasks should be considered in this context, the most prominent one arguably being pointwise stabilizers [29 ###reference_29###].\nOur new algorithms exploit subroutines with highly efficient implementations available, but otherwise do not use any complicated data structures. We intend to implement the algorithms and integrate them into the symmetry detection preprocessor sassy [5 ###reference_5###].\nFinally, in some classes of SAT instances,\nmore complex symmetry structures may arise. Analyzing and taking advantage of these structures is potential future work.\nFor example, in the pigeonhole principle, BreakID finds overlapping row interchangeability groups and breaks these groups partially.\nBy virtue of being overlapping, the symmetry breaking constraints produced are not guaranteed to be complete.\nAnother example for a complex symmetry structure is the wreath product of two symmetric groups, i.e., .\nThese wreath products naturally occur as the automorphism groups of tree-like structures.\nProcedures to detect and exploit such groups (e.g., by first using blocks of imprimitivity [29 ###reference_29###] followed by the algorithms of this paper) could be of practical interest."
70
+ }
71
+ ],
72
+ "appendix": [],
73
+ "tables": {},
74
+ "image_paths": {},
75
+ "validation": true,
76
+ "references": [
77
+ {
78
+ "1": {
79
+ "title": "Shatter: efficient symmetry-breaking for boolean satisfiability.",
80
+ "author": "Fadi A. Aloul, Igor L. Markov, and Karem A. Sakallah.",
81
+ "venue": "In Proceedings of the 40th Design Automation Conference, DAC,\npages 836\u2013839. ACM, 2003.",
82
+ "url": null
83
+ }
84
+ },
85
+ {
86
+ "2": {
87
+ "title": "Solving difficult SAT instances in the presence of symmetry.",
88
+ "author": "Fadi A. Aloul, Arathi Ramani, Igor L. Markov, and Karem A. Sakallah.",
89
+ "venue": "In Proceedings of the 39th Design Automation Conference, DAC,\npages 731\u2013736. ACM, 2002.",
90
+ "url": null
91
+ }
92
+ },
93
+ {
94
+ "3": {
95
+ "title": "Parallel computation of combinatorial symmetries.",
96
+ "author": "Markus Anders and Pascal Schweitzer.",
97
+ "venue": "In Petra Mutzel, Rasmus Pagh, and Grzegorz Herman, editors, 29th\nAnnual European Symposium on Algorithms, ESA, volume 204 of LIPIcs,\npages 6:1\u20136:18. Schloss Dagstuhl - Leibniz-Zentrum f\u00fcr Informatik,\n2021.",
98
+ "url": null
99
+ }
100
+ },
101
+ {
102
+ "4": {
103
+ "title": "Search problems in trees with symmetries: Near optimal traversal\nstrategies for individualization-refinement algorithms.",
104
+ "author": "Markus Anders and Pascal Schweitzer.",
105
+ "venue": "In Nikhil Bansal, Emanuela Merelli, and James Worrell, editors, 48th International Colloquium on Automata, Languages, and Programming,\nICALP, volume 198 of LIPIcs, pages 16:1\u201316:21. Schloss Dagstuhl -\nLeibniz-Zentrum f\u00fcr Informatik, 2021.",
106
+ "url": null
107
+ }
108
+ },
109
+ {
110
+ "5": {
111
+ "title": "Engineering a preprocessor for symmetry detection.",
112
+ "author": "Markus Anders, Pascal Schweitzer, and Julian Stie\u00df.",
113
+ "venue": "CoRR, abs/2302.06351, 2023.",
114
+ "url": null
115
+ }
116
+ },
117
+ {
118
+ "6": {
119
+ "title": "The efficiency of resolution and davis\u2013putnam procedures.",
120
+ "author": "Paul Beame, Richard M. Karp, Toniann Pitassi, and Michael E. Saks.",
121
+ "venue": "SIAM J. Comput., 31(4):1048\u20131075, 2002.",
122
+ "url": null
123
+ }
124
+ },
125
+ {
126
+ "7": {
127
+ "title": "Certified symmetry and dominance breaking for combinatorial\noptimisation.",
128
+ "author": "Bart Bogaerts, Stephan Gocht, Ciaran McCreesh, and Jakob Nordstr\u00f6m.",
129
+ "venue": "In Thirty-Sixth AAAI Conference on Artificial Intelligence,\nAAAI, pages 3698\u20133707. AAAI Press, 2022.",
130
+ "url": null
131
+ }
132
+ },
133
+ {
134
+ "8": {
135
+ "title": "Disjoint direct product decompositions of permutation groups.",
136
+ "author": "Mun See Chang and Christopher Jefferson.",
137
+ "venue": "J. Symb. Comput., 108:1\u201316, 2022.",
138
+ "url": null
139
+ }
140
+ },
141
+ {
142
+ "9": {
143
+ "title": "Symmetry-breaking predicates for search problems.",
144
+ "author": "James M. Crawford, Matthew L. Ginsberg, Eugene M. Luks, and Amitabha Roy.",
145
+ "venue": "In Luigia Carlucci Aiello, Jon Doyle, and Stuart C. Shapiro, editors,\nProceedings of the Fifth International Conference on Principles of\nKnowledge Representation and Reasoning (KR\u201996), pages 148\u2013159. Morgan\nKaufmann, 1996.",
146
+ "url": null
147
+ }
148
+ },
149
+ {
150
+ "10": {
151
+ "title": "Exploiting structure in symmetry detection for CNF.",
152
+ "author": "Paul T. Darga, Mark H. Liffiton, Karem A. Sakallah, and Igor L. Markov.",
153
+ "venue": "In Sharad Malik, Limor Fix, and Andrew B. Kahng, editors, Proceedings of the 41th Design Automation Conference, DAC 2004, San Diego,\nCA, USA, June 7-11, 2004, pages 530\u2013534. ACM, 2004.",
154
+ "url": null
155
+ }
156
+ },
157
+ {
158
+ "11": {
159
+ "title": "Symmetric explanation learning: Effective dynamic symmetry handling\nfor SAT.",
160
+ "author": "Jo Devriendt, Bart Bogaerts, and Maurice Bruynooghe.",
161
+ "venue": "In Serge Gaspers and Toby Walsh, editors, Theory and\nApplications of Satisfiability Testing - SAT, volume 10491 of LNCS,\npages 83\u2013100. Springer, 2017.",
162
+ "url": null
163
+ }
164
+ },
165
+ {
166
+ "12": {
167
+ "title": "Improved static symmetry breaking for SAT.",
168
+ "author": "Jo Devriendt, Bart Bogaerts, Maurice Bruynooghe, and Marc Denecker.",
169
+ "venue": "In Nadia Creignou and Daniel Le Berre, editors, Theory and\nApplications of Satisfiability Testing - SAT, volume 9710 of LNCS,\npages 104\u2013122. Springer, 2016.",
170
+ "url": null
171
+ }
172
+ },
173
+ {
174
+ "13": {
175
+ "title": "Breaking row and column symmetries in matrix models.",
176
+ "author": "Pierre Flener, Alan M. Frisch, Brahim Hnich, Zeynep Kiziltan, Ian Miguel,\nJustin Pearson, and Toby Walsh.",
177
+ "venue": "In Pascal Van Hentenryck, editor, Principles and Practice of\nConstraint Programming - CP, volume 2470 of LNCS, pages 462\u2013476.\nSpringer, 2002.",
178
+ "url": null
179
+ }
180
+ },
181
+ {
182
+ "14": {
183
+ "title": "GAP \u2013 Groups, Algorithms, and Programming, Version 4.12.2,\n2022.",
184
+ "author": "The GAP Group.",
185
+ "venue": null,
186
+ "url": null
187
+ }
188
+ },
189
+ {
190
+ "15": {
191
+ "title": "On the number of permutations on n objects with greatest cycle length\nk.",
192
+ "author": "Solomon W. Golomb and Peter Gaal.",
193
+ "venue": "Advances in Applied Mathematics, 20(1):98\u2013107, 1998.",
194
+ "url": null
195
+ }
196
+ },
197
+ {
198
+ "16": {
199
+ "title": "Minimal ordering constraints for some families of variable\nsymmetries.",
200
+ "author": "Andrew Grayland, Christopher Jefferson, Ian Miguel, and Colva M. Roney-Dougal.",
201
+ "venue": "Annals of Mathematics and Artificial Intelligence, 57:75\u2013102,\n2009.",
202
+ "url": null
203
+ }
204
+ },
205
+ {
206
+ "17": {
207
+ "title": "Conflict propagation and component recursion for canonical labeling.",
208
+ "author": "Tommi A. Junttila and Petteri Kaski.",
209
+ "venue": "In Alberto Marchetti-Spaccamela and Michael Segal, editors, Theory and Practice of Algorithms in (Computer) Systems - First International\nICST Conference, TAPAS, volume 6595 of LNCS, pages 151\u2013162.\nSpringer, 2011.",
210
+ "url": null
211
+ }
212
+ },
213
+ {
214
+ "18": {
215
+ "title": "The weisfeiler-leman dimension of planar graphs is at most 3.",
216
+ "author": "Sandra Kiefer, Ilia Ponomarenko, and Pascal Schweitzer.",
217
+ "venue": "J. ACM, 66(6):44:1\u201344:31, 2019.",
218
+ "url": null
219
+ }
220
+ },
221
+ {
222
+ "19": {
223
+ "title": "Graphs identified by logics with counting.",
224
+ "author": "Sandra Kiefer, Pascal Schweitzer, and Erkal Selman.",
225
+ "venue": "In Giuseppe F. Italiano, Giovanni Pighizzini, and Donald Sannella,\neditors, Mathematical Foundations of Computer Science 2015 - 40th\nInternational Symposium, MFCS, Part I, volume 9234 of LNCS, pages\n319\u2013330. Springer, 2015.",
226
+ "url": null
227
+ }
228
+ },
229
+ {
230
+ "20": {
231
+ "title": "SAT modulo symmetries for graph generation.",
232
+ "author": "Markus Kirchweger and Stefan Szeider.",
233
+ "venue": "In Laurent D. Michel, editor, 27th International Conference on\nPrinciples and Practice of Constraint Programming, CP, volume 210 of LIPIcs, pages 34:1\u201334:16. Schloss Dagstuhl - Leibniz-Zentrum f\u00fcr\nInformatik, 2021.",
234
+ "url": null
235
+ }
236
+ },
237
+ {
238
+ "21": {
239
+ "title": "Practical graph isomorphism.",
240
+ "author": "Brendan D. McKay.",
241
+ "venue": "In 10th. Manitoba Conference on Numerical Mathematics and\nComputing (Winnipeg, 1980), pages 45\u201387, 1981.",
242
+ "url": null
243
+ }
244
+ },
245
+ {
246
+ "22": {
247
+ "title": "Practical graph isomorphism, II.",
248
+ "author": "Brendan D. McKay and Adolfo Piperno.",
249
+ "venue": "J. Symb. Comput., 60:94\u2013112, 2014.",
250
+ "url": null
251
+ }
252
+ },
253
+ {
254
+ "23": {
255
+ "title": "recog, a package for constructive recognition of permutation and\nmatrix groups, Version 1.4.2.",
256
+ "author": "M. Neunh\u00f6ffer, \\a\u2019A. Seress, and M. Horn.",
257
+ "venue": "https://gap-packages.github.io/recog, Sep 2022.",
258
+ "url": null
259
+ }
260
+ },
261
+ {
262
+ "24": {
263
+ "title": "A computational comparison of symmetry handling methods for mixed\ninteger programs.",
264
+ "author": "Marc E. Pfetsch and Thomas Rehn.",
265
+ "venue": "Math. Program. Comput., 11(1):37\u201393, 2019.",
266
+ "url": null
267
+ }
268
+ },
269
+ {
270
+ "25": {
271
+ "title": "Isomorphism test for digraphs with weighted edges.",
272
+ "author": "Adolfo Piperno.",
273
+ "venue": "In Gianlorenzo D\u2019Angelo, editor, 17th International Symposium on\nExperimental Algorithms, SEA, volume 103 of LIPIcs, pages\n30:1\u201330:13. Schloss Dagstuhl - Leibniz-Zentrum f\u00fcr Informatik, 2018.",
274
+ "url": null
275
+ }
276
+ },
277
+ {
278
+ "26": {
279
+ "title": "Symmetry breaking using stabilizers.",
280
+ "author": "Jean-Francois Puget.",
281
+ "venue": "In Francesca Rossi, editor, Principles and Practice of\nConstraint Programming - CP, volume 2833 of LNCS, pages 585\u2013599.\nSpringer, 2003.",
282
+ "url": null
283
+ }
284
+ },
285
+ {
286
+ "27": {
287
+ "title": "Symchaff: exploiting symmetry in a structure-aware satisfiability\nsolver.",
288
+ "author": "Ashish Sabharwal.",
289
+ "venue": "Constraints An Int. J., 14(4):478\u2013505, 2009.",
290
+ "url": null
291
+ }
292
+ },
293
+ {
294
+ "28": {
295
+ "title": "Symmetry and satisfiability.",
296
+ "author": "Karem A. Sakallah.",
297
+ "venue": "In Armin Biere, Marijn Heule, Hans van Maaren, and Toby Walsh,\neditors, Handbook of Satisfiability - Second Edition, volume 336 of\nFrontiers in Artificial Intelligence and Applications, pages 509\u2013570.\nIOS Press, 2021.",
298
+ "url": null
299
+ }
300
+ },
301
+ {
302
+ "29": {
303
+ "title": "Permutation Group Algorithms.",
304
+ "author": "\u00c1kos Seress.",
305
+ "venue": "Cambridge Tracts in Mathematics. Cambridge University Press, 2003.",
306
+ "url": null
307
+ }
308
+ },
309
+ {
310
+ "30": {
311
+ "title": "Fast detection of giant permutation groups.",
312
+ "author": "William R. Unger.",
313
+ "venue": "CoRR, abs/1905.09431, 2019.",
314
+ "url": null
315
+ }
316
+ }
317
+ ],
318
+ "url": "http://arxiv.org/html/2306.00613v2"
319
+ }
20240101/2306.11250v2.json ADDED
The diff for this file is too large to render. See raw diff
 
20240101/2306.13746v2.json ADDED
@@ -0,0 +1,169 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ {
2
+ "title": "Revisiting inference after prediction",
3
+ "abstract": "Recent work has focused on the very common practice of prediction-based inference: that is, (i) using a pre-trained machine learning model to predict an unobserved\nresponse variable, and then (ii) conducting inference on the association between that predicted response and some covariates.\nAs pointed out by Wang et al. [2020], applying a standard inferential approach in (ii) does not accurately quantify the association between the unobserved (as opposed to the predicted) response and the covariates. In recent work, Wang et al. [2020] and Angelopoulos et al. [2023] propose corrections to step (ii) in order to enable valid inference on the association between the unobserved response and the covariates. Here, we show that the method proposed by Angelopoulos et al. [2023] successfully controls the type 1 error rate and provides confidence intervals with correct nominal coverage, regardless of the quality of the pre-trained machine learning model used to predict the unobserved response. However, the method proposed by Wang et al. [2020] provides valid inference only under very strong conditions that rarely hold in practice: for instance,\nif the machine learning model perfectly estimates the true regression function in the study population of interest.",
4
+ "sections": [
5
+ {
6
+ "section_id": "1",
7
+ "parent_section_id": null,
8
+ "section_name": "Introduction",
9
+ "text": "Rapid recent progress in the field of machine learning has enabled the development of complex and high-quality machine learning models to predict a response variable of interest.\nThis is particularly attractive in settings where future measurement of this response variable is prohibitively expensive or impossible. For example, instead of performing expensive experiments to determine a protein\u2019s structure, it is possible to obtain high-quality structural predictions using AlphaFold [Jumper et al., 2021 ###reference_6###]. Similarly, in developing countries, determining the true cause of death may be impossible; instead, one might predict the cause of death on the basis of a \u201cverbal autopsy\u201d [Clark et al., 2015 ###reference_2###, Khoury et al., 1999 ###reference_7###]. In the context of gene expression data, it is infeasible to conduct experiments in every possible tissue type, and so instead a machine learning model can be applied to predict gene expression in a tissue type of interest [Ellis et al., 2018 ###reference_3###, Gamazon et al., 2015 ###reference_4###, Gusev et al., 2019 ###reference_5###].\nIn this paper, we will use the notation to denote a pre-trained machine learning model that maps from , the space of the predictors , to , the space of the response variable . We assume that the operating characteristics of in the study population of interest are unknown to the end-user, and the data used to fit are unavailable. Thus, in what follows, we will treat as a \u201cblack box\u201d function.\nIn important recent papers, Wang et al. [2020 ###reference_8###] and Angelopoulos et al. [2023 ###reference_1###]\nconsider the practice of prediction-based inference111Wang et al. [2020 ###reference_8###] and Angelopoulos et al. [2023 ###reference_1###] refer to this practice as\npost-prediction inference and\nprediction-powered inference, respectively; to unify terminology, here we refer to it as prediction-based inference.: that is, of quantifying the association between the response and some covariates using realizations not of , but rather, of . Such an approach is attractive in cases where the association between and is of interest, and both and a large sample of realizations of predictors and covariates are available, but realizations of are expensive or otherwise unavailable. Throughout, we will use to denote the predictors in the machine learning model , and to denote the covariates whose association with is of interest. In many settings, the covariate variables may be a subset of the predictor variables , or may be identical to , but this is not necessarily the case.\nWang et al. [2020 ###reference_8###] point out that\na naive approach to prediction-based inference that simplistically interprets the association between as the association between is problematic from a statistical perspective. For instance, regressing onto using least squares does not lead to valid inference on the association between and :\ne.g. it leads to hypothesis tests that fail to control type 1 error, and confidence intervals that do not achieve the nominal coverage. See Box 1.\n[!h] \nBox 1: Naive approach to prediction-based inference.\nWe are given a (pre-trained) prediction function , and an unlabeled dataset representing realizations from .\nAs pointed out by Wang et al. 
[2020 ###reference_8###], the naive approach displayed here does not allow for valid inference on the association between and .\n\n\nStep 1:\n\nCompute .\n\nStep 2:\n\nConduct inference on the association between and , using and as data, without accounting for the fact that is not a sample from the distribution of .\nWang et al. [2020 ###reference_8###] and Angelopoulos et al. [2023 ###reference_1###] propose creative solutions to overcome this issue.\nThey assume that in addition to a large unlabeled dataset , they also have access to a (relatively small) labeled dataset\n. We focus on the case where the labeled and unlabeled data are independent and identically distributed samples from the same study population of interest, though Angelopoulos et al. [2023 ###reference_1###] also extend beyond this setting. See Box 2. We emphasize that the goal is to quantify association between and .\nBox 2: Setting of prediction-based inference.\n\n\n\u2022\n\nGiven: A pre-trained machine learning model .\n\n\u2022\n\nGoal: To quantify the association between a response and covariates .\n\n\n\u2022\n\nData: A (relatively small) labeled dataset consisting of i.i.d. realizations of , and a (large) unlabeled dataset consisting of i.i.d. realizations of . Both are drawn from the same study population.\nOne simple (and valid) option is to quantify association between and using only the labeled data\n. However, this approach entirely discards the vast amount of unlabeled data . Intuitively, if the prediction function is nearly perfect on our population of interest, then using\n in addition to will aid our efforts to quantify the association between and . By contrast, if is a very poor prediction of on our population of interest, then using in addition to may hinder our efforts.\nOur goal is valid quantification of the association between and , regardless of the quality of on the population of interest.\nTo achieve this goal, Wang et al. [2020 ###reference_8###] propose to (Step 1\u2019) model the association between and using the labeled dataset , and then (Step 2\u2019) incorporate the model in Step 1\u2019 to conduct inference between and using the unlabeled data . See Box 3.\nBox 3: Wang et al. [2020 ###reference_8###]\u2019s proposal to correct prediction-based inference.\n\n\nStep 1\u2019:\n\nModel the association between and using the labeled data .\n\nStep 2\u2019:\n\nIncorporate the model from Step 1\u2019 to conduct inference on the association between and using the unlabeled data . To do this, bootstrap and analytical approaches are proposed.\nBy contrast, Angelopoulos et al. [2023 ###reference_1###] propose to de-bias the estimates obtained using the unlabeled data using information from the labeled data. 
In the case of estimands that are linear in , they (Step 1\u201d) compute the difference between the estimate of the parameter of interest obtained using\n and the estimate obtained using .\nThey then (Step 2\u201d) correct the estimate obtained using the unlabeled dataset by this amount. See Box 4. Angelopoulos et al. [2023 ###reference_1###] also propose a more general framework for estimands that minimize the expectation of a general loss function, though we focus on the linear case in this paper for simplicity.\nBox 4: Angelopoulos et al. [2023 ###reference_1###]\u2019s proposal to correct prediction-based inference (in the special case of estimands that are linear in ).\n\n\nStep 1\u201d:\n\nCompute the difference between the estimate of the parameter of interest obtained using\n and the estimate obtained using .\n\nStep 2\u201d:\n\nCorrect the parameter estimate obtained using the unlabeled dataset by\nthe difference computed in Step 1\u201d.\nIn this paper, we investigate these two proposals. In Section 2, we ask a fundamental question: what parameter is each proposal targeting? We see that the proposal of Angelopoulos et al. [2023 ###reference_1###] targets the parameter of interest, whereas that of Wang et al. [2020 ###reference_8###] does not. In Sections 3 and 4, we investigate the empirical consequences of our findings from Section 2. These empirical investigations paint a clear picture: namely, that failure to target the correct parameter has substantial statistical consequences for the proposal of Wang et al. [2020 ###reference_8###], in the form of hypothesis tests that fail to control the Type 1 error, and confidence intervals that fail to attain the nominal coverage. The proposal of Angelopoulos et al. [2023 ###reference_1###] does not suffer these consequences, as it targets the correct parameter. We close with a discussion in Section 5.\nIn this paper, we use capitals to represent a random variable and lower case to represent its realization. Vectors of length equal to the number of observations, or matrices whose rows correspond to the observations, are in bold."
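To make Box 4 concrete, here is a minimal numpy sketch of the de-biasing step for an estimand that is linear in the response (here, an OLS coefficient vector). This is a hedged illustration under our own conventions — variable names and the plain least-squares fit are ours, and the confidence-interval construction of Angelopoulos et al. [2023] is omitted; it is not their released prediction-powered inference code.

```python
# Minimal numpy sketch of the de-biasing in Box 4 for an OLS coefficient
# (an estimand linear in the response).  Illustrative only.
import numpy as np

def ols_coefs(X, y):
    """Least-squares coefficients of y regressed on [1, X]."""
    Z = np.column_stack([np.ones(len(X)), X])
    return np.linalg.lstsq(Z, y, rcond=None)[0]

def debiased_coefs(X_lab, y_lab, f_lab, X_unl, f_unl):
    """f_lab / f_unl hold the predictions f(X) on the labeled / unlabeled samples."""
    rectifier = ols_coefs(X_lab, y_lab) - ols_coefs(X_lab, f_lab)  # Step 1''
    return ols_coefs(X_unl, f_unl) + rectifier                     # Step 2''
```

The labeled-data rectifier term cancels, in the limit, the bias incurred by regressing predictions rather than responses, which is what makes the large unlabeled sample usable for inference on the original parameter.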
10
+ },
11
+ {
12
+ "section_id": "2",
13
+ "parent_section_id": null,
14
+ "section_name": "What parameter is each method targeting?",
15
+ "text": "For concreteness, suppose that we would have fit a linear regression model on realizations of using least squares, had a large number of realizations of been available. Therefore, our goal is to conduct inference on the population parameter\nThe naive method (Box 1) and the proposals of Wang et al. [2020 ###reference_8###] and Angelopoulos et al. [2023 ###reference_1###] use a test statistic of the form\nfor testing or constructing confidence intervals for . They rely on it having a known distribution, or converging in distribution to a known distribution with increasing sample size.\nHowever, if and goes to as and increase, then does not converge in distribution (see Appendix A for a formal statement of this). Therefore, consistency of for is a necessary condition for valid inference using this approach. We now investigate whether this is the case."
16
+ },
17
+ {
18
+ "section_id": "2.1",
19
+ "parent_section_id": "2",
20
+ "section_name": "The general case for an arbitrary prediction model",
21
+ "text": "We first consider the naive approach, as defined in Box 1. Fitting a linear model with least squares would result in . We can see that as increases, which is not equal to in general. In fact, viewing as a black-box function with unknown operating characteristics in the study population, we see that the quantity does not even involve the response, ; therefore, the parameter targeted by the naive method is not of any scientific interest.\nX 1. Use to fit the \u201crelationship model\u201d , yielding . \nX 2. For : \nXX 2.1. Sample unlabeled observations with replacement to obtain and . \nXX 2.2. Sample outcomes from the relationship model . \nXX 2.3. Use to fit a \u201cregression model\u201d for the relationship between and , and record the coefficient estimate and model-based standard error .\nX 3. Compute the point estimate . \nX 4. Compute the \u201cnonparametric\u201d standard error . \nX 5. Compute the \u201cparametric\u201d standard error .\nNow, we consider the bootstrap variant of the proposal of Wang et al. [2020 ###reference_8###], which is introduced in Box 3. A detailed description of these proposals are presented in Algorithm 1. We take Step 2.3 to involve a least squares regression. Note that Step 2.2 of Algorithm 1 ###reference_### involves sampling observations from for use in fitting a regression model in Step 2.3, giving us an estimate . Thus as increases, where we slightly abuse notation by letting denote a random variable with distribution . In general, is not equal to the parameter of interest . Note that both the \u201cparametric\u201d and \u201cnon-parametric\u201d bootstrap corrections suffer from this issue.\nThe analytic variant of the proposal of Wang et al. [2020 ###reference_8###] adjusts the naive estimate by the coefficient of in a regression of onto , with an intercept, using the labeled data222The expression for given in (2 ###reference_###) is implemented in Wang et al. [2020 ###reference_8###]\u2019s code. Their publication involves a slightly different expression for , which also does not converge to the parameter of interest for a similar reason \u2013 we show this in Appendix B.; that is, the estimate takes the form\nThus as and increase,\nwhich is not again not equal to in general.\nThis highlights a cause for concern about the proposals of Wang et al. [2020 ###reference_8###]: namely, that the wrong parameter is being targeted. This calls into question whether the inference that they propose will achieve the desired statistical guarantees; we investigate this issue further in the next two sections.\nFinally, we turn to the proposal of Angelopoulos et al. [2023 ###reference_1###], which is introduced in Box 4. In the case of linear regression, their estimate takes the form\nAs and increase, we see that\nso the proposal of Angelopoulos et al. [2023 ###reference_1###] correctly targets the desired quantity."
22
+ },
23
+ {
24
+ "section_id": "2.2",
25
+ "parent_section_id": "2",
26
+ "section_name": "An extreme setting where all methods target the correct quantity",
27
+ "text": "We now consider an extreme setting where the prediction model exactly equals the true regression function: that is, . We also assume that is contained within : that is, for some . This is a reasonable assumption in practice, since the covariate of interest is likely also a predictor in the machine learning model. In this extreme setting, the naive method targets\nIn other words, it targets the correct parameter of interest.\nWe will now show that the bootstrap methods of Wang et al. [2020 ###reference_8###] can similarly target the correct parameter in this extreme setting, for example, if is defined by fitting a generalized additive model (GAM)\nto and adding mean-zero noise, as in Wang et al. [2020 ###reference_8###]. That is,\nwhere is mean-zero noise and is the fitted GAM.\nThe fitted GAM takes the form\nThe approximation in (4 ###reference_###) holds if the labeled sample size is sufficiently large. Equation 5 ###reference_### is a consequence of the extreme assumption that . Equation 6 ###reference_### follows from iterated expectations.\nIt is not hard to see that (6 ###reference_###) is minimized when , i.e., is approximately the identity function.\nCombining this with the extreme assumption that ,\n(3 ###reference_###) leads to\nNow, recall from Section 2.1 ###reference_### that the parameter targeted by Wang et al. [2020 ###reference_8###] is\n.\nWe observe that\nHere, (8 ###reference_###) and (10 ###reference_###) follow from iterated expectations since , and (9 ###reference_###) follows from (7 ###reference_###).\nWith a large enough labeled dataset, this approximation will hold almost exactly.\nThe analytical method of Wang et al. also targets the correct parameter in this setting, since the naive method targets the correct parameter and\nTherefore, we have seen that under a very extreme assumption that , the methods of Wang et al. [2020 ###reference_8###] will target (nearly) the correct parameter.\nHowever, in general, this assumption is not reasonable. And in fact, our goal is valid inference on the parameter regardless of (the quality of) ."
28
+ },
29
+ {
30
+ "section_id": "3",
31
+ "parent_section_id": null,
32
+ "section_name": "An empirical investigation of the distribution of the test statistic",
33
+ "text": "We consider a simple simulation setting, inspired by the \u201cSimulated Data: Continuous case\u201d section of Wang et al. [2020 ###reference_8###]. They generate three datasets: a training dataset consisting of realizations of used to train a machine learning model , a labeled dataset consisting of realizations of , and an unlabeled dataset consisting only of realizations of ; both the labeled and unlabeled datasets are used for inference333Wang et al. [2020 ###reference_8###] refer to the labeled data set as \u201ctest\u201d data, and to the unlabeled data as \u201cvalidation\u201d data.. They consider predictors and response , and define the covariate . In Wang et al. [2020 ###reference_8###]\u2019s paper, the\ntraining, labeled, and unlabeled datasets each consist of observations. Throughout this section, we keep the training sample size fixed at observations, but vary the size of the labeled and unlabeled datasets.\nAs in Wang et al. [2020 ###reference_8###], we generate the training, labeled, and unlabeled datasets from the same partially linear additive model . Their goal is to conduct inference on the marginal association between and in a linear model. That is, their parameter of interest is\nBecause the features are independent, we have that\nThus, is the marginal regression coefficient of onto , as well as the coefficient associated with in the partially linear additive model used to generate the data. We consider two settings: one under the null () and one under the alternative ().\nWe generate 3 training sets and fit a GAM to each training set, to obtain three fitted models . In each replicate of the simulation study, we generate a new labeled and unlabeled dataset as described above. Note that this differs from the simulation in Wang et al. [2020 ###reference_8###], which generates a new training set (and thus a different ) in each replicate of the simulation study. We do this to focus on the properties of estimation and inference under a fixed , e.g. how AlphaFold would be used in practice. We perform a total of 1,000 simulation replicates.\nTo conduct prediction-based inference on , both Wang et al. [2020 ###reference_8###] and Angelopoulos et al. [2023 ###reference_1###] rely on the claim that . We consider the following versions of Wang et al. [2020 ###reference_8###]:\nProposal of Wang et al. [2020 ###reference_8###], with an analytical correction. Apply Box 3 with the \u201canalytical correction\u201d (2 ###reference_###) using a linear model for the regression model and a linear model for the relationship model.\nProposal of Wang et al. [2020 ###reference_8###], with a \u201cparametric bootstrap\u201d correction. Apply Box 3 with the \u201cparametric bootstrap\u201d correction presented in Algorithm 1 ###reference_### using a linear model for the regression model and a GAM for the relationship model.\nProposal of Wang et al. [2020 ###reference_8###], with a \u201cnon-parametric bootstrap\u201d correction. Apply Box 3 with the \u201cnon-parametric bootstrap\u201d correction presented in Algorithm 1 ###reference_### using a linear model for the regression model and a GAM for the relationship model.\nWe additionally consider the proposal of Angelopoulos et al. [2023 ###reference_1###]:\nProposal of Angelopoulos et al. [2023 ###reference_1###]. Apply Box 4 using a linear model.\nFinally, we consider the following two approaches.\nClassical approach using only the labeled data. Fit a linear model to .\nNaive approach. 
Apply Box 1 using a linear model.\n###figure_1### ###figure_2### In Figure 1 ###reference_### we show the empirical distribution of under , for increasing sample sizes. In the first three panels, we can see that the asymptotic distribution of this test statistic for an arbitrary does not follow a for the naive and Wang et al. [2020 ###reference_8###] methods. This is in line with our findings in Section 2. This is also true under the alternative , as shown in Figure 2 ###reference_###. On the other hand, for the method of Angelopoulos et al. [2023 ###reference_1###], this statistic converges in distribution to a regardless of the choice of .\nIn the last panel of Figures 1 ###reference_### and 2 ###reference_###, we consider the extreme setting considered in Section 2.2, in which . Here, the empirical distribution of the test statistic is approximately for all methods.\nWe next performed testing and constructed confidence intervals under the assumption made by Wang et al. [2020 ###reference_8###] and Angelopoulos et al. [2023 ###reference_1###] that the test statistic asymptotically follows . We examined how the violation of this distributional assumption impacts type 1 error control and coverage in Figures 5 ###reference_### and 6 ###reference_### in the Appendix. We see that the naive and Wang et al. [2020 ###reference_8###] methods do not control the type 1 error rate or have nominal coverage for an arbitrary , and they become increasingly anti-conservative as the sample sizes increase. This can be explained by the increasing discrepancy between the assumed and true distributions of as the sample sizes increase, as previously seen in Figures 1 ###reference_### and 2 ###reference_###. In fact, we can directly read off the type 1 error rate at level 0.05 as the proportion of points falling outside the dashed lines in Figure 1. Similarly, coverage can be read as the proportion of points inside the dashed lines in Figure 2.\nBecause the true distribution of the test statistic matches the assumed one for the method of Angelopoulos et al. [2023 ###reference_1###] for any , this method gives correct type 1 error control and coverage in general. For the same reason, under the extreme setting, all methods have correct type 1 error control and coverage."
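For readers who want to reproduce the qualitative phenomenon without the full pipeline, the following deliberately simplified numpy Monte Carlo is one way to do it. The data-generating process here is our own toy version (a single predictor that is also the covariate, and a fixed, biased predictor in place of a trained GAM), not the exact setup above:

```python
# Tiny Monte Carlo in the spirit of this section (numpy only).
# f_hat is a deliberately imperfect, fixed "pre-trained" model.
import numpy as np

rng = np.random.default_rng(1)
beta_star, n_lab, n_unl, reps = 1.0, 300, 3000, 500
f_hat = lambda z: 0.7 * z          # fixed, biased predictor

def slope(x, y):
    """OLS slope of y on x (with intercept)."""
    xc, yc = x - x.mean(), y - y.mean()
    return (xc * yc).sum() / (xc * xc).sum()

naive, debiased = [], []
for _ in range(reps):
    z_l = rng.normal(size=n_lab)
    y_l = beta_star * z_l + rng.normal(size=n_lab)
    z_u = rng.normal(size=n_unl)
    b_naive = slope(z_u, f_hat(z_u))                            # Box 1
    b_pp = b_naive + slope(z_l, y_l) - slope(z_l, f_hat(z_l))   # Box 4
    naive.append(b_naive)
    debiased.append(b_pp)

print(np.mean(naive), np.mean(debiased))   # ~0.7 (wrong target) vs ~1.0
```

In this toy setting the naive estimator concentrates around the slope of the predictions (0.7), while the Box 4 correction re-centers the estimate at the true coefficient, mirroring the calibration results reported in the figures.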
34
+ },
35
+ {
36
+ "section_id": "4",
37
+ "parent_section_id": null,
38
+ "section_name": "A direct replication of the simulation study of Wang et al. [2020]",
39
+ "text": "In the previous section, we considered a simulation setting that was very similar to that in Wang et al. [2020 ###reference_8###], but differed in that we considered the same prediction models across all simulation replicates. In this section, we instead directly replicate their simulation setting (\u201cSimulated data; continuous case\u201d), by generating a new training dataset in each simulation replicate (resulting in a different in each replicate). They considered a sample size of 300 for the training, labeled, and unlabeled datasets. In addition to replicating these results, we explore increasing the labeled and unlabeled dataset sizes.\n###figure_3### We first examine the type 1 error rate under the null hypothesis using each of the approaches described in Section 3 ###reference_###. Quantile-quantile plots of the resulting p-values are shown in Figure 3 ###reference_###. We see that in agreement with the findings in Wang et al. [2020 ###reference_8###], the naive approach does not control the type 1 error rate regardless of sample size. While it appears that when (first panel of Figure 3 ###reference_###) the methods of Wang et al. [2020 ###reference_8###] have uniform p-values under the null (as also reported in Wang et al. [2020 ###reference_8###]), with other sample sizes this no longer holds and the methods of Wang et al. [2020 ###reference_8###] fail to control the type 1 error rate. As expected based on the previous section, Angelopoulos et al. [2023 ###reference_1###] controls type 1 error. This finding is in agreement with the theoretical results in Angelopoulos et al. [2023 ###reference_1###], which hold for an arbitrary prediction function . Also as expected, the classical method controls type 1 error.\n###figure_4### Next we examine coverage of 95% confidence intervals under . We see from Figure 4 ###reference_### that the naive approach has coverage well below the nominal level. Again, the \u201ccorrected\u201d proposals of Wang et al. [2020 ###reference_8###] also fail to achieve the nominal coverage. The problem becomes increasingly pronounced as the sample sizes of the data used for inference increase. By contrast, the classical method and the proposal of Angelopoulos et al. [2023 ###reference_1###] do achieve the nominal coverage, supporting the theory of Angelopoulos et al. [2023 ###reference_1###].\nAs explained in Section 2, the lack of inferential guarantees for the naive approach and the approach of Wang et al. [2020 ###reference_8###] is a direct consequence of the fact that ."
40
+ },
41
+ {
42
+ "section_id": "5",
43
+ "parent_section_id": null,
44
+ "section_name": "Discussion",
45
+ "text": "In this paper, we found that the method of Angelopoulos et al. [2023 ###reference_1###] provides valid type 1 error control and coverage.\nBy contrast, the methods proposed by Wang et al. [2020 ###reference_8###] do not provide appropriate inferential guarantees in the absence of very strong assumptions: for instance, under the extreme (and unrealistic) scenario where the prediction model is the true regression function. Under this additional assumption, the naive approach also provides valid inference. Furthermore, we see in our simulation study that simply assuming that the prediction model was trained on data from the same population as the labeled and unlabeled data \u2014 an assumption made by Wang et al. [2020 ###reference_8###] \u2014 is not sufficient to achieve valid inference using their proposed methods.\nThroughout, for simplicity we have assumed that we are conducting inference (i.e. Step 2 of Box 1) using a linear model.\nHowever, our conclusions \u2014 that the naive method and methods of Wang et al. [2020 ###reference_8###] result in invalid inference, because they target the incorrect parameter \u2014 apply much more generally. The theory in Angelopoulos et al. [2023 ###reference_1###] shows that their approach applies to a wide variety of settings beyond linear regression, and has valid inferential properties."
46
+ }
47
+ ],
48
+ "appendix": [
49
+ {
50
+ "section_id": "Appendix 1",
51
+ "parent_section_id": null,
52
+ "section_name": "Appendix A Necessity of consistency for",
53
+ "text": "We formally state the necessity of targeting the correct parameter in order to use the test statistic for inference.\nSuppose and . Then does not converge in distribution.\nSuppose, for the sake of contradiction, that converges in distribution. Then . Thus\nso , which is a contradiction.\n\u220e"
54
+ },
55
+ {
56
+ "section_id": "Appendix 2",
57
+ "parent_section_id": null,
58
+ "section_name": "Appendix B Lack of consistency of analytical method of Wang et\u00a0al. [2020]",
59
+ "text": "In Sections 2.1 and 2.2, we analyzed the analytical correction as implemented in the code by Wang et al. [2020 ###reference_8###]. We now analyze the analytical correction as described in the publication, which is also not consistent for the parameter of interest in general. In the publication, the estimate obtained from the analytical correction is defined as\nwhere\nand\nThus as increases,\nand\nThus as and increase,\nwhich is not equal to in general.\nIn the extreme setting (described in Section 2.2), in which , we have that\nand\nThus using the same argument as for the naive estimator, this analytical correction is consistent for in this extreme setting."
60
+ },
61
+ {
62
+ "section_id": "Appendix 3",
63
+ "parent_section_id": null,
64
+ "section_name": "Appendix C Inferential consequences of wrong distribution",
65
+ "text": "In this section, we examine how violation of the distributional assumption impacts type 1 error control and coverage in the simulations in Section 3. In Figure 5 ###reference_###, we see that the methods proposed by Wang et al. [2020 ###reference_8###] do not control the type 1 error rate for arbitrary ; the situation gets worse as the sample size increases. However, the method proposed by Angelopoulos et al. [2023 ###reference_1###] does control the type 1 error rate. Wang et al. [2020 ###reference_8###] controls the type 1 error rate if the machine learning model is the true regression function ; of course, such a perfect machine learning model is unattainable in practice.\n###figure_5### ###figure_6### In Figure 6 ###reference_###, we see that the methods proposed by Wang et al. [2020 ###reference_8###] do not attain the nominal coverage for an arbitrary , whereas the proposal of Angelopoulos et al. [2023 ###reference_1###] does attain the nominal coverage. The naive method and the proposals of Wang et al. [2020 ###reference_8###] have appropriate coverage when ; again, this is unrealistic in practice."
66
+ }
67
+ ],
68
+ "tables": {},
69
+ "image_paths": {
70
+ "1": {
71
+ "figure_path": "2306.13746v2_figure_1.png",
72
+ "caption": "Figure 1: An examination of the distribution of \u03b2^1/SE^\u2062(\u03b2^1)subscript^\ud835\udefd1^SEsubscript^\ud835\udefd1\\hat{\\beta}_{1}/\\widehat{\\operatorname{SE}}(\\hat{\\beta}_{1})over^ start_ARG italic_\u03b2 end_ARG start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT / over^ start_ARG roman_SE end_ARG ( over^ start_ARG italic_\u03b2 end_ARG start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT ) under H0:\u03b21*=0:subscript\ud835\udc3b0superscriptsubscript\ud835\udefd10H_{0}:\\beta_{1}^{*}=0italic_H start_POSTSUBSCRIPT 0 end_POSTSUBSCRIPT : italic_\u03b2 start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT start_POSTSUPERSCRIPT * end_POSTSUPERSCRIPT = 0. For each of four different prediction models f^\u2062(\u22c5)^\ud835\udc53\u22c5\\hat{f}(\\cdot)over^ start_ARG italic_f end_ARG ( \u22c5 ) (three trained GAMs and one true regression function), we display the empirical distribution of\n\u03b2^1/SE^\u2062(\u03b2^1)subscript^\ud835\udefd1^SEsubscript^\ud835\udefd1\\hat{\\beta}_{1}/\\widehat{\\operatorname{SE}}(\\hat{\\beta}_{1})over^ start_ARG italic_\u03b2 end_ARG start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT / over^ start_ARG roman_SE end_ARG ( over^ start_ARG italic_\u03b2 end_ARG start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT ) as the sample sizes increase, with nlab=0.1\u2062nunlabsubscript\ud835\udc5blab0.1subscript\ud835\udc5bunlabn_{\\text{lab}}=0.1n_{\\text{unlab}}italic_n start_POSTSUBSCRIPT lab end_POSTSUBSCRIPT = 0.1 italic_n start_POSTSUBSCRIPT unlab end_POSTSUBSCRIPT. The N\u2062(0,1)\ud835\udc4101N(0,1)italic_N ( 0 , 1 ) distribution is shown in black. The dashed black lines show the 0.0250.0250.0250.025 and 0.9750.9750.9750.975 quantiles of this distribution.\nThe distributions of Wang et al. [2020]\u2019s test statistics increasingly diverge from the N\u2062(0,1)\ud835\udc4101N(0,1)italic_N ( 0 , 1 ) distribution as the sample sizes increase. The methods and simulation setup are described in Section 3.",
73
+ "url": "http://arxiv.org/html/2306.13746v2/x1.png"
74
+ },
75
+ "2": {
76
+ "figure_path": "2306.13746v2_figure_2.png",
77
+ "caption": "Figure 2: \nAn examination of the distribution of (\u03b2^1\u2212\u03b21*)/SE^\u2062(\u03b2^1)subscript^\ud835\udefd1superscriptsubscript\ud835\udefd1^SEsubscript^\ud835\udefd1(\\hat{\\beta}_{1}-\\beta_{1}^{*})/\\widehat{\\operatorname{SE}}(\\hat{\\beta}_{1})( over^ start_ARG italic_\u03b2 end_ARG start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT - italic_\u03b2 start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT start_POSTSUPERSCRIPT * end_POSTSUPERSCRIPT ) / over^ start_ARG roman_SE end_ARG ( over^ start_ARG italic_\u03b2 end_ARG start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT ) when \u03b21*=1superscriptsubscript\ud835\udefd11\\beta_{1}^{*}=1italic_\u03b2 start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT start_POSTSUPERSCRIPT * end_POSTSUPERSCRIPT = 1. For each of four different prediction models f^\u2062(\u22c5)^\ud835\udc53\u22c5\\hat{f}(\\cdot)over^ start_ARG italic_f end_ARG ( \u22c5 ) (three trained GAMs and one true regression function), we display the empirical distribution of\n(\u03b2^1\u2212\u03b21*)/SE^\u2062(\u03b2^1)subscript^\ud835\udefd1superscriptsubscript\ud835\udefd1^SEsubscript^\ud835\udefd1(\\hat{\\beta}_{1}-\\beta_{1}^{*})/\\widehat{\\operatorname{SE}}(\\hat{\\beta}_{1})( over^ start_ARG italic_\u03b2 end_ARG start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT - italic_\u03b2 start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT start_POSTSUPERSCRIPT * end_POSTSUPERSCRIPT ) / over^ start_ARG roman_SE end_ARG ( over^ start_ARG italic_\u03b2 end_ARG start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT ) as the sample sizes increase, with nlab=0.1\u2062nunlabsubscript\ud835\udc5blab0.1subscript\ud835\udc5bunlabn_{\\text{lab}}=0.1n_{\\text{unlab}}italic_n start_POSTSUBSCRIPT lab end_POSTSUBSCRIPT = 0.1 italic_n start_POSTSUBSCRIPT unlab end_POSTSUBSCRIPT. The N\u2062(0,1)\ud835\udc4101N(0,1)italic_N ( 0 , 1 ) distribution is shown in black. The dashed black lines show the 0.0250.0250.0250.025 and 0.9750.9750.9750.975 quantiles of this distribution.\nThe distributions of Wang et al. [2020]\u2019s test statistics increasingly diverge from the N\u2062(0,1)\ud835\udc4101N(0,1)italic_N ( 0 , 1 ) distribution as the sample sizes increase. The methods and simulation setup are described in Section 3.",
78
+ "url": "http://arxiv.org/html/2306.13746v2/x2.png"
79
+ },
80
+ "3": {
81
+ "figure_path": "2306.13746v2_figure_3.png",
82
+ "caption": "Figure 3: For data generated under H0:\u03b21*=0:subscript\ud835\udc3b0superscriptsubscript\ud835\udefd10H_{0}:\\beta_{1}^{*}=0italic_H start_POSTSUBSCRIPT 0 end_POSTSUBSCRIPT : italic_\u03b2 start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT start_POSTSUPERSCRIPT * end_POSTSUPERSCRIPT = 0, quantile-quantile plots of the p-values across simulation replicates are displayed. The methods are described in Section 3 and the simulation setup is described in Section 4. Each panel corresponds to a different sample sizes of the labeled and unlabeled datasets used for inference.\nThe bootstrap and analytical corrections considered by Wang et al. [2020] become increasingly anticonservative as the sample sizes increase. The classical approach, and that of Angelopoulos et al. [2023], are well-calibrated.",
83
+ "url": "http://arxiv.org/html/2306.13746v2/x3.png"
84
+ },
85
+ "4": {
86
+ "figure_path": "2306.13746v2_figure_4.png",
87
+ "caption": "Figure 4: For data generated with \u03b21*=1superscriptsubscript\ud835\udefd11\\beta_{1}^{*}=1italic_\u03b2 start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT start_POSTSUPERSCRIPT * end_POSTSUPERSCRIPT = 1, empirical coverage of 95% confidence intervals for each method across each simulation replicate, as the labeled and unlabeled sample sizes increase, with nlab=0.1\u2062nunlabsubscript\ud835\udc5blab0.1subscript\ud835\udc5bunlabn_{\\text{lab}}=0.1n_{\\text{unlab}}italic_n start_POSTSUBSCRIPT lab end_POSTSUBSCRIPT = 0.1 italic_n start_POSTSUBSCRIPT unlab end_POSTSUBSCRIPT. The methods are described in Section 3 and the simulation setup is described in Section 4.",
88
+ "url": "http://arxiv.org/html/2306.13746v2/x4.png"
89
+ },
90
+ "5": {
91
+ "figure_path": "2306.13746v2_figure_5.png",
92
+ "caption": "Figure 5: For labeled and unlabeled datasets generated under H0:\u03b21*=0:subscript\ud835\udc3b0superscriptsubscript\ud835\udefd10H_{0}:\\beta_{1}^{*}=0italic_H start_POSTSUBSCRIPT 0 end_POSTSUBSCRIPT : italic_\u03b2 start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT start_POSTSUPERSCRIPT * end_POSTSUPERSCRIPT = 0, quantile-quantile plots of the p-values across replicates of the modified simulation study are displayed for each of the four f^\u2062(\u22c5)^\ud835\udc53\u22c5\\hat{f}(\\cdot)over^ start_ARG italic_f end_ARG ( \u22c5 )\u2019s considered. The methods and simulation setup are described in Section 3. Each panel corresponds to different sample sizes of the labeled and unlabeled datasets used for inference.\nThe naive method and the bootstrap and analytical corrections considered by Wang et al. [2020] become increasingly anticonservative as the sample sizes increases, unless the machine learning model is perfect, i.e. f^\u2062(z)=E\u2061[Y|Z=z]^\ud835\udc53\ud835\udc67Econditional\ud835\udc4c\ud835\udc4d\ud835\udc67\\hat{f}(z)=\\operatorname{E}[Y|Z=z]over^ start_ARG italic_f end_ARG ( italic_z ) = roman_E [ italic_Y | italic_Z = italic_z ].",
93
+ "url": "http://arxiv.org/html/2306.13746v2/x5.png"
94
+ },
95
+ "6": {
96
+ "figure_path": "2306.13746v2_figure_6.png",
97
+ "caption": "Figure 6: For labeled and unlabeled datasets generated with \u03b21*=1superscriptsubscript\ud835\udefd11\\beta_{1}^{*}=1italic_\u03b2 start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT start_POSTSUPERSCRIPT * end_POSTSUPERSCRIPT = 1, empirical coverage of 95% confidence intervals for each method across each simulation replicate, for each of the four f^\u2062(\u22c5)^\ud835\udc53\u22c5\\hat{f}(\\cdot)over^ start_ARG italic_f end_ARG ( \u22c5 )\u2019s considered, as the labeled and unlabeled sample sizes increase, with nlab=0.1\u2062nunlabsubscript\ud835\udc5blab0.1subscript\ud835\udc5bunlabn_{\\text{lab}}=0.1n_{\\text{unlab}}italic_n start_POSTSUBSCRIPT lab end_POSTSUBSCRIPT = 0.1 italic_n start_POSTSUBSCRIPT unlab end_POSTSUBSCRIPT. The methods and simulation setup are described in Section 3.",
98
+ "url": "http://arxiv.org/html/2306.13746v2/x6.png"
99
+ }
100
+ },
101
+ "validation": true,
102
+ "references": [
103
+ {
104
+ "1": {
105
+ "title": "Prediction-powered inference.",
106
+ "author": "Anastasios N Angelopoulos, Stephen Bates, Clara Fannjiang, Michael I Jordan,\nand Tijana Zrnic.",
107
+ "venue": "Science, 382(6671):669\u2013674, 2023.",
108
+ "url": null
109
+ }
110
+ },
111
+ {
112
+ "2": {
113
+ "title": "InSilicoVA: A method to automate cause of death assignment for\nverbal autopsy.",
114
+ "author": "Samuel J Clark, Tyler McCormick, Zehang Li, and Jon Wakefield.",
115
+ "venue": "arXiv preprint arXiv:1504.02129, 2015.",
116
+ "url": null
117
+ }
118
+ },
119
+ {
120
+ "3": {
121
+ "title": "Improving the value of public RNA-seq expression data by phenotype\nprediction.",
122
+ "author": "Shannon E Ellis, Leonardo Collado-Torres, Andrew Jaffe, and Jeffrey T Leek.",
123
+ "venue": "Nucleic Acids Research, 46(9):e54\u2013e54,\n2018.",
124
+ "url": null
125
+ }
126
+ },
127
+ {
128
+ "4": {
129
+ "title": "A gene-based association method for mapping traits using reference\ntranscriptome data.",
130
+ "author": "Eric R Gamazon, Heather E Wheeler, Kaanan P Shah, Sahar V Mozaffari, Keston\nAquino-Michaels, Robert J Carroll, Anne E Eyler, Joshua C Denny, GTEx\nConsortium, Dan L Nicolae, et al.",
131
+ "venue": "Nature Genetics, 47(9):1091\u20131098, 2015.",
132
+ "url": null
133
+ }
134
+ },
135
+ {
136
+ "5": {
137
+ "title": "A transcriptome-wide association study of high-grade serous\nepithelial ovarian cancer identifies new susceptibility genes and splice\nvariants.",
138
+ "author": "Alexander Gusev, Kate Lawrenson, Xianzhi Lin, Paulo C Lyra Jr, Siddhartha Kar,\nKevin C Vavra, Felipe Segato, Marcos AS Fonseca, Janet M Lee, Tanya Pejovic,\net al.",
139
+ "venue": "Nature Genetics, 51(5):815\u2013823, 2019.",
140
+ "url": null
141
+ }
142
+ },
143
+ {
144
+ "6": {
145
+ "title": "Highly accurate protein structure prediction with AlphaFold.",
146
+ "author": "John Jumper, Richard Evans, Alexander Pritzel, Tim Green, Michael Figurnov,\nOlaf Ronneberger, Kathryn Tunyasuvunakool, Russ Bates, Augustin\n\u017d\u00eddek, Anna Potapenko, et al.",
147
+ "venue": "Nature, 596(7873):583\u2013589, 2021.",
148
+ "url": null
149
+ }
150
+ },
151
+ {
152
+ "7": {
153
+ "title": "Mortality and causes of death in Jordan 1995-96: assessment by\nverbal autopsy.",
154
+ "author": "SA Khoury, D Massad, and T Fardous.",
155
+ "venue": "Bulletin of the World Health Organization, 77(8):641, 1999.",
156
+ "url": null
157
+ }
158
+ },
159
+ {
160
+ "8": {
161
+ "title": "Methods for correcting inference based on outcomes predicted by\nmachine learning.",
162
+ "author": "Siruo Wang, Tyler H McCormick, and Jeffrey T Leek.",
163
+ "venue": "Proceedings of the National Academy of Sciences, 117(48):30266\u201330275, 2020.",
164
+ "url": null
165
+ }
166
+ }
167
+ ],
168
+ "url": "http://arxiv.org/html/2306.13746v2"
169
+ }
20240101/2306.16846v3.json ADDED
@@ -0,0 +1,552 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ {
2
+ "title": "Lightweight texture transfer based on texture feature preset",
3
+ "abstract": "In the task of texture transfer, reference texture images typically exhibit highly repetitive texture features, and the texture transfer results from different content images under the same style also share remarkably similar texture patterns. Encoding such highly similar texture features often requires deep layers and a large number of channels, making it is also the main source of the entire model\u2019s parameter count and computational load, and inference time. We propose a lightweight texture transfer based on texture feature preset (TFP). TFP takes full advantage of the high repetitiveness of texture features by providing preset universal texture feature maps for a given style. These preset feature maps can be fused and decoded directly with shallow color transfer feature maps of any content to generate texture transfer results, thereby avoiding redundant texture information from being encoded repeatedly. The texture feature map we preset is encoded through noise input images with consistent distribution (standard normal distribution). This consistent input distribution can completely avoid the problem of texture transfer differentiation, and by randomly sampling different noise inputs, we can obtain different texture features and texture transfer results under the same reference style. Compared to state-of-the-art techniques, our TFP not only produces visually superior results but also reduces the model size by 3.2-3538 times and speeds up the process by 1.8-5.6 times.",
4
+ "sections": [
5
+ {
6
+ "section_id": "1",
7
+ "parent_section_id": null,
8
+ "section_name": "Introduction",
9
+ "text": "Style transfer is a highly attractive image processing technique that can transfer the unique colors and texture styles of artworks to content images. In recent years, methods for style transfer have been widely proposed, which can be roughly divided into two categories: online image optimization and model optimization.\nThe representative of image optimization methods is (Gatys et al. (2016 ###reference_6###)), which innovatively transfers gradients to the input image and iteratively optimizes the input content image directly. The style pattern is represented by the feature correlation of deep convolutional neural networks (VGG, Sengupta et al. (2019 ###reference_25###)). Subsequent work mainly focuses on different forms of loss functions (Kolkin et al. (2019 ###reference_14###); Risser et al. (2017 ###reference_24###)). However, this slow online optimization method has a high time cost and greatly reduces its actual citation value. In contrast, the model optimization method effectively solves the time-consuming problem of online iteration through offline model training and forward reasoning. There are three main types of model optimization: (1) Training exclusive style transformation models for a single artistic style (Johnson et al. (2016 ###reference_12###); Li and Wand (2016b ###reference_16###); Ulyanov et al. (2016a ###reference_30###, b ###reference_31###)) Synthesize stylized images using a single given artistic style image; (2) Training model that can convert multiple styles (Chen et al. (2017 ###reference_2###); Dumoulin et al. (2016 ###reference_5###); Wang et al. (2017 ###reference_33###); Li et al. (2017a ###reference_17###); Zhang and Dana (2018a ###reference_38###)) Introducing various network architectures while handling multiple styles; (3) Arbitrary style transformation model (Zhang and Dana (2018b ###reference_39###); Li et al. (2017b ###reference_18###); Wang et al. (2022 ###reference_35###, 2020 ###reference_32###); Shen et al. (2018 ###reference_26###); Jing et al. (2020 ###reference_10###)) used different mechanisms such as feature modulation and matching to transfer any artistic style.\nLooking back at all the above methods, , only Gayts (Gatys et al. (2016 ###reference_6###)), DcDae (ShiQi Jiang (2023b ###reference_28###)), CTDP (ShiQi Jiang (2023a ###reference_27###)), and IDD (ShiQi Jiang (2023c ###reference_29###)) can achieve high-quality texture transfer effects. Observing and analyzing the transfer results of CTDP in Fig.3 ###reference_###, it is found that for the same style image, the texture parts in different generated results have extremely high similarity. Such high similarity texture features require encoding at deeper levels and a larger number of channels, so this operation is also the main source of the entire model\u2019s parameter count, computational complexity, and inference time. Therefore, although significant progress has been made in recent years, existing methods have overlooked the highly repetitive nature of texture features and still require repeated encoding for such redundant texture information.\n###figure_1### ###figure_2### ###figure_3### In the face of the aforementioned challenges, we propose a lightweight texture transfer based on texture feature preset (TFP) model. This model can preset a well encoded universal deep texture feature map for a single style after training. 
In the inference stage, the preset texture feature map can be directly fused and decoded with the shallow color transfer feature maps of any content, omitting the repeated encoding of deep texture feature maps. While changing the original CTDP framework as little as possible, our texture feature preset scheme reduces the model size by 3.2 times in the inference stage and accelerates inference by 1.8 times.\nBeyond these improvements in model size and inference speed, since the preset texture features are generated from noise, we can generate different texture feature maps, such as those in Fig.8 ###reference_###, by randomly sampling noise in the inference stage, thus producing different texture transfer results for the same content image. In addition, from the input distribution differentiation experiment in IDD (ShiQi Jiang (2023c ###reference_29###)), we learned that distribution differences within the input content image can lead to texture suppression differentiation. We found a similar issue in the texture transfer task: distribution differences within the same content image can lead to texture transfer differentiation, as shown in the red box area of Fig.4 ###reference_###. In our method, however, the deep texture feature maps decode into pure texture images generated unconditionally from noise images that all follow the same distribution, which completely avoids the problem of texture transfer differentiation.\n###figure_4### Compared to state-of-the-art models, our TFP not only produces visually superior results, but is also 3.2-3538 times smaller and 1.8-5.6 times faster. In summary, our contributions are as follows:\nWe propose a lightweight texture transfer framework based on texture feature presets, which uses content-independent noise input images to encode texture feature maps and fuses them with the shallow feature maps of any content for decoding into the texture transfer result.\nBy presetting a deep texture feature map, we can skip the encoding of the deep texture feature map entirely during the inference stage, greatly reducing the model\u2019s parameter count, computational complexity, and inference time.\nTo prevent the semantic content from being completely masked by texture features, we designed a semantic noise texture fusion loss.\nTo address local texture loss in texture feature maps caused by feature fusion decoding, we added a semantic conditional texture generation branch.\nGenerating deep texture feature maps from noise completely avoids the texture transfer differentiation caused by distribution differences within the content image.\nBecause texture features are generated from noise, randomly sampling the input noise can generate different texture feature maps and thus different texture transfer results.\nNumerous qualitative and quantitative experiments show that our method quickly achieves high-quality texture transfer even with the fewest parameters."
10
+ },
11
+ {
12
+ "section_id": "2",
13
+ "parent_section_id": null,
14
+ "section_name": "Related work",
15
+ "text": ""
16
+ },
17
+ {
18
+ "section_id": "2.1",
19
+ "parent_section_id": "2",
20
+ "section_name": "2.1 Neural Style Transfer",
21
+ "text": "With the groundbreaking work of (Gatys et al. (2016 ###reference_6###)), the era of neural style transfer (NST) has arrived. The visual appeal of style transfer has inspired subsequent researchers to improve in many aspects, including efficiency (Johnson et al. (2016 ###reference_12###); Ulyanov et al. (2016a ###reference_30###)); Quality (Jing et al. (2018 ###reference_11###); Li and Wand (2016a ###reference_15###); Gu et al. (2018 ###reference_7###); Xie et al. (2022 ###reference_36###); ShiQi Jiang (2023b ###reference_28###)); Diversity (Wang et al. (2021 ###reference_34###); Chen et al. (2021 ###reference_3###)) and User Control (Zhang et al. (2019 ###reference_37###); Champandard (2016 ###reference_1###)); Despite significant progress, existing methods are difficult to process high-resolution images due to complex network structures and limited hardware resources."
22
+ },
23
+ {
24
+ "section_id": "2.2",
25
+ "parent_section_id": "2",
26
+ "section_name": "2.2 Lightweight Style Transfer",
27
+ "text": "To address the above challenges, (Wang et al. (2020 ###reference_32###)) employed model compression techniques, known as collaborative distillation, to reduce the convolutional filters of VGG-19. While this method significantly reduced memory consumption, the pruned model was still not fast enough to run on 4K super-resolution images.(Shen et al. (2018 ###reference_26###)) and (Jing et al. (2020 ###reference_10###)) have designed lightweight networks, but still use pre-trained VGG models to extract style features, which can bring high computational costs and slow inference speed.\nIn order to achieve high-resolution style transfer, (Chen et al. (2022 ###reference_4###)) divides the input image into small patches and use thumbnail instance normalization for patch-wise stylization to ensure style consistency between different patches. Although this method achieves 4K super-resolution style transfer, it essentially does not solve the problem of excessive forward inference time consumption.\nRecently,(Wang et al. (2022 ###reference_35###)) completely removed VGG and added a dual modulation strategy to inject color and texture structure information during the decoding phase. However, as shown in Fig.7 ###reference_###, experiments have shown that removing VGG style transfer significantly reduces the performance of arbitrary style transfer task, mainly manifested as color leakage, content structure distortion, and pseudo texture structure transfer. The transfer results for different texture structures are extremely similar, because the encoding and decoding of texture structures collapse into a unified compromise suboptimal texture structure.\n###figure_5### ###figure_6### ###figure_7### ###figure_8### ###figure_9###"
28
+ },
29
+ {
30
+ "section_id": "3",
31
+ "parent_section_id": null,
32
+ "section_name": "Method",
33
+ "text": "Given an arbitrary content image, our goal is to achieve fast texture transfer through preset texture feature maps. The main challenges of this task lie in three aspects: (1) How to prevent semantic information from being completely masked by texture features during the fusion decoding process; (2) How to solve the problem of local texture missing in texture feature maps; (3) How to solve the problem of texture transfer differentiation;"
34
+ },
35
+ {
36
+ "section_id": "3.1",
37
+ "parent_section_id": "3",
38
+ "section_name": "3.1 Overview of TFP",
39
+ "text": "As shown in Fig.2 ###reference_###, our TFP framework consists of four main components: shallow encoder decoder , deep encoder , fusion decoder , and style discriminator (only used during the training phase). Under this framework, are mainly responsible for encoding the semantics, details, and color transfer, while are mainly responsible for encoding deep texture feature maps from noisy inputs. Paired encoders and decoders have a symmetrical lightweight structure, consisting of two standard convolutional layers at the beginning and end, as well as several depthwise separable convolutional layers (DW, Howard et al. (2017 ###reference_8###)) in the middle. The complete forward inference pipeline of our framework is as follows:\n(1) Extracting the shallow features of content image using a shallow encoder , denoted as .\n(2) Extracting the deep features of noise image using a deep encoder , denoted as .\n(3) Extracting the deep features of content image using a deep encoder , denoted as .\n(4) Obtain color transfer output by inputting shallow features into shallow decoder , denoted as .\n(5) Obtain noise texture transfer result by fusing and decoding the fusion features of and , denoted as .\n(6) Obtain content texture transfer result by fusing and decoding the fusion features of and , denoted as .\nAmong them, represents the Detail Attention-enhanced (ShiQi Jiang (2023b ###reference_28###)) module, this framework primarily focuses on whether the input images are noise or content images. Therefore, we specifically denote the current input form using superscripts, where represents content input, and represents noise input. and represent the fusion strength of shallow and deep feature maps, respectively.\nTraining Losses. In order to achieve style transfer, similar to the previous method (Gatys et al. (2016 ###reference_6###); Xie et al. (2022 ###reference_36###); Shen et al. (2018 ###reference_26###); Wang et al. (2022 ###reference_35###); Li et al. (2017c ###reference_19###); Park and Lee (2019 ###reference_22###); Huang and Belongie (2017 ###reference_9###); ShiQi Jiang (2023b ###reference_28###)), we use pre trained VGG-16 (Sengupta et al. (2019 ###reference_25###)) as our loss model to calculate content and style loss. We use perceptual loss (Johnson et al. (2016 ###reference_12###)) as our branch content loss , and all three of our branch content losses are calculated in the layers of VGG-16. The branch style loss is defined as the matching Gram matrix (Gatys et al. (2016 ###reference_6###)), and the three branches calculate the style loss at different levels (see details in Sec.LABEL:sec:bs). Introduce style discrimination loss similar to (ShiQi Jiang (2023b ###reference_28###)) to ensure the overall color and texture matching effect of stylized images. Please note that we only use VGG-16 during the training phase and do not require complex loss calculations or involve any large networks during the inference phase.\nIn summary, the overall goals of our TFP are:\nwhere hyper-parameters , , , and define the relative importance of each component in the total loss function."
40
+ },
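The six-step pipeline above maps naturally onto a pair of lightweight codecs. Below is a minimal PyTorch sketch of that forward pass; the module classes, channel widths, and fusion weights (and the placeholder names E_s, D_s, E_d, D_f) are illustrative assumptions, not the authors' released implementation.

```python
# Minimal PyTorch sketch of the TFP forward pipeline (steps 1-6 above).
# All module definitions, channel widths, and fusion weights are assumed.
import torch
import torch.nn as nn

class DWConv(nn.Module):
    """Depthwise-separable conv block used in the middle of each codec."""
    def __init__(self, ch):
        super().__init__()
        self.dw = nn.Conv2d(ch, ch, 3, padding=1, groups=ch)
        self.pw = nn.Conv2d(ch, ch, 1)
        self.act = nn.ReLU(inplace=True)
    def forward(self, x):
        return self.act(self.pw(self.dw(x)))

def codec(in_ch, out_ch, mid=32, n_dw=2):
    """Standard convs at both ends, depthwise-separable convs in between."""
    layers = [nn.Conv2d(in_ch, mid, 3, padding=1), nn.ReLU(inplace=True)]
    layers += [DWConv(mid) for _ in range(n_dw)]
    layers += [nn.Conv2d(mid, out_ch, 3, padding=1)]
    return nn.Sequential(*layers)

E_s, D_s = codec(3, 32), codec(32, 3)   # shallow encoder / decoder
E_d      = codec(3, 32)                 # deep (texture) encoder
D_f      = codec(32, 3)                 # fusion decoder
alpha, beta = 1.0, 1.0                  # fusion strengths (assumed)

content = torch.rand(1, 3, 256, 256)
noise   = torch.randn(1, 3, 256, 256)   # standard normal input

f_s  = E_s(content)                     # (1) shallow content features
f_dn = E_d(noise)                       # (2) deep features of noise
f_dc = E_d(content)                     # (3) deep features of content
color_out   = D_s(f_s)                               # (4) color transfer
noise_out   = D_f(alpha * f_s + beta * f_dn)         # (5) noise texture transfer
content_out = D_f(alpha * f_s + beta * f_dc)         # (6) content texture transfer
```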
41
+ {
42
+ "section_id": "3.2",
43
+ "parent_section_id": "3",
44
+ "section_name": "3.2 Background",
45
+ "text": "CTDP (ShiQi Jiang (2023a ###reference_27###)) pioneered the design of a dual pipeline framework for color and texture, which can simultaneously generate color and texture transfer results. As shown in Fig.1 ###reference_###(a), CTDP actually produces three results simultaneously, namely shallow color transfer result, deep texture transfer result, and fusion decoding result. It generally takes the fusion decoding result as the final style transfer output."
46
+ },
47
+ {
48
+ "section_id": "3.2.1",
49
+ "parent_section_id": "3.2",
50
+ "section_name": "3.2.1 Texture Transfer Differentiation",
51
+ "text": "We found that all previous schemes that required encoding of content images, including CTDP, and generating texture transfer results through semantic conditions all had texture transfer differentiation issues, as shown in the red box area in Fig.4 ###reference_###. This phenomenon is similar to the texture suppression differentiation performance in IDD (ShiQi Jiang (2023c ###reference_29###)), which is believed to be caused by the continuity of the input image. Discontinuous inputs will generate noise features in the feature map, and noise features will evolve into texture structures through convolution operations. Continuous inputs will not generate noise features in the feature map, and will not evolve into texture structure features. As shown in the first column of Fig.4 ###reference_###, the extremely continuous areas in the content images directly do not generate any texture information, while the discontinuous parts outside the red box can effectively complete the texture transfer task. This phenomenon once again confirms the hypothesis of IDD."
52
+ },
53
+ {
54
+ "section_id": "3.2.2",
55
+ "parent_section_id": "3.2",
56
+ "section_name": "3.2.2 Texture Similarity",
57
+ "text": "As shown in Fig.3 ###reference_###, four output images of the CTDP model are displayed, with the top and bottom rows showing the stylized results of two different reference styles. By zooming in on the red and yellow box areas, we found that under the same reference style, the stylized output texture of different content images all have extremely high texture similarity. In the training and inference process of the model, generating such high similarity texture features requires more convolutional encoding, and more convolutional encoding means deeper network depth and larger number of channels, which makes the generation of texture features the main source of model parameters and computation. Is it necessary for us to repeatedly encode such redundant texture information?\n###figure_10###"
58
+ },
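The texture similarity observed above can be quantified by comparing Gram matrices of deep features from two stylized outputs. A minimal sketch, assuming torchvision's pre-trained VGG-16 as the feature extractor (the paper itself does not prescribe this measurement):

```python
# Sketch: measure texture similarity between two stylized outputs via the
# distance between Gram matrices of deep features. Random tensors stand in
# for the two stylized results here.
import torch
from torchvision.models import vgg16

def gram(feat):
    b, c, h, w = feat.shape
    f = feat.view(b, c, h * w)
    return f @ f.transpose(1, 2) / (c * h * w)

features = vgg16(weights="IMAGENET1K_V1").features[:16].eval()

with torch.no_grad():
    out_a = torch.rand(1, 3, 256, 256)   # stylized result, content A
    out_b = torch.rand(1, 3, 256, 256)   # stylized result, content B
    g_a, g_b = gram(features(out_a)), gram(features(out_b))
    # A small distance indicates highly repetitive, content-independent texture.
    print("Gram distance:", torch.nn.functional.mse_loss(g_a, g_b).item())
```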
59
+ {
60
+ "section_id": "3.2.3",
61
+ "parent_section_id": "3.2",
62
+ "section_name": "3.2.3 Noise Input",
63
+ "text": "We found in our experiment that replacing the input content image of CTDP with a pure noise image (standard normal distribution with mean 0 and variance 1) will generate a pure texture image as shown in the second row of Fig.5 ###reference_###. The first row is its corresponding reference style image, that is, if the input is noise, the model will produce a pure texture image without any semantics. This phenomenon leads us to the following conjecture:\n(1) Due to the fact that only the Gram (Gatys et al. (2016 ###reference_6###)) matrix is constrained for reference style images, the essence of such models is to perform color and texture reconstruction tasks;\n(2) When the input is a content image, it is essentially performing the task of conditional texture reconstruction, that is, content conditional generation;\n(3) When the input is a content image, it essentially engages in the task of conditional texture reconstruction, namely, content conditional texture generation;\n(4) If the result generated by unconditional generation is a pure texture image with no semantics or structure, can we achieve texture transfer by fitting the pure texture image onto the content image?"
64
+ },
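The noise-input probe described above amounts to feeding a standard-normal image through the trained stylizer in place of a content image. A minimal sketch, reusing the hypothetical codec modules from the pipeline sketch after Section 3.1 (an untrained stand-in for a trained CTDP-style model):

```python
# Feed a standard normal image (mean 0, variance 1) through the stylizer
# instead of a content image; the output carries texture but no semantics.
# E_s, E_d, D_f are the assumed codec modules from the earlier sketch.
import torch

noise = torch.randn(1, 3, 256, 256)              # standard normal input
with torch.no_grad():
    pure_texture = D_f(E_s(noise) + E_d(noise))  # semantics-free texture image
```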
65
+ {
66
+ "section_id": "3.2.4",
67
+ "parent_section_id": "3.2",
68
+ "section_name": "3.2.4 Conclusion",
69
+ "text": "Overall, our experiments on CTDP have yielded the following conclusions:\n(1) The distribution difference of input images can lead to texture transfer differentiation issues;\n(2) Different content input images will produce texture transfer results with extremely high similarity in texture features;\n(3) The CTDP model essentially performs texture reconstruction tasks. When inputting noisy images, the model generates unconditionally generated pure texture images, and when inputting content images, the model generates conditionally generated texture transfer images with content semantics."
70
+ },
71
+ {
72
+ "section_id": "3.3",
73
+ "parent_section_id": "3",
74
+ "section_name": "3.3 Texture Feature Preset Framework",
75
+ "text": "We attempt to utilize the high repeatability of texture features and the model\u2019s ability to unconditionally generate pure texture images from noisy inputs to achieve texture feature preset (TFP) effects. TFP aims to provide preset texture feature maps for a single style, which can be fused and decoded with any shallow color transfer feature map to directly generate texture transfer results, thereby avoiding duplicate encoding of redundant texture information."
76
+ },
77
+ {
78
+ "section_id": "3.3.1",
79
+ "parent_section_id": "3.3",
80
+ "section_name": "3.3.1 Pseudo Texture Feature Preset",
81
+ "text": "Firstly, we attempt to directly execute the pseudo texture feature preset scheme on the pre-trained CTDP (ShiQi Jiang (2023a ###reference_27###)) framework. As shown in Fig.1 ###reference_###(b), the three column outputs are the decoding results of shallow color transfer feature maps, noise feature map decoding, and fusion decoding of two feature maps. Observation shows that the pseudo TFP fusion decoding of the CTDP model results in a state where two images are directly superimposed, presenting an erroneous texture transfer effect where the content information is completely masked by texture features."
82
+ },
83
+ {
84
+ "section_id": "3.3.2",
85
+ "parent_section_id": "3.3",
86
+ "section_name": "3.3.2 Semantic Texture Fusion Loss",
87
+ "text": "We believe that the reason for the incorrect preset method of the pseudo texture features mentioned above is the lack of constraints on the noise encoded texture feature map. In fact, the pure texture feature map generated by noise in the previous experiment is only a side effect product of CTDP (ShiQi Jiang (2023a ###reference_27###)) framework training, and is not suitable for being directly used as the preset texture feature map.\nTo solve the fusion problem of shallow color transfer feature maps and deep noise texture feature maps, we designed a semantic texture fusion loss . Because we need to present the semantic and structural information of the reference content image and the texture features in the reference style image in the texture transfer results, is actually designed based on the style and content perception loss of Gatys et al. (2016 ###reference_6###) and Johnson et al. (2016 ###reference_12###). Unlike previous schemes that were calculated under the input of content images, our scheme calculates n * for randomly sampled noisy input images."
88
+ },
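Concretely, such a semantic texture fusion loss combines a perceptual content term with Gram-matrix style terms, evaluated on fusion-decoded outputs for randomly sampled noise inputs. A sketch, assuming torchvision's VGG-16 with illustrative layer indices and weights (not the authors' exact choices):

```python
# Sketch of a semantic texture fusion loss: perceptual content loss plus
# Gram-matrix style loss on the fusion-decoded output. Layer indices and
# the style weight are assumptions for illustration.
import torch
import torch.nn.functional as F
from torchvision.models import vgg16

vgg = vgg16(weights="IMAGENET1K_V1").features.eval()
CONTENT_LAYER, STYLE_LAYERS = 15, (3, 8, 15, 22)   # assumed VGG-16 indices

def feats(x, layers):
    out, h = {}, x
    for i, layer in enumerate(vgg):
        h = layer(h)
        if i in layers:
            out[i] = h
        if i >= max(layers):
            break
    return out

def gram(f):
    b, c, hw = f.shape[0], f.shape[1], f.shape[2] * f.shape[3]
    v = f.view(b, c, hw)
    return v @ v.transpose(1, 2) / (c * hw)

def fusion_loss(fused_out, content_img, style_img, w_style=1e5):
    c_f = feats(content_img, (CONTENT_LAYER,))[CONTENT_LAYER]
    o_c = feats(fused_out, (CONTENT_LAYER,))[CONTENT_LAYER]
    loss = F.mse_loss(o_c, c_f)                    # keep semantics visible
    o_s = feats(fused_out, STYLE_LAYERS)
    s_s = feats(style_img, STYLE_LAYERS)
    for i in STYLE_LAYERS:                         # match style texture
        loss = loss + w_style * F.mse_loss(gram(o_s[i]), gram(s_s[i]))
    return loss
```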
89
+ {
90
+ "section_id": "3.3.3",
91
+ "parent_section_id": "3.3",
92
+ "section_name": "3.3.3 Semantic Conditional Texture Generation Branch",
93
+ "text": "As shown in the first row of Fig.6 ###reference_###(b), we did avoid the content semantics being masked by texture features through semantic texture fusion loss . However, when we observed the second row, we found that the texture map generated based on noise had a problem of local texture loss, which led to poor fusion decoding performance.\nWe believe it is the side effect of the direct constraint of loss on noise that leads to the issue of local texture loss. Under the sole constraint of Loss , the model, in pursuit of the best semantic texture fusion effect, forces compromises in the encoding of noise-based texture feature maps, resulting in more significant local texture loss to better reduce Loss . Therefore, we introduced a semantic conditional texture generation branch, hoping that the semantic conditional encoding based on content images can guide the model in encoding deep texture feature maps beneficial for feature fusion.\nAs shown in Fig.6 ###reference_###(c), the top and bottom rows respectively show the predictive performance of two styles after training with the addition of semantic conditional texture generation branches. We observed that the color matching of the fusion decoding result in the first row is higher, the content color is almost not leaked, the artifacts in the noise output result are reduced, and the scale of the texture feature is increased. It presents an artistic effect where the content semantics are entirely composed of style texture patterns. The fusion decoding and texture feature decoding results in the second line have significantly solved the problem of local texture loss."
94
+ },
95
+ {
96
+ "section_id": "3.4",
97
+ "parent_section_id": "3",
98
+ "section_name": "3.4 Fast Texture Transfer",
99
+ "text": "The ultimate goal of our texture feature preset framework is to achieve faster inference speed in the inference stage by omitting repeated encoding of deep texture features. As shown in Fig.2 ###reference_###(b), it is the execution process of TFP in the inference stage. The gray feature map is the preset deep texture feature map after training. We only need to perform shallow encoding on the content image to obtain the shallow color transfer feature map and fuse it with the preset texture feature map to decode and output the texture transfer result quickly, denoted as:\nIn this process, the shallow color transfer feature map is responsible for providing the structure and detail information of the content and completing the encoding of color transfer, while the preset texture feature map is responsible for providing complex and highly repetitive texture patterns with reference styles. As shown in Tab.1 ###reference_###, TFP can achieve the fastest inference speed of 3.1ms for a single image at 256 resolution, which is 1.8 times faster than the previous fastest model."
100
+ },
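At inference time the deep branch disappears: the texture feature map is encoded once per style and cached, so each new content image pays only for the shallow pass and the fusion decode. A sketch, reusing the assumed modules and fusion weights from the pipeline sketch above:

```python
import torch

# One-time preset per style: encode a noise image and cache its deep
# texture features. The spatial size of the cached map must match the
# content resolution used at inference (256 here, by assumption).
with torch.no_grad():
    f_texture = E_d(torch.randn(1, 3, 256, 256))

def fast_texture_transfer(content):
    """Per-image path: shallow encode, fuse with the preset map, decode."""
    with torch.no_grad():
        f_s = E_s(content)                          # shallow color features
        return D_f(alpha * f_s + beta * f_texture)  # no deep encoding here
```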
101
+ {
102
+ "section_id": "3.5",
103
+ "parent_section_id": "3",
104
+ "section_name": "3.5 Random Texture Generation",
105
+ "text": "Since our deep texture feature maps are not encoded from content images, but from random noise maps, we can generate different texture feature maps during the inference stage by sampling different input noise and applying them to the texture transfer results. As shown in the Fig.8 ###reference_###, the upper and lower rows are pure texture images directly decoded from two styles of texture feature maps, and the two columns use different random sampling noise input images. The red and purple boxes are the enlarged results in the original image. We can observe that the texture patterns of the decoding results of texture feature maps of the same style are similar, but the combination and arrangement of features are not the same. This is the effect of different texture images with high similarity generated by random noise, and different texture feature maps can also be used to produce different texture transfer effects on a single content image through fusion decoding."
106
+ },
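Because the preset map comes from noise rather than from the content, resampling the noise gives a family of texture variants for one content image. A short sketch under the same assumptions as the pipeline sketch above:

```python
# Sketch: different random noise samples yield different preset texture maps
# and hence different texture-transfer results for one content image.
# Reuses E_d, E_s, D_f, alpha, beta from the pipeline sketch above (assumed).
import torch

content = torch.rand(1, 3, 256, 256)
variants = []
with torch.no_grad():
    f_s = E_s(content)
    for seed in (0, 1, 2):
        torch.manual_seed(seed)                  # resample the noise input
        f_t = E_d(torch.randn(1, 3, 256, 256))
        variants.append(D_f(alpha * f_s + beta * f_t))
# `variants` holds three stylizations sharing the style's texture patterns
# but with different feature arrangements.
```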
107
+ {
108
+ "section_id": "3.6",
109
+ "parent_section_id": "3",
110
+ "section_name": "3.6 Texture Transfer Differentiation",
111
+ "text": "As shown in the Fig.4 ###reference_###, observing the red box in the figure, it can be found that all previous schemes did not achieve good texture transfer effects in the red box area, and there are serious texture transfer differentiation problems. This is because the background part of the content image has different degrees of continuity, and the red box in the image is different from other areas, which are extremely continuous and smooth. Previously, all solutions required texture encoding for content images, and there were often localized differences in the distribution of content images, especially in extremely continuous parts where the extremely continuous input parts could not generate texture representations, resulting in such texture transfer differentiation issues.\nOur TFP scheme precisely avoids this content-based image based texture conditional encoding method, and instead relies entirely on an undifferentiated texture unconditional encoding method with the same noise distribution, generating a pure texture image with consistent global texture patterns. As shown in the last column of the Fig.4 ###reference_###, the red box in our TFP scheme generates a highly consistent texture pattern with the other background parts."
112
+ },
113
+ {
114
+ "section_id": "4",
115
+ "parent_section_id": null,
116
+ "section_name": "Experiments",
117
+ "text": ""
118
+ },
119
+ {
120
+ "section_id": "4.1",
121
+ "parent_section_id": "4",
122
+ "section_name": "4.1 Implementation Details",
123
+ "text": "We used MS-COO (Lin et al. (2014 ###reference_20###)) as the content image and extracted style images from Wikiart (Phillips and Mackintosh (2011 ###reference_23###)) to train our TFP model. In equation.3 ###reference_###, the values of , , , , and are set to 1e0, 1e5, 1e0, 2e-5, 1e0 and 1e0, respectively. We used the Adam (Kingma and Ba (2014 ###reference_13###)) optimizer with a learning rate of 0.001. During the training process, first adjust the size of the content image to 512, and then randomly crop it to 256 256 pixels for enhancement. Style images are processed using similar methods, but all images in a batch are randomly cropped from the same reference style image. Unlike the previous plan, we also need to randomly sample a batch of random noise with a size of 256 256 as input. We conducted all experiments on the RTX 3090 GPU."
124
+ },
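The training recipe above is conventional. The following sketch wires up the stated augmentation and optimizer with random stand-in batches; the batch size and the reuse of the earlier module and loss sketches are assumptions:

```python
import torch
from torchvision import transforms

# Augmentation for content images: resize to 512, random-crop to 256x256.
content_tf = transforms.Compose([
    transforms.Resize(512),
    transforms.RandomCrop(256),
    transforms.ToTensor(),
])

# Reusing the assumed codec modules from the pipeline sketch above.
params = (list(E_s.parameters()) + list(D_s.parameters())
          + list(E_d.parameters()) + list(D_f.parameters()))
optimizer = torch.optim.Adam(params, lr=1e-3)

# One illustrative step with random stand-in batches (batch size 4 assumed).
content = torch.rand(4, 3, 256, 256)
style   = torch.rand(4, 3, 256, 256)    # four crops of one style image
noise   = torch.randn(4, 3, 256, 256)   # fresh standard-normal batch

out = D_f(E_s(content) + E_d(noise))    # fused noise-branch output
loss = fusion_loss(out, content, style) # from the loss sketch above
optimizer.zero_grad()
loss.backward()
optimizer.step()
```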
125
+ {
126
+ "section_id": "4.2",
127
+ "parent_section_id": "4",
128
+ "section_name": "4.2 Comparisons with Prior Arts",
129
+ "text": "Due to our model\u2019s ability to quickly generate color and texture transfer results simultaneously, we compared our CTDP with state-of-the-art color transfer models and texture transfer models (arbitrary style transfer). In the comparison scheme, we directly ran the code with the default settings published by the author.\n###figure_11### ###figure_12### ###figure_13###"
130
+ },
131
+ {
132
+ "section_id": "4.2.1",
133
+ "parent_section_id": "4.2",
134
+ "section_name": "4.2.1 Qualitative Comparison",
135
+ "text": "The qualitative comparison results of different texture transfer methods are shown in Fig.7 ###reference_###.\nFirstly, compare with our most relevant work, CTDP (ShiQi Jiang (2023a ###reference_27###)). In the CTDP results of the first, fifth, and sixth rows of the Fig.7 ###reference_###, it can be observed that there is a halo problem at the semantic edges of the aircraft, house, and clock tower. TFP has to some extent solved this problem, and there is no obvious or conflicting edge halo. In the second row of CTDP results in the Fig.7 ###reference_###, there is a significant texture transfer differentiation problem in the lower left corner. The smoother content input results in CTDP not encoding texture information on it, while TFP avoids this problem. TFP achieves an effect comparable to CTDP in terms of matching texture structure information and color information.\nAdain (Huang and Belongie (2017 ###reference_9###)), SANet (Park and Lee (2019 ###reference_22###)), PAMA (Luo et al. (2022 ###reference_21###)), and Micro (Wang et al. (2022 ###reference_35###)) all have significant issues with texture transfer quality. In the second row of the Fig.7 ###reference_###, all schemes in the bottom left corner have obvious texture transfer differentiation issues. Although PAMA and Micro schemes completely lose their texture here, they at least have the effect of color transfer. Adain and SANet even failed to encode the color, completely leaking the background color of the content. In the results of all the schemes in the first, third, fifth, and sixth rows of the Fig.7 ###reference_###, it can be observed that there are very obvious halo problems at the edges of the main objects of the airplane, bed and chair, house, and clock tower. The texture matching degree in all results of Adain and Micro schemes is very low. In Micro, all results have similar texture patterns and do not correspond to the reference style. There are different texture patterns in Adain, but the matching degree with the reference texture pattern is not high. PAMA and SANet only match the texture patterns correctly in the third and fourth rows, but their color and overall migration effects are slightly inferior. The texture and color matching degree in other reference images are not high, and the migration effect is poor.\nIn contrast, our TFP achieves state-of-the-art texture transfer effects. The global color and texture structure of all our results have a high degree of matching, and it is the only solution that can avoid the problem of subject object edge halo and texture transfer differentiation."
136
+ },
137
+ {
138
+ "section_id": "4.2.2",
139
+ "parent_section_id": "4.2",
140
+ "section_name": "4.2.2 Quantitative Comparison",
141
+ "text": "Tab.1 ###reference_### shows the quantitative comparison between our model and state-of-the-art methods in the inference stage. Due to the lack of widely accepted quantitative evaluation metrics for style transfer tasks in the industry, we only compared model size and forward inference speed in this study. As shown in columns a and d of Tab.1 ###reference_###, our TFP is 3.2-3538 times smaller and 1.8-5.6 times faster than existing models."
142
+ },
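For reproducibility, per-image latency figures like those in Tab.1 are typically measured with warm-up iterations and device synchronization. A minimal, generic sketch (warm-up and trial counts are assumptions; this is not the authors' benchmarking code):

```python
# Sketch: wall-clock timing of a model's forward pass with GPU-safe
# synchronization. The timed function here reuses the assumed shallow
# codec from the pipeline sketch above.
import time
import torch

def time_model(fn, x, warmup=10, trials=100):
    for _ in range(warmup):            # warm up caches / kernels
        fn(x)
    if torch.cuda.is_available():
        torch.cuda.synchronize()
    t0 = time.perf_counter()
    for _ in range(trials):
        fn(x)
    if torch.cuda.is_available():
        torch.cuda.synchronize()
    return (time.perf_counter() - t0) / trials * 1e3  # ms per image

x = torch.rand(1, 3, 2160, 3840)       # 4K content image
print(time_model(lambda t: D_s(E_s(t)), x, trials=10), "ms per image")
```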
143
+ {
144
+ "section_id": "4.2.3",
145
+ "parent_section_id": "4.2",
146
+ "section_name": "4.2.3 User Study",
147
+ "text": "The evaluation of stylized results is highly subjective. Therefore, we conducted user studies on these five methods. We randomly presented 30 randomly shuffled stylized images to each participant, and each of the six methods (CTDP, AdaIN, PAMA, MicroAST, SANet et al, and ours) generated five stylized images, which were then shuffled randomly. Participants had unlimited time to select their favorite image (top 1) and top three images (top 3). We collected a total of 108 valid votes (three per person) from 36 participants, and show the percentage of preferred results for each method in the last two columns of Table for top 1 and top 3, respectively. Finally, as shown in Tab.1 ###reference_###, the results indicate that our stylized images are more attractive than those of competitors."
148
+ },
149
+ {
150
+ "section_id": "4.3",
151
+ "parent_section_id": "4",
152
+ "section_name": "4.3 Ablation Study",
153
+ "text": "The result without content texture fusion loss is shown in Fig.9 ###reference_###(b), where the semantic information of the content image is completely masked by texture features, and the shallow color transfer feature map and deep texture feature map are not well fused and decoded.\nThe result without semantic conditional texture encoding branch is shown in Fig.9 ###reference_###(c), and there is a problem of local texture loss in the texture part of the result."
154
+ },
155
+ {
156
+ "section_id": "5",
157
+ "parent_section_id": null,
158
+ "section_name": "Conclusion",
159
+ "text": "In this article, we propose a dual pipeline lightweight framework called CTDP. For the first time, our dual channels can simultaneously generate color and texture transfer results corresponding to style images, and the weighted fusion of dual branch features achieves the effect of adding texture features with controllable intensity from color transfer results for the first time. In addition, mtv loss was designed to suppress texture information in the model matching Gram matrix, and it was found that smoothing the input in our framework can almost completely eliminate texture features. A large number of experiments have proven the effectiveness of this method. Compared to the current level of technology, our CTDP is the first model that can simultaneously achieve color and texture transfer. It not only produces visually superior results in both migration tasks, but also has a color migration branch model size as low as 20k."
160
+ }
161
+ ],
162
+ "appendix": [],
163
+ "tables": {
164
+ "1": {
165
+ "table_html": "<figure class=\"ltx_table\" id=\"S3.T1\">\n<table class=\"ltx_tabular ltx_centering ltx_guessed_headers ltx_align_middle\" id=\"S3.T1.1\">\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S3.T1.1.1\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_r ltx_border_tt\" id=\"S3.T1.1.1.2\">Method</th>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S3.T1.1.1.1\">(a)Params/\n</td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S3.T1.1.1.3\">(b)Storage/MB</td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S3.T1.1.1.4\">(c)GFLOPs</td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S3.T1.1.1.5\">(d)Time/ms</td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S3.T1.1.1.6\">(e)Prefer/%(top1)</td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S3.T1.1.1.7\">(f)Prefer/%(top3)</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T1.1.2.1\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_r ltx_border_t\" id=\"S3.T1.1.2.1.1\">PAMA</th>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T1.1.2.1.2\">35.389</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T1.1.2.1.3\">138.275</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T1.1.2.1.4\">89.802</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T1.1.2.1.5\">17.5</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T1.1.2.1.6\">0.13</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T1.1.2.1.7\">0.11</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T1.1.3.2\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_r\" id=\"S3.T1.1.3.2.1\">SANet</th>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.1.3.2.2\">20.911</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.1.3.2.3\">81.703</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.1.3.2.4\">66.924</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.1.3.2.5\">9.2</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.1.3.2.6\">0.06</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.1.3.2.7\">0.05</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T1.1.4.3\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_r\" id=\"S3.T1.1.4.3.1\">Adain</th>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.1.4.3.2\">7.01</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.1.4.3.3\">27.397</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.1.4.3.4\">47.459</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.1.4.3.5\">6.2</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.1.4.3.6\">0.06</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.1.4.3.7\">0.05</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T1.1.5.4\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_r\" id=\"S3.T1.1.5.4.1\">Micro</th>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.1.5.4.2\">0.472</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.1.5.4.3\">1.866</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.1.5.4.4\">2.765</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.1.5.4.5\">5.5</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.1.5.4.6\">0.06</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.1.5.4.7\">0.05</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T1.1.6.5\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_r\" id=\"S3.T1.1.6.5.1\">CTDP</th>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.1.6.5.2\">0.032</td>\n<td class=\"ltx_td ltx_align_center\" 
id=\"S3.T1.1.6.5.3\">0.15</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.1.6.5.4\">0.935</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.1.6.5.5\">5.5</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.1.6.5.6\">0.17</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.1.6.5.7\">0.15</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T1.1.7.6\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_r ltx_border_t\" id=\"S3.T1.1.7.6.1\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T1.1.7.6.1.1\">TFP(Ours)</span></th>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T1.1.7.6.2\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T1.1.7.6.2.1\">0.01</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T1.1.7.6.3\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T1.1.7.6.3.1\">0.05</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T1.1.7.6.4\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T1.1.7.6.4.1\">0.545</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T1.1.7.6.5\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T1.1.7.6.5.1\">3.1</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T1.1.7.6.6\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T1.1.7.6.6.1\">0.36</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T1.1.7.6.7\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T1.1.7.6.7.1\">0.48</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T1.1.8.7\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_bb ltx_border_r\" id=\"S3.T1.1.8.7.1\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T1.1.8.7.1.1\">TFP-L(Ours)</span></th>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S3.T1.1.8.7.2\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T1.1.8.7.2.1\">0.007</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S3.T1.1.8.7.3\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T1.1.8.7.3.1\">0.039</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S3.T1.1.8.7.4\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T1.1.8.7.4.1\">0.398</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S3.T1.1.8.7.5\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T1.1.8.7.5.1\">2.3</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S3.T1.1.8.7.6\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T1.1.8.7.6.1\">-</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S3.T1.1.8.7.7\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T1.1.8.7.7.1\">-</span></td>\n</tr>\n</tbody>\n</table>\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\">Table 1: </span><span class=\"ltx_text ltx_font_bold\" id=\"S3.T1.3.1\">Quantitative Comparison</span> with State-of-the-Art Methods. Storage space is measured within the PyTorch model. GFLOPs and time are measured when both content and style are 4K images, and tested on the NVIDIA 3090 (24GB) GPU.The best results are highlighted in bold. OOM: Out of memory.</figcaption>\n</figure>",
166
+ "capture": "Table 1: Quantitative Comparison with State-of-the-Art Methods. Storage space is measured within the PyTorch model. GFLOPs and time are measured when both content and style are 4K images, and tested on the NVIDIA 3090 (24GB) GPU.The best results are highlighted in bold. OOM: Out of memory."
167
+ }
168
+ },
169
+ "image_paths": {
170
+ "1(a)": {
171
+ "figure_path": "2306.16846v3_figure_1(a).png",
172
+ "caption": "(a) CTDP\nFigure 1: Ablation study of feature decoding consistency loss. \u2112m\u2062t\u2062vsubscript\u2112\ud835\udc5a\ud835\udc61\ud835\udc63\\mathcal{L}_{mtv}caligraphic_L start_POSTSUBSCRIPT italic_m italic_t italic_v end_POSTSUBSCRIPT ensures that shallow features in the shallow and fusion decoders\u2019 outputs yield similar results.",
173
+ "url": "http://arxiv.org/html/2306.16846v3/extracted/5324913/ctdptfp1.jpg"
174
+ },
175
+ "1(b)": {
176
+ "figure_path": "2306.16846v3_figure_1(b).png",
177
+ "caption": "(b) CTDP (Pseudo texture feature preset method)\nFigure 1: Ablation study of feature decoding consistency loss. \u2112m\u2062t\u2062vsubscript\u2112\ud835\udc5a\ud835\udc61\ud835\udc63\\mathcal{L}_{mtv}caligraphic_L start_POSTSUBSCRIPT italic_m italic_t italic_v end_POSTSUBSCRIPT ensures that shallow features in the shallow and fusion decoders\u2019 outputs yield similar results.",
178
+ "url": "http://arxiv.org/html/2306.16846v3/extracted/5324913/ctdptfp2.jpg"
179
+ },
180
+ "1(c)": {
181
+ "figure_path": "2306.16846v3_figure_1(c).png",
182
+ "caption": "(c) TFP\nFigure 1: Ablation study of feature decoding consistency loss. \u2112m\u2062t\u2062vsubscript\u2112\ud835\udc5a\ud835\udc61\ud835\udc63\\mathcal{L}_{mtv}caligraphic_L start_POSTSUBSCRIPT italic_m italic_t italic_v end_POSTSUBSCRIPT ensures that shallow features in the shallow and fusion decoders\u2019 outputs yield similar results.",
183
+ "url": "http://arxiv.org/html/2306.16846v3/extracted/5324913/ctdptfp3.jpg"
184
+ },
185
+ "2": {
186
+ "figure_path": "2306.16846v3_figure_2.png",
187
+ "caption": "Figure 2: Architecture illustration of the proposed CTDP. See Section 3 for details.",
188
+ "url": "http://arxiv.org/html/2306.16846v3/extracted/5324913/jiegou.jpg"
189
+ },
190
+ "3": {
191
+ "figure_path": "2306.16846v3_figure_3.png",
192
+ "caption": "Figure 3: Architecture illustration of the proposed CTDP. See Section 3 for details.",
193
+ "url": "http://arxiv.org/html/2306.16846v3/extracted/5324913/t.jpg"
194
+ },
195
+ "4": {
196
+ "figure_path": "2306.16846v3_figure_4.png",
197
+ "caption": "Figure 4: Architecture illustration of the proposed CTDP. See Section 3 for details.",
198
+ "url": "http://arxiv.org/html/2306.16846v3/extracted/5324913/wenlichayi.jpg"
199
+ },
200
+ "5": {
201
+ "figure_path": "2306.16846v3_figure_5.png",
202
+ "caption": "Figure 5: Architecture illustration of the proposed CTDP. See Section 3 for details.",
203
+ "url": "http://arxiv.org/html/2306.16846v3/extracted/5324913/noise.jpg"
204
+ },
205
+ "6": {
206
+ "figure_path": "2306.16846v3_figure_6.png",
207
+ "caption": "Figure 6: Visualization of feature maps for the first and third convolutions of three methods.",
208
+ "url": "http://arxiv.org/html/2306.16846v3/extracted/5324913/method.jpg"
209
+ },
210
+ "7": {
211
+ "figure_path": "2306.16846v3_figure_7.png",
212
+ "caption": "Figure 7: Quantitative Comparison with the state-of-the-art color and texture transfer methods using 1024 resolution input images. Due to the selection of many challenging style images with complex texture structures, it is best to zoom in to better observe artifact suppression and texture structure transfer.",
213
+ "url": "http://arxiv.org/html/2306.16846v3/extracted/5324913/duibi.jpg"
214
+ },
215
+ "8": {
216
+ "figure_path": "2306.16846v3_figure_8.png",
217
+ "caption": "Figure 8: Architecture illustration of the proposed CTDP. See Section 3 for details.",
218
+ "url": "http://arxiv.org/html/2306.16846v3/extracted/5324913/random.jpg"
219
+ },
220
+ "9(a)": {
221
+ "figure_path": "2306.16846v3_figure_9(a).png",
222
+ "caption": "(a) Full Model\nFigure 9: Ablation study of branch style loss and masked total variation loss to evaluate their effectiveness in suppressing texture and artifacts in color transfer tasks.",
223
+ "url": "http://arxiv.org/html/2306.16846v3/extracted/5324913/A1.jpg"
224
+ },
225
+ "9(b)": {
226
+ "figure_path": "2306.16846v3_figure_9(b).png",
227
+ "caption": "(b) w/o \u2112b\u2062ssubscript\u2112\ud835\udc4f\ud835\udc60\\mathcal{L}_{bs}caligraphic_L start_POSTSUBSCRIPT italic_b italic_s end_POSTSUBSCRIPT\nFigure 9: Ablation study of branch style loss and masked total variation loss to evaluate their effectiveness in suppressing texture and artifacts in color transfer tasks.",
228
+ "url": "http://arxiv.org/html/2306.16846v3/extracted/5324913/A2.jpg"
229
+ },
230
+ "9(c)": {
231
+ "figure_path": "2306.16846v3_figure_9(c).png",
232
+ "caption": "(c) w/o \u2112m\u2062t\u2062vsubscript\u2112\ud835\udc5a\ud835\udc61\ud835\udc63\\mathcal{L}_{mtv}caligraphic_L start_POSTSUBSCRIPT italic_m italic_t italic_v end_POSTSUBSCRIPT\nFigure 9: Ablation study of branch style loss and masked total variation loss to evaluate their effectiveness in suppressing texture and artifacts in color transfer tasks.",
233
+ "url": "http://arxiv.org/html/2306.16846v3/extracted/5324913/A3.jpg"
234
+ }
235
+ },
236
+ "validation": true,
237
+ "references": [
238
+ {
239
+ "1": {
240
+ "title": "Semantic style transfer and turning two-bit doodles\ninto fine artworks.",
241
+ "author": "Champandard, A.J., 2016.",
242
+ "venue": "arXiv preprint arXiv:1603.01768 .",
243
+ "url": null
244
+ }
245
+ },
246
+ {
247
+ "2": {
248
+ "title": "Stylebank: An explicit representation for neural\nimage style transfer, in: Proceedings of the IEEE\nconference on computer vision and pattern recognition, pp.\n1897\u20131906.",
249
+ "author": "Chen, D., Yuan, L., Liao,\nJ., Yu, N., Hua, G.,\n2017.",
250
+ "venue": null,
251
+ "url": null
252
+ }
253
+ },
254
+ {
255
+ "3": {
256
+ "title": "Diverse image style transfer via invertible\ncross-space mapping, in: 2021 IEEE/CVF International\nConference on Computer Vision (ICCV), IEEE Computer\nSociety. pp. 14860\u201314869.",
257
+ "author": "Chen, H., Zhao, L., Zhang,\nH., Wang, Z., Zuo, Z.,\nLi, A., Xing, W., Lu,\nD., 2021.",
258
+ "venue": null,
259
+ "url": null
260
+ }
261
+ },
262
+ {
263
+ "4": {
264
+ "title": "Towards ultra-resolution neural style transfer via\nthumbnail instance normalization, in: Proceedings of the\nAAAI Conference on Artificial Intelligence, pp. 393\u2013400.",
265
+ "author": "Chen, Z., Wang, W., Xie,\nE., Lu, T., Luo, P.,\n2022.",
266
+ "venue": null,
267
+ "url": null
268
+ }
269
+ },
270
+ {
271
+ "5": {
272
+ "title": "A learned representation for artistic style.",
273
+ "author": "Dumoulin, V., Shlens, J.,\nKudlur, M., 2016.",
274
+ "venue": "arXiv preprint arXiv:1610.07629 .",
275
+ "url": null
276
+ }
277
+ },
278
+ {
279
+ "6": {
280
+ "title": "Image style transfer using convolutional neural\nnetworks, in: Proceedings of the IEEE conference on\ncomputer vision and pattern recognition, pp. 2414\u20132423.",
281
+ "author": "Gatys, L.A., Ecker, A.S.,\nBethge, M., 2016.",
282
+ "venue": null,
283
+ "url": null
284
+ }
285
+ },
286
+ {
287
+ "7": {
288
+ "title": "Arbitrary style transfer with deep feature\nreshuffle, in: Proceedings of the IEEE Conference on\nComputer Vision and Pattern Recognition, pp. 8222\u20138231.",
289
+ "author": "Gu, S., Chen, C., Liao,\nJ., Yuan, L., 2018.",
290
+ "venue": null,
291
+ "url": null
292
+ }
293
+ },
294
+ {
295
+ "8": {
296
+ "title": "Mobilenets: Efficient convolutional neural networks\nfor mobile vision applications.",
297
+ "author": "Howard, A.G., Zhu, M.,\nChen, B., Kalenichenko, D.,\nWang, W., Weyand, T.,\nAndreetto, M., Adam, H.,\n2017.",
298
+ "venue": "arXiv preprint arXiv:1704.04861 .",
299
+ "url": null
300
+ }
301
+ },
302
+ {
303
+ "9": {
304
+ "title": "Arbitrary style transfer in real-time with adaptive\ninstance normalization, in: Proceedings of the IEEE\ninternational conference on computer vision, pp.\n1501\u20131510.",
305
+ "author": "Huang, X., Belongie, S.,\n2017.",
306
+ "venue": null,
307
+ "url": null
308
+ }
309
+ },
310
+ {
311
+ "10": {
312
+ "title": "Dynamic instance normalization for arbitrary style\ntransfer, in: Proceedings of the AAAI Conference on\nArtificial Intelligence, pp. 4369\u20134376.",
313
+ "author": "Jing, Y., Liu, X., Ding,\nY., Wang, X., Ding, E.,\nSong, M., Wen, S., 2020.",
314
+ "venue": null,
315
+ "url": null
316
+ }
317
+ },
318
+ {
319
+ "11": {
320
+ "title": "Stroke controllable fast style transfer with adaptive\nreceptive fields, in: Proceedings of the European\nConference on Computer Vision (ECCV), pp. 238\u2013254.",
321
+ "author": "Jing, Y., Liu, Y., Yang,\nY., Feng, Z., Yu, Y.,\nTao, D., Song, M., 2018.",
322
+ "venue": null,
323
+ "url": null
324
+ }
325
+ },
326
+ {
327
+ "12": {
328
+ "title": "Perceptual losses for real-time style transfer and\nsuper-resolution, in: Computer Vision\u2013ECCV 2016: 14th\nEuropean Conference, Amsterdam, The Netherlands, October 11-14, 2016,\nProceedings, Part II 14, Springer. pp.\n694\u2013711.",
329
+ "author": "Johnson, J., Alahi, A.,\nFei-Fei, L., 2016.",
330
+ "venue": null,
331
+ "url": null
332
+ }
333
+ },
334
+ {
335
+ "13": {
336
+ "title": "Adam: A method for stochastic optimization.",
337
+ "author": "Kingma, D.P., Ba, J., 2014.",
338
+ "venue": "arXiv preprint arXiv:1412.6980 .",
339
+ "url": null
340
+ }
341
+ },
342
+ {
343
+ "14": {
344
+ "title": "Style transfer by relaxed optimal transport and\nself-similarity, in: Proceedings of the IEEE/CVF\nConference on Computer Vision and Pattern Recognition, pp.\n10051\u201310060.",
345
+ "author": "Kolkin, N., Salavon, J.,\nShakhnarovich, G., 2019.",
346
+ "venue": null,
347
+ "url": null
348
+ }
349
+ },
350
+ {
351
+ "15": {
352
+ "title": "Combining markov random fields and convolutional\nneural networks for image synthesis, in: Proceedings of\nthe IEEE conference on computer vision and pattern recognition, pp.\n2479\u20132486.",
353
+ "author": "Li, C., Wand, M., 2016a.",
354
+ "venue": null,
355
+ "url": null
356
+ }
357
+ },
358
+ {
359
+ "16": {
360
+ "title": "Precomputed real-time texture synthesis with\nmarkovian generative adversarial networks, in: Computer\nVision\u2013ECCV 2016: 14th European Conference, Amsterdam, The Netherlands,\nOctober 11-14, 2016, Proceedings, Part III 14,\nSpringer. pp. 702\u2013716.",
361
+ "author": "Li, C., Wand, M., 2016b.",
362
+ "venue": null,
363
+ "url": null
364
+ }
365
+ },
366
+ {
367
+ "17": {
368
+ "title": "Diversified texture synthesis with feed-forward\nnetworks, in: Proceedings of the IEEE conference on\ncomputer vision and pattern recognition, pp. 3920\u20133928.",
369
+ "author": "Li, Y., Fang, C., Yang,\nJ., Wang, Z., Lu, X.,\nYang, M.H., 2017a.",
370
+ "venue": null,
371
+ "url": null
372
+ }
373
+ },
374
+ {
375
+ "18": {
376
+ "title": "Universal style transfer via feature transforms.",
377
+ "author": "Li, Y., Fang, C., Yang,\nJ., Wang, Z., Lu, X.,\nYang, M.H., 2017b.",
378
+ "venue": "Advances in neural information processing systems\n30.",
379
+ "url": null
380
+ }
381
+ },
382
+ {
383
+ "19": {
384
+ "title": "Demystifying neural style transfer.",
385
+ "author": "Li, Y., Wang, N., Liu,\nJ., Hou, X., 2017c.",
386
+ "venue": "arXiv preprint arXiv:1701.01036 .",
387
+ "url": null
388
+ }
389
+ },
390
+ {
391
+ "20": {
392
+ "title": "Microsoft coco: Common objects in context, in:\nComputer Vision\u2013ECCV 2014: 13th European Conference,\nZurich, Switzerland, September 6-12, 2014, Proceedings, Part V 13,\nSpringer. pp. 740\u2013755.",
393
+ "author": "Lin, T.Y., Maire, M.,\nBelongie, S., Hays, J.,\nPerona, P., Ramanan, D.,\nDoll\u00e1r, P., Zitnick, C.L.,\n2014.",
394
+ "venue": null,
395
+ "url": null
396
+ }
397
+ },
398
+ {
399
+ "21": {
400
+ "title": "Progressive attentional manifold alignment for\narbitrary style transfer, in: Proceedings of the Asian\nConference on Computer Vision, pp. 3206\u20133222.",
401
+ "author": "Luo, X., Han, Z., Yang,\nL., 2022.",
402
+ "venue": null,
403
+ "url": null
404
+ }
405
+ },
406
+ {
407
+ "22": {
408
+ "title": "Arbitrary style transfer with style-attentional\nnetworks, in: proceedings of the IEEE/CVF conference on\ncomputer vision and pattern recognition, pp. 5880\u20135888.",
409
+ "author": "Park, D.Y., Lee, K.H.,\n2019.",
410
+ "venue": null,
411
+ "url": null
412
+ }
413
+ },
414
+ {
415
+ "23": {
416
+ "title": "Wiki art gallery, inc.: A case for critical\nthinking.",
417
+ "author": "Phillips, F., Mackintosh, B.,\n2011.",
418
+ "venue": "Issues in Accounting Education\n26, 593\u2013608.",
419
+ "url": null
420
+ }
421
+ },
422
+ {
423
+ "24": {
424
+ "title": "Stable and controllable neural texture synthesis and\nstyle transfer using histogram losses.",
425
+ "author": "Risser, E., Wilmot, P.,\nBarnes, C., 2017.",
426
+ "venue": "arXiv preprint arXiv:1701.08893 .",
427
+ "url": null
428
+ }
429
+ },
430
+ {
431
+ "25": {
432
+ "title": "Going deeper in spiking neural networks: Vgg and\nresidual architectures.",
433
+ "author": "Sengupta, A., Ye, Y.,\nWang, R., Liu, C., Roy,\nK., 2019.",
434
+ "venue": "Frontiers in neuroscience 13,\n95.",
435
+ "url": null
436
+ }
437
+ },
438
+ {
439
+ "26": {
440
+ "title": "Neural style transfer via meta networks, in:\nProceedings of the IEEE Conference on Computer Vision and\nPattern Recognition, pp. 8061\u20138069.",
441
+ "author": "Shen, F., Yan, S., Zeng,\nG., 2018.",
442
+ "venue": null,
443
+ "url": null
444
+ }
445
+ },
446
+ {
447
+ "27": {
448
+ "title": "Color and texture dual pipeline lightweight style\ntransfer.",
449
+ "author": "ShiQi Jiang, JunJie Kang, Y.L., 2023a.",
450
+ "venue": "arXiv preprint arXiv:2310.01321 .",
451
+ "url": null
452
+ }
453
+ },
454
+ {
455
+ "28": {
456
+ "title": "Degree-controllable lightweight fast style transfer\nwith detail attention-enhanced.",
457
+ "author": "ShiQi Jiang, JunJie Kang, Y.L., 2023b.",
458
+ "venue": "arXiv preprint arXiv:2306.16846 .",
459
+ "url": null
460
+ }
461
+ },
462
+ {
463
+ "29": {
464
+ "title": "Dual pipeline style transfer with input distribution\ndifferentiation.",
465
+ "author": "ShiQi Jiang, JunJie Kang, Y.L., 2023c.",
466
+ "venue": "arXiv preprint arXiv:2311.05432 .",
467
+ "url": null
468
+ }
469
+ },
470
+ {
471
+ "30": {
472
+ "title": "Texture networks: Feed-forward synthesis of textures\nand stylized images.",
473
+ "author": "Ulyanov, D., Lebedev, V.,\nVedaldi, A., Lempitsky, V.,\n2016a.",
474
+ "venue": "arXiv preprint arXiv:1603.03417 .",
475
+ "url": null
476
+ }
477
+ },
478
+ {
479
+ "31": {
480
+ "title": "Instance normalization: The missing ingredient for\nfast stylization.",
481
+ "author": "Ulyanov, D., Vedaldi, A.,\nLempitsky, V., 2016b.",
482
+ "venue": "arXiv preprint arXiv:1607.08022 .",
483
+ "url": null
484
+ }
485
+ },
486
+ {
487
+ "32": {
488
+ "title": "Collaborative distillation for ultra-resolution\nuniversal style transfer, in: Proceedings of the\nIEEE/CVF conference on computer vision and pattern recognition, pp.\n1860\u20131869.",
489
+ "author": "Wang, H., Li, Y., Wang,\nY., Hu, H., Yang, M.H.,\n2020.",
490
+ "venue": null,
491
+ "url": null
492
+ }
493
+ },
494
+ {
495
+ "33": {
496
+ "title": "Multimodal transfer: A hierarchical deep\nconvolutional neural network for fast artistic style transfer, in:\nProceedings of the IEEE conference on computer vision and\npattern recognition, pp. 5239\u20135247.",
497
+ "author": "Wang, X., Oxholm, G.,\nZhang, D., Wang, Y.F.,\n2017.",
498
+ "venue": null,
499
+ "url": null
500
+ }
501
+ },
502
+ {
503
+ "34": {
504
+ "title": "Divswapper: towards diversified patch-based arbitrary\nstyle transfer.",
505
+ "author": "Wang, Z., Zhao, L., Chen,\nH., Zuo, Z., Li, A.,\nXing, W., Lu, D., 2021.",
506
+ "venue": "arXiv preprint arXiv:2101.06381 .",
507
+ "url": null
508
+ }
509
+ },
510
+ {
511
+ "35": {
512
+ "title": "Microast: Towards super-fast ultra-resolution\narbitrary style transfer.",
513
+ "author": "Wang, Z., Zhao, L., Zuo,\nZ., Li, A., Chen, H.,\nXing, W., Lu, D., 2022.",
514
+ "venue": "arXiv preprint arXiv:2211.15313 .",
515
+ "url": null
516
+ }
517
+ },
518
+ {
519
+ "36": {
520
+ "title": "Artistic style discovery with independent\ncomponents, in: Proceedings of the IEEE/CVF Conference\non Computer Vision and Pattern Recognition, pp.\n19870\u201319879.",
521
+ "author": "Xie, X., Li, Y., Huang,\nH., Fu, H., Wang, W.,\nGuo, Y., 2022.",
522
+ "venue": null,
523
+ "url": null
524
+ }
525
+ },
526
+ {
527
+ "37": {
528
+ "title": "Metastyle: Three-way trade-off among speed,\nflexibility, and quality in neural style transfer, in:\nProceedings of the AAAI Conference on Artificial\nIntelligence, pp. 1254\u20131261.",
529
+ "author": "Zhang, C., Zhu, Y., Zhu,\nS.C., 2019.",
530
+ "venue": null,
531
+ "url": null
532
+ }
533
+ },
534
+ {
535
+ "38": {
536
+ "title": "Multi-style generative network for real-time\ntransfer, in: Proceedings of the European Conference on\nComputer Vision (ECCV) Workshops, pp. 0\u20130.",
537
+ "author": "Zhang, H., Dana, K., 2018a.",
538
+ "venue": null,
539
+ "url": null
540
+ }
541
+ },
542
+ {
543
+ "39": {
544
+ "title": "Multi-style generative network for real-time\ntransfer, in: Proceedings of the European Conference on\nComputer Vision (ECCV) Workshops, pp. 0\u20130.",
545
+ "author": "Zhang, H., Dana, K., 2018b.",
546
+ "venue": null,
547
+ "url": null
548
+ }
549
+ }
550
+ ],
551
+ "url": "http://arxiv.org/html/2306.16846v3"
552
+ }
20240101/2307.12083v3.json ADDED
@@ -0,0 +1,618 @@
1
+ {
2
+ "title": "Active Control of Flow over Rotating Cylinder by Multiple Jets using Deep Reinforcement Learning",
3
+ "abstract": "The real power of artificial intelligence appears in reinforcement learning, which is more sophisticated due to its dynamic nature. Rotation and injection are some of the proven ways in active flow control for drag reduction on blunt bodies. In this paper, rotation will be added to the cylinder alongside the deep reinforcement learning (DRL) algorithm, which uses multiple controlled jets to reach the maximum possible drag suppression. Characteristics of the DRL code, including control parameters, their limitations, and optimization of the DRL network for use with rotation will be presented. This work will focus on optimizing the number and positions of the jets, the sensors location, and the maximum allowed flow rate to jets in the form of the maximum allowed flow rate of each actuation and the total number of them per episode. It is found that combining the rotation and DRL is promising since it suppresses the vortex shedding, stabilizes the Karman vortex street, and reduces the drag coefficient by up to . Also, it will be shown that having more sensors at more locations is not always a good choice and the sensor number and location should be determined based on the need of the user and corresponding configuration. Also, allowing the agent to have access to higher flow rates, mostly reduces the performance, except when the cylinder rotates. In all cases, the agent can keep the lift coefficient at a value near zero, or stabilize it at a smaller number.",
4
+ "sections": [
5
+ {
6
+ "section_id": "1",
7
+ "parent_section_id": null,
8
+ "section_name": "Introduction",
9
+ "text": "Active flow control is a long-standing topic in fluid mechanics. By using different kinds of actuators, it alters the flow behavior to improve the aerodynamics/hydrodynamic performance of a flow system. One such usage is for drag reduction, which is of importance because annually a large amount of energy is being consumed to overcome the drag forces encountered in many engineering applications such as ground, air, and sea vehicles [1 ###reference_1###], For example, in ground vehicles, the aerodynamics drag has a part of approximately in the total fuel consumption [2 ###reference_2###]. Thus, achieving effective and stable flow control for drag force reduction is important. Unfortunately, due to the combination of non-linearity, time-dependence, and high-dimensionality intrinsic to the Navier-Stokes equations and the computational power needed for calculations, finding efficient strategies for performing active flow control is highly sophisticated [3 ###reference_3###, 4 ###reference_4###].\nActive flow control has been discussed in simulations using reduced-order models and harmonic forcing [5 ###reference_5###], direct simulations coupled with the adjoint method [6 ###reference_6###], or linearized models [7 ###reference_7###], and mode tracking methods [8 ###reference_8###]. Realistic actuation mechanisms such as acoustic excitation [9 ###reference_9###], synthetic jets [9 ###reference_9###], plasma actuators [10 ###reference_10###, 11 ###reference_11###], suction mechanisms [12 ###reference_12###], transverse motion [13 ###reference_13###], periodic oscillations [14 ###reference_14###], oscillating foils [15 ###reference_15###], air jets [16 ###reference_16###], and the Lorentz forces in conductive media [16 ###reference_16###], are also discussed in detail, as well as limitations imposed by real-world systems[17 ###reference_17###].\nSimilar experimental work has been performed, to control either the cavitation instability [18 ###reference_18###], the vortex flow behind a conical fore-body [19 ###reference_19###], or the flow separation over a circular cylinder [20 ###reference_20###]. While most of the literature has focused on complex, closed-loop control methods, some open-loop methods also have been discussed, both in simulations [21 ###reference_21###] and in experiments [22 ###reference_22###]. Despite the remarkable results from active flow control, achieving effective and stable flow control strategies is a highly sophisticated process, while there are candidates for both sensors and actuators to be used in such systems as the discussed papers have shown, finding algorithms for using those measurements and possible actuations to perform active flow control efficiently and effectively, is challenging and resources heavy. In addition, challenges such as disturbances inherent to real-world applications, imperfections in the building of the sensors or actuators, and adaptivity to the ever-changing external conditions are also parts of the problem.\nThe machine learning algorithms offer a completely different paradigm from the old active flow control methods. There are three different machine learning sub-domains, namely, supervised learning which uses historically labeled data, unsupervised learning which uses historically unlabeled data to find the pattern of the said data, and reinforcement learning which typically has a very different approach compared to the supervised and unsupervised learning as it does not rely on a historical data. 
Instead, it relies on a system of an agent, environment, observations, and rewards to teach itself the best way to accomplish a task through self-exercise.\nIn a typical reinforcement learning algorithm, there is an agent that works based on a specific policy, interacts with an environment in a closed-loop fashion, and receives a reward corresponding to an action that the agent has taken in/on that environment. Then the agent learns from experiences/rewards to take actions that maximizes the expected result/cumulative reward. This means that it can reach quicker solutions without prior information about the physics of the problem [3 ###reference_3###]. Since the data-driven and the learning-based methods are suitable to be used in non-linear, high-dimensional, and complex problems, they are therefore, also suitable for performing active Flow Control [4 ###reference_4###]. More specifically, such promising methods include deep reinforcement learning.\nIn 2018 three papers[23 ###reference_23###, 24 ###reference_24###, 25 ###reference_25###] showed the success of the DRL method in performing AFC through using recurrent neural networks (RNN), deep neural networks (DRL) which could reduce drag coefficient around a cylinder using two jets by , and deep Q-networks which used DRL to approximate Q-function and was able to control the flow in micro-channels as good as a human operator. In 2019 one paper[26 ###reference_26###] used a multi-environment approach to speed up DRL learning and enable it to use more than one core per learning. Another paper[27 ###reference_27###] used DRL to achieve gliding with either minimum energy expenditure or the fastest time of arrival, at a predetermined location. They found that model-free reinforcement learning leads to more robust gliding than model-based optimal control strategies with a modest additional computational cost. They also demonstrate that the gliders with DRL can generalize their strategies to reach the target location from previously unseen starting positions. In 2020, Hongwei et al.[28 ###reference_28###], used the same method for controlling four jets to reduce the drag force around a cylinder. Reynolds numbers of 100, 200, 300, and 400 were used, and they showed that the DRL-controlled jets can reduce the drag coefficient by , , , and , respectively. Another work[29 ###reference_29###] used DRL to control two rotating cylinders behind a main cylinder as actuators, in a 2D flow field with a Reynolds number of 240, for drag reduction. They reported that the counter-rotating small cylinder pair were able to stabilize the periodic shedding of the main cylinder wake. Also, there have been cases that used RL for active flow control in turbulent flows, such work[30 ###reference_30###] was done for Re number of 10,160, and for flow control of a bluff cylindrical body in cross-flow, using two small rotating control cylinders in both experimental and simulation environments. The agent was able to reduce drag force by up to or reach another specified optimum point. Although it should be noted that in simulations, the current bottleneck is the computation limitations, as it would take a 64 cores Intel E5-2670 more than three weeks to finish 500 episodes. Another example is FengRen et al.[31 ###reference_31###] which used DRL for controlling jets around a cylinder to reduce the drag force in a flow with a Reynolds number of 1000 in simulation. Also, they were able to achieve a drag reduction of around . 
In 2021 another paper[32 ###reference_32###] focused on introducing the S-PPO-CMA algorithm which it\u2019s usage was to discard unnecessary or redundant sensor information.\nZheng et al.[33 ###reference_33###] compared the active learning framework to the reinforcement learning framework for suppressing vortex-induced vibrations. They showed that active learning can reduce the vibration amplitude of the cylinder by , but RL can reduce it by . Another work used DRL to achieve hydrodynamic stealth around bluff bodies [34 ###reference_34###]. They used a group of windward-suction-leeward-blowing (WSLB) actuators and were able to reduce the deficit in streamwise velocity by . There have been instances where DRL was used for open-loop control, such example is Ghraieb et al.[35 ###reference_35###], They introduced a single-step proximal policy optimization (PPO) [36 ###reference_36###], a \u201cdegenerate\u201d version of the PPO algorithm, intended for situations where the optimal policy to be learned by a neural network does not depend on state, as is notably the case in open-loop control problems. The approach proved relevant to map the optimum positions for placement of a small control cylinder in an attempt to reduce drag in laminar and turbulent cylinder flows. It was able to reduce the drag force of a square cylinder at Reynolds numbers in the range of a few thousand by .\nIn 2022 a paper[12 ###reference_12###] used DRL for AFC to control disturbed flow over a NACA 0012 airfoil under weak turbulent conditions of Re = 3000. When using constant inlet velocity, the agent was able to reduce drag by and enhance lift by . Later, the pulsation at two different frequencies and their combination were applied for inlet velocity conditions, where the airfoil wake became more difficult to suppress dynamically and precisely; the reward function additionally contained the goal of saving the energy consumed by the jets, in this case, the DRL agent still was able to find a proper control strategy, where significant drag reduction and lift stabilization were achieved, and the agent was able to save the energy consumption of the synergetic jets by . Yu-Fei et al.[37 ###reference_37###] used DRL for performing AFC on ellipse, square, hexagon, and diamond models in laminar flow with Re number of 80 and were able to reduce drag by , , , and , (with 0 as the angle of attack) respectively. They also tested different angles of attack for ellipse, in which case the DRL agent was able to reduce the drag force for 5\u00b0, 10\u00b0, 15\u00b0 and 20\u00b0 AOA by , , , and , respectively. Tests were done for Reynolds numbers 160 and 320 with AOA of 0, in this case, the drag force reduction reached and , respectively.\nDeep reinforcement learning is also used for shape optimization. One notable paper[38 ###reference_38###] used DRL for shape optimization as their main focus and introduced the SL-DDPG algorithm which was able to beat other trail-and-error methods and achieve a lift-to-drag ratio of 3.53. There are more papers on shape optimization which were published in 2021[39 ###reference_39###] and 2022[40 ###reference_40###]. 
In 2022 Xie et al.[41 ###reference_41###] used DRL for actively controlling heaving plate breakwaters and Hardman et al.[42 ###reference_42###] used DRL for the Manipulation of free-floating objects using Faraday flows in an experimental environment.\nAll papers showed a good drag reduction and all of this is possible because a breakthrough in RL research has been achieved after the integration between reinforcement learning and deep neural networks (DNNs), for what is called deep reinforcement learning (DRL). Artificial Neural networks are inspired by a simplification of biological neurons in an attempt to reproduce in machines some of the features that are believed to be at the origin of the intelligent thinking of the brain, i.e. biological neuron through mathematics, perceptron models [43 ###reference_43###], and it is called the deep neural network when two or more of such networks are connected.\nThe main concept involves conducting computations through a group of basic processing units known as artificial neurons. Each neuron\u2019s output value is calculated by a non-linear function of the sum of its inputs. The connection between each neuron is called the edge. Neurons and edges typically have a weight, and as learning progresses, this weight is adjusted through an algorithm. This weight changes the strength of the signal at each edge. As an example, in supervised learning, these weights are tuned using algorithms such as stochastic gradient descent in order to minimize the cost function [44 ###reference_44###]. Considering the effectiveness of this method, It is theoretically possible for artificial neural networks to solve any problem as they are universal approximators. A feed-forward neural network with a non-linear activation function can fit any function with high accuracy [45 ###reference_45###]. This means that ANNs have the potential to be applied to virtually any problem or phenomenon that can be represented mathematically. However, the challenge in designing the ANNs, as well as developing the algorithms that train and utilize them, remains. The mentioned problems are currently an active area of research.\nIn the present work, we apply the deep reinforcement learning algorithm to an active flow control problem. The proximal policy optimization (PPO) method was used together with two 512 fully Connected artificial neural networks (all neurons of each layer are connected to the other layer) to control synthetic jets located on the walls of a cylinder. The agent interacts with an environment that uses FEnics to simulate the flow and actions taken by the agent. As rotation is a drag reduction method in AFC of fluid mechanics[46 ###reference_46###, 47 ###reference_47###], later on, we added rotation to the cylinder and reported the DRL limits and behavior in such situations. and its behavior. Computing the optimum number of jets and their locations, finding the number and the location of sensors needed for the agent to have an effective observation of the flow field, adding rotation to the cylinder, and observing the DRL\u2019s effectiveness in lowering the drag in this situation, its behavior, and limits, suggesting effective control parameters for the jet configurations with and without the cylinder rotation, and comparison of the best-performing configuration with a researched configuration will be examined. Such results have not been investigated by other researchers yet."
10
+ },
11
+ {
12
+ "section_id": "2",
13
+ "parent_section_id": null,
14
+ "section_name": "Methodology",
15
+ "text": "In this section, we will summarize the methodology for performing the numerical simulations and details of the DRL algorithm."
16
+ },
17
+ {
18
+ "section_id": "2.1",
19
+ "parent_section_id": "2",
20
+ "section_name": "Simulation environment",
21
+ "text": "The simulation\u2019s geometry is based on the 2D test case developed by Sch\u00e4fer et al.[48 ###reference_48###]. It consists of a cylinder with a non-dimensional diameter of D = 1, situated in a box with a total non-dimensional length of L = 22 (along the X-axis) and a height of H = 4.1 (along the Y-axis). The coordinate system\u2019s origin is located at the cylinder\u2019s center, which is 0.05 off-axis in the y direction. The geometry is shown in figure 1 ###reference_###. The inflow profile formula follows a parabolic function which is expressed as 1 ###reference_###:\nThe no-slip boundary condition is applied to the top and bottom walls, as well as the solid walls of the cylinder. In the case of rotation, the speeds are applied over the boundary of the cylinder\u2019s wall. On the right side of the domain, an outflow boundary condition is enforced. The Reynolds number is calculated using the average velocity magnitude and the diameter of the cylinder , with being the kinematic viscosity, and . To perform computations, an unstructured mesh has been created using Gmsh [49 ###reference_49###]. This mesh is made up of triangular elements and is refined around the cylinder. The simulation utilizes a consistent non-dimensional time interval of . The instantaneous drag force on the cylinder is computed as follows:\nThis equation involves the Cauchy stress tensor (represented by ), the unit vector normal to the outer cylinder surface (represented by ), and the vector , which has a value of (1,0). From this equation, the drag force can be normalized into the drag coefficient:\nAnd the lift is calculated as follows:\n###figure_1### The IPCS method, developed by Goda [50 ###reference_50###], is used to solve the Navier-Stokes equations in a segregated manner, and the non-linear term is treated explicitly in this method. The FEniCS framework is used to implement the finite-element method for spatial discretization [51 ###reference_51###]. A maximum of six jets (numbered 0 to 5) normal to the cylinder wall is implemented on the surface of the cylinder, at angles to relative to the flow direction. The control of these jets is achieved by adjusting their non-dimensional mass flow rates, denoted by , where i ranges from 0 to 5. These flow rates are set by a parabolic velocity profile that reaches zero at the edges of the jets. Additionally, the width of them is .\nFurthermore, the control scheme is arranged to ensure that the overall mass flow rate introduced by the synthetic jets is zero, meaning that the sum of all the mass flow rates ( to ) is equivalent to zero. This particular condition for the synthetic jets is preferred because it is more ideal than a situation where the mass flow rate is either added or subtracted from the flow. In total, we ran 34 simulations (excluding the validation and test simulations). To summarize the results of these simulations, we grouped them based on the jet positions as shown in the table 1 ###reference_###.\n###figure_2###"
22
+ },
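Since the inline formulas of section 2.1 were lost in this extraction, the following minimal Python sketch illustrates the parabolic inflow profile and the force normalization described above. The peak inflow velocity `U_MAX` and the density `RHO` are assumed values (the standard choices for the Schäfer benchmark), not numbers stated in the text.

```python
import numpy as np

# Non-dimensional channel height and cylinder diameter, as given in the text.
H = 4.1
D = 1.0
RHO = 1.0      # assumed non-dimensional fluid density
U_MAX = 1.5    # assumed peak inflow velocity (not stated in this excerpt)
U_BAR = 2.0 / 3.0 * U_MAX   # mean of a parabolic profile is 2/3 of its peak

def inflow_velocity(y):
    """Parabolic inflow profile, zero at both walls; y measured from the bottom wall."""
    return 4.0 * U_MAX * y * (H - y) / H**2

def drag_coefficient(f_d):
    """Normalize the integrated streamwise force into C_D = 2*F_D / (RHO * U_BAR**2 * D)."""
    return 2.0 * f_d / (RHO * U_BAR**2 * D)

def lift_coefficient(f_l):
    """Normalize the integrated transverse force into C_L = 2*F_L / (RHO * U_BAR**2 * D)."""
    return 2.0 * f_l / (RHO * U_BAR**2 * D)
```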
23
+ {
24
+ "section_id": "2.1.1",
25
+ "parent_section_id": "2.1",
26
+ "section_name": "2.1.1 The network and reinforcement learning framework",
27
+ "text": "Here Deep reinforcement learning (DRL) agent sees the simulation as another environment to interact with through three channels namely: observation, action, and reward. Here, the reward is the time-averaged drag coefficient calculated by simulation and provided by the environment then punished by the averaged lift coefficient.\nThe DRL uses this limited information to train a deep neural network. The network discovers a closed-loop control strategy by observing each time step and deciding on the action, which in this case is the flow rate of the jets. The goal is to maximize the reward, which is the suppression of drag force.\nAs stated before, the DRL agent uses the proximal policy optimization [36 ###reference_36###] method. The proximal policy optimization is a type of reinforcement learning algorithm that falls under the category of policy gradient methods. There are several reasons why this method was chosen. For one, it\u2019s mathematically less complex, and as a result, it is faster than other methods like the trust region policy techniques [36 ###reference_36###]. Additionally, it requires less hyperparameter tuning and is easier to set up. It\u2019s also well-suited for problems that involve continuous control, which sets it apart from the deep Q-learning [52 ###reference_52###] and its variations [53 ###reference_53###].\nThe PPO method follows an episode-based approach, where it learns from actively controlling for a limited period of time before analyzing the results and continuing with a new episode. In our case, we initially run the simulation without active control until an unsteady wake is developed at approximately 5 seconds. Then this state is saved and serves as the starting point for each subsequent learning episode. The reward function, represented as , is calculated as follows:\nThe goal of the agent is to reduce drag while mitigating the lift fluctuation. and are the drag and lift coefficients, respectively. Here indicates an average over a typical vortex shedding period.\nThere are several benefits in using this particular reward function rather than just the instantaneous drag coefficient. Firstly, by averaging values over one vortex shedding cycle, the reward function becomes less variable, which improves learning speed and stability. Secondly, a penalization term based on the lift coefficient is necessary to prevent the network from cheating. Without this penalization, the ANN can modify the flow configuration to achieve a larger drag reduction, but at the expense of a large induced lift, which can be harmful in practical applications.\nWe also used Eq. 7 ###reference_### to control instantaneous actuation. This is used so that the agent won\u2019t set a high minus flow rate (suction) and instantly after it, a high plus flow rate (blow) as this case is not possible in reality.\nThe artificial neural network (ANN) used in this study comprises of two dense layers containing 512 fully connected neurons each, in addition to the necessary layers for data acquisition from the probes and data generation for jets. The configuration of this network was determined through trial and error. Larger networks were found to be less effective due to their increased difficulty in training and longer run times Additionally, the results of the validation showed that utilizing larger networks does not yield any improvements in terms of performance. A learning rate of 1e-3, a sub-sampling fraction of 0.2, and a batch size of 20 are used. 
Our code is based on the original paper of Rebault et al.[24 ###reference_24###] with some tweaks and changes, as we started our work right after their publication.\nAs for the reported data after training, in most cases, the simulation of flow needed more than 8000 steps (each step is ), for the controlled flow to stabilize. Also since some cases needed more than 1800 epochs to find a suitable strategy, we ran all the tests with at least 2100 epochs. These points were not reported before and were ignored in previous publications although they are very important."
28
+ },
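A hedged sketch of the reward of Eq. 5 and the actuation smoothing of Eq. 7, as described in the section above. The lift-penalty weight `lift_penalty` and the smoothing factor `alpha` are assumptions; the excerpt does not state their values.

```python
import numpy as np

def reward(cd_history, cl_history, lift_penalty=0.2):
    """Eq. 5 in spirit: negative drag coefficient averaged over one shedding
    period, penalized by the magnitude of the averaged lift coefficient.
    lift_penalty = 0.2 is an assumed weight."""
    return -np.mean(cd_history) - lift_penalty * abs(np.mean(cl_history))

def smooth_actuation(q_raw, q_prev, alpha=0.1):
    """Eq. 7 in spirit: the applied flow rate moves only a fraction alpha toward
    the agent's raw output, so strong suction cannot flip to strong blowing in
    a single step. alpha = 0.1 is an assumed value."""
    return q_prev + alpha * (q_raw - q_prev)
```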
29
+ {
30
+ "section_id": "3",
31
+ "parent_section_id": null,
32
+ "section_name": "Validation/verification",
33
+ "text": "In this section, we discuss the validations done for the flow solver, the time-step independence, the mesh convergence, the sensors\u2019 numbers and positions, and finally the ANN network\u2019s depth. Validation of the flow solver was performed by observing the drag coefficient and the Strouhal number , where f is the vortex shedding frequency calculated by performing FFT on lift coefficients. As it can be seen in table 2 ###reference_### our and Strouhal number were in agreement with the data of Tang et al.[28 ###reference_28###] and Sch\u00e4fer et al.[48 ###reference_48###] ( difference in St number. using the main mesh as discussed below). Also, a comparison with the control model of Rebault et al.[24 ###reference_24###] was done, and they were in agreement with each other (considering the two decimal number that was reported).\n###figure_3### In this work, a non-dimensional constant numerical time-step of is used. To confirm that this time-step is small enough, we compared it to and . As shown in table 3 ###reference_### the maximum difference is at which is acceptable if we factor in the calculation time difference.\n###figure_4### The mesh-independence study for this benchmark was done with the meshes of three different resolutions, as can be seen in table 4 ###reference_###. The resolution of the mesh which we used in this paper, is fine enough for the simulation and fast enough for a CPU with 16 threads. Figure 2 ###reference_### compares the line chart of the drag coefficients with the difference between the main and fine being only .\nThe overall mesh and a snapshot of the mesh near walls used in simulations are shown in figure 3 ###reference_###.\n###figure_5### ###figure_6### ###figure_7### For the main work, we used a total number of 151 sensors and tested up to 600 sensors in different locations in the channel. We performed these tests on configuration III with 200 actuations per episode, no spin, and the maximum allowed flow rate of each jet actuation set to 1. The numbers in the table were extracted after the stabilization of the controlled flow (except the averaged ), (figure 4 ###reference_###) and we ran each test multiple times with 2100 episodes and higher. As it can be seen in table 5 ###reference_### more sensors in other words more data for the agent does not always lead to better performance. We believe that this fact is because when we add data of the far reach of the cylinder, our network tries to over-control those locations. So, the performance near the cylinder gets worse.\nWe should also note that the most efficient number and position of sensors are different for each configuration with different parameters. One such example is when we use 123 sensors from 0.075 until 0.25 (the same configuration as the tests presented here). In this case, goes up from 2.2047 to 2.2240. This is an increase of compared to the 151 sensors, which in some cases may be acceptable. But it is not acceptable for configuration III with 1000 actuations per episode/epoch, 0.2 spin, and maximum allowed flow rate of each jet actuation set to 2, because in this case our increases from 1.6288 to 1.8235 which is an difference compared to the main sensor number and position. Hence, we opted to use the main sensor configuration as it provides sufficient data for the agent in almost all cases.\n###figure_8### ###figure_9### Test for the network depth needed for this use case was done with four 512 fully connected networks instead of the two that we are using in our work. 
As it can be seen in figure 5 ###reference_###, using a deeper network does not bring any benefit, and makes the run-time more than twice longer compared to the two 512 fully connected networks. Also, deeper network with less total neurons (four 128 networks) were compared to a shallower network with a higher amount of neurons (two 512 networks). It was observed that the deeper network with lower total neurons (512) was 0.3% faster, had an 18.1% lower cpu-time (7.73 compared to 6.55 minutes), but performed 5.11% worse. To make sure that the network was trained long enough and properly, we ran this part with up to 4200 epochs and with different learning rates and entropy regularization numbers. The learning rate and entropy regularization are two of the most important parameters during training a network. So, these two parameters were tested with different numbers to check for possible hyperparameter balancing. However, the finest results are shown in figure 5 ###reference_###.\n###figure_10###"
34
+ },
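A short sketch of the Strouhal-number computation described above: the shedding frequency f is taken as the dominant peak of the FFT of the lift-coefficient signal, and St = fD/U. The defaults D = 1 and U_bar = 1 are the non-dimensional values implied by the text.

```python
import numpy as np

def strouhal_number(cl, dt, D=1.0, U_bar=1.0):
    """Estimate St = f*D/U_bar from a sampled lift-coefficient series cl with
    sampling interval dt."""
    cl = np.asarray(cl) - np.mean(cl)            # remove the mean offset
    spectrum = np.abs(np.fft.rfft(cl))
    freqs = np.fft.rfftfreq(len(cl), d=dt)
    f_shed = freqs[np.argmax(spectrum[1:]) + 1]  # skip the zero-frequency bin
    return f_shed * D / U_bar
```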
35
+ {
36
+ "section_id": "4",
37
+ "parent_section_id": null,
38
+ "section_name": "Results",
39
+ "text": "Here, we will find the best-performing configuration regarding drag reduction among the tested ones. First, we will test different parameters to find the best control parameters for the most efficient case, and then, the rotary motion will be added to the cylinder. Finally, we will compare the most efficient jet configuration with configuration II which was mostly used in other works, and test the results with different control parameters as before to find the best-performing ones."
40
+ },
41
+ {
42
+ "section_id": "4.1",
43
+ "parent_section_id": "4",
44
+ "section_name": "The no-spin case",
45
+ "text": "First, in order to find the optimum performing configuration regarding the number and location of the jets, we used configurations I through IV with the default parameters of the no-spin, the maximum allowed flow rate of each actuation set to 1, and the number of actuation per episode set to 80.\nConfiguration I has one jet placed at , the case II has two jets located at , , the case III has five jets inserted at , , , , and configuration IV has six jets placed at , , , , , , all jets are numbered 0 to 5.\nTable 6 ###reference_### shows the start, minimum, maximum, and average drag coefficients of the mentioned configurations. The minimum and maximum values were extracted after the stabilization of the controlled flow, but the average values are on the whole episode. The start drag coefficient refers to the at the start of an episode (which is the center point of when there is no control). As can be seen in this table, the most efficient case is the third configuration. with a drag reduction of (from without control). So, we kept this case for our next studies.\nFigure 6 ###reference_### shows the variation of during the learning process. Each point on this chart belongs to an epoch. As can be seen in this chart, we need around 1300 epochs at the very least for the agent to find the best controlling strategy, and because of this, we ran each case with 2100 epochs or higher. The agent needed 817, 1106, 1241, and 1237 epochs for configurations I to IV, respectively in order to find the best controlling strategy. Figure 7 ###reference_### illustrates the pressure and vorticity field of this configuration at the start and the end, after flow stabilization. As can be seen in these figures, the wake is thinner in the case with active control, which results in a lower mean pressure difference around the cylinder and lower drag force. Another point that can be seen from the vorticity field is that separation is delayed due to active control. The position of the jets is obvious in close snapshots. The jet flow rates for this configuration are shown in figure 8 ###reference_###, which after around 1500 steps stabilize.\n###figure_11### ###figure_12### ###figure_13### ###figure_14### Now that we have found the optimum jet configuration, it will be tested with different parameters in order to find the most efficient performing system with the purpose of lowering the drag force. The spin, the number of actuation per episode, and the maximum allowed flow rate of each actuation will be compared with the same parameters in configuration II.\nIn the first test, we doubled the maximum allowed flow rate of each actuation, and performance got worse. Its maximum, minimum, and average arises from 2.1050 to 2.8053, from 2.0608 to 2.6171, and from 2.0928 to 2.6980 respectively, which translates to an uplift of , and in . The variance of when the control is stabilized in this case is also higher at compared to . As it can be seen from the jet flow rates shown in figure 9 ###reference_###, compared to the figure 8 ###reference_###, it seems that the agent struggles to stabilize the flow, which can be the cause of this trend. In this case, the agent needed only 40 epochs in order to find the optimum model.\n###figure_15### Next, we tried to change the number of actuation per episode from 80 to 200,500,1000 in order to see its effects as compared to doubling the maximum allowed flow rate of each actuation. 
The agent has the possibility of using larger flow rates like the case before, just by using different means. First, we increased the number of actuation from 80 to 200 and observed that our system performed better than when we used a higher allowed flow rate for each actuation. But it performed worse compared to the test with 80 actuations. The computed in this case were 2.2047, 2.2016, and 2.2034 for minimum, maximum, and average values, respectively. The variance in this case was and the agent found the best model at episode number 1805.\nFurther increasing the number of actuation to 500 results in maximum, minimum, and average of 2.2804, 2.2544, 2.2626 respectively, which are worse than when using 80 and 200 as the number of actuation by and . But they are more efficient than the case with by . In this case, the variance of was . Also, the agent found an efficient control policy in episode 382. With changing this parameter to 1000, a drop of in minimum compared to the case with 500 actuation per episode, from 2.2544 to 2.2302, an increase of in maximum from 2.2804 to 2.4411, and an increase of in the average was observed. Also, the variance is worse at and our agent needed 982 episodes to find the best policy.\nResults of the tests showed that in this configuration without spin, allowing the agent to have access to an excessive flow rate is not a good idea. As the finest performing control parameters considering minimum and maximum is with 80 actuation per episode and maximum allowed flow rate of each actuation = 1 (). Also, the best-performing parameters in terms of stability were the case with 200 actuation per episode and with a variance of only after the flow stabilization. The agent needed the most episodes for 200 actuation per episode and the least for and 80 actuation per episode. We should also mention that the higher number of actuation slows down the simulation speed."
46
+ },
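A schematic of the episode structure discussed in section 4.1, showing how the two control parameters (the number of actuations per episode and the maximum allowed flow rate per actuation) enter the training loop. `env` and `agent` are hypothetical stand-ins for the FEniCS environment and the PPO agent; this is a sketch under those assumptions, not the authors' implementation.

```python
import numpy as np

def run_episode(env, agent, n_actuations=80, q_max=1.0):
    """One training episode: a fixed number of actuations, each clipped to the
    maximum allowed flow rate and corrected to zero net mass flow."""
    obs = env.reset()                 # restart from the saved developed-wake state
    total_reward = 0.0
    for _ in range(n_actuations):
        q = np.clip(agent.act(obs), -q_max, q_max)  # bounded jet flow rates
        q = q - np.mean(q)            # enforce zero net mass flow across the jets
        obs, r = env.step(q)
        total_reward += r
    return total_reward
```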
47
+ {
48
+ "section_id": "4.2",
49
+ "parent_section_id": "4",
50
+ "section_name": " The rotating cylinder",
51
+ "text": "Rotation is a proven way to reduce the drag force. Here, we tested three different rotation speeds of 0.2, 0.1, and 0.08 in conjunction with our DRL network to further reduce the drag coefficient and see if the agent can find an efficient control strategy. Again, we will test them with different numbers of actuation per episode and the maximum allowed flow rate per actuation to find the optimum configuration.\nA rotation speed of 0.2 alone reduces from 3.2112 to 2.7093, which is a reduction of , and reduces , and to 2.7115, 2.7172 and 2.7084. Adding the DRL network (configuration III) with 80 actuation per episode and , decreases further to a maximum, minimum, and average of 2.3703, 2.3702, and 2.3702, respectively. Compared to after adding rotation, it\u2019s a further drop of , with a variance of only , which can be said that there are no fluctuations at all. Here, the agent needs 517 episodes to find the finest strategy. We ran configuration II with the same parameters, and our agent couldn\u2019t find a control strategy in this jet\u2019s configuration that decreased further. When we only doubled our maximum allowed flow rate for both configurations II and III, the network could not find a control strategy even after 2100 epochs.\nIncreasing the number of actuation per episode to 200, results in the system performing better at 2.3197 for minimum (2.31968), maximum (2.31972), and average (the run time is so long that fluctuations are removed from the averaged ). The variance is which again, is so small that we can ignore the fluctuations after the flow stabilization. The network needed 713 epochs to find the best model. Doubling the maximum allowed flow rate decreases by to 2.0758 with a variance of which is more stable than before, the best performing strategy was found at the 117th epoch. Again, when we ran configuration II with the same parameters (both ), the agent could not find a suitable control strategy.\nUsing 500 actuations per episode, causes to decrease by to 2.2994, compared to the 200 actuations (), with a variance of , Which is also more stable than before. The network found the most efficient strategy at 541 epochs. Performance with Double the was better by at 1.6985 with a variance of . Training the network needed 391 epochs. As before, configuration II with the same parameters could not find a proper controlling strategy.\nIn consequence of increasing the number of actuation per episode to 1000 and keeping , the increased to 2.3394 but variance was better at . Furthermore, the agent needed 831 epochs to find the best control strategy. With , stabilized at 1.6288, which is the best-performing configuration with a drop in drag coefficient of compared to the starting point with spin. variance was also nonexistent at . The network found the model at 544 epochs, also configuration II could not find a control strategy.\nIn these tests, we found that our agent indeed can control and further lower the drag force exerted on a rotating cylinder. But compared to a fixed cylinder, the agent needs access to a higher amount of flow rate, as it performed best with a maximum allowed flow rate of each actuation = 2 and an increased number of actuation per episode of 1000. In this state, our system managed to lower the from 2.7093 to 1.6288 with a completely stable flow after stabilization. Pressure and vorticity fields of this configuration can be seen in figure 10 ###reference_###. 
The vortex shedding has been completely omitted due to the high momentum of the injected fluid and a high-pressure zone appears behind the cylinder in the wake. The creation of this high-pressure zone in the wake is the main origin of drag reduction.\n###figure_16### As we discussed before, configuration II was unable to lower the drag force when the cylinder had a rotation speed of 0.2. In order to examine the performance of this configuration, and to find its limits when used with a rotating cylinder, we ran further tests with 0.1 and 0.08 spin and compared them with those of configuration III. Indeed configuration II can find a control strategy for rotation speeds of 0.1 and lower. We ran our tests with 80, 200, 500, and 1000 actuation per episode, a of 1, and compared them with configuration III. Table 7 ###reference_### shows the results. variance was , , , , respectively. The best-performing control parameter for configuration II was 200 actuation per episode with a minimum, maximum, and average of 2.8231, 2.8247, 2.8248. Which is a drop of compared to the without control. For reference, configuration II without the cylinder rotation could only lower the drag force by . Also, it was the most stable, but still, configuration III with 80 actuation per episode was superior with minimum and maximum of 2.0920 and more stable with a variance of only . The agent needed 681, 342, 530, 1173, and 997 epochs to find the best strategy for each test respective to the data in table 7 ###reference_###. Figures 11 ###reference_### and 12 ###reference_### compare the pressure and vorticity contours of the finest performing configuration II (200 actuation) with those of configuration III. It is seen that the vortex shedding has been suppressed at the end of the controlled flow in configuration III.\nDoubling the with 80 actuations per episode has no noticeable benefit nor disadvantage in configuration II as its minimum, maximum, and average are at 2.8671, 2.8721, and 2.8702. But variance gets worse at . The agent finds the optimum model at 548 epochs, but configuration III with the same parameters worsens noticeably (compared to the 80 actions per episode) with minimum, maximum, and average of 2.5723, 2.6690 and 2.6250 which are , and worse, respectively. Variance is also noticeably worse at . The network takes only 29 epochs to find the best model.\n###figure_17### ###figure_18### ###figure_19### As the final test, we used 0.08 rotation speed, which without control, was lowered to a maximum, minimum, and average of 2.9738, 2.9448, and 2.9633, respectively. We also tested and ran these parameters for configurations II and III with 80 actuations per episode. The start was 2.9353 and configuration II found a strategy to modify it to a minimum of 2.8685, maximum of 2.8854, and average of 2.8776. Doubling the had no noticeable impact on . It lowered the variance from to , but at the cost of higher epochs as this needed 1577 epochs but only needed 22. Configuration III was much better at controlling the drag force as it managed to optimize it to a minimum and maximum of 2.2254 and an average of 2.2254. Also, its variance was much better at with the agent needing 1753 episodes to find this strategy. Doubling the worsens both systems performance (by ) and variance (at )."
52
+ },
53
+ {
54
+ "section_id": "4.3",
55
+ "parent_section_id": "4",
56
+ "section_name": "Real-World Scenario",
57
+ "text": "In order to show the capability and feasibility of the best-performing configuration for real-world applications, sensors were only placed on the face of the cylinder, and the Reynolds number was increased to 200 (the limit of the accessible hardware). In order to showcase the feasibility of a spinning cylinder a 3D design was created, which is shown in figure 13 ###reference_###, this is only one of the many possible designs and configurations, and the case in which the cylinder connects to walls. The cylinder can also be disconnected from walls and connected to jet bases for support and rotation. We believe that it is useful for drag reduction in applications like mini-turbines for home use (cases where the cylindrical body of an object is in the wind direction, and rotating it has no adverse effect.).\n###figure_20### As a result of limiting the number of sensors to 36 and placing them only on the face of the cylinder every 10 degrees, the system performed worse and the stabilized experienced an increase of 2.72% from 1.6288 to 1.6731 which is acceptable for real-world applications considering sensors number difference. After doubling the Reynolds number to 200 it was observed that the agent could indeed find a suitable control strategy after 500 epochs. The start drag coefficient was 2.2834 which the DRL agent was able to reduce to a of 1.6715, and a of 1.6725. Figure 14 ###reference_### shows the vorticity field at the start and the end of the controlled flow.\n###figure_21### Figure 15 ###reference_### shows the drag coefficient of three different cases for after the flow stabilization, the first being the cylinder in the flow field without rotation or active flow control, the second graph is the case of the cylinder with 0.2 rotation and no active flow control, and lastly, the third graph shows the case with cylinder rotation of 0.2 and active flow control performed by the DRL agent. In all three cases, the best-performing parameters were used (number of actuation = 1000 and maximum allowed flow rate = 0.2).\n###figure_22### Figure 16 ###reference_### shows the drag coefficient of the case with AFC and rotation, the variance of the drag coefficient in the first case (without AFC and rotation) was , for the second case (rotating cylinder without AFC) was , and finally the agent was able to further reduce the variance to only . Also, each jet\u2019s flow rate can be seen in figure 17 ###reference_###.\n###figure_23### ###figure_24### Unfortunately, due to the hardware limitations and the exponential calculation time required for higher Reynolds numbers, this number was the limit for us."
58
+ },
59
+ {
60
+ "section_id": "5",
61
+ "parent_section_id": null,
62
+ "section_name": "Conclusions",
63
+ "text": "In this work, the most efficient number and position of jets, the optimum sensor number, and locations were computed. Also, rotation was added to the cylinder alongside the DRL-controlled jets, and the behavior of the agent with different controlling parameters and accesses was observed. In the end, we found the most appropriate control parameters for the rotating cylinder and the fixed case. First, we found that having more sensors at diverse locations is not always an effective choice and the sensor number and locations should be determined based on the need of the user and configuration. Also, we showed that in order to have robust learning, there is a need for at least 2100 epochs.\nSecondly, the most efficient configuration of jets, based on the cases tested here and with the discussed parameters occurs when we have one jet on the opposite side of the stagnation point and four others at , , , . In this case, it was found that allowing agent to have access to higher flow rates is not appropriate as it cannot stabilize the flow. The best-performing parameters were the case with each actuation maximum allowed flow rate of one and 80 actuation per episode, which lowered the drag from a of 3.2415 to 2.1050. This is equal to a considerable drop of .\nThirdly, adding rotation alongside the DRL-controlled jets can lower the drag coefficient from 3.2416 to 1.6288 in the best case, which is almost a reduction. In this case, the vortex shedding is almost suppressed. Contrary to the previous case, giving access to higher flow rates for the agent is usually beneficial, since in this case, the constraint is partly jet outputs. This was true for configuration III with 0.2 rotation speed as in this case, configuration II could not control the flow and reduce the drag coefficient. But, this configuration could decrease the drag force at smaller rotation speeds of 0.1 and 0.08, although its performance was worse than that of configuration III. Again we believe it\u2019s partly because of the lower maximum output of 2 jets compared to the 5 and partly because of the jet at .\nLastly, we introduced a possible design for rotating cylinder, reduced the sensor numbers and location to a minimum, and doubled the Reynolds number in order to showcase the possibility and the performance of our best configuration and parameters for real-world and challenging applications. It should be noted that across all cases, the agent effectively lowered the lift coefficient to zero or maintained it at a lower level compared to the initial state."
64
+ },
65
+ {
66
+ "section_id": "6",
67
+ "parent_section_id": null,
68
+ "section_name": "Declaration of Interests",
69
+ "text": "The authors report no conflict of interest."
70
+ }
71
+ ],
72
+ "appendix": [],
73
+ "tables": {
74
+ "1": {
75
+ "table_html": "<figure class=\"ltx_table\" id=\"S2.T1\">\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\">Table 1: </span> Details of different simulated configurations.</figcaption><img alt=\"[Uncaptioned image]\" class=\"ltx_graphics ltx_centering ltx_img_portrait\" height=\"1043\" id=\"S2.T1.g1\" src=\"x2.png\" width=\"831\"/>\n</figure>",
76
+ "capture": "Table 1: Details of different simulated configurations."
77
+ },
78
+ "2": {
79
+ "table_html": "<figure class=\"ltx_table\" id=\"S3.T2\">\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\">Table 2: </span>Details of the flow solver validation. </figcaption><img alt=\"[Uncaptioned image]\" class=\"ltx_graphics ltx_centering ltx_img_landscape\" height=\"109\" id=\"S3.T2.g1\" src=\"x3.png\" width=\"830\"/>\n</figure>",
80
+ "capture": "Table 2: Details of the flow solver validation. "
81
+ },
82
+ "3": {
83
+ "table_html": "<figure class=\"ltx_table\" id=\"S3.T3\">\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\">Table 3: </span>Results of the time-step independence study. </figcaption><img alt=\"[Uncaptioned image]\" class=\"ltx_graphics ltx_centering ltx_img_landscape\" height=\"101\" id=\"S3.T3.g1\" src=\"x4.png\" width=\"830\"/>\n</figure>",
84
+ "capture": "Table 3: Results of the time-step independence study. "
85
+ },
86
+ "4": {
87
+ "table_html": "<figure class=\"ltx_table\" id=\"S3.T4\">\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\">Table 4: </span>Report of the mesh independence study.</figcaption><img alt=\"[Uncaptioned image]\" class=\"ltx_graphics ltx_centering ltx_img_landscape\" height=\"96\" id=\"S3.T4.g1\" src=\"x5.png\" width=\"830\"/>\n</figure>",
88
+ "capture": "Table 4: Report of the mesh independence study."
89
+ },
90
+ "5": {
91
+ "table_html": "<figure class=\"ltx_table\" id=\"S3.T5\">\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\">Table 5: </span>Comparing the minimum, maximum, and average drag coefficients of the different sensor numbers and the area occupied by them for configuration III with the no-spin, the maximum flow rate of 1 and 200 actuations.</figcaption><img alt=\"[Uncaptioned image]\" class=\"ltx_graphics ltx_centering ltx_img_square\" height=\"779\" id=\"S3.T5.g1\" src=\"x7.png\" width=\"830\"/>\n</figure>",
92
+ "capture": "Table 5: Comparing the minimum, maximum, and average drag coefficients of the different sensor numbers and the area occupied by them for configuration III with the no-spin, the maximum flow rate of 1 and 200 actuations."
93
+ },
94
+ "6": {
95
+ "table_html": "<figure class=\"ltx_table\" id=\"S4.T6\">\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\">Table 6: </span> The mean value of the drag coefficient at the start of the control, the maximum and the minimum CD when the control is stabilized, and the average CD on the whole controlled episode.</figcaption><img alt=\"[Uncaptioned image]\" class=\"ltx_graphics ltx_centering ltx_img_landscape\" height=\"250\" id=\"S4.T6.g1\" src=\"x8.png\" width=\"831\"/>\n</figure>",
96
+ "capture": "Table 6: The mean value of the drag coefficient at the start of the control, the maximum and the minimum CD when the control is stabilized, and the average CD on the whole controlled episode."
97
+ },
98
+ "7": {
99
+ "table_html": "<figure class=\"ltx_table\" id=\"S4.T7\">\n<figcaption class=\"ltx_caption\"><span class=\"ltx_tag ltx_tag_table\">Table 7: </span>The mean value of the drag coefficient at the start of the control, maximum and minimum CD when the control is stabilized and average CD on the whole controlled episode.</figcaption><img alt=\"[Uncaptioned image]\" class=\"ltx_graphics ltx_img_landscape\" height=\"214\" id=\"S4.T7.g1\" src=\"x11.png\" width=\"829\"/>\n</figure>",
100
+ "capture": "Table 7: The mean value of the drag coefficient at the start of the control, maximum and minimum CD when the control is stabilized and average CD on the whole controlled episode."
101
+ }
102
+ },
103
+ "image_paths": {
104
+ "1": {
105
+ "figure_path": "2307.12083v3_figure_1.png",
106
+ "caption": "Figure 1: Schematics of the computational domain.",
107
+ "url": "http://arxiv.org/html/2307.12083v3/x1.jpg"
108
+ },
109
+ "2": {
110
+ "figure_path": "2307.12083v3_figure_2.png",
111
+ "caption": "Figure 2: variation of the drag coefficient with respect to the steps obtained from three meshes.",
112
+ "url": "http://arxiv.org/html/2307.12083v3/mesh-draw.eps"
113
+ },
114
+ "3": {
115
+ "figure_path": "2307.12083v3_figure_3.png",
116
+ "caption": "Figure 3: Two-dimensional unstructured mesh of the computation domain of the flow over a circular cylinder and a close snapshot near walls.",
117
+ "url": "http://arxiv.org/html/2307.12083v3/x6.png"
118
+ },
119
+ "4": {
120
+ "figure_path": "2307.12083v3_figure_4.png",
121
+ "caption": "Figure 4: The drag coefficient for the controlled flow with respect to the number of steps. The numbers were extracted from the region inside the rectangle.",
122
+ "url": "http://arxiv.org/html/2307.12083v3/ns2-baseline-control.eps"
123
+ },
124
+ "5": {
125
+ "figure_path": "2307.12083v3_figure_5.png",
126
+ "caption": "Figure 5: Drag coefficient values of two networks with respect to the number of steps.",
127
+ "url": "http://arxiv.org/html/2307.12083v3/net.eps"
128
+ },
129
+ "6": {
130
+ "figure_path": "2307.12083v3_figure_6.png",
131
+ "caption": "Figure 6: The average drag coefficient during 2000 episodes of the learning process.",
132
+ "url": "http://arxiv.org/html/2307.12083v3/Cd5jet.eps"
133
+ },
134
+ "7": {
135
+ "figure_path": "2307.12083v3_figure_7.png",
136
+ "caption": "Figure 7: Pressure and vorticity contours for configuration III at the start (left) and the end of control (right).",
137
+ "url": "http://arxiv.org/html/2307.12083v3/x9.png"
138
+ },
139
+ "8": {
140
+ "figure_path": "2307.12083v3_figure_8.png",
141
+ "caption": "Figure 8: The flow rate of each jet during control and their average value, which sits at zero.",
142
+ "url": "http://arxiv.org/html/2307.12083v3/flowrate5.eps"
143
+ },
144
+ "9": {
145
+ "figure_path": "2307.12083v3_figure_9.png",
146
+ "caption": "Figure 9: The flow rate of each jet during the control process.",
147
+ "url": "http://arxiv.org/html/2307.12083v3/Qnb80.eps"
148
+ },
149
+ "10": {
150
+ "figure_path": "2307.12083v3_figure_10.png",
151
+ "caption": "Figure 10: Pressure and vorticity contours at the start (the no-control scenario), the left column; and end of the controlled flow, the right column.",
152
+ "url": "http://arxiv.org/html/2307.12083v3/x10.png"
153
+ },
154
+ "11": {
155
+ "figure_path": "2307.12083v3_figure_11.png",
156
+ "caption": "Figure 11: Pressure field at the start (left), and the end of the controlled flow (right) for two configurations.",
157
+ "url": "http://arxiv.org/html/2307.12083v3/x12.png"
158
+ },
159
+ "12": {
160
+ "figure_path": "2307.12083v3_figure_12.png",
161
+ "caption": "Figure 12: Vorticity field at the start (left), and the end of the controlled flow (right) for two configurations.",
162
+ "url": "http://arxiv.org/html/2307.12083v3/x13.png"
163
+ },
164
+ "13": {
165
+ "figure_path": "2307.12083v3_figure_13.png",
166
+ "caption": "Figure 13: Possible 3D design of the rotating system.",
167
+ "url": "http://arxiv.org/html/2307.12083v3/extracted/5325550/un.jpg"
168
+ },
169
+ "14": {
170
+ "figure_path": "2307.12083v3_figure_14.png",
171
+ "caption": "Figure 14: Pressure and vorticity field at the start (left), and the end of the controlled flow (right) for configuration III and Re = 200.",
172
+ "url": "http://arxiv.org/html/2307.12083v3/x14.png"
173
+ },
174
+ "15": {
175
+ "figure_path": "2307.12083v3_figure_15.png",
176
+ "caption": "Figure 15: Drag coefficient of Re = 200 in three different configurations after the flow stabilization.",
177
+ "url": "http://arxiv.org/html/2307.12083v3/re200dragcoef.eps"
178
+ },
179
+ "16": {
180
+ "figure_path": "2307.12083v3_figure_16.png",
181
+ "caption": "Figure 16: Drag coefficient of Re = 200 after the flow stabilization.",
182
+ "url": "http://arxiv.org/html/2307.12083v3/re200dragcoefofcontrol.eps"
183
+ },
184
+ "17": {
185
+ "figure_path": "2307.12083v3_figure_17.png",
186
+ "caption": "Figure 17: Jet flow rate during control.",
187
+ "url": "http://arxiv.org/html/2307.12083v3/re200jets.eps"
188
+ }
189
+ },
190
+ "validation": true,
191
+ "references": [
192
+ {
193
+ "1": {
194
+ "title": "Flow control: passive, active, and reactive flow management.",
195
+ "author": "M.Gad el Hak.",
196
+ "venue": "Cambridge University Press, 2007.",
197
+ "url": null
198
+ }
199
+ },
200
+ {
201
+ "2": {
202
+ "title": "Advances and challenges in periodic forcing of the turbulent boundary\nlayer on a body of revolution.",
203
+ "author": "V. I. Kornilov and A. V Boiko.",
204
+ "venue": "Progress in Aerospace Sciences, 98:57\u201373, 2018.",
205
+ "url": null
206
+ }
207
+ },
208
+ {
209
+ "3": {
210
+ "title": "Closed-loop turbulence control: Progress and challenges.",
211
+ "author": "S. L. Brunton and B. R. Noack.",
212
+ "venue": "Appl. Mech., 67:050801, 2015.",
213
+ "url": null
214
+ }
215
+ },
216
+ {
217
+ "4": {
218
+ "title": "Machine Learning Control-Taming Nonlinear Dynamics and\nTurbulence.",
219
+ "author": "S. L. Brunton T. Duriez and B. R. Noack.",
220
+ "venue": "Springer, 2016.",
221
+ "url": null
222
+ }
223
+ },
224
+ {
225
+ "5": {
226
+ "title": "Optimal rotary control of the cylinder wake using proper orthogonal\ndecomposition reduced-order model.",
227
+ "author": "L. Cordier M. Bergmann and J.-P. Brancher.",
228
+ "venue": "Phys. Fluids, 17:097101, 2005.",
229
+ "url": null
230
+ }
231
+ },
232
+ {
233
+ "6": {
234
+ "title": "Optimal control of circular cylinder wakes using long control\nhorizons.",
235
+ "author": "T. L. B. Flinois and T. Colonius.",
236
+ "venue": "Phys. Fluids, 27:087105, 2015.",
237
+ "url": null
238
+ }
239
+ },
240
+ {
241
+ "7": {
242
+ "title": "Application of reduced-order controller to turbulent flows for drag\nreduction.",
243
+ "author": "J. Kim K. H. Lee, L. Cortelezzi and J. Speyer.",
244
+ "venue": "Phys. Fluids, 13:1321\u20131330, 2001.",
245
+ "url": null
246
+ }
247
+ },
248
+ {
249
+ "8": {
250
+ "title": "Dynamic mode tracking and control with a relaxation method.",
251
+ "author": "F. Dupuy A. Misdariis M. Queguineur, L. Y. M. Gicquel and G. Staffelbach.",
252
+ "venue": "Phys. Fluids, 31:034101, 2019.",
253
+ "url": null
254
+ }
255
+ },
256
+ {
257
+ "9": {
258
+ "title": "Distributed forcing of flow over a circular cylinder.",
259
+ "author": "J. Kim and H. Choi.",
260
+ "venue": "Phys. Fluids, 17:33103, 2005.",
261
+ "url": null
262
+ }
263
+ },
264
+ {
265
+ "10": {
266
+ "title": "Flow control over a circular cylinder using virtual moving surface\nboundary layer control.",
267
+ "author": "Y. Huang X. Zhang, K.-S. Choi and H. Li.",
268
+ "venue": "Experiments in Fluids, 60:104, 2019.",
269
+ "url": null
270
+ }
271
+ },
272
+ {
273
+ "11": {
274
+ "title": "Mechanisms for laminar separated-flow control using\ndielectric-barrier discharge plasma actuator at low reynolds number.",
275
+ "author": "K. Okada K. Asada H. Aono A. Yakeno Y. Abe M. Sato, T. Nonomura and K. Fujii.",
276
+ "venue": "Phys. Fluids, 27:117101, 2015.",
277
+ "url": null
278
+ }
279
+ },
280
+ {
281
+ "12": {
282
+ "title": "Active control of vortex-induced vibrations of a circular cylinder\nusing windward-suction-leeward-blowing actuation.",
283
+ "author": "S. C. M. Yu C. Wang, H. Tang and F. Duan.",
284
+ "venue": "Phys. Fluids, 28:053601, 2016.",
285
+ "url": null
286
+ }
287
+ },
288
+ {
289
+ "13": {
290
+ "title": "Feedback control of a flow past a cylinder via transverse motion.",
291
+ "author": "F. Li and N. Aubry.",
292
+ "venue": "Phys. Fluids, 15:2163\u20132176, 2003.",
293
+ "url": null
294
+ }
295
+ },
296
+ {
297
+ "14": {
298
+ "title": "Numerical investigations of lift suppression by feedback rotary\noscillation of circular cylinder at low reynolds number.",
299
+ "author": "B. Teng L. Lu, J.-M. Qin and Y.-C. Li.",
300
+ "venue": "Phys. Fluids, 23:033601, 2011.",
301
+ "url": null
302
+ }
303
+ },
304
+ {
305
+ "15": {
306
+ "title": "Active control of a cylinder wake flow by using a streamwise\noscillating foil.",
307
+ "author": "Y. Bao and J. Tao.",
308
+ "venue": "Phys. Fluids, 25:053601, 2013.",
309
+ "url": null
310
+ }
311
+ },
312
+ {
313
+ "16": {
314
+ "title": "Control of vortex-induced vibration of a circular cylinder using a\npair of air jets at low reynolds number.",
315
+ "author": "H. Zhao H. Zhu, T. Tang and Y. Gao.",
316
+ "venue": "Phys. Fluids, 31:043603, 2019.",
317
+ "url": null
318
+ }
319
+ },
320
+ {
321
+ "17": {
322
+ "title": "Feedback control of instabilities in the two-dimensional blasius\nboundary layer: The role of sensors and actuators.",
323
+ "author": "C. W. Rowley B. A. Belson, O. Semeraro and D. S. Henningson.",
324
+ "venue": "Phys. Fluids, 25:054106, 2013.",
325
+ "url": null
326
+ }
327
+ },
328
+ {
329
+ "18": {
330
+ "title": "Control effect of micro vortex generators on attached cavitation\ninstability.",
331
+ "author": "L. Cao S. J. Schmidt D. Likhachev B. Che, N. Chu and D. Wu.",
332
+ "venue": "Phys. Fluids, 31:064102, 2019.",
333
+ "url": null
334
+ }
335
+ },
336
+ {
337
+ "19": {
338
+ "title": "Dynamics and control of the vortex flow behind a slender conical\nforebody by a pair of plasma actuators.",
339
+ "author": "J. Wang F. Liu X. Meng, Y. Long and S. Luo.",
340
+ "venue": "Phys. Fluids, 30:024101, 2018.",
341
+ "url": null
342
+ }
343
+ },
344
+ {
345
+ "20": {
346
+ "title": "Control of unsteady flow separation over a circular cylinder using\ndielectric-barrier-discharge surface plasma.",
347
+ "author": "T. N. Jukes and K.-S. Choi.",
348
+ "venue": "Phys. Fluids, 21:094106, 2009.",
349
+ "url": null
350
+ }
351
+ },
352
+ {
353
+ "21": {
354
+ "title": "Open-loop control of compressible afterbody flows using adjoint\nmethods.",
355
+ "author": "D. Sipp P. Meliga and J.-M. Chomaz.",
356
+ "venue": "Phys. Fluids, 22:054109, 2010.",
357
+ "url": null
358
+ }
359
+ },
360
+ {
361
+ "22": {
362
+ "title": "The control of flow separation: Study of optimal open loop\nparameters.",
363
+ "author": "A. F. Shahrabi.",
364
+ "venue": "Phys. Fluids, 31:035104, 2019.",
365
+ "url": null
366
+ }
367
+ },
368
+ {
369
+ "23": {
370
+ "title": "Efficient collective swimming by harnessing vortices through deep\nreinforcement learning.",
371
+ "author": "G. Novati S. Verma and P. Koumoutsakos.",
372
+ "venue": "Proc. Natl. Acad. Sci. U. S. A., 115:5849, 2018.",
373
+ "url": null
374
+ }
375
+ },
376
+ {
377
+ "24": {
378
+ "title": "Artificial neural networks trained through deep reinforcement\nlearning discover control strategies for active flow control.",
379
+ "author": "A. Jensen U. R\u00e9glade J. Rabault, M. Kuchta and N. Cerardi.",
380
+ "venue": "J. Fluid Mech, 865:281\u2013302, 2019.",
381
+ "url": null
382
+ }
383
+ },
384
+ {
385
+ "25": {
386
+ "title": "Reinforcement learning for dynamic microfluidic control.",
387
+ "author": "Howes Philip D. Choo Jaebum deMello Andrew J. Dressler, Oliver J.",
388
+ "venue": "ACS Omega, 8:10084\u201310091, 2018.",
389
+ "url": null
390
+ }
391
+ },
392
+ {
393
+ "26": {
394
+ "title": "Accelerating deep reinforcement learning strategies of flow control\nthrough a multi-environment approach.",
395
+ "author": "Alexander Kuhnle Jean Rabault.",
396
+ "venue": "Physics of Fluids, 31:094105, 2019.",
397
+ "url": null
398
+ }
399
+ },
400
+ {
401
+ "27": {
402
+ "title": "Controlled gliding and perching through deep-reinforcement-learning.",
403
+ "author": "Guido Novati, L. Mahadevan, and Petros Koumoutsakos.",
404
+ "venue": "Phys. Rev. Fluids, 4:093902, 2019.",
405
+ "url": null
406
+ }
407
+ },
408
+ {
409
+ "28": {
410
+ "title": "Robust active flow control over a range of reynolds numbers using an\nartificial neural network trained through deep reinforcement learning.",
411
+ "author": "Alexander Kuhnle Yan Wang Tongguang Wang Hongwei Tang, Jean Rabault.",
412
+ "venue": "Physics of Fluids, 32:053605, 2020.",
413
+ "url": null
414
+ }
415
+ },
416
+ {
417
+ "29": {
418
+ "title": "Active flow control with rotating cylinders by an artificial neural\nnetwork trained by deep reinforcement learning.",
419
+ "author": "Zhang W. Deng J. et al. Xu, H.",
420
+ "venue": "J Hydrodyn, 32:254\u2013258, 2020.",
421
+ "url": null
422
+ }
423
+ },
424
+ {
425
+ "30": {
426
+ "title": "Reinforcement learning for bluff body active flow control in\nexperiments and simulations.",
427
+ "author": "Dixia Fan, Liu Yang, Zhicheng Wang, Michael S. Triantafyllou, and George Em\nKarniadakis.",
428
+ "venue": "Proceedings of the National Academy of Sciences,\n117:26091\u201326098, 2020.",
429
+ "url": null
430
+ }
431
+ },
432
+ {
433
+ "31": {
434
+ "title": "Applying deep reinforcement learning to active flow control in weakly\nturbulent conditions.",
435
+ "author": "Hui Tang Feng Ren, Jean Rabault.",
436
+ "venue": "Physics of Fluids, 33:037121, 2021.",
437
+ "url": null
438
+ }
439
+ },
440
+ {
441
+ "32": {
442
+ "title": "Robust flow control and optimal sensor placement using deep\nreinforcement learning.",
443
+ "author": "Beneddine S. & Dandois J. Paris, R.",
444
+ "venue": "Journal of Fluid Mechanics, 913:25, 2021.",
445
+ "url": null
446
+ }
447
+ },
448
+ {
449
+ "33": {
450
+ "title": "From active learning to deep reinforcement learning: Intelligent\nactive flow control in suppressing vortex-induced vibration.",
451
+ "author": "Fangfang Xie Xinshuai Zhang Hongyu Zheng Yao Zheng Changdong Zheng, Tingwei Ji.",
452
+ "venue": "Physics of Fluids, 33:063607, 2021.",
453
+ "url": null
454
+ }
455
+ },
456
+ {
457
+ "34": {
458
+ "title": "Bluff body uses deep-reinforcement-learning trained active flow\ncontrol to achieve hydrodynamic stealth.",
459
+ "author": "Hui Tang Feng Ren, Chenglei Wang.",
460
+ "venue": "Physics of Fluids, 33:093602, 2021.",
461
+ "url": null
462
+ }
463
+ },
464
+ {
465
+ "35": {
466
+ "title": "Single-step deep reinforcement learning for open-loop control of\nlaminar and turbulent flows.",
467
+ "author": "H. Ghraieb, J. Viquerat, A. Larcher, P. Meliga, and E. Hachem.",
468
+ "venue": "Phys. Rev. Fluids, 6:053902, 2021.",
469
+ "url": null
470
+ }
471
+ },
472
+ {
473
+ "36": {
474
+ "title": "Proximal policy optimization algorithms.",
475
+ "author": "Prafulla Dhariwal Alec Radford Oleg Klimov John Schulman, Filip Wolski.",
476
+ "venue": "arXiv, 1707:06347, 2017.",
477
+ "url": null
478
+ }
479
+ },
480
+ {
481
+ "37": {
482
+ "title": "Active control for the flow around various geometries through deep\nreinforcement learning.",
483
+ "author": "Yu-Fei Mei, Chun Zheng, Yue Hua, Qiang Zhao, Peng Wu, and Wei-Tao Wu.",
484
+ "venue": "Fluid Dynamics Research, 54:015510, 2022.",
485
+ "url": null
486
+ }
487
+ },
488
+ {
489
+ "38": {
490
+ "title": "Deep reinforcement learning in fluid mechanics: A promising method\nfor both active flow control and shape optimization.",
491
+ "author": "Ren F. Zhang W. et al. Rabault, J.",
492
+ "venue": "J Hydrodyn, 32:234\u2013246, 2020.",
493
+ "url": null
494
+ }
495
+ },
496
+ {
497
+ "39": {
498
+ "title": "Direct shape optimization through deep reinforcement learning.",
499
+ "author": "Jonathan Viquerat, Jean Rabault, Alexander Kuhnle, Hassan Ghraieb, Aur\u00e9lien\nLarcher, and Elie Hachem.",
500
+ "venue": "Journal of Computational Physics, 428:110080, 2021.",
501
+ "url": null
502
+ }
503
+ },
504
+ {
505
+ "40": {
506
+ "title": "Deep reinforcement learning for heat exchanger shape optimization.",
507
+ "author": "Hadi Keramati, Feridun Hamdullahpur, and Mojtaba Barzegari.",
508
+ "venue": "International Journal of Heat and Mass Transfer, 194:123112,\n2022.",
509
+ "url": null
510
+ }
511
+ },
512
+ {
513
+ "41": {
514
+ "title": "An active-controlled heaving plate breakwater trained by an\nintelligent framework based on deep reinforcement learning.",
515
+ "author": "Yulin Xie, Xizeng Zhao, and Min Luo.",
516
+ "venue": "Ocean Engineering, 244:110357, 2022.",
517
+ "url": null
518
+ }
519
+ },
520
+ {
521
+ "42": {
522
+ "title": "Manipulation of free-floating objects using faraday flows and deep\nreinforcement learning.",
523
+ "author": "George Thuruthel T. & Iida F. Hardman, D.",
524
+ "venue": "Sci Rep, 12:335, 2022.",
525
+ "url": null
526
+ }
527
+ },
528
+ {
529
+ "43": {
530
+ "title": "Deep learning.",
531
+ "author": "Bengio Y. & Hinton G. LeCun, Y.",
532
+ "venue": "Nature, 521:436\u2013444, 2015.",
533
+ "url": null
534
+ }
535
+ },
536
+ {
537
+ "44": {
538
+ "title": "Deep learning.",
539
+ "author": "Bengio Yoshua Courville Aaron Goodfellow, Ian.",
540
+ "venue": "MIT press Cambridge, 2016.",
541
+ "url": null
542
+ }
543
+ },
544
+ {
545
+ "45": {
546
+ "title": "Multilayer feedforward networks are universal approximators.",
547
+ "author": "Stinchcombe Maxwell & White Halbert Hornik, Kurt.",
548
+ "venue": "Neural Networks, 2:359 \u2013 366, 1989.",
549
+ "url": null
550
+ }
551
+ },
552
+ {
553
+ "46": {
554
+ "title": "Control of flow past bluff bodies using rotating control cylinders.",
555
+ "author": "S. MITTAL.",
556
+ "venue": "Journal of Fluids and Structures, 15:291\u2013326, 2001.",
557
+ "url": null
558
+ }
559
+ },
560
+ {
561
+ "47": {
562
+ "title": "A numerical investigation of fluid flow over a rotating cylinder with\ncross flow oscillation.",
563
+ "author": "M.R.H. Nobari and J. Ghazanfarian.",
564
+ "venue": "Journal of Fluids and Structures, 38:2026\u20132036, 2009.",
565
+ "url": null
566
+ }
567
+ },
568
+ {
569
+ "48": {
570
+ "title": "Benchmark Computations of Laminar Flow Around a Cylinder.",
571
+ "author": "Turek S. Durst F. Krause E. Rannacher R. Sch\u00e4fer, M.",
572
+ "venue": "Flow Simulation with High-Performance Computers II: DFG Priority\nResearch Programme Results 1993\u20131995, 1996.",
573
+ "url": null
574
+ }
575
+ },
576
+ {
577
+ "49": {
578
+ "title": "Gmsh: A 3-d finite element mesh generator with built-in pre-and\npost-processing facilities.",
579
+ "author": "Jean-Francois Geuzaine, Christophe & Remacel.",
580
+ "venue": "International journal for numerical methods in engineering,\n79:1309\u20131331, 2009.",
581
+ "url": null
582
+ }
583
+ },
584
+ {
585
+ "50": {
586
+ "title": "A multistep technique with implicit difference schemes for\ncalculating two- or three-dimensional cavity flows.",
587
+ "author": "Katuhiko Goda.",
588
+ "venue": "Journal of Computational Physics, 30:76 \u2013 95, 1979.",
589
+ "url": null
590
+ }
591
+ },
592
+ {
593
+ "51": {
594
+ "title": "A multistep technique with implicit difference schemes for\ncalculating two- or three-dimensional cavity flows.",
595
+ "author": "Mardal Kent-Andre & Wells Garth Logg, Anders.",
596
+ "venue": "Springer Science & Business Media, 84, 2012.",
597
+ "url": null
598
+ }
599
+ },
600
+ {
601
+ "52": {
602
+ "title": "Playing atari with deep reinforcement learning.",
603
+ "author": "Kavukcuoglu Koray-Silver David Graves Alex Antonoglou Ioannis Wierstra Daan &\nRiedmiller Martin Mnih, Volodymyr.",
604
+ "venue": "arXiv, 1312:5602, 2013.",
605
+ "url": null
606
+ }
607
+ },
608
+ {
609
+ "53": {
610
+ "title": "Continuous deep q-learning with model-based acceleration.",
611
+ "author": "Lillicrap Timothy-Sutskever Ilya & Levine Sergey Gu, Shixiang.",
612
+ "venue": "In International Conference on Machine Learning, page\n2829\u20132838, 2016.",
613
+ "url": null
614
+ }
615
+ }
616
+ ],
617
+ "url": "http://arxiv.org/html/2307.12083v3"
618
+ }
20240101/2308.04102v3.json ADDED
@@ -0,0 +1,599 @@
1
+ {
2
+ "title": "Asynchronous Evolution of Deep Neural Network Architectures",
3
+ "abstract": "Many evolutionary algorithms (EAs) take advantage of parallel evaluation of candidates. However, if evaluation times vary significantly, many worker nodes (i.e., compute clients) are idle much of the time, waiting for the next generation to be created. Evolutionary neural architecture search (ENAS), a class of EAs that optimizes the architecture and hyperparameters of deep neural networks, is particularly vulnerable to this issue. This paper proposes a generic asynchronous evaluation strategy (AES) that is then adapted to work with ENAS. AES increases throughput by maintaining a queue of up to individuals ready to be sent to the workers for evaluation and proceeding to the next generation as soon as individuals have been evaluated. A suitable value for is determined experimentally, balancing diversity and efficiency. To showcase the generality and power of AES, it was first evaluated in eight-line sorting network design (a single-population optimization task with limited evaluation-time variability), achieving an over two-fold speedup. Next, it was evaluated in 11-bit multiplexer design (a single-population discovery task with extended variability), where a 14-fold speedup was observed. It was then scaled up to ENAS for image captioning (a multi-population open-ended-optimization task), resulting in an over two-fold speedup. In all problems, a multifold performance improvement was observed, suggesting that AES is a promising method for parallelizing the evolution of complex systems with long and variable evaluation times, such as those in ENAS.",
4
+ "sections": [
5
+ {
6
+ "section_id": "1",
7
+ "parent_section_id": null,
8
+ "section_name": "Introduction",
9
+ "text": "Evolutionary algorithms (EAs) have recently been extended to solving computationally expensive problems, such as evolutionary neural architecture search (ENAS; Lu et al., 2018 ###reference_29###; Miikkulainen et al., 2019 ###reference_31###; Real et al., 2019 ###reference_35###). A main challenge in this domain is to reduce the amount of time spent in evaluating candidate solutions. For instance, when evolving architectures for deep neural networks (DNNs), a fitness evaluation includes training the network for multiple epochs, which is very time-consuming.\nFortunately, such evolutionary applications can take good advantage of parallel\nsupercomputing resources that have recently become available. Each evaluation can\nbe done on a separate machine (i.e., a GPU resource), and thus the whole population can be evaluated at\nthe same time. However, individual evaluation times can vary significantly, making the process inefficient. A simple network may be trained in a few minutes, but larger ones may take several days on current GPUs (Miikkulainen et al., 2019 ###reference_31###; Liang et al., 2019 ###reference_25###, 2021 ###reference_24###). The EA has to wait for the longest evaluation to finish before the next generation can be created, during which time the other computational resources are idle (Scott and De Jong, 2015b ###reference_39###). As such, simple parallel EAs are not well suited for ENAS.\nAs a solution, this paper proposes an asynchronous evaluation strategy called AES that is designed to take full advantage of the available computational resources.\nAt each generation, a constant number of individuals are either being evaluated on the compute workers, have just finished their evaluation, or are waiting in a queue to be evaluated. As compute workers become available, they pull candidates from the queue for evaluation. As soon as a predetermined batch size of evaluations finish, new individuals are generated and placed in the queue. In this manner, all available computational resources are used at all times. This process can be seen as a mix of generational and steady-state GAs (Goswami et al., 2023 ###reference_9###): Each batch of new individuals can be seen as a generation, but individuals from several generations may be evaluated in parallel.\nAES was evaluated in a series of three experiments in this paper. The first experiment showed that in sorting network design (i.e., a single-population optimization task with known optima), AES finds optimal solutions over twice as fast as synchronous evolution. However, evaluation times do not vary much in this domain; to demonstrate what is possible with higher variation, AES was applied to multiplexer design (i.e., a single-population discovery task) with significantly more varied but still short enough evaluation times so that statistical significance could be measured. Further, the proper batch size was determined to be roughly 1/4 of the total population. The third experiment then scaled up the approach to ENAS for image captioning (i.e., to multiple populations in an open-ended optimization task). In single production-size runs, AES was found to develop solutions of similar quality over twice as fast as synchronous evolution, and to find better solutions overall. 
AES is thus a promising tool for scaling up evolutionary algorithms to parallel computing resources.\nThe main contributions of the paper are:\nA new algorithm, AES, for asynchronous evolution.\nA demonstration of the effectiveness of AES over synchronous EAs experimentally on a typical EA as well as ENAS.\nA statistical analysis of how and why AES is able to gain this performance advantage.\nThe rest of the paper is\norganized as follows: Section 2 ###reference_### reviews related work on parallel EAs, asynchronous EAs, and neuroevolution of DNNs.\nSection 3 ###reference_### introduces the generic version of AES and describes how it can be adapted to ENAS. Section 4 ###reference_###\npresents experimental results comparing the performance of AES with\nsynchronous evaluation in the sorting-network, multiplexer, and ENAS domains. Section 5 ###reference_### analyzes the sources of the speedup\nand proposes future work."
10
+ },
11
+ {
12
+ "section_id": "2",
13
+ "parent_section_id": null,
14
+ "section_name": "Related Work",
15
+ "text": "This section reviews prior work on parallel evaluation strategies for EAs, asynchronous evaluation strategies, and methods for evolutionary neural architecture search."
16
+ },
17
+ {
18
+ "section_id": "2.1",
19
+ "parent_section_id": "2",
20
+ "section_name": "Parallel Evaluation Strategies",
21
+ "text": "A common computation bottleneck in EAs is the evaluation step, where all individuals in the population must have their fitness determined. To overcome this bottleneck, the evaluation can be performed in parallel on multiple computing resources simultaneously. The simplest strategy is to run copies of a separate independent EA on each worker node and to return the best result of all the independent runs (Sudholt, 2015 ###reference_46###). While this strategy works for EAs where evaluation times are relatively short, it does not scale to problems where evaluation time are extremely long, such as neuroevolution.\nA more synergetic approach is global parallelization (Adamidis, 1998 ###reference_2###; Schuman et al., 2016 ###reference_36###), where a single master node (server) assigns individuals to multiple worker nodes for evaluation. When all individuals have been evaluated, the master node proceeds onto the next generation. Further, multiple master nodes can be linked to a super-master node, which design may improve performance especially with a large number of workers (Schuman et al., 2016 ###reference_36###). Unfortunately, when evaluation times vary significantly, the master node will have to wait for the slowest worker node, during which the other workers are idle. Global parallelization is thus poorly suited for neuroevolution, where evaluations are not only long, but also vary widely in length.\nAnother challenge with parallelizing EAs is the communication bottleneck that occurs when individuals are sent to workers. To overcome this challenge, an island model (Adamidis, 1998 ###reference_2###; Sudholt, 2015 ###reference_46###) can be used, where the population is divided into several subpopulations and each subpopulation is evolved and evaluated on a separate worker node. Periodically, the worker nodes exchange individuals in order to avoid converging to local optima. The topology in which the worker nodes are linked to each other is a major design concern for island models. An even more fine-grained parallelization approach is the cellular model (Sudholt, 2015 ###reference_46###), where each worker node is assigned to a single individual. Island and cellular models are typically synchronous and still suffer from the same issues as other synchronous EAs. Luckily for most problem domains, including neuroevolution, communication between workers is not the bottleneck, but rather the computation time spent by the worker nodes themselves."
22
+ },
23
+ {
24
+ "section_id": "2.2",
25
+ "parent_section_id": "2",
26
+ "section_name": "Asynchronous Evaluation Strategies",
27
+ "text": "Asynchronous evaluation strategies are a way to overcome the issues with synchronous EAs. The key difference is that while synchronous EAs are based on a generational model, asynchronous EAs are a parallel version of steady state EAs (Scott and De Jong, 2015a ###reference_38###, b ###reference_39###). In other words, they do not wait for the entire population to be evaluated but proceed with evolution when only a subset of the population has been evaluated. As a result, asynchronous EAs are well suited for problems with long, highly variable evaluation times. This class of EAs were proposed in the early 1990s, and have been occasionally used by practitioners (Rasheed and Davison, 1999 ###reference_34###; Depolli et al., 213 ###reference_7###; Luke, 2014 ###reference_30###; Harada and Takadama, 2020 ###reference_13###). However, little work is done analyzing the behavior and benefits of such algorithms (see Zeigler and Kim, 1993 ###reference_55###; Kim, 1994 ###reference_18###; Scott and De Jong, 2015b ###reference_39###; Abdelhafez et al., 2019 ###reference_1###, for exceptions). Such methods have recently become more relevant when parameter tuning for large simulations has become more common (Kiran and Ozyildirim, 2022 ###reference_20###).\nA current state-of-the-art approach to asynchronous evaluation methods is SWEET (Scott et al., 2023 ###reference_37###), which allows individuals to be selected as parents for reproduction even if these individuals have not finished evaluation. Since fitness information is not available, the parents are randomly selected as input into a tournament-selection operator. Other recent approaches include TLAPEA (Harada, 2020 ###reference_12###), which automatically determines the optimal time to wait after the evaluation of an individual before continuing evolution. These methods are similar to AES in that both rely on partial evaluation of the population before continuing with evolution. However, while they improve over a basic asynchronous algorithm, they still favor solutions that evaluate fast. This limitation makes SWEET and TALAPEA less well suited for neuroevolution, where the solutions (i.e. network architectures) that take the longest to evaluate may perform the best.\nAn asynchronous neuroevolution algorithm that is closely related to the AES is rtNEAT\n(Stanley et al., 2005 ###reference_44###; Papavasileiou et al., 2021 ###reference_32###). In that approach, a population of neural networks is\nevaluated asynchronously one at a time. Each neural network is tested in a video\ngame, and its fitness is measured over a set time period. At the end of the period,\nit is taken out of the game; if it evaluated well, it is mutated and crossed over\nwith another candidate to create an offspring that is then tested in the game. In\nthis manner, evolution and evaluation are happening continually at the same time.\nThe goal of rtNEAT is to make replacement less disruptive for the player; it was not designed to parallelize evolution to speed it up in a distributed environment. Thus, unlike AES, it does not provide a performance advantage (Stanley et al., 2005 ###reference_44###).\nThe rtNEAT approach to asynchronous neuroevolution assumes that the evaluation times are approximately the same. Therefore, it is not well suited for neuroevolution where evaluation times can vary significantly. Such variation is especially prominent in the training of deep neural network architectures, described in the next section. 
Another drawback of existing approaches is that they are only designed for a single population of individuals (Scott and De Jong, 2015b ###reference_39###; Chitty, 2021 ###reference_5###). As such, existing asynchronous EAs cannot deal with the coevolution of multiple populations, such as in the neuroevolution domain described later. AES was designed to overcome these limitations, and thus presents an improvement over rtNEAT and asynchronous EAs in general."
28
+ },
29
+ {
30
+ "section_id": "2.3",
31
+ "parent_section_id": "2",
32
+ "section_name": "Evolutionary Neural Architecture Search (ENAS)",
33
+ "text": "###figure_1### DNNs have achieved state-of-the-art performance on many machine learning\ncompetitions and benchmarks in areas like computer vision, speech, and natural language processing (Collobert and Weston, 2008 ###reference_6###; Graves et al., 2013 ###reference_10###; Szegedy et al., 2016 ###reference_47###; Dosovitskiy et al., 2020 ###reference_8###). Often, hyperparameter choice and the structure of the network have a massive impact on its performance, and as a result, much research effort has been spent on discovering better\narchitectures (He et al., 2016 ###reference_14###; Szegedy et al., 2016 ###reference_47###; Tan and Le, 2019 ###reference_48###; Wu et al., 2021 ###reference_52###).\nRecently, EAs have been proposed as a viable way to optimize the architecture and hyperparameters of a DNN automatically (Miikkulainen et al., 2019 ###reference_31###; Liang et al., 2018 ###reference_26###, 2019 ###reference_25###). Evolution can generate DNNs with diverse topologies and achieve state-of-the-art performance on large-scale visual domains (Real et al., 2019 ###reference_35###). In addition, they can optimize multiple conflicting objectives such as performance and network complexity (Lu et al., 2018 ###reference_29###). Advanced EAs like CMA-ES (Loshchilov and Hutter, 2016a ###reference_27###) can discover good hyperparameters in high-dimensional search spaces (Loshchilov and Hutter, 2016b ###reference_28###), performing comparably with statistical algorithms such as Bayesian optimization (Snoek et al., 2015 ###reference_43###).\nIn this paper, a powerful EA called CoDeepNEAT (Miikkulainen et al., 2019 ###reference_31###)\nis used to explore the search space for potential DNN topologies and hyperparameters.\nCoDeepNEAT consists of a population of blueprints and a population of modules. Each\npopulation is evolved separately with a modified version of NEAT\n(Stanley and Miikkulainen, 2002 ###reference_45###). NEAT automatically divides each population into subpopulations, or species, of similar\nindividuals. An individual in the blueprint population is a graph\nwhere each node contains a pointer to a particular module species in the module population.\nAn individual in a module population is a graph where each node represents a particular\nDNN layer and its corresponding hyperparameters (number of neurons, activation\nfunction, etc.). As shown in Figure 1 ###reference_###, the modules and\nblueprints are combined to create a temporary population of assembled networks.\nEach individual in this assembled population is then evaluated by training it\non some supervised task, determining their performance on a dataset, and\nassigning that performance metric as fitness. The fitness of the individuals\n(networks) is attributed back to blueprints and modules as the average fitness\nof all the assembled networks containing that blueprint or module. One of the\nadvantages of CoDeepNEAT is that it can discover modular, repetitive\nstructures seen in state-of-the-art networks such as Googlenet and ResNet\n(Szegedy et al., 2016 ###reference_47###; He et al., 2016 ###reference_14###). CoDeepNEAT has achieved state-of-the-art performance in multiple problem domains, including image captioning, multitask learning, and automatic machine learning (Miikkulainen et al., 2019 ###reference_31###; Liang et al., 2019 ###reference_25###, 2021 ###reference_24###). However, CoDeepNEAT still suffers from the same issues as any synchronous EA. 
Improving CoDeepNEAT through asynchronous evaluation of the population is the main technical challenge solved in this paper."
34
+ },
35
+ {
36
+ "section_id": "3",
37
+ "parent_section_id": null,
38
+ "section_name": "The Asynchronous Evaluation Strategy Method (AES)",
39
+ "text": "This section provides the motivation for the AES approach, presents a generic single-population version of it, as well as a multi-population version suitable for ENAS."
40
+ },
41
+ {
42
+ "section_id": "3.1",
43
+ "parent_section_id": "3",
44
+ "section_name": "Overview",
45
+ "text": "A key problem that AES aims to solve is the inefficiency of synchronous\nevaluation strategies when running an EA in a parallel, distributed environment.\nThis problem is especially challenging for the evolution of DNN architectures that have high variance in\nevaluation times due to the various amounts of time needed to train different networks. As a\nresult, the slowest individuals\u2019 evaluation becomes a bottleneck. This problem can be alleviated through two\nmechanisms: (1) If there is a constant supply (i.e., a queue) of individuals to\nbe readily evaluated, the worker nodes will have optimal throughput and minimal\nidle time: They can immediately pull new individuals from the queue\nafter they have evaluated their current individual. (2) Server idle time can be minimized if\nevolution immediately proceeds to the next generation once a certain fraction of\nthe total number of individuals in the queue has been returned.\nSince the number of individuals in the queue exceeds the number of\nindividuals in a generation, it is not scalable to have the EA server\nkeep track of all the individuals being evaluated. The solution is simple:\nDistribute the bookkeeping to the workers. That is, after the server has placed\nthe next generation of individuals into the Evaluation Queue, it no longer keeps track of them.\nInstead, as workers become available, they pull individuals from the queue, and when they are\ndone evaluating them, return both the fitness values and the\ncorresponding individuals back to the server.\nThus, the server only needs to be activated periodically to generate the next generation of individuals.\n###figure_2###"
46
+ },
47
+ {
48
+ "section_id": "3.2",
49
+ "parent_section_id": "3",
50
+ "section_name": "Generic Single-Population AES",
51
+ "text": "AES can be easily added onto parallel, synchronous EA with a single population, as shown in Figure 2 ###reference_### and Algorithm 1 ###reference_###. They specify a\ngeneric version of AES with few assumptions regarding the underlying EA\nframework and no additional computational burden. The Evaluation Queue is\ninitialized in the beginning with randomly generated individuals (Step 1 in Algorithm 1).\n is the size of the population, i.e., the number of individuals, AES waits to return (Step 3) before it creates the next generation of individuals. is a hyperparameter that controls the\nratio between and . Thus, the number of individuals in the Evaluation Queue decreases from in the beginning of a generation to in the end, then jumps back up to as the newly generated individuals are added to it.\nTogether, the individuals that are returned and the elite individuals from the previous generation constitute the population from which the next generation is created (Steps 4-5).\nIf any of the returned individuals are better than the worst individuals in the current elite set, the elite set is updated, keeping its size constant at (Step 6). The entire new generation of individuals is then submitted for evaluation (Step 7).\nNote that individuals from multiple such generations may be under evaluation at the same time,\nso the process is not strictly generational, but can be seen as a mix of generational and steady-state GAs (Goswami et al., 2023 ###reference_9###).\nAs shown in Section 4 ###reference_###, AES can be used to enhance the performance of an existing large-scale EA that utilizes hundreds of thousands of worker nodes.\n\n###figure_3###"
52
+ },
53
+ {
54
+ "section_id": "3.3",
55
+ "parent_section_id": "3",
56
+ "section_name": "Multi-population AES for ENAS",
57
+ "text": "Figure 3 ###reference_### and Algorithm 2 ###reference_### describe CoDeepNEAT-AES, i.e., a version of AES adapted for the CoDeepNEAT method of ENAS. The\nmain difference between CoDeepNEAT-AES and the generic AES is\nthat the evolutionary operations take place at the level of blueprints and modules, not\nthe evaluated individuals. Therefore, an assembly step is needed before placing individuals\ninto the Evaluation Queue, and a disassembly step before the evolutionary operations. Also, blueprint and module populations persist continuously across generations: While it would be possible to use only the elites and the returned individuals to construct the population for each generation (as is done in the generic AES), larger and persistent populations provide more comprehensive and diverse source material, making evolution more effective.\nThere are two populations in CoDeepNEAT-AES: one for blueprints and another for modules. They are both evolved with the NEAT neuroevolution method. One important aspect of NEAT is that it speciates the population automatically, i.e., divides the population into subpopulations of similar individuals, runs evolution primarily within those species (with occasional crossover between species), and adjusts the size of the species according to their overall fitness. In this manner, species may emerge and die out over evolution.\nMore specifically, the species that emerge in the module population are numbered and used to supply the modules for each blueprint slot, as shown in Figure 1 ###reference_###. This assembly step is unchanged in CoDeepNEAT-AES (Step 12 in Algorithm 2).\nThe disassembly step is more elaborate. It consists of identifying each blueprint and module in each of the returned individuals (Step 6), calculating their fitness as the average of all the networks in which they participated (Step 7), and merging them into the existing populations: If the blueprint or module already exists in the population, the new fitness is used to replace the old one, thus keeping the fitnesses up to date wrt. the current other network components (Step 8).\nEach population is then evolved as usual with NEAT: Within each species, the top members are preserved as the elite set, and the rest are discarded (Step 9; in CoDeepNEAT-AES, the elite size is defined as a percentage of the species, instead of an absolute number of individuals as in generic AES). NEAT then assigns new individuals to be generated in each species proportional to their fitness, keeping the total population size is constant at (Step 10). After these individuals have been generated, the species are recreated based on the current similarities within the population (Step 11). In this manner, the species change and grow and shrink during evolution.\nThese extensions make it possible to run AES on a multi-population domain such as CoDeepNEAT ENAS. At the high-level, CoDeepNEAT-AES retains the efficiency of AES, as will be demonstrated experimentally in the next section."
58
+ },
59
+ {
60
+ "section_id": "4",
61
+ "parent_section_id": null,
62
+ "section_name": "Experimental Results",
63
+ "text": "The generic version of AES was implemented in EC-STAR, a distributed\ngenetic programming (GP) platform that is scalable to hundreds of thousands of\nworker nodes (Shahrzad and Hodjat, 2015 ###reference_41###). It was tested in two single-population domains: Eight-line sorting-network design with known optima, and 11-bit multiplexer design where the goal is to discover a valid solution. The two experiments serve to evaluate how effective AES is in speeding up evolution when the evaluation times vary a little and when they vary a lot. Since the evaluation times are relatively short in both domains, it was possible to repeat the runs multiple times and confirm that the differences are statistically significant. For the same reason, it was possible to determine an effective value for the hyperparameter . The third experiment, then, scaled up AES to ENAS, i.e., to CoDeepNEAT neuroevolution in the image-captioning domain. In contrast to the first two experiments, multiple populations are evolved at once, and the optimization is open-ended, i.e., the optimal solution is not known. Also, as is common in deep learning experiments, extremely long evaluation times limit the experiment to comparing single runs. These three experiments thus evaluate AES in two contrasting settings."
64
+ },
65
+ {
66
+ "section_id": "4.1",
67
+ "parent_section_id": "4",
68
+ "section_name": "Sorting-Network Domain",
69
+ "text": "A sorting network of inputs is a fixed layout of comparison exchange operators (comparators) that sorts all possible inputs (Figure 4 ###reference_###; Knuth, 1998 ###reference_22###). Since the same layout can sort any input, it represents an oblivious or data-independent sorting algorithm, that is, the layout of comparisons does not depend on the input data. Sorting-network design has been a fundamental problem in computer science for many years, providing an important element for graphics processing units, multi-processor computers, and switching networks (Baddar, 2009 ###reference_3###; Kipfer et al., 2004 ###reference_19###; Valsalam et al., 2013 ###reference_49###).\n###figure_4### Beyond validity, the goal in designing sorting networks is to minimize the number of comparators. Designing such minimal sorting networks is a challenging optimization problem that has been the subject of active research since the 1950s (Knuth, 1998 ###reference_22###; Shahrzad et al., 2018 ###reference_40###, 2020 ###reference_42###; Valsalam et al., 2013 ###reference_49###). For smaller networks of up to eight lines, the optimal solutions are known, making it a verifiable optimization challenge.\nSorting networks can be represented as a sequence of two-leg comparators where each leg is connected to a different input line and the first leg is connected to a lower line than the second:\nAlthough the space of possible networks is infinite, it is relatively easy to test whether a particular network is correct: If it sorts all combinations of zeros and ones correctly, it will sort all inputs correctly (Knuth, 1998 ###reference_22###). This property makes it possible to evaluate networks systematically and efficiently: For instance for an eight-line network, only 256 inputs need to be evaluated to verify that the network is correct.\nThe evaluation times depend linearly on the size of the network. For instance, evaluating a valid 24-comparator network of eight inputs takes about 25% more time than evaluating an optimal network with 19 comparators. Such variation in evaluation times is much smaller than with ENAS networks, but provides a test of how much speedup is possible even in a limited case.\n###figure_5### To determine the optimal parameter value for , was set to 1000, to 1, and experiments run with 2, 10, 50, 250, and 1000. In each experiment, the goal was to find the optimal eight-line sorting network, and the time required on a 32-core machine (i.e. ) was recorded. Each experiment was repeated 10 times.\nThe results are presented in Figure 5 ###reference_###. The highest speedup is achieved in the midrange, i.e. when (i.e. ). In that case, AES finds solutions 2.2 times faster than synchronous evolution (i.e., when ). This result shows that even with minor variation in evaluation times, AES can achieve significant speedups. What happens with larger variations will be evaluated next."
70
+ },
71
+ {
72
+ "section_id": "4.2",
73
+ "parent_section_id": "4",
74
+ "section_name": "Multiplexer Domain",
75
+ "text": "Multiplexer functions have long been used to evaluate machine-learning methods\nbecause they are difficult to learn but easy to check (Koza, 1990 ###reference_23###). In\ngeneral, the input to the multiplexer function includes address bits \nand data bits , i.e., it is a string of length of the form\n. The value of the multiplexer function\nis the value (0 or 1) of the particular data bit that is singled out by the \naddress bits. For example, for the 11-Multiplexer, where , if the three\naddress bits are 110, the multiplexer singles out data bit\nnumber 6 (i.e., ) to be its output. A Boolean function with \narguments has rows in its truth table. Thus, the sample space for the\nBoolean multiplexer is of size . When , the search space is of\nsize (Koza, 1990 ###reference_23###). However, since\nevolution can also generate redundant expressions that are all logically equal,\nthe real size of the search space can be much larger, depending on the\nrepresentation.\nFollowing prior work on the 11-Multiplexer problem (Shahrzad and Hodjat, 2015 ###reference_41###), a rule-based representation was used where each candidate specifies a set of rules of the type\nThe conditions specify\nvalues on the bit string and the action identifies the index of the bit whose\nvalue is then output. For instance, the following rule outputs the value of data\nbit 6 when the first three bits are 110:\nThese rules were evolved through the usual genetic operators in genetic programming\n(Berlanga et al., 2010 ###reference_4###). Note that with this definition, although logical OR is\nnot explicitly represented in the grammar, there may be\nseveral rules with the same action. Such a representation is equivalent to a logical OR and allows\nthe representation to be functionally complete. In other words, the grammar\nabove, which includes the AND, OR and NOT operators, can be used to express all\npossible Boolean functions. This system can produce a range of genes, from only a\nsingle condition rule, up to the maximum number of rules and conditions allowed\nper configuration. The maximum number of rules was set to 256 and the maximum number of conditions per rule to 64.\nLike the sorting-network domain, the multiplexer domain employs a single population. The search space is much larger, however evolution terminates when a valid solution is found instead of a minimal solution. It is therefore still a simpler setting than ENAS, which includes multiple populations and open-ended optimization. Also, evaluation times are short enough so that runs can be repeated several times and statistical significance estimated.\nLike ENAS, multiplexer evolution starts with simple solutions and gradually makes them more complex; also, multiplexer solutions require sufficient complexity, as do successful neural networks. The conclusions from the multiplexer are thus likely to carry over to ENAS.\n###figure_6### However, for the conclusions to carry, it is important to make the evaluation times vary\nin the multiplexer the same way as they do in ENAS. In principle, every fitness evaluation\nin the multiplexer domain takes a similar amount\nof time; therefore, an artificial delay was added to the end of every evaluation. The amount\nof delay was modeled after the evaluation timings of an actual run of CoDeepNEAT\non the CIFAR-10 image-classification domain (Miikkulainen et al., 2019 ###reference_31###). Two linear\nregression models were fit on a scatter plot of (1) the mean evaluation time vs. 
the number\nof generations elapsed, and (2) the standard deviation of evaluation time vs. the\nnumber of generations elapsed. During each generation of EC-Star, the\ntwo linear models were used to predict the mean and standard deviation;\nthese values were used to construct a Gaussian distribution from which the delays\nfor fitness evaluations were sampled.\nIn order to determine an appropriate value for , was set to\n4000, to 100, and three different values of tested (500, 1000, 4000). In each test, the amount of\ntime necessary for EC-Star to converge and solve the multiplexer problem was recorded, using .\nThe experiments were repeated 10 times for each value of .\nThe results are summarized in Figure 6 ###reference_###, which plots\nconvergence time for the different values. Interestingly, setting to an extremely\nlow or high value can hurt performance. Too small batches are akin to too small populations: Enough diversity is needed in the batch to allow evolution to progress well. On the other hand, too large batches result in longer evolution. In cases where , evolution shows\nthe most substantial speedups. In this case is approximately 4, in contrast with in the sorting network domain. Thus, depending on the amount of variation in the evaluation times, can be adjusted to obtain significant speedups. In the multiplexer domain, AES finds solutions 14 times faster than synchronous evolution (i.e., when ), which is a remarkable speedup indeed."
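A minimal sketch of how such a rule set could be scored on the 11-multiplexer is shown below. The (conditions, action) tuple encoding, the first-matching-rule conflict resolution, and the default output of 0 when no rule fires are assumptions made for illustration; the text above fixes only the grammar, not these details:

```python
# Score a rule-based candidate against all 2**11 = 2048 multiplexer inputs.
def evaluate_candidate(rules):
    correct = 0
    for x in range(2 ** 11):
        bits = [(x >> i) & 1 for i in range(11)]  # bits[0..2]: address, bits[3..10]: data
        address = bits[0] * 4 + bits[1] * 2 + bits[2]
        target = bits[3 + address]                # ground-truth multiplexer output
        fired = [action for conditions, action in rules
                 if all(bits[i] == v for i, v in conditions)]
        output = bits[fired[0]] if fired else 0
        correct += (output == target)
    return correct / 2 ** 11

# The rule from the text: output data bit 6 (index 3 + 6 = 9) when the
# first three (address) bits are 110.
rule = ([(0, 1), (1, 1), (2, 0)], 9)
print(evaluate_candidate([rule]))
```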
76
+ },
77
+ {
78
+ "section_id": "4.3",
79
+ "parent_section_id": "4",
80
+ "section_name": "Image-Captioning Domain",
81
+ "text": "Deep learning has recently provided state-of-the-art performance in image\ncaptioning, and several diverse architectures have been suggested\n(Vinyals et al., 2015 ###reference_51###; Xu et al., 2015 ###reference_53###; Karpathy and Li, 2015 ###reference_17###; You et al., 2016 ###reference_54###; Vedantam et al., 2017 ###reference_50###; Hossain et al., 2019 ###reference_16###; Po\u0142ap, 2023 ###reference_33###).\nThe input to an image-captioning system is a raw image, and the output is a text\ncaption describing the contents of the image. In many of these\narchitectures, a convolutional network is used to process the image into\nan embedding. This embedding is then given to recurrent layers such as LSTMs to\ngenerate coherent sentences with long-range dependencies. Further, an U-NET convolutional architecture can be used to segment objects before classifying them for captioning, improving performance (Po\u0142ap, 2023 ###reference_33###).\nAs is common in existing approaches, a pretrained ImageNet model\n(Szegedy et al., 2016 ###reference_47###) was used to produce the initial image embeddings. The evolved\nnetwork took an image embedding as input, along with a sequence of one-hot text\ninputs. During training, the text input contained the previous word of the ground\ntruth caption; during inference, it contained the previous word generated by the model\n(Vinyals et al., 2015 ###reference_51###; Karpathy and Li, 2015 ###reference_17###). In the initial CoDeepNEAT population, the\nimage and text inputs were fed to a shared embedding layer, which was densely\nconnected to a softmax output over words. From this simple starting point,\nCoDeepNEAT evolved architectures that included fully connected layers, LSTM\nlayers, pooling layers, concatenation layers, as well as sets of hyperparameters associated\nwith each layer, along with a set of global hyperparameters (Miikkulainen et al., 2019 ###reference_31###).\nIn fact, the well-known Show-and-Tell image-captioning architecture\n(Vinyals et al., 2015 ###reference_51###) is in this search space.\nTwo separate runs of CoDeepNEAT for evolving DNNs in the\nimage-captioning domain were performed. The baseline version of CoDeepNEAT was synchronous, while the improved version, called CoDeepNEAT-AES, made use of asynchronous evaluations. To keep the\ncomputational costs reasonable, during evolution, the networks were trained for\nsix epochs, and on one-fifth of the entire MSCOCO image-captioning\ndataset. Identical hyperparameters were used in both runs: Population sizes were and , , divided into and species.\nFor CoDeepNEAT-AES, and (i.e., , on par with in the multiplexer experiment) were used. The worker nodes were composed\nof up to Amazon EC2 spot instances with GPU support for training DNNs.\nBecause EC2 spot instances are inherently unreliable and may be temporarily\nunavailable for any reason, both runs were started at the same time to\nremove a potential source of bias. Each method was run until convergence,\nwhich took about 89 hrs. 
Due to this cost (in terms of time, money, and carbon footprint)\nthe conclusions were drawn from these single runs, as is common in modern\ndeep-learning experiments.\n###figure_7### ###figure_8### ###figure_9### The CoDeepNEAT and CoDeepNEAT-AES runs resulted in a similar range of architectures, similar to those reported in the original CoDeepNEAT experiments (Miikkulainen et al., 2019 ###reference_31###).\nFrom Figures 7 ###reference_###, 8 ###reference_### and 9 ###reference_###, it is clear that CoDeepNEAT-AES runs significantly faster than the synchronous version of CoDeepNEAT. Although both versions achieve similar fitness after the same number of generations (Figure 7 ###reference_###), each generation of CoDeepNEAT-AES takes far less time (Figure 8 ###reference_###), and as a result, CoDeepNEAT-AES progresses much faster in wall-clock time (Figure 9 ###reference_###). At 130,000 seconds elapsed, CoDeepNEAT-AES was able to reach the same fitness as CoDeepNEAT at 300,000 seconds, thus resulting in a 2.3-fold speedup. Also, when run until 300,000 seconds, CoDeepNEAT-AES was able to find better solutions than CoDeepNEAT.\nOverall, the experimental results suggest that AES accelerates the convergence of CoDeepNEAT over two fold in the image-captioning domain."
82
+ },
83
+ {
84
+ "section_id": "5",
85
+ "parent_section_id": null,
86
+ "section_name": "Discussion and Future Work",
87
+ "text": "As the experimental results show, AES provides a significant speedup in\nthe sorting-network, multiplexer, and image-captioning domains. Furthermore, the hyperparameter\n is crucial to the improved performance of AES. With little variation in the evaluation times, it needs to be large; with more variation, small. When approaches 1 (i.e., approaches )\nAES becomes similar to synchronous evaluation, and increasingly needs to wait for the slowest individuals to finish evaluation.\nHowever, setting a value for that is too large also hurts\nperformance. This is likely because as gets smaller, both the returned\nindividuals and the new population that is generated from them become less\ndiverse.\nThe histogram in Figure 10 ###reference_### reveals how AES improves\nperformance over synchronous evaluation. This plot visualizes the\ndistribution of times at which individuals (along with their fitness) return from\nevaluation over the duration of an average generation. In the\nsynchronous version of CoDeepNEAT, individuals in the population are submitted\nat the same time, and all come back in the same generation before evolution can proceed. As a\nresult, the histogram for synchronous CoDeepNEAT is roughly a Gaussian distribution,\nwith some individuals returning early and some returning late. A lot of time is thus\nwasted waiting for the last few individuals. On the other hand, this delay does not occur with CoDeepNEAT-AES; the\ndistribution is uniform, indicating that individuals are returned at a\nsteady, regular rate over the course of a generation and there are no\nslow individuals that might bottleneck the EA.\nThere is one measure where the synchronous version of CoDeepNEAT has\nan advantage. This result is seen in the histogram in\nFigure 11 ###reference_###, which visualizes the delay between\nwhen an individual is placed in the Evaluation Queue and when\nthat same individual assigned to a worker node. The delay is\nslightly higher for CoDeepNEAT-AES than the synchronous version of CoDeepNEAT. The reason is\nthat CoDeepNEAT-AES maintains more individuals in the Evaluation Queue. However, as the fitness plot in Figure 9 ###reference_###\nindicates, this longer delay does not affect performance significantly.\nAnother aspect of CoDeepNEAT-AES that differs from its synchronous counterpart is that it tends to favor candidates with lower evaluation times: Their lineage can go through more generations in the same amount of time. Evidence of this effect can be seen in Figure 8 ###reference_### where the mean amount of time spent per generation is significantly lower for CoDeepNEAT-AES than for CoDeepNEAT. While asynchronous EAs are known to have such evaluation bias, and efforts have been developed to avoid it (Guijt et al., 2023 ###reference_11###), it is not undesirable in the case of neuroevolution. Discovering DNNs that are faster to train is often a secondary goal of many architecture search algorithms, and as Figures 7 ###reference_### and 9 ###reference_### show, CoDeepNEAT-AES is able to achieve the same quality of solutions as CoDeepNEAT while taking much less time.\nAlthough AES was developed primarily as a method for ENAS, it is a general method of asynchronous evolution. In future work, CoDeepNEAT-AES may be combined with other\nimprovements such as age-layering and learning curve prediction (Hodjat et al., 2016 ###reference_15###; Klein et al., 2017 ###reference_21###). 
Furthermore,\nmore extensive experiments can be done to analyze how different values for \nand will affect the performance of CoDeepNEAT-AES. It should also be possible to use the generic version of AES to scale up evolutionary experiments in many other domains as well.\n###figure_10### ###figure_11###"
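A minimal, self-contained sketch of the generic single-population AES loop discussed above (and summarized in the Figure 2 caption) may help fix the roles of K, M, and L. All helper names (fitness, breed) and the exponentially distributed evaluation times are hypothetical toy choices for illustration, not the paper's implementation.

import heapq
import random

K = 40   # size of the Evaluation Queue
M = 10   # batch size: evolution proceeds once M evaluations return
L = 5    # number of elites carried between generations
GENERATIONS = 20

def fitness(x):            # toy fitness: maximize -(x - 3)^2
    return -(x - 3.0) ** 2

def breed(parents, n):     # toy variation: mutate randomly chosen parents
    return [p + random.gauss(0, 0.5) for p in random.choices(parents, k=n)]

# Fill the queue with K individuals, each tagged with a simulated finish time.
now = 0.0
in_flight = [(random.expovariate(1.0), random.uniform(-10, 10))
             for _ in range(K)]
heapq.heapify(in_flight)
elites = []

for gen in range(GENERATIONS):
    batch = []
    while len(batch) < M:                 # wait only for the first M returns
        now, x = heapq.heappop(in_flight)
        batch.append((fitness(x), x))
    pool = sorted(batch + elites, reverse=True)
    elites = pool[:L]                     # update the elite set
    for child in breed([x for _, x in pool], M):   # refill the queue
        heapq.heappush(in_flight, (now + random.expovariate(1.0), child))
    print(f"gen {gen}: best fitness {elites[0][0]:.3f} at t={now:.2f}")

Because only M of the K in-flight individuals are awaited each generation, no worker ever idles waiting for the slowest evaluations, which is exactly the effect visible in the Figure 10 histogram.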
88
+ },
89
+ {
90
+ "section_id": "6",
91
+ "parent_section_id": null,
92
+ "section_name": "Conclusion",
93
+ "text": "This paper proposed a new asynchronous EA called AES designed for complex problems such as optimizing the architecture of DNNs. It can use the available distributed computing resources efficiently with both single and multi-population EAs, and in verifiable discovery and in open-ended optimization tasks. AES works by maintaining a queue of networks that are ready to be evaluated, and by proceeding with evolution once a fraction of the networks have returned from the workers. Experimental results in the sorting-network, multiplexer, and image-captioning domains show that AES can attain a two to 14-fold speedup over its synchronous counterpart with no loss in accuracy or final fitness. AES is thus a promising way to extend evolutionary optimization to complex domains where traditional parallelization methods are ineffective."
94
+ }
95
+ ],
96
+ "appendix": [],
97
+ "tables": {},
98
+ "image_paths": {
99
+ "1": {
100
+ "figure_path": "2308.04102v3_figure_1.png",
101
+ "caption": "Figure 1: A visualization of how CoDeepNEAT assembles networks for\nfitness evaluation. The blueprints and modules are evolved in separate populations, divided into species (or subpopulations). For evaluation, they are assembled into a network by replacing the blueprint nodes\nwith modules drawn from the corresponding module species. This approach makes it possible to evolve repetitive and deep structures seen in many recent successful DNNs.",
102
+ "url": "http://arxiv.org/html/2308.04102v3/extracted/5325335/coevolutionapproach.png"
103
+ },
104
+ "2": {
105
+ "figure_path": "2308.04102v3_figure_2.png",
106
+ "caption": "Figure 2: An overview of generic single-population AES. R\ud835\udc45Ritalic_R workers pull individuals from the Evaluation Queue, evaluate them, and return them (with fitnesses F\ud835\udc39Fitalic_F) to the server. As soon as M\ud835\udc40Mitalic_M individuals have been returned, the server uses them and the L\ud835\udc3fLitalic_L elite individuals (from the previous generation) to create a new generation of M\ud835\udc40Mitalic_M individuals. They are then placed into the Evaluation Queue, and the elite set is updated. In this manner, the workers in AES do not have to stay idle waiting for a generation to finish their evaluations. The ratio of M/K=D\ud835\udc40\ud835\udc3e\ud835\udc37M/K=Ditalic_M / italic_K = italic_D strikes a balance between diversity and efficiency.",
107
+ "url": "http://arxiv.org/html/2308.04102v3/x1.png"
108
+ },
109
+ "3": {
110
+ "figure_path": "2308.04102v3_figure_3.png",
111
+ "caption": "Figure 3: An overview of CoDeepNEAT-AES. The AES process of Figure 2 is extended with blueprint and module populations. Complete networks are assembled from the blueprints and modules and sent to the Evaluation Queue. When M\ud835\udc40Mitalic_M networks are returned, they are disassembled into their blueprints and modules, whose fitnesses are calculated as an average over all networks in which they participated. These blueprints and modules are then merged into the current populations, replacing any existing fitnesses. NEAT neuroevolution is run in each population, replacing the bottom N\u2212L\ud835\udc41\ud835\udc3fN-Litalic_N - italic_L with new blueprints/modules, and updating the species. Similar to generic AES, the workers are fully employed in evaluating individuals, resulting in significant speedup over synchronized CoDeepNEAT.",
112
+ "url": "http://arxiv.org/html/2308.04102v3/x2.png"
113
+ },
114
+ "4": {
115
+ "figure_path": "2308.04102v3_figure_4.png",
116
+ "caption": "Figure 4: A Four-Input Sorting Network, represented as ((0,1),(2,3),(0,2),(1,3),(1,2)). This network takes as its input (left) four numbers and produces output (right) where those numbers are sorted (large to small, top to bottom). Each comparator (a connection between the lines) swaps the numbers on its two lines if they are not in order, otherwise it does nothing. This network has five comparators, and is the minimal four-input sorting network. Minimal networks are generally not known for input sizes larger than eight, and designing them is a challenging optimization problem.",
117
+ "url": "http://arxiv.org/html/2308.04102v3/extracted/5325335/sample-sn.png"
118
+ },
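The comparator-list representation in the Figure 4 caption can be executed directly. Below is a minimal sketch that applies the five-comparator network ((0,1),(2,3),(0,2),(1,3),(1,2)) and verifies, by exhausting all 24 input orderings, that it sorts four numbers from large to small (top line to bottom line, matching the caption); the function names are illustrative only.

from itertools import permutations

NETWORK = [(0, 1), (2, 3), (0, 2), (1, 3), (1, 2)]

def apply_network(values, network):
    v = list(values)
    for i, j in network:
        if v[i] < v[j]:          # swap so the larger value stays on line i
            v[i], v[j] = v[j], v[i]
    return v

assert all(apply_network(p, NETWORK) == [4, 3, 2, 1]
           for p in permutations([1, 2, 3, 4]))
print("all 24 orderings sorted correctly with 5 comparators")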
119
+ "5": {
120
+ "figure_path": "2308.04102v3_figure_5.png",
121
+ "caption": "Figure 5: An overview of how different values for M\ud835\udc40Mitalic_M (batch size) affect the convergence time in the sorting-network domain. The settings K=1000\ud835\udc3e1000K=1000italic_K = 1000 and M=10\ud835\udc4010M=10italic_M = 10 (D=100\ud835\udc37100D=100italic_D = 100) provide the best performance for this problem.\nThe rightmost box (M\ud835\udc40Mitalic_M=1000) amounts to synchronous evolution, thus demonstrating that AES results in over two-fold speedup even with limited variation in evaluation times.\nThe differences are statistically significant with p=2.4\u00d710\u22125\ud835\udc5d2.4superscript105p=2.4\\times 10^{-5}italic_p = 2.4 \u00d7 10 start_POSTSUPERSCRIPT - 5 end_POSTSUPERSCRIPT (M=10\ud835\udc4010M=10italic_M = 10 vs. M=1000\ud835\udc401000M=1000italic_M = 1000), p=4.8\u00d710\u22123\ud835\udc5d4.8superscript103p=4.8\\times 10^{-3}italic_p = 4.8 \u00d7 10 start_POSTSUPERSCRIPT - 3 end_POSTSUPERSCRIPT (M=50\ud835\udc4050M=50italic_M = 50 vs. M=1000\ud835\udc401000M=1000italic_M = 1000), and p=8.9\u00d710\u22123\ud835\udc5d8.9superscript103p=8.9\\times 10^{-3}italic_p = 8.9 \u00d7 10 start_POSTSUPERSCRIPT - 3 end_POSTSUPERSCRIPT (M=250\ud835\udc40250M=250italic_M = 250 vs. M=1000\ud835\udc401000M=1000italic_M = 1000).",
122
+ "url": "http://arxiv.org/html/2308.04102v3/extracted/5325335/Sorting-Simulation-Results.png"
123
+ },
124
+ "6": {
125
+ "figure_path": "2308.04102v3_figure_6.png",
126
+ "caption": "Figure 6: An overview of how different values for M\ud835\udc40Mitalic_M (batch size) affect the convergence time in the multiplexer domain. The settings K=4000\ud835\udc3e4000K=4000italic_K = 4000 and M=1000\ud835\udc401000M=1000italic_M = 1000 (D=4\ud835\udc374D=4italic_D = 4) provide the best performance for this problem.\nThe rightmost box (M\ud835\udc40Mitalic_M=4000) amounts to synchronous evolution, thus demonstrating that AES results in a 14-fold speedup in this domain.\nThe differences are statistically significant with p=1.03\u00d710\u221211\ud835\udc5d1.03superscript1011p=1.03\\times 10^{-11}italic_p = 1.03 \u00d7 10 start_POSTSUPERSCRIPT - 11 end_POSTSUPERSCRIPT (M=1000\ud835\udc401000M=1000italic_M = 1000 vs. M=4000\ud835\udc404000M=4000italic_M = 4000) and p=6.29\u00d710\u221209\ud835\udc5d6.29superscript1009p=6.29\\times 10^{-09}italic_p = 6.29 \u00d7 10 start_POSTSUPERSCRIPT - 09 end_POSTSUPERSCRIPT (M=1000\ud835\udc401000M=1000italic_M = 1000 vs. M=500\ud835\udc40500M=500italic_M = 500).",
127
+ "url": "http://arxiv.org/html/2308.04102v3/extracted/5325335/Asynchrony-Simulation-Results.png"
128
+ },
129
+ "7": {
130
+ "figure_path": "2308.04102v3_figure_7.png",
131
+ "caption": "Figure 7: A plot of fitness vs. number of generations elapsed for synchronous CoDeepNEAT and CoDeepNEAT-AES. The algorithms perform comparably at each generation. However, CoDeepNEAT-AES is much faster, as seen in Figures 7 and 9 (for this reason, the CoDeepNEAT was run only 14 generations while CoDeepNEAT-AES was run until 50.)",
132
+ "url": "http://arxiv.org/html/2308.04102v3/extracted/5325335/fit_vs_gen.png"
133
+ },
134
+ "8": {
135
+ "figure_path": "2308.04102v3_figure_8.png",
136
+ "caption": "Figure 8: A histogram of time per generation for synchronous CoDeepNEAT and CoDeepNEAT-AES. CoDeepNEAT-AES uses significantly less time per generation.",
137
+ "url": "http://arxiv.org/html/2308.04102v3/extracted/5325335/Time_Per_Generation_Seconds_hist.png"
138
+ },
139
+ "9": {
140
+ "figure_path": "2308.04102v3_figure_9.png",
141
+ "caption": "Figure 9: A plot of fitness vs. wallclock time elapsed for synchronous CoDeepNEAT and CoDeepNEAT-AES. Each marker in the plot represents the fitness at a different generation. CoDeepNEAT-AES improves faster than regular CoDeepNEAT and achieves a higher fitness in the same amount of time.",
142
+ "url": "http://arxiv.org/html/2308.04102v3/extracted/5325335/fit_vs_time.png"
143
+ },
144
+ "10": {
145
+ "figure_path": "2308.04102v3_figure_10.png",
146
+ "caption": "Figure 10: A histogram of the times when individuals return from evaluation over the course of an average generation for both algorithms. CoDeepNEAT-AES has a uniform distribution while CoDeepNEAT has a Gaussian distribution, thus demonstrating that CoDeepNEAT-AES wastes little time waiting for slow individuals.",
147
+ "url": "http://arxiv.org/html/2308.04102v3/extracted/5325335/Normalized_Return_Time_hist.png"
148
+ },
149
+ "11": {
150
+ "figure_path": "2308.04102v3_figure_11.png",
151
+ "caption": "Figure 11: A histogram comparing the delay between submission of individuals to the Evaluation Queue and when they actually start training. CoDeepNEAT-AES has a slightly higher delay, but it is not sufficient to affect performance (as demonstrated by Figure 9).",
152
+ "url": "http://arxiv.org/html/2308.04102v3/extracted/5325335/Delay_Between_Submission_and_Training_Time_Seconds_hist.png"
153
+ }
154
+ },
155
+ "validation": true,
156
+ "references": [
157
+ {
158
+ "1": {
159
+ "title": "Performance analysis of synchronous and asynchronous\ndistributed genetic algorithms on multiprocessors.",
160
+ "author": "Abdelhafez, A., Alba, E.,\nLuque, G., 2019.",
161
+ "venue": "Swarm and Evolutionary Computation\n49, 147\u2013157.",
162
+ "url": null
163
+ }
164
+ },
165
+ {
166
+ "2": {
167
+ "title": "Parallel evolutionary algorithms: A review .",
168
+ "author": "Adamidis, P., 1998.",
169
+ "venue": null,
170
+ "url": null
171
+ }
172
+ },
173
+ {
174
+ "3": {
175
+ "title": "Finding Better Sorting Networks.",
176
+ "author": "Baddar, S.W.A., 2009.",
177
+ "venue": "Ph.D. thesis. Kent State University.",
178
+ "url": null
179
+ }
180
+ },
181
+ {
182
+ "4": {
183
+ "title": "Gp-coach: Genetic programming-based learning of\ncompact and accurate fuzzy rule-based classification systems for\nhigh-dimensional problems.",
184
+ "author": "Berlanga, F.J., Rivera, A.,\ndel Jes\u00fas, M.J., Herrera, F.,\n2010.",
185
+ "venue": "Information Sciences 180,\n1183\u20131200.",
186
+ "url": null
187
+ }
188
+ },
189
+ {
190
+ "5": {
191
+ "title": "A partially asynchronous global parallel genetic\nalgorithm, in: Proceedings of the Genetic and\nEvolutionary Computation Conference Companion, pp.\n1771\u20131778.",
192
+ "author": "Chitty, D.M., 2021.",
193
+ "venue": null,
194
+ "url": null
195
+ }
196
+ },
197
+ {
198
+ "6": {
199
+ "title": "A unified architecture for natural language\nprocessing: Deep neural networks with multitask learning, in:\nProceedings of the 25th international conference on\nMachine learning, ACM. pp. 160\u2013167.",
200
+ "author": "Collobert, R., Weston, J.,\n2008.",
201
+ "venue": null,
202
+ "url": null
203
+ }
204
+ },
205
+ {
206
+ "7": {
207
+ "title": "Asynchronous master-slave parallelization of\ndifferential evolution for multi-objective optimization.",
208
+ "author": "Depolli, M., Trobec, R.,\nFilipic, B., 213.",
209
+ "venue": "Evolutionary computation 21,\n261\u2013291.",
210
+ "url": null
211
+ }
212
+ },
213
+ {
214
+ "8": {
215
+ "title": "An image is worth 16x16 words: Transformers for image\nrecognition at scale.",
216
+ "author": "Dosovitskiy, A., Beyer, L.,\nKolesnikov, A., Weissenborn, D.,\nZhai, X., Unterthiner, T.,\nDehghani, M., Minderer, M.,\nHeigold, G., Gelly, S., et al.,\n2020.",
217
+ "venue": "arXiv preprint arXiv:2010.11929 .",
218
+ "url": null
219
+ }
220
+ },
221
+ {
222
+ "9": {
223
+ "title": "Variants of genetic algorithms and their\napplications, in: Applied Genetic Algorithm and Its\nVariants: Case Studies and New Developments. Springer,\npp. 1\u201320.",
224
+ "author": "Goswami, R.D., Chakraborty, S.,\nMisra, B., 2023.",
225
+ "venue": null,
226
+ "url": null
227
+ }
228
+ },
229
+ {
230
+ "10": {
231
+ "title": "Speech recognition with deep recurrent neural\nnetworks, in: 2013 IEEE International Conference on\nAcoustics, Speech and Signal Processing, IEEE. pp.\n6645\u20136649.",
232
+ "author": "Graves, A., Mohamed, A.r.,\nHinton, G., 2013.",
233
+ "venue": null,
234
+ "url": null
235
+ }
236
+ },
237
+ {
238
+ "11": {
239
+ "title": "The impact of asynchrony on parallel model-based\neas.",
240
+ "author": "Guijt, A., Thierens, D.,\nAlderliesten, T., Bosman, P.A.N.,\n2023.",
241
+ "venue": "arXiv:2303.15543.",
242
+ "url": null
243
+ }
244
+ },
245
+ {
246
+ "12": {
247
+ "title": "A study on efficient asynchronous parallel\nmulti-objective evolutionary algorithm with waiting time limitation, in:\nInternational Conference on the Theory and Practice of\nNatural Computing, Springer. pp.\n121\u2013132.",
248
+ "author": "Harada, T., 2020.",
249
+ "venue": null,
250
+ "url": null
251
+ }
252
+ },
253
+ {
254
+ "13": {
255
+ "title": "Analysis of semi-asynchronous multi-objective\nevolutionary algorithm with different asynchronies.",
256
+ "author": "Harada, T., Takadama, K.,\n2020.",
257
+ "venue": "Soft Computing 24,\n2917\u20132939.",
258
+ "url": null
259
+ }
260
+ },
261
+ {
262
+ "14": {
263
+ "title": "Identity mappings in deep residual networks.",
264
+ "author": "He, K., Zhang, X., Ren,\nS., Sun, J., 2016.",
265
+ "venue": "CoRR abs/1603.05027.",
266
+ "url": null
267
+ }
268
+ },
269
+ {
270
+ "15": {
271
+ "title": "Distributed age-layered novelty search, in:\nProceedings of the Fifteenth International Conference on\nthe Synthesis and Simulation of Living Systems (Alife\u201916).",
272
+ "author": "Hodjat, B., Shahrzad, H.,\nMiikkulainen, R., 2016.",
273
+ "venue": null,
274
+ "url": null
275
+ }
276
+ },
277
+ {
278
+ "16": {
279
+ "title": "A comprehensive survey of deep learning for image\ncaptioning.",
280
+ "author": "Hossain, M.Z., Sohel, F.,\nShiratuddin, M.F., Laga, H.,\n2019.",
281
+ "venue": "ACM Computing Surveys (CsUR) 51,\n1\u201336.",
282
+ "url": null
283
+ }
284
+ },
285
+ {
286
+ "17": {
287
+ "title": "Deep visual-semantic alignments for generating image\ndescriptions, in: Proc. of CVPR, pp.\n3128\u20133137.",
288
+ "author": "Karpathy, A., Li, F.F.,\n2015.",
289
+ "venue": null,
290
+ "url": null
291
+ }
292
+ },
293
+ {
294
+ "18": {
295
+ "title": "Hierarchical asynchronous genetic algorithms for\nparallel/distributed simulation-based optimization.",
296
+ "author": "Kim, J., 1994.",
297
+ "venue": "Ph.D. thesis. University of Arizona.",
298
+ "url": null
299
+ }
300
+ },
301
+ {
302
+ "19": {
303
+ "title": "Uberflow: A gpu-based particle engine, in:\nHWWS \u201904: Proceedings of the ACM\nSIGGRAPH/EUROGRAPHICS conference on Graphics hardware,\nACM, New York, NY, USA. pp.\n115\u2013122.",
304
+ "author": "Kipfer, P., Segal, M.,\nWestermann, R., 2004.",
305
+ "venue": "doi:http://doi.acm.org.ezproxy.lib.utexas.edu/10.1145/1058129.1058146.",
306
+ "url": null
307
+ }
308
+ },
309
+ {
310
+ "20": {
311
+ "title": "Hyperparameter tuning for deep reinforcement learning\napplications.",
312
+ "author": "Kiran, M., Ozyildirim, M.,\n2022.",
313
+ "venue": "arXiv preprint arXiv:2201.11182 .",
314
+ "url": null
315
+ }
316
+ },
317
+ {
318
+ "21": {
319
+ "title": "Learning curve prediction with bayesian neural\nnetworks, in: International Conference on Learning\nRepresentations.",
320
+ "author": "Klein, A., Falkner, S.,\nSpringenberg, J.T., Hutter, F.,\n2017.",
321
+ "venue": null,
322
+ "url": null
323
+ }
324
+ },
325
+ {
326
+ "22": {
327
+ "title": "Art of Computer Programming: Sorting and\nSearching. volume 3.",
328
+ "author": "Knuth, D.E., 1998.",
329
+ "venue": "2 ed., Addison-Wesley\nProfessional.",
330
+ "url": null
331
+ }
332
+ },
333
+ {
334
+ "23": {
335
+ "title": "A hierarchical approach to learning the boolean\nmultiplexer function.",
336
+ "author": "Koza, J.R., 1990.",
337
+ "venue": "Foundations of genetic algorithms ,\n171\u2013192.",
338
+ "url": null
339
+ }
340
+ },
341
+ {
342
+ "24": {
343
+ "title": "Regularized evolutionary population-based training,\nin: Proceedings of the Genetic and Evolutionary\nComputation Conference.",
344
+ "author": "Liang, J., Gonzalez, S.,\nShahrzad, H., Miikkulainen, R.,\n2021.",
345
+ "venue": null,
346
+ "url": null
347
+ }
348
+ },
349
+ {
350
+ "25": {
351
+ "title": "Evolutionary neural AutoML for deep learning, in:\nProceedings of the Genetic and Evolutionary Computation\nConference (GECCO-2019).",
352
+ "author": "Liang, J., Meyerson, E.,\nHodjat, B., Fink, D.,\nMutch, K., Miikkulainen, R.,\n2019.",
353
+ "venue": "URL: http://nn.cs.utexas.edu/?liang:gecco19.",
354
+ "url": null
355
+ }
356
+ },
357
+ {
358
+ "26": {
359
+ "title": "Evolutionary architecture search for deep multitask\nnetworks, in: Proceedings of the Genetic and\nEvolutionary Computation Conference.",
360
+ "author": "Liang, J., Meyerson, E.,\nMiikkulainen, R., 2018.",
361
+ "venue": null,
362
+ "url": null
363
+ }
364
+ },
365
+ {
366
+ "27": {
367
+ "title": "CMA-ES for hyperparameter optimization of deep\nneural networks.",
368
+ "author": "Loshchilov, I., Hutter, F.,\n2016a.",
369
+ "venue": "CoRR abs/1604.07269.",
370
+ "url": null
371
+ }
372
+ },
373
+ {
374
+ "28": {
375
+ "title": "CMA-ES for hyperparameter optimization of deep\nneural networks.",
376
+ "author": "Loshchilov, I., Hutter, F.,\n2016b.",
377
+ "venue": "arXiv preprint arXiv:1604.07269 .",
378
+ "url": null
379
+ }
380
+ },
381
+ {
382
+ "29": {
383
+ "title": "Nsga-net: a multi-objective genetic algorithm for\nneural architecture search.",
384
+ "author": "Lu, Z., Whalen, I.,\nBoddeti, V., Dhebar, Y.,\nDeb, K., Goodman, E.,\nBanzhaf, W., 2018.",
385
+ "venue": "arXiv preprint arXiv:1810.03522 .",
386
+ "url": null
387
+ }
388
+ },
389
+ {
390
+ "30": {
391
+ "title": "The ECJ Owner\u2019s Manual. 22nd\ned.",
392
+ "author": "Luke, S., 2014.",
393
+ "venue": "URL: http://cs.gmu.edu/~eclab/projects/ecj/.",
394
+ "url": null
395
+ }
396
+ },
397
+ {
398
+ "31": {
399
+ "title": "Evolving deep neural networks, in:\nArtificial Intelligence in the Age of Neural Networks and\nBrain Computing. Elsevier, pp.\n293\u2013312.",
400
+ "author": "Miikkulainen, R., Liang, J.,\nMeyerson, E., Rawal, A.,\nFink, D., Francon, O.,\nRaju, B., Shahrzad, H.,\nNavruzyan, A., Duffy, N., et al.,\n2019.",
401
+ "venue": null,
402
+ "url": null
403
+ }
404
+ },
405
+ {
406
+ "32": {
407
+ "title": "A systematic literature review of the successors of\n\u201cneuroevolution of augmenting topologies\u201d.",
408
+ "author": "Papavasileiou, E., Cornelis, J.,\nJansen, B., 2021.",
409
+ "venue": "Evolutionary Computation 29,\n1\u201373.",
410
+ "url": null
411
+ }
412
+ },
413
+ {
414
+ "33": {
415
+ "title": "Hybrid image analysis model for hashtag\nrecommendation through the use of deep learning methods.",
416
+ "author": "Po\u0142ap, D., 2023.",
417
+ "venue": "Expert Systems with Applications ,\n120566.",
418
+ "url": null
419
+ }
420
+ },
421
+ {
422
+ "34": {
423
+ "title": "Effect of global parallelism on the behavior of a\nsteady state genetic algorithm for design optimization., in:\nIn Proceedings of the 1999 Congress on Evolutionary\nComputation,. IEEE.",
424
+ "author": "Rasheed, K., Davison, B.D.,\n1999.",
425
+ "venue": null,
426
+ "url": null
427
+ }
428
+ },
429
+ {
430
+ "35": {
431
+ "title": "Regularized evolution for image classifier\narchitecture search, in: Proceedings of the AAAI\nConference on Artificial Intelligence, pp. 4780\u20134789.",
432
+ "author": "Real, E., Aggarwal, A.,\nHuang, Y., Le, Q.V.,\n2019.",
433
+ "venue": null,
434
+ "url": null
435
+ }
436
+ },
437
+ {
438
+ "36": {
439
+ "title": "Parallel evolutionary optimization for neuromorphic\nnetwork training, in: 2016 2nd Workshop on Machine\nLearning in HPC Environments (MLHPC), IEEE. pp.\n36\u201346.",
440
+ "author": "Schuman, C.D., Disney, A.,\nSingh, S.P., Bruer, G.,\nMitchell, J.P., Klibisz, A.,\nPlank, J.S., 2016.",
441
+ "venue": null,
442
+ "url": null
443
+ }
444
+ },
445
+ {
446
+ "37": {
447
+ "title": "Avoiding excess computation in asynchronous\nevolutionary algorithms.",
448
+ "author": "Scott, E.O., Coletti, M.,\nSchuman, C.D., Kay, B.,\nKulkarni, S.R., Parsa, M.,\nGunaratne, C., De Jong, K.A.,\n2023.",
449
+ "venue": "Expert Systems 40,\ne13100.",
450
+ "url": null
451
+ }
452
+ },
453
+ {
454
+ "38": {
455
+ "title": "Evaluation-time bias in asynchronous evolutionary\nalgorithms, in: Proceedings of the companion publication\nof the 2015 annual conference on genetic and evolutionary computation, pp.\n1209\u20131212.",
456
+ "author": "Scott, E.O., De Jong, K.A.,\n2015a.",
457
+ "venue": null,
458
+ "url": null
459
+ }
460
+ },
461
+ {
462
+ "39": {
463
+ "title": "Understanding simple asynchronous evolutionary\nalgorithms, in: Foundations of Genetic Algorithms.",
464
+ "author": "Scott, E.O., De Jong, K.A.,\n2015b.",
465
+ "venue": null,
466
+ "url": null
467
+ }
468
+ },
469
+ {
470
+ "40": {
471
+ "title": "Enhanced optimization with composite objectives and\nnovelty selection, in: Proceedings of the 2018\nConference on Artificial Life, Tokyo, Japan.",
472
+ "author": "Shahrzad, H., Fink, D.,\nMiikkulainen, R., 2018.",
473
+ "venue": "URL: http://www.cs.utexas.edu/users/ai-lab?shahrzad:alife18.",
474
+ "url": null
475
+ }
476
+ },
477
+ {
478
+ "41": {
479
+ "title": "Tackling the boolean multiplexer function using a\nhighly distributed genetic programming system, in:\nGenetic Programming Theory and Practice XII.\nSpringer, pp. 167\u2013179.",
480
+ "author": "Shahrzad, H., Hodjat, B.,\n2015.",
481
+ "venue": null,
482
+ "url": null
483
+ }
484
+ },
485
+ {
486
+ "42": {
487
+ "title": "Enhanced optimization with composite objectives and\nnovelty pulsation, in: Banzhaf, W.,\nGoodman, E., Sheneman, L.,\nTrujillo, L., Worzel, B. (Eds.),\nGenetic Programming Theory and Practice XVII.\nSpringer, New York, pp.\n275\u2013293.",
488
+ "author": "Shahrzad, H., Hodjat, B.,\nDolle, C., Denissov, A.,\nLau, S., Goodhew, D.,\nDyer, J., Miikkulainen, R.,\n2020.",
489
+ "venue": null,
490
+ "url": null
491
+ }
492
+ },
493
+ {
494
+ "43": {
495
+ "title": "Scalable bayesian optimization using deep neural\nnetworks, in: International conference on machine\nlearning, pp. 2171\u20132180.",
496
+ "author": "Snoek, J., Rippel, O.,\nSwersky, K., Kiros, R.,\nSatish, N., Sundaram, N.,\nPatwary, M., Prabhat, M.,\nAdams, R., 2015.",
497
+ "venue": null,
498
+ "url": null
499
+ }
500
+ },
501
+ {
502
+ "44": {
503
+ "title": "Real-time neuroevolution in the NERO video game.",
504
+ "author": "Stanley, K.O., Bryant, B.D.,\nMiikkulainen, R., 2005.",
505
+ "venue": "IEEE Transactions on Evolutionary Computation\n9, 653\u2013668.",
506
+ "url": null
507
+ }
508
+ },
509
+ {
510
+ "45": {
511
+ "title": "Evolving Neural Networks Through Augmenting\nTopologies.",
512
+ "author": "Stanley, K.O., Miikkulainen, R.,\n2002.",
513
+ "venue": "Evolutionary Computation 10,\n99\u2013127.",
514
+ "url": null
515
+ }
516
+ },
517
+ {
518
+ "46": {
519
+ "title": "Parallel evolutionary algorithms.",
520
+ "author": "Sudholt, D., 2015.",
521
+ "venue": "Springer Handbook of Computational Intelligence ,\n929\u2013959.",
522
+ "url": null
523
+ }
524
+ },
525
+ {
526
+ "47": {
527
+ "title": "Rethinking the inception architecture for computer\nvision, in: Proc. of CVPR, pp.\n2818\u20132826.",
528
+ "author": "Szegedy, C., Vanhoucke, V.,\nIoffe, S., Shlens, J., ,\nWojna, Z., 2016.",
529
+ "venue": null,
530
+ "url": null
531
+ }
532
+ },
533
+ {
534
+ "48": {
535
+ "title": "Efficientnet: Rethinking model scaling for\nconvolutional neural networks, in: International\nconference on machine learning, PMLR. pp.\n6105\u20136114.",
536
+ "author": "Tan, M., Le, Q., 2019.",
537
+ "venue": null,
538
+ "url": null
539
+ }
540
+ },
541
+ {
542
+ "49": {
543
+ "title": "Constructing controllers for physical multilegged\nrobots using the enso neuroevolution approach.",
544
+ "author": "Valsalam, V., Hiller, J.,\nMacCurdy, R., Lipson, H.,\nMiikkulainen, R., 2013.",
545
+ "venue": "Evolutionary Intelligence 14,\n303\u2013331.",
546
+ "url": null
547
+ }
548
+ },
549
+ {
550
+ "50": {
551
+ "title": "Context-aware captions from context-agnostic\nsupervision.",
552
+ "author": "Vedantam, R., Bengio, S.,\nMurphy, K., Parikh, D.,\nChechik, G., 2017.",
553
+ "venue": "arXiv preprint arxiv/1701.02870.",
554
+ "url": null
555
+ }
556
+ },
557
+ {
558
+ "51": {
559
+ "title": "Show and tell: A neural image caption generator, in:\nProc. of CVPR, pp. 3156\u20133164.",
560
+ "author": "Vinyals, O., Toshev, A.,\nBengio, S., Erhan, D.,\n2015.",
561
+ "venue": null,
562
+ "url": null
563
+ }
564
+ },
565
+ {
566
+ "52": {
567
+ "title": "Cvt: Introducing convolutions to vision\ntransformers, in: Proceedings of the IEEE/CVF\nInternational Conference on Computer Vision, pp. 22\u201331.",
568
+ "author": "Wu, H., Xiao, B., Codella,\nN., Liu, M., Dai, X.,\nYuan, L., Zhang, L.,\n2021.",
569
+ "venue": null,
570
+ "url": null
571
+ }
572
+ },
573
+ {
574
+ "53": {
575
+ "title": "Show, attend and tell: Neural image caption\ngeneration with visual attention, in: Proc. of ICML,\npp. 77\u201381.",
576
+ "author": "Xu, K., Ba, J., Kiros,\nR., Cho, K., Courville, A.C.,\nSalkhutdinov, R., Zemel, R.S.,\nBengio, Y., 2015.",
577
+ "venue": null,
578
+ "url": null
579
+ }
580
+ },
581
+ {
582
+ "54": {
583
+ "title": "Image captioning with semantic attention, in:\nProc. of CVPR, pp. 4651\u20134659.",
584
+ "author": "You, Q., Jin, H., Wang,\nZ., Fang, C., Luo, J.,\n2016.",
585
+ "venue": null,
586
+ "url": null
587
+ }
588
+ },
589
+ {
590
+ "55": {
591
+ "title": "Asynchronous genetic algorithms on parallel\ncomputers, in: Proceedings of the 5th International\nConference on Genetic Algorithms. Kaufmann.",
592
+ "author": "Zeigler, B.P., Kim, J.,\n1993.",
593
+ "venue": null,
594
+ "url": null
595
+ }
596
+ }
597
+ ],
598
+ "url": "http://arxiv.org/html/2308.04102v3"
599
+ }
20240101/2308.12682v2.json ADDED
The diff for this file is too large to render. See raw diff
 
20240101/2309.12269v4.json ADDED
The diff for this file is too large to render. See raw diff
 
20240101/2309.14181v3.json ADDED
The diff for this file is too large to render. See raw diff
 
20240101/2310.02128v2.json ADDED
The diff for this file is too large to render. See raw diff
 
20240101/2310.15790v2.json ADDED
The diff for this file is too large to render. See raw diff
 
20240101/2311.00912v3.json ADDED
@@ -0,0 +1,96 @@
1
+ {
2
+ "title": "Whitney-type estimates for convex functions",
3
+ "abstract": "We study Whitney-type estimates for approximation of convex functions in the uniform norm on various convex multivariate domains while paying a particular attention to the dependence of the involved constants on the dimension and the geometry of the domain.",
4
+ "sections": [
5
+ {
6
+ "section_id": "1",
7
+ "parent_section_id": null,
8
+ "section_name": "1. Introduction and results",
9
+ "text": ""
10
+ },
11
+ {
12
+ "section_id": "1.1",
13
+ "parent_section_id": "1",
14
+ "section_name": "1.1. Introduction",
15
+ "text": "Whitney [Wh] showed that for any function continuous on there exists an algebraic polynomial of degree such that\nwhere is a positive constant depending only on , and is an arbitrary positive integer. In the inequality (1.1 ###reference_###), the left-hand side is the error of uniform approximation of by , while the maximum in the right-hand side is the -th order modulus of smoothness. Bounds of the approximation error by a measure of smoothness are classical in approximation theory, see, e.g. [DeLo]*Chapters 2, 7.\n\nTypically, a Whitney type estimate like (1.1 ###reference_###) would be applied on an interval of a partition and used for construction of piecewise polynomial approximation, so the degree is fixed while the number of pieces grows. In other words, Whitney type inequalities help bound local approximation error.\n\nEstimating the values of involved (Whitney) constants is an important question which attracted a lot of attention. As an example of such a result, let us mention that Gilewicz, Kryakin and Shevchuk [GiKrSh] obtained the best known bound on the smallest possible in (1.1 ###reference_###) valid for all positive integers , which is .\n\nIn the multivariate settings, a comprehensive study of Whitney constants was conducted by Brudnyi and Kalton in [BrKa]. For approximation on convex domains, it was shown by Dekel and Leviatan [DeLe] that the corresponding Whitney constant does not depend on specific geometry of the domain. Recently, the first author and Dai [DaPr] obtained directional Whitney inequalities valid for certain classes of domains which are not necessarily convex.\n\nOur goal in this work is to establish Whitney-type estimates for the case when the function we need to approximate is convex. This additional restriction may, by itself, lead to better approximation rate (smaller values of Whitney constant). Another and different problem of our interest is to study Whitney-type estimates for approximation of convex function by polynomials which are also required to be convex, i.e., the problem of shape preserving approximation. For a survey of shape preserving polynomial approximation on an interval (in global settings), see [KLPS]. Very little is known for convexity preserving multivariate polynomial approximation, where perhaps the most significant contribution is that of Shvedov [Sh-multivar].\n\nIn our investigations in this work, we made an effort to track and decrease the dependence of involved constants on the dimension of the space and the geometry of the domain, which could be of particular importance for possible application in data science where one needs to work with high dimensional data."
16
+ },
17
+ {
18
+ "section_id": "1.2",
19
+ "parent_section_id": "1",
20
+ "section_name": "1.2. Notations",
21
+ "text": "Let denote the space of algebraic polynomials of total degree in variables, and be the space of continuous real valued functions on equipped with the norm , where is a compact set. For the error of uniform polynomial approximation is defined as\n(note that in some literature, e.g., in [BrKa], is used in place of in the definition of , but in our opinion matching the index to the total degree of the polynomial is more natural in multivariate settings) and the -th modulus of smoothness of on is\nwhere\nOne can find elementary properties of moduli of smoothness in [DeLo]*Sect. 2.7.\n\nWe define the Whitney constant by\nIn what follows, we assume that , where is the class of -dimensional convex bodies, i.e., compact convex (the segment joining any two points of entirely belongs to ) subsets of having nonempty interior. Note that any compact convex set in with empty interior is a convex body in an appropriate affine subspace of of smaller dimension. We let to be all functions from which are convex on , i.e. those satisfying for any . Now we can define Whitney constant for convex functions by\nFor , the error of approximation by convex polynomials is\nFinally, the convexity preserving Whitney constant is\nThe following relation between the three constants defined in (1.2 ###reference_###), (1.3 ###reference_###) and (1.4 ###reference_###) is immediate:\nWe define the corresponding global Whitney constants , and as the suprema of the left-hand-sides of (1.2 ###reference_###), (1.3 ###reference_###) and (1.4 ###reference_###), respectively, over .\n\nIt is straightforward that for any we have and , so we will proceed by discussing the cases when ."
22
+ },
23
+ {
24
+ "section_id": "1.3",
25
+ "parent_section_id": "1",
26
+ "section_name": "1.3. Approximation by linear functions",
27
+ "text": "One of the main results of [BrKa] is the next theorem.\nThe following inequalities hold:\nin particular,\n( denotes the largest integer not exceeding .)\n\nSince any element of is a convex function, , so we will be only concerned with . By minor modifications of the proofs from [BrKa], we will show that the behaviour of the corresponding Whitney constant for convex functions is smaller by essentially the factor of , namely, we prove:\nThe following inequalities hold:\nin particular,\nThe lower bound in Theorem 1.1 ###reference_heorem1### is obtained by considering as the Cartesian product of two simplexes, while in Theorem 1.2 ###reference_heorem2### we simply take as a simplex. Either way, the \u201cbad\u201d domains are not centrally symmetric. However, it is known that asymptotically the behavior of the global Whitney constant is the same up to an absolute constant factor (independent of dimension) even if one restricts the domains to be symmetric. Namely (see [BrKa]*Remark (c), p. 162)\nwhere . In particular, we have .\n\nWe show that the situation for Whitney constants for convex functions is completely different when the domain is symmetric. Moreover, we find the exact value of the corresponding constant for arbitrary centrally symmetric convex domain.\nFor arbitrary , , we have .\nThe main idea of the upper bound is to utilize the existence of a supporting hyperplane at the center of symmetry to the graph of the function which needs to be approximated. The lower bound is rather standard and is essentially one-dimensional."
28
+ },
29
+ {
30
+ "section_id": "1.4",
31
+ "parent_section_id": "1",
32
+ "section_name": "1.4. Approximation of convex functions by polynomials of degree",
33
+ "text": "In contrast to the previous subsection, once the degree of the approximating polynomial is at least 2, then one does not get any improvement in the values of Whitney constants even if the function to be approximated is convex. Our main result for this situation is the following.\nFor any and we have .\nThe key idea for the proof is that for any sufficiently smooth function one can add an appropriate quadratic polynomial to that function to make it convex.\n\nUsing Theorem 1.4 ###reference_heorem4### and the corresponding results on the usual Whitney constant , one can obtain direct corollaries regarding the behaviour of , when . For the global Whitney constants, the following is conjectured: if , then for any\nwhere the constants in the equivalences may depend on only. As we have seen from the previous subsection, this is confirmed for . For only the upper estimate on is known, as well as the following lower bounds: . An interested reader is referred to [BrKa] for details and other known results for specific domains, such as unit balls in metric with ."
34
+ },
35
+ {
36
+ "section_id": "1.5",
37
+ "parent_section_id": "1",
38
+ "section_name": "1.5. Convexity preserving approximation by polynomials of degree",
39
+ "text": "We begin with a negative result for .\nFor any , , we have .\nThe proof readily follows from the one-dimensional version due to Shvedov [Sh-orders]*Th. 3.\n\nThus, we are left with the study of .\n\nIn the one-dimensional case, one can observe that if for a segment , then the best approximating quadratic is automatically convex, i.e. . Indeed, if is the best approximant to , i.e., , then is constant and either in which case so there is nothing to prove, or and then is strictly convex on and by the properties of convex functions one can find a linear function which is between and on , so would satisfy , contradiction. So, for any segment ,\nwhere the first inequality uses (1.5 ###reference_###) and the last inequality was obtained by Kryakin in [Kr].\n\nFor the multivariate case, we will show how any quadratic approximating polynomial to a convex function may be modified to become convex quadratic polynomial so that the error of approximation increases by at most an extra constant factor that depends on the geometry of the domain, namely, on how far the domain is from being an ellipsoid.\nFor any , and then there exists such that\nand is the Banach-Mazur distance between and the unit ball of defined as\nIn particular, for any , , we have\nRecall that by (1.5 ###reference_###), so any upper bound on either of or yields an upper bound on with the extra factor of , which, even for general convex domain can be shown to grow at most as . Namely, using the known results on the Banach-Mazur distance (which are corollaries of John\u2019s characterization of inscribed ellipsoid of largest volume) and estimates on Whitney constants from [BrKa], we obtain the following corollary.\nThere exists such that for all we have\nWe note that for all dimensions , generally speaking, there is no uniqueness of best uniform approximating polynomial of degree , see, e.g. [Sha]*Ch. 2. We show that unlike in the one-dimensional case, in several variables the set of best approximating quadratics to a convex function may contain a non-convex quadratic.\nFor , define\nThen and . Denote to be the set of best quadratic approximations to on . Then: \n(i)\n\nand ;\n(ii) and .\nSince in our example there is also a convex best approximating quadratic, one can still hope that for any convex function it is always possible to choose a best approximating quadratic which is convex itself.\nIs it true that for any , , and for any , it is possible to choose such that ?\nThe affirmative answer would imply that Theorem 1.6 ###reference_heorem6### is valid with . A more accessible question could be to find out if the statement of Theorem 1.6 ###reference_heorem6### is valid with for a constant independent of and ."
40
+ },
41
+ {
42
+ "section_id": "2",
43
+ "parent_section_id": null,
44
+ "section_name": "2. Proofs",
45
+ "text": ""
46
+ },
47
+ {
48
+ "section_id": "2.1",
49
+ "parent_section_id": "2",
50
+ "section_name": "2.1. Proof of Theorem 1.2",
51
+ "text": "It is possible to compute for using the following result, which, in a certain sense, is a generalization of the Chebyshev alternation theorem to the multivariate case for approximation by linear polynomials.\nFor any , , we have\nwhere the maximum is taken over all positive integers , with , all subsets , of and nonnegative coefficients , satisfying\nWhen is convex, the computation is easier, and it suffices to take and .\nFor any , , we have\nwhere the maximum is taken over all subsets of and nonnegative coefficients satisfying .\nClearly, the right-hand side of (2.1 ###reference_###) is greater than or equal to the right-hand side of (2.3 ###reference_###). Taking arbitrary , , , satisfying (2.2 ###reference_###) and using Jensen\u2019s inequality, we have\nso the inequality in the other direction follows (one can take with arbitrary for ).\n\u220e\nFollowing [BrKa]*Sect. 3, we denote for ,\nwhere the maximum is taken over all subsets of and nonnegative coefficients satisfying , and\nIt is straightforward to observe that , where the supremum is taken over all -dimensional simplexes , i.e., which are convex hulls of points in with interior of being nonempty. Denote by some fixed -dimensional simplex. Now any -dimensional simplex can be mapped into by a nonsingular affine transform, implying . In summary, , therefore, by Corollary 2.2 ###reference_heorem2###,\n(One can compare with [BrKa]*Eqn. (3.2) for the corresponding usual Whitney constant.) By [BrKa]*Lemma 3.4, for all positive integers . As , this implies . Taking we obtain from (2.4 ###reference_###) the upper bound in Theorem 1.2 ###reference_heorem2###.\n\nThe proof of the lower bound in Theorem 1.2 ###reference_heorem2### is essentially given on [BrKa]*p. 177 as the function defined by [BrKa]*Eqn. (3.4) is shown there to be convex and satisfy and . Here we will restrict ourselves only to repeating the definition of . We can consider as a subset of consisting of such that each coordinate is nonnegative and . Then is defined by\nNow we are done with the proof of Theorem 1.2 ###reference_heorem2###."
52
+ },
53
+ {
54
+ "section_id": "2.2",
55
+ "parent_section_id": "2",
56
+ "section_name": "2.2. Proof of Theorem 1.3",
57
+ "text": "Suppose , .\n\nWe begin with showing that . We need certain preliminaries from the theory of multivariate convex functions. Suppose and is convex. is said to have support at if there exists such that while for any . If is open, then has support at any point in , see, e.g. [RoVa]*Th. B, p. 108. Consequently, any convex and continuous function on has support at any interior point of . Now let be arbitrary. By the above, has support at , so we can find satisfying and for any . Set , . Then is convex nonnegative on function with , and . Therefore, it suffices to show that . Indeed, define . Then\nLet be a point such that . Then and\nNow (2.5 ###reference_###) and (2.6 ###reference_###) imply , and so .\n\nFor any we will show that . Since does not change if we apply a rotation and/or a dilation to , we can assume, without loss of generality, that the projection of onto the first coordinate axis is precisely the segment . For which will be selected later, let\nThen obviously . Since depends only on the first variable, due to our choice of the position of , it is not hard to see that and . The Chebyshev alternation theorem (e.g. [DeLo]*Th. 5.1, p. 74) implies that is the best uniform approximation to on and . On the other hand, due to convexity of (or by direct verification), we have . Thus,\nand taking completes the proof."
58
+ },
59
+ {
60
+ "section_id": "2.3",
61
+ "parent_section_id": "2",
62
+ "section_name": "2.3. Proof of Theorem 1.4",
63
+ "text": "Let be arbitrary fixed. Then one can find with\nBy the Weierstrass approximation theorem, there exists an algebraic polynomial such that . Then (2.7 ###reference_###) implies\nand\nNow we consider , where is the Euclidean norm of . We claim that with sufficiently large the resulting function will be convex on . Indeed, it suffices to ensure that is convex along any segment that belongs to , which, in turn, holds true provided all the second directional derivatives of are non-negative, i.e.,\nwhere is the unit sphere in . Thus, as is everywhere and is compact, we can define\nand (2.10 ###reference_###) holds, establishing that is convex on . Therefore, as is a quadratic polynomial and , from (2.8 ###reference_###) and (2.9 ###reference_###), we conclude\nThis implies and the inequality in the other direction is given in (1.5 ###reference_###)."
64
+ },
65
+ {
66
+ "section_id": "2.4",
67
+ "parent_section_id": "2",
68
+ "section_name": "2.4. Proof of Theorem 1.5",
69
+ "text": "Since is invariant under dilations and translations of , we can assume that the projection of onto the first coordinate axis is exactly . By [Sh-orders]*Th. 3, for any there exists such that while . Defining , we obtain\nimplying the required ."
70
+ },
71
+ {
72
+ "section_id": "2.5",
73
+ "parent_section_id": "2",
74
+ "section_name": "2.5. Proof of Theorem 1.6",
75
+ "text": "Using an affine change of variables if needed, by the definition of we can assume that\nWe can write for some symmetric matrix and . By standard linear algebra, there is an orthogonal matrix such that for a diagonal matrix . Thus, under the orthogonal change of variables , we have . By we denote the matrix obtained from by replacing all negative (diagonal) entries with zeroes. We define the required polynomial as follows:\nIt is easy to see that is convex: if are the diagonal entries of , then\nwhich is the sum of convex functions with nonnegative coefficients.\n\nFirst step in establishing the required bound on is to show\nFor any we have , therefore,\nNext we will obtain an estimate in the other direction. Observe that for any we have since is orthogonal, so (2.11 ###reference_###) implies . Now, using convexity of , an elementary upper bound on the second difference, and the representation , we have\nwhich implies\nFor any let us define\nThen , so from (2.15 ###reference_###) for we obtain\nhence, in combination with (2.14 ###reference_###), we get the claimed (2.13 ###reference_###).\n\nSecond, observing that is homogeneous of degree function in , we have for any and any that\nwhere in the last step we used (2.13 ###reference_###).\nSo, by (2.11 ###reference_###),\nUsing the last inequality, we complete the proof as follows:"
76
+ },
77
+ {
78
+ "section_id": "2.6",
79
+ "parent_section_id": "2",
80
+ "section_name": "2.6. Proof of Corollary 1.7",
81
+ "text": "The first step in each of (1.8 ###reference_###)\u2013(1.10 ###reference_###) is an application of Theorem 1.6 ###reference_heorem6###. We have , so (1.8 ###reference_###) follows from [BrKa]*Th. 4.3(a) for . John\u2019s characterization of inscribed ellipsoid of largest volume implies (see, e.g. [Schn]*Th. 10.12.2, p. 588) that for any , while if, in addition, is centrally symmetric. This immediately yields (1.10 ###reference_###), while for (1.9 ###reference_###) we invoke [BrKa]*Th. 4.1."
82
+ },
83
+ {
84
+ "section_id": "2.7",
85
+ "parent_section_id": "2",
86
+ "section_name": "2.7. Proof of Proposition 1.8",
87
+ "text": "is convex as the maximum of three linear functions. Next, we need a sufficient condition for a polynomial from to be in the set of polynomials of best approximation to . It is known (see, e.g., [Sha]*Th. 2.3.2, p. 14) that if one can find a finite set and positive constants satisfying for any , and\n(i) is clearly not convex. It is a straightforward multivariate calculus exercise to show that and with\nDirect verification also shows that any satisfies\nthus (2.16 ###reference_###) is satisfied for with establishing that .\n\n(ii) is obviously convex. One can observe that the values of and on coincide. The rest of the proof follows the same lines as that for the part (i).\n\nAcknowledgment. The authors thank Andr\u00e1s Kro\u00f3 for making them aware of [Sha]*Th. 2.3.2 which allowed to simplify the proof of Proposition 1.8 ###reference_heorem8###, as well as the referee for the useful comments."
88
+ }
89
+ ],
90
+ "appendix": [],
91
+ "tables": {},
92
+ "image_paths": {},
93
+ "validation": true,
94
+ "references": [],
95
+ "url": "http://arxiv.org/html/2311.00912v3"
96
+ }
20240101/2311.04014v3.json ADDED
@@ -0,0 +1,133 @@
1
+ {
2
+ "title": "A Method to Improve the Performance of Reinforcement Learning Based on the \ud835\udcb4 Operator for a Class of Stochastic Differential Equation-Based Child-Mother Systems",
3
+ "abstract": "This paper introduces a novel operator, termed the operator, to elevate control performance in Actor-Critic (AC) based reinforcement learning for systems governed by stochastic differential equations (SDEs). The operator ingeniously integrates the stochasticity of a class of child-mother system into the Critic network\u2019s loss function, yielding substantial advancements in the control performance of RL algorithms. Additionally, the operator elegantly reformulates the challenge of solving partial differential equations for the state-value function into a parallel problem for the drift and diffusion functions within the system\u2019s SDEs. A rigorous mathematical proof confirms the operator\u2019s validity. This transformation enables the Operator-based Reinforcement Learning (YORL) framework to efficiently tackle optimal control problems in both model-based and data-driven systems. The superiority of YORL is demonstrated through linear and nonlinear numerical examples, showing its enhanced performance over existing methods post convergence.",
4
+ "sections": [
5
+ {
6
+ "section_id": "1",
7
+ "parent_section_id": null,
8
+ "section_name": "INTRODUCTION",
9
+ "text": "The advent of OpenAI\u2019s Chat Generative Pre-trained Transformer (ChatGPT) marked a seminal moment in the commercialization of Large Language Models (LLMs), signaling the advent of AI\u2019s semantic imaging prowess. This breakthrough catalyzed a paradigm shift in AI research, steering focus towards the domain of decision-making. At the forefront of this domain is reinforcement learning, which, in the wake of escalating computational capabilities, has undergone a metamorphosis into deep reinforcement learning (DRL). The seminal paper mnih2015human ###reference_1### ignited an explosion of interest in DRL, spawning a proliferation of sophisticated algorithms such as Deep Deterministic Policy Gradient (DDPG)lillicrap2015continuous ###reference_2###, Trust Region Policy Optimization (TRPO)schulman2015trust ###reference_3###, Asynchronous Advantage Actor Critic (A3C)mnih2016asynchronous ###reference_4### and Proximal Policy Optimization (PPO)schulman2017proximal ###reference_5###. These innovations exemplify the renaissance in reinforcement learning research.\n\nDRL distinguishes itself by utilizing deep neural networks to distill features from systemic statesbohmer2015autonomous ###reference_6### , thus enabling a seamless bridge between action and state spacestan2019energy ###reference_7### and facilitating end-to-end system optimization. This method addresses the limitations inherent in traditional control mechanismssong2023reaching ###reference_8### and has seen extensive exploration and implementation in roboticssalvato2021crossing ###reference_9### and autonomous vehicular controlli2022decision ###reference_10###.\n\nDespite the advancements in reinforcement learning (RL), its application remains constrained by the exigencies of environmental modeling during the training phase. Both model-based and model-free algorithms presuppose a level of environmental determinism that belies the inherent randomness of real-world settings. This discordance reveals that proficiency within simulated environments does not necessarily translate to real-world efficacy, as highlighted in canese2021multi ###reference_11###. Addressing this disjunction, researchers are exploring avenues to obviate the need for precise environmental modeling. One school of thought, as elucidated in liu2021policy ###reference_12###, advocates for direct agent-environment interactions, either within the actual milieu or subsequent to simulated training. However, this presents considerable safety risks accentuated by the opaque nature of the neural networks governing end-to-end deep RL, which is in stark contrast to the mathematical transparency of traditional control methods. This concern has propelled research into safety reinforcement learning, with significant contributions by thananjeyan2021recovery ###reference_13###, stooke2020responsive ###reference_14### and cheng2019end ###reference_15###marvi2021safe ###reference_16###.\n\nAn alternative approach posits the integration of environmental uncertainty into simulations, with the intention of fostering more adaptable agents. Techniques such as Gaussian processesengel2005reinforcement ###reference_17### and Stochastic Differential Equations (SDEs)yang2023parameters ###reference_18### are employed to model this uncertainty. Recent innovations leverage deep neural networks to ascertain the drift and diffusion terms of SDEs from extensive environmental dataxu2022infinitely ###reference_19###yang2023neural ###reference_20###. 
This approach, called neural stochastic differential equations (NSDEs), has gained traction in diverse fields, from finance chen2021deep ###reference_21### to unmanned aircraft systems djeumou_how_2023 ###reference_22### and autonomous driving qi_stochastic_2022 ###reference_23###.\n\n\nFurthermore, the confluence of SDE-based models and RL methodologies is yielding novel solutions to optimal control challenges wang2020reinforcement ###reference_24###, wang2020continuous ###reference_25###, wang1812exploration ###reference_26###. zhang2022online ###reference_27### advanced a stochastic model for n-player non-zero-sum differential games, demonstrating convergence to Nash equilibria via Q-learning. pirmorad2021deep ###reference_28### employed RL to modulate a 1-dimensional stochastic Burgers\u2019 equation, successfully dampening shock waves and mitigating sharp gradients. Moreover, chen_incremental_2019 ###reference_29### introduced an AC-based RL framework for SDE-modeled systems, proving the convergence of the Critic network and proposing a simplified surrogate function to streamline computations.\n\nSDEs have been adeptly employed to capture the inherent stochasticity of controlled systems, presenting a distinct advantage over traditional models predicated on Ordinary Differential Equations (ODEs). This approach, showcased in zhang2022online ###reference_27###, pirmorad2021deep ###reference_28###, chen_incremental_2019 ###reference_29###, offers a more nuanced representation of real-world variability. Despite these advancements, two pertinent issues emerge:\n\nProblem I: The integration of stochasticity in SDE-based models does not extend to the design of the state-value function estimation in AC-based reinforcement learning architectures.\n\nTraditional deep RL algorithms design the Critic network\u2019s loss function to minimize the Mean Square Error (MSE) between the estimated value function at the current and subsequent time steps, as represented by the equations:\nIn studies such as those by zhang2022online ###reference_27### and pirmorad2021deep ###reference_28###, stochasticity is confined to the controlled system\u2019s model, with the design of the RL algorithms (both Actor and Critic networks) failing to accommodate this variability. Consequently, the optimization of the policy relies heavily on the generalization ability of RL methods. Moreover, chen_incremental_2019 ###reference_29### simplifies the Critic network\u2019s loss function, derived from the It\u00f4 diffusion process\u2019s characteristic operator, to a surrogate function similar to the traditional form shown in (1 ###reference_###) for computational efficiency, thereby overlooking the system\u2019s stochastic aspects. wang2020continuous ###reference_25### also primarily concentrates on the Actor network\u2019s design while employing SDEs to model the system.\nThe disregard for the controlled system\u2019s stochasticity in the design of the Critic network\u2019s loss function can significantly impact its convergence, warranting a critical re-evaluation in order to align the RL algorithms with the complexity of SDE-modeled environments.\nProblem II: The utilization of SDEs in the modeling of control systems, as evidenced in works like chen_incremental_2019 ###reference_29###, imposes stringent continuity conditions on the state-value function: specifically, the necessity for at least second-order continuity and -order H\u00f6lder continuity. 
This prerequisite stems from the optimization methodologies, such as the Hamilton-Jacobi-Bellman (HJB) equation or reinforcement learning algorithms, which require the state-value function\u2019s partial derivatives for optimal policy determination. This requirement restricts the effectiveness of these methods in two distinct cases:\nCase 1:\nThe first limitation arises when the Critic network\u2019s activation function, such as the ReLU function, does not fulfill the continuity criteria. This constraint hampers the method\u2019s efficacy and limits the freedom in activation function selection, a flexibility researchers seek to retain in algorithm design.\nCase 2:\nIn multi-objective optimization scenarios, the reward function may not be well-defined, and researchers often wish to utilize existing datasets from previous reinforcement learning agents trained on similar problems. These datasets might not satisfy the continuity prerequisites and may not even be sequentially ordered, a situation particularly relevant to inverse reinforcement learning and offline reinforcement learning.\nAddressing these challenges, this paper introduces an alternative operator, referred to as the \ud835\udcb4 operator, that is functionally equivalent to the characteristic operator of the It\u00f4 diffusion process. A rigorous proof leveraging the Kolmogorov forward equation and Gaussian distribution properties is provided in Section 3 ###reference_###. Utilizing the \ud835\udcb4 operator within the AC framework, we design a reinforcement learning controller for the child-mother system detailed in Section 4 ###reference_###.\n\nThe \ud835\udcb4 operator not only resolves the issue of disregarding the controlled system\u2019s stochasticity in value function estimation, as stated in Problem I, but also circumvents the need for computing the value function\u2019s partial derivatives when deriving the optimal policy, as highlighted in Problem II. This is achieved because the \ud835\udcb4 operator transforms the problem of calculating the value function\u2019s partial derivatives into one of computing the partial derivatives of the drift and diffusion terms in the SDEs describing the system. It provides insightful solutions for the special tasks mentioned above, including those requiring diverse activation function choices in neural network design, inverse reinforcement learning tasks, and problems agnostic to reward functions.\n\n\n\nThe child-mother system, widely applicable in autonomous driving, is formulated as follows:\nwhere the child-system state is affected only by the child-system state and the control input, and the mother-system state is affected only by the child-system state and the mother-system state. This type of system has broad prospects in the field of autonomous driving, where the child-system can be considered as the ego vehicle that can be controlled and the mother-system as all surrounding vehicles excluding the ego vehicle. Most of the existing studies in the field of autonomous driving divide the child-mother system into two systems gu2022integrated ###reference_30###: the child-system (i.e., the ego vehicle in autonomous driving) is modeled by a two-wheeled bicycle model and a nonlinear tire model snider2009automatic ###reference_31###, and the mother-system (i.e., the surrounding vehicles in autonomous driving) is modeled by the Intelligent Driver Model (IDM) treiber2000congested ###reference_32###, the Minimizing Overall Braking Induced by Lane change (MOBIL) model treiber2016mobil ###reference_33###, etc. 
However, this type of approach does not reasonably account for the stochasticity of the real model. In Section 4 of this article, this child-mother system is modeled using data-driven NSDEs, which fully accounts for the uncertainty in the real environment, and we also show how to calibrate the deep neural network parameters of this child-mother system using a large amount of data collected in a realistic environment.\n\nThis paper proposes a novel reinforcement learning framework called Operator-based Reinforcement Learning (YORL). It is contrasted with the Traditional Stochastic Reinforcement Learning (TSRL) methodology, in which the Critic network\u2019s loss function is designed conventionally. YORL employs the operator to formulate a novel Critic network loss function for the child-mother system, as explicated in Section 5. Section 6 compares the performance of the YORL and TSRL methods on this class of child-mother systems.\n\nIn summary, this paper consists of the following sections. Section 2 contains some theoretical background on SDEs, which will be used in the subsequent deductions and proofs. Section 3 contains the theoretical part, which proposes the operator that is equivalent to the characteristic operator of the It\u00f4 diffusion process, gives a rigorous proof of the equivalence of these two operators using the Kolmogorov forward equation and the properties of the Gaussian distribution, and finally gives two propositions about the operator; the proofs of these propositions are given in the Appendix. The system studied in this paper, a special class of child-mother systems, is described in Section 4, which details the modeling of this child-mother system using SDEs and gives a simple method to calibrate this stochastic model from data. In Section 5, the operator proposed in Section 3 is applied to the design of the loss function of the Critic network for reinforcement learning, and a novel reinforcement learning framework based on the operator (i.e., YORL) is built. In Section 6, the reinforcement learning framework designed in Section 5 is simulated on linear and nonlinear child-mother systems, respectively, to solve the optimal control problem, and is compared with the TSRL method. Section 7 summarizes the full paper and provides some outlooks for the future."
10
+ },
11
+ {
12
+ "section_id": "2",
13
+ "parent_section_id": null,
14
+ "section_name": "PRELIMINARIES",
15
+ "text": "For the purpose of subsequent deduction and proof, it is necessary to first give some definitions and lemmas from stochastic analysis.\n(evans2012introduction, Section 4.2: The future information)\n\n\nLet be a -dimensional Brownian motion defined on some probability space .\nThe -algebra\nis the future information of the Brownian motion beyond time .\n(evans2012introduction, Section 4.2: Filtration)\n\n\nIn a probability space , a family of -algebras with is called a filtration if\n(evans2012introduction, Section 4.2: Progressive process)\n\n\nIn a probability space equipped with a filtration , the mapping is a progressive process iff is measurable in the space , .\n(evans2012introduction, Section 4.2: Two special spaces)\n\n\n(i) is the space of all real-valued, progressive processes such that\n(ii) Likewise, is the space of all real-valued, progressive processes such that\nDefinitions 1-4 will be used in Lemma 1 and are fundamental to the subsequent description of SDEs.\n(evans2012introduction, Section 5.2: Existence and uniqueness of the solution of a stochastic differential equation)\n\n\nSuppose that and are continuous and satisfy the following conditions:\nLet be any -valued random variable such that\nand\nwhere is a given -dimensional Brownian motion.\n\nThen there exists a unique solution of the SDE:\nLemma 1 will be used in Section 4 of the article to calibrate the parameters of the NSDEs, via a gradient penalty term in the loss function."
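Lemma 1's Lipschitz and growth conditions guarantee a unique strong solution; in practice such an SDE is simulated numerically. A minimal sketch using the Euler-Maruyama scheme (the drift `f` and diffusion `g` below are illustrative placeholders, not coefficients taken from the paper):

```python
import numpy as np

def euler_maruyama(f, g, x0, T=1.0, n_steps=1000, seed=0):
    """Simulate dX_t = f(X_t) dt + g(X_t) dW_t with the Euler-Maruyama scheme."""
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    x = np.empty(n_steps + 1)
    x[0] = x0
    for k in range(n_steps):
        dW = rng.normal(0.0, np.sqrt(dt))  # Brownian increment ~ N(0, dt)
        x[k + 1] = x[k] + f(x[k]) * dt + g(x[k]) * dW
    return x

# Ornstein-Uhlenbeck example: both coefficients are Lipschitz, so Lemma 1 applies.
path = euler_maruyama(f=lambda x: -2.0 * x, g=lambda x: 0.5, x0=1.0)
```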
16
+ },
17
+ {
18
+ "section_id": "3",
19
+ "parent_section_id": null,
20
+ "section_name": "THEORY",
21
+ "text": "Suppose an SDE:\nwith , . According to Lemma 1, the solution of the SDE in (11) exists and is unique.\n(evans2012introduction, Section 4.4: It\u00f4\u2019s chain rule)\n\n\nConsider a mapping that is continuous, with continuous partial derivatives and , for .\nThe It\u00f4 chain rule is\nwhere , are the element of row of the matrix and the element in column of row of the matrix, respectively.\n\nMeanwhile, the expectation of (12) can be written as\n(oksendal2003stochastic, Section 7.5: Characteristic operator of the It\u00f4 diffusion process)\n\n\nConsidering Lemma 2, we introduce a characteristic operator for the It\u00f4 diffusion process.\nFor a stochastic process in (11) and a mapping in (12), the characteristic operator can be defined as\nEquivalently, the characteristic operator can be formulated as\nLikewise, let be the dual of\n(oksendal2003stochastic, Section 7.3: The generator of the It\u00f4 diffusion)\n\n\nAssume a random variable whose probability density function is denoted as . Denote\nas the expectation of the random variable .\nIt is straightforward to see that (13) can be reformulated as\nwhere is the probability density function of the random variable in (11).\nThis corollary is the cornerstone of the subsequent definition of the operator and of the design of the loss function for the Critic network in Section 5. It transforms the problem of solving for the derivative of the function with respect to time into the problem of solving for the partial derivatives of the function with respect to its independent variables. This is a very meaningful transformation.\n(ludvigsson2013kolmogorov, Section 3.2: Kolmogorov forward equation)\n\n\nAccording to the Kolmogorov forward equation (ludvigsson2013kolmogorov), the characteristic operator and its dual satisfy the following relationship\nIn some special scenarios, the function itself is unknown and the only information available is its input and output data ( is known to be a continuous function or can be constructed as a continuous function). In this type of scenario the partial derivatives of the function are therefore not available; is there any other way to solve the problem?\nA new operator proposed in this paper is now introduced, and the relationship between this operator and the characteristic operator of the It\u00f4 diffusion process is given.\n( operator)\n\n\nThe operator with respect to the stochastic process can be formulated as\nwhere , and are the first-order partial differential matrix of the function , the first-order partial differential matrix of the function and the second-order partial differential matrix of the function , respectively.\n and are the expectation and covariance matrices of the stochastic process at time , respectively,\nwhere is the state of the stochastic process at an extremely short moment before time , i.e., the history information of at the last extremely short moment.\n\nMoreover, is the all-ones vector, whose dimension is equal to the dimension of , which is . 
The operator is the Hadamard product.\n(Operator Equivalence Theorem)\n\n\nWhen the operator is well-defined as described in Definition 6, the operator with respect to the stochastic process , which is , is equivalent to the characteristic operator of the It\u00f4 diffusion process.\nAccording to Corollary 2, can be formulated as\nDefining the following notations,\n(24) can be reformulated into the matrix form\nFrom the description of the It\u00f4 diffusion process defined in (11), the random variable , which is the stochastic process at moment , obeys a Gaussian distribution for as follows\nTherefore, the probability density function of the random variable is\nIt is straightforward to verify that the following formulation holds\nBy the property that is a symmetric matrix and by (28), and are clearly\nThen, (29) can be rewritten as\nwhere is the identity matrix.\n\nTherefore, can be rewritten as\nThe validity of (32) proves Theorem 1.\n\u220e\nAssume that , obey the It\u00f4 diffusion process as in (11)\nIf , are independent of each other and there exists a function of , .\nThen the expectation of the derivative with respect to time of a multivariate continuous function satisfying the conditions in Lemma 2 has the form\nThe operator is a linear operator; therefore, the following equation clearly holds without further proof\nThe operator transforms the problem of solving the partial derivatives of the function with respect to its independent variables () into solving the partial derivatives of the drift term function and the diffusion term function of the SDE with respect to their independent variables ().\nMoreover, according to the Kolmogorov forward equation, the following equation holds\nProofs of Propositions 1 and 3 are given in the Appendix of the article.\n\nBy Corollary 2 and Proposition 3, it is clear that the problem of solving the derivative of the function with respect to time is transformed, via the characteristic operator and the operator, into a problem of solving the partial differential equations for the drift and diffusion term functions of the SDEs obeyed by the independent variables of the function with respect to those variables, that is, solving for the three partial differential matrices , and in the operator.\n\nFurthermore, the transformation from the characteristic operator to the operator is essentially a transformation from a partial differential operator acting on the function to a linear operator acting on the function . The operator, as a linear operator of the function , has much better properties, both in mathematics and in engineering."
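The operator replaces partial derivatives of the value function with partial derivatives of the drift and diffusion networks, plus the one-step Gaussian transition moments. A hedged sketch of how those ingredients could be assembled with automatic differentiation (PyTorch; diagonal diffusion assumed, and all names are hypothetical rather than the authors' code):

```python
import torch
from torch.autograd.functional import jacobian, hessian

def operator_ingredients(drift, diffusion, x_prev, dt):
    # Partial-derivative matrices of the drift/diffusion terms (cf. Definition 6).
    Df = jacobian(drift, x_prev)       # first-order partials of the drift term
    Dg = jacobian(diffusion, x_prev)   # first-order partials of the diffusion term
    # Second-order partials of each diffusion component (hessian wants scalar outputs).
    D2g = [hessian(lambda x, i=i: diffusion(x)[i], x_prev)
           for i in range(x_prev.numel())]
    # One-step Gaussian transition moments, matching the mean/covariance in the text.
    mean = x_prev + drift(x_prev) * dt              # E[X_t | X_{t - dt}]
    cov = torch.diag(diffusion(x_prev) ** 2) * dt   # Cov[X_t | X_{t - dt}]
    return Df, Dg, D2g, mean, cov
```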
22
+ },
23
+ {
24
+ "section_id": "4",
25
+ "parent_section_id": null,
26
+ "section_name": "SYSTEM MODELING FOR A CLASS OF CHILD-MOTHER SYSTEM",
27
+ "text": ""
28
+ },
29
+ {
30
+ "section_id": "4.1",
31
+ "parent_section_id": "4",
32
+ "section_name": "A Class Of Child-Mother System Modeling",
33
+ "text": "Consider a system full of uncertainties, comprising a deterministic part that researchers have established through observations and physical laws, and an uncertain part that currently cannot be described by mathematical models. Fortunately, even if the researcher cannot fully characterize the uncertain part of the system, it is observable by sensors, and thus a large amount of data can be obtained to study and analyze that uncertainty. This kind of system can be described by SDEs as\nwhere is the observation of the whole system and is the control input of the system. is the set of all observations that can be taken into account and is the control space of control inputs, i.e., the set of allowed controls. is a -dimensional Wiener process. , are the drift term and diffusion term of the SDEs, respectively.\n\nIn this paper, we focus on a class of systems that can be decomposed into a child-mother system of the following form,\nwhere is the state of the subsystem and is the state of the mother system, which means that is affected by . represents the state space of the subsystem state, i.e., the set of all state values it can take, and represents the state space of the mother-system state, i.e., the set of all values the mother-system state can take.\n and are -dimensional and -dimensional Wiener processes, respectively. and are the drift term and diffusion term of the child-system SDE, respectively. Likewise, and are the drift term and diffusion term of the mother-system SDE, respectively.\n\nThe drift term and diffusion term functions and of the SDEs (39) can be calibrated from the large amount of data observed and collected by the sensors. The parameters , are the parameters that the neural networks need to learn.\n\n\n\n\n\nAccording to Lemma 1, if the SDEs (39) are to have a unique solution, the conditions of Lemma 1 must be obeyed. It is straightforward to see that conditions and hold naturally for the child-mother system in (39). Therefore, the drift term and diffusion term of the SDEs in (39) need to satisfy:\nThe output value of a neural network cannot be infinite, so the condition is also naturally satisfied. Condition means that must satisfy the Lipschitz continuity condition. Therefore, a gradient penalty is added to the objective functions as a regularizer:\nwhere are hyper-parameters. is the positive part of , which means ."
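The gradient penalty above can be implemented directly with automatic differentiation, in the style of WGAN-GP. A minimal sketch, assuming a drift/diffusion network `net` and treating the hyper-parameter values as illustrative (for vector-valued outputs this penalizes the gradient of the summed output, a common cheap surrogate for the full Jacobian norm):

```python
import torch

def lipschitz_penalty(net, x, lip_const=1.0, weight=10.0):
    # Penalize input gradients of `net` whose norm exceeds `lip_const`.
    x = x.clone().requires_grad_(True)
    out = net(x)
    grad, = torch.autograd.grad(out.sum(), x, create_graph=True)
    grad_norm = grad.norm(2, dim=-1)
    # Positive part: only the excess over the Lipschitz constant is punished.
    return weight * torch.clamp(grad_norm - lip_const, min=0.0).pow(2).mean()
```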
34
+ },
35
+ {
36
+ "section_id": "4.2",
37
+ "parent_section_id": "4",
38
+ "section_name": "Stochastic Differential Equation Calibration of Child-Mother System",
39
+ "text": "For a typical child-mother system, the deterministic part of the system (in SDE terms, the drift term function) is, for the most part, uniquely and deterministically described in mathematical form by physical laws. However, the stochastic part of the system is very difficult to describe well, at least under existing theories, and the uncertainty that stochastic factors (e.g., complex human behaviors, complex high-dimensional stochastic variables in the environment) bring to the system cannot be described by deterministic dynamic equations (e.g., ordinary differential equations, partial differential equations).\n\nTherefore, modeling the stochastic part of the system requires a large number of sensors observing the real system, from which one attempts to extract the distribution obeyed by the stochastic part of the model.\n\nIn order to make the system discussed in this paper more general, it is assumed that the parameters in the SDEs in (39) are unknown, i.e., the child-mother system is unknown both to the agent in reinforcement learning and to the researchers, and all that is available is the data that the sensors have collected from reality.\n\nAccording to (39), the states of the child-mother system obey\nwhere , are the states of the subsystem and the mother system, respectively, at the next moment after time . It is obvious that systems satisfying SDEs have the Markov property in continuous time.\n\nAccording to Bayesian estimation,\nwhere , are the states of the subsystem and the mother system, respectively, collected by the sensor in the real environment at the next moment after . The intuitive idea is to maximize and in order to make the SDEs established for and closer to the probability distribution obeyed by the real trajectories in the dataset. This method is also called maximum a posteriori estimation (MAP) in the field of machine learning. The main idea is to find the optimal parameters and such that and are maximized.\nWithout loss of generality, for and , first consider the prior to be a uniform distribution, i.e., , . Thus, the likelihood function is set as follows:\nwhere and are the subsystem state values and the control input values for the whole system in the dataset at time , respectively,\nand where is the mother-system state value in the dataset at time .\n\nTherefore, the loss function of the neural network of and and the loss function of the neural network of and are defined as\nand\nrespectively. When the optimization of the parameters and is completed, a stochastic model of the child-mother system with uncertainty is obtained,"
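Under a one-step Euler-Maruyama discretization, the transition is Gaussian with mean x + f(x, u)dt and covariance given by the squared diffusion times dt, so MAP estimation with flat priors reduces to minimizing a Gaussian negative log-likelihood over observed transitions. A schematic version of the child-system loss (diagonal diffusion assumed; the mother-system loss is analogous with the mother state as input; names are illustrative):

```python
import torch

def child_system_nll(f_net, g_net, x_t, u_t, x_next, dt):
    # One-step model: x_{t+dt} ~ N(x_t + f(x_t, u_t) dt, diag(g(x_t, u_t)^2) dt).
    mean = x_t + f_net(x_t, u_t) * dt
    var = (g_net(x_t, u_t) ** 2) * dt + 1e-8   # small floor for numerical stability
    nll = 0.5 * ((x_next - mean) ** 2 / var + torch.log(2 * torch.pi * var))
    return nll.sum(dim=-1).mean()              # average over the dataset batch
```

In training one would add the Lipschitz gradient penalty from Section 4.1 to this likelihood term.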
40
+ },
41
+ {
42
+ "section_id": "5",
43
+ "parent_section_id": null,
44
+ "section_name": "RL DESIGN BASED ON OPERATOR IN CHILD-MOTHER SYSTEM",
45
+ "text": "Currently, the prevailing reinforcement learning methods are built on the AC framework. Pairing the AC framework with deep neural networks offers advantages that traditional optimal control lacks. For example, in the computation of the value function, reinforcement learning can estimate the action-value function through the Critic network, which avoids the large amount of computation incurred by solving backwards for the value function in traditional optimal control (e.g., dynamic programming); the difference is pronounced when both states and actions are continuous, i.e., when the state space and action space are infinite (in mathematics, the cardinality of the state and action spaces is ). Moreover, for systems with continuous state and action spaces, traditional optimal control can use the Hamilton-Jacobi-Bellman equation to find the optimal policy, but this method imposes strong constraints on the design of the system and the cost function. When the model of the controlled system is unknown or cannot be written explicitly, traditional optimal control methods may find it difficult to solve this type of problem effectively.\n\nIn summary, this section proposes a novel reinforcement learning framework based on the operator introduced in Section 3 and applies it to the child-mother system built in Section 4. Moreover, the focus of this paper is not on how to calibrate a system built from SDEs using a large amount of data, but rather on solving the optimal control problem for that system once calibration is complete. The calibration method proposed in Section 4 is a simple one, and in the subsequent design of the reinforcement learning framework it is assumed that the child-mother system has already been calibrated from data, as shown in (50)."
46
+ },
47
+ {
48
+ "section_id": "5.1",
49
+ "parent_section_id": "5",
50
+ "section_name": "Critic Network Design",
51
+ "text": "For reinforcement learning based on the AC framework, the value function of the action at time is defined as\nwhere is the network parameter of . is the value function of the states and at time , representing the potential value of the state for future total rewards. is the reward function, which is related to the states of the child-system and mother-system as well as the control input ; the functional form of should be a polynomial. is the reward discount factor, reflecting the fact that rewards at more distant future moments have a much lower impact on the current action than the current reward does.\n\nFor the Critic network in the reinforcement learning framework, the task is to give an accurate estimate of the state-value function in the current state, so that the Actor network can use this estimate to select the optimal action that maximizes the reward, hence finding a globally optimal solution over the whole control process. According to the theory of reinforcement learning, when the parameter is trained, the value of is independent of the choice of . It is natural to see that an action is uniquely determined at time when the state of the child-system is and the state of the mother system is ; the action\u2019s value function at that moment should then be deterministic, i.e., in (51), the value function is independent of the choice of . Therefore, the following equation holds.\nApplying Proposition 1 to the above equation, the following equation holds.\nWithout loss of generality, assume for a moment that . Indeed, when the Critic network training converges, no change in will have an effect on , regardless of the time . Therefore, (53) can be rewritten as\nThis formula involves only the reward value at moment and the Critic network\u2019s estimate of the value function at moment , and does not require partial derivatives of the value function, which is a desirable property. The method can be remarkably effective when the value-function network is not fully known, such as in some application scenarios of inverse reinforcement learning (IRL), data-driven tasks, etc., where the network structure and parameters of the Critic network are not well known and only a large amount of data is available.\n\nAccording to the theory of the operator in Definition 6, the operator with respect to the stochastic processes and can be described by the following equation,\nwhere , and are the first-order partial differential matrix of the drift term function , the first-order partial differential matrix of the diffusion term function and the second-order partial differential matrix of the diffusion term function of the SDEs (50), respectively. , and are defined as\nMoreover, at moment , the random variable obeys a Gaussian distribution whose expectation matrix and covariance matrix have the following form,\nwhere and are the child-system state and the control input at the last moment before time . 
They can also be thought of as the historical information available to the entire child-mother system at moment (i.e., the system history collected by the sensors).\n\nLikewise, the operator with respect to the stochastic processes and can be described by the following equation,\nwhere , and are the first-order partial differential matrix of the drift term function , the first-order partial differential matrix of the diffusion term function and the second-order partial differential matrix of the diffusion term function of the SDEs (50), respectively. , and are defined as\nMoreover, and are defined as\nwhere and are the mother-system state and the child-system state at the last moment before time .\n\nThe goal of training the Critic network is essentially to find the optimal parameter such that (54) holds at any moment.\nTherefore, the loss function of the Critic network can be set intuitively as\nMore generally, at any given time , all selections of should make (51) a fixed value, and the more general form of the loss function should be reformulated as\nWhen the Critic network is well trained on a large amount of data collected offline or online, it can give, at a given moment, the value function corresponding to that moment. Denote as the network parameter when the Critic network is well trained."
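To make the contrast with Problem I concrete, the following sketch juxtaposes the traditional TD-style Critic loss of (1) with a schematic YORL-style loss in which the operator term, assembled from the drift/diffusion partial differential matrices of the calibrated SDEs (50), is driven to satisfy the stationarity condition of (54). Sign and discount conventions here are schematic, not the authors' exact formulation:

```python
import torch

def tsrl_critic_loss(V, r, s, s_next, gamma=0.99):
    # Traditional TD-style MSE loss as in (1): fit V(s) toward r + gamma * V(s').
    with torch.no_grad():
        target = r + gamma * V(s_next)
    return ((V(s) - target) ** 2).mean()

def yorl_critic_loss(r, op_V):
    # Schematic YORL loss: `op_V` is the operator applied to the Critic's value
    # estimate, precomputed from the calibrated drift/diffusion networks, so that
    # r plus the operator term is pushed toward zero at every visited state.
    return ((r + op_V) ** 2).mean()
```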
52
+ },
53
+ {
54
+ "section_id": "5.2",
55
+ "parent_section_id": "5",
56
+ "section_name": "Actor Network Design",
57
+ "text": "For reinforcement learning based on the AC architecture, the goal of the Actor network is to output, once the Critic network has given the value function of the current state, the optimal action that maximizes the final reward.\nThe action output by the Actor network can be represented as\nwhere , are the learning-based functions of the controller, which can be realized by a neural network with parameter . is a -dimensional random vector whose components are independent and identically distributed random variables, all following a Gaussian distribution with expectation and variance (the time interval of the control inputs). Therefore, at time , the action output by the Actor network obeys the following distribution:\nwhere is the expectation vector and is the covariance matrix.\n\n\nThe key to improving the Actor\u2019s policy is to find the optimal parameter of the Actor network that maximizes the reward in reinforcement learning. That means finding\nwhere is the conditional distribution of the action when the child-system state and the mother-system state are known at time . is the advantage function, which indicates whether the current action is favorable for the current value function: when is greater than , the current action is advantageous; otherwise, the action is a worse one.\nFollowing the Actor design in TRPO (schulman2015trust) and PPO (schulman2017proximal), (65) can be reformulated as\nwhere represents the conditional probability of the action under the old policy.\n\nMost current studies comparing the advantages of algorithms use the PPO algorithm as a baseline. The subsequent experimental results in this paper are also compared against PPO; therefore, the Actor design follows the same method as the PPO algorithm.\nThus, define the loss function of the Actor as follows,\nwhere \u201cclip\u201d is a function that restricts to the interval between and , and is a hyper-parameter.\n\nIn practice, during reinforcement learning training, the Critic network is not fully trained first with the Actor network trained afterwards. Instead, after the agent interacts with the environment for a fixed number of episodes, the data from these episodes are used, either synchronously or asynchronously, to train both the Actor network and the Critic network."
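Since the Actor design follows PPO, the clipped surrogate loss above can be written compactly as below (a standard PPO sketch; `eps` stands in for the paper's unnamed clip hyper-parameter):

```python
import torch

def ppo_actor_loss(logp_new, logp_old, advantage, eps=0.2):
    ratio = torch.exp(logp_new - logp_old)            # pi_theta / pi_theta_old
    unclipped = ratio * advantage
    clipped = torch.clamp(ratio, 1.0 - eps, 1.0 + eps) * advantage
    return -torch.min(unclipped, clipped).mean()      # minimize the negative surrogate
```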
58
+ },
59
+ {
60
+ "section_id": "6",
61
+ "parent_section_id": null,
62
+ "section_name": "ILLUSTRATIVE EXAMPLES",
63
+ "text": "We compare the YORL method with the TSRL method. Both YORL and TSRL use a design consistent with the PPO method for the Actor network. In the design of the Critic network, however, YORL updates the network with the loss function described in Section 5.1, while TSRL updates the network with the loss function described in (1). In fact, TSRL\u2019s AC network design is entirely consistent with PPO; it simply applies the PPO algorithm to a system modeled by SDEs."
64
+ },
65
+ {
66
+ "section_id": "6.1",
67
+ "parent_section_id": "6",
68
+ "section_name": "Linear Numerical Examples",
69
+ "text": "Consider a linear child-mother system\nThe reward function is given by\nwhere are constants and is the magnitude of the distance expected to be maintained between and .\nThe goal of optimal control is to find the optimal control policy that maximizes the total reward while satisfying the constraint equations.\nThe specific design details of the reinforcement learning framework are shown in Table 1.\nA training comparison on this linear numerical example between the reinforcement learning method using the operator-based design of the Critic network\u2019s loss function (YORL) and the TSRL method is shown in Fig. 1.\nIt is clear that the training performance of the YORL method using the operator surpasses that of the TSRL method in this linear child-mother system, and the YORL method achieves higher rewards than the TSRL method at training convergence.\nThis is because YORL incorporates the stochasticity of the child-mother system modeled by SDEs into the design of the Critic network, and therefore outperforms TSRL, a method that does not take the stochasticity of the model into account, in terms of the reward function.\nIt thus directly addresses the deficiency of current studies articulated in Problem I.\n\nAs can be seen in Fig. 1, when the ReLU function is used as the activation function of the network, there is a certain degree of reduction in the reward function for both the YORL and TSRL methods, consistent with what was described in Problem II. However, the reward of the YORL method is still higher than that of the conventional TSRL method, because the operator avoids the problem of computing partial derivatives of the state-value function.\n\nThe experimental results reveal that, for the same hidden-layer dimensions, the network with the ReLU activation function converges faster, while the network with the Sigmoid activation function converges more slowly but reaches a higher peak reward after convergence than the other networks. Compared with ReLU and Sigmoid, the network with the tanh activation function shows a more moderate performance. In addition, the TSRL method converges faster than YORL when the hidden-layer dimension increases, but its average reward after convergence is inferior to that of the YORL method."
70
+ },
71
+ {
72
+ "section_id": "6.2",
73
+ "parent_section_id": "6",
74
+ "section_name": "Nonlinear Numerical Examples",
75
+ "text": "Consider a nonlinear child-mother system\nThe reward function is given by\nwhere are constants and is the magnitude of the distance expected to be maintained between and .\nThe goal of optimal control is to find the optimal control policy that maximizes the total reward while satisfying the constraint equations.\nThe specific design details of the reinforcement learning framework are likewise shown in Table 1.\nThe simulation results for this nonlinear child-mother system are shown in Fig. 2. As in the linear child-mother system, the hidden-layer dimension used in subfigure (a) is , and the activation function is the Sigmoid function. The average reward of YORL is consistently higher than that of the existing TSRL method throughout the training process. When the hidden-layer dimension is increased to with the activation function unchanged, as shown in subfigure (b), both the TSRL method and YORL improve, a positive result of the increased number of neurons in the network. At training convergence, the average reward of the YORL method is slightly better than that of the TSRL method.\nWhen the hidden-layer dimension is unchanged and the activation function is replaced with the ReLU function, as shown in subfigure (c), the advantage of YORL is still obvious.\nFinally, in subfigure (d), the hidden-layer dimension is still and the activation function is replaced with the tanh function. YORL continues to perform remarkably well.\n\nFrom the two numerical examples, it can be seen that the YORL method shows better control performance than the TSRL method, both in systems modeled by linear SDEs and in systems modeled by nonlinear SDEs. The YORL method solves Problem I well and gives a feasible approach for some of the special scenarios described in Problem II. It can provide an illuminating proposal for researchers whose problems fall into those special scenarios."
76
+ },
77
+ {
78
+ "section_id": "7",
79
+ "parent_section_id": null,
80
+ "section_name": "CONCLUSIONS",
81
+ "text": "The YORL method outperforms the TSRL method in both linear and nonlinear child-mother systems, which answers Problem I posed earlier in the paper. This demonstrates the necessity of taking the stochasticity of the system into account when designing the loss function of the Critic network.\n\nWhen the states in the child-mother system are all considered as stochastic processes represented by SDEs, the state-value function is essentially a functional of the stochastic process. The characteristic operator of the It\u00f4 diffusion process transforms the problem of solving the derivative of this functional with respect to time into the problem of solving the partial derivatives of this functional with respect to its independent variables. The operator proposed in this paper further transforms it into the problem of solving the partial derivatives of the drift term function and the diffusion term function of the SDEs of the stochastic process with respect to their independent variables.\n\nSolving the problem as transformed by the characteristic operator requires the functional of the stochastic process to be known, and imposes strict continuity requirements on it (at least the second-order partial derivatives must be continuous). However, in many scenarios, the functional may not be known, and only its input and output data are available. When designing a Critic network for reinforcement learning, some activation functions such as ReLU may not satisfy the continuity requirement, meaning that the choice of activation function may yield a value function that does not satisfy the conditions for applying the characteristic operator, which is a significant limitation on network design.\nThe operator proposed in this paper transforms this into the problem of solving the partial derivatives of the drift term function and the diffusion term function of the SDEs obeyed by the random variable itself with respect to their independent variables. The continuity requirement on the value function is thus transformed into a continuity requirement on the drift term function and the diffusion term function of the SDEs obeyed by the random variable itself.\n\nThe proposed operator therefore allows more flexibility in the design of the Critic network when the SDEs obeyed by the system state satisfy the continuity condition; activation functions such as ReLU can be chosen. Moreover, the operator can serve as a reference tool in areas such as inverse reinforcement learning and offline reinforcement learning, providing researchers with inspiring proposals. The proposed operator thus effectively addresses the shortcomings articulated in Problem II."
82
+ },
83
+ {
84
+ "section_id": "8",
85
+ "parent_section_id": null,
86
+ "section_name": "APPENDIX",
87
+ "text": ""
88
+ },
89
+ {
90
+ "section_id": "8.1",
91
+ "parent_section_id": "8",
92
+ "section_name": "The Proof of Proposition 1",
93
+ "text": "Denote the dimension of as , the dimension of as , and so on, with the dimension of being ; there are, in essence, independent variables in the function , denoted as .\n\nAccording to the Taylor expansion formula for multivariate functions, (34) in Proposition 1 can be written in the following form:\nFor the equation ,\nCase 1 ():\nCase 2 ():\nSince , are independent of each other, it is clear that the following equation holds\nTherefore, (74) can be reformulated as\nAs is known,\nobviously the following equation holds\nAlso, by Theorem 1, the operator is equivalent to the characteristic operator, so Proposition 1 holds.\n\u220e"
94
+ },
95
+ {
96
+ "section_id": "8.2",
97
+ "parent_section_id": "8",
98
+ "section_name": "The Proof of Proposition 3",
99
+ "text": "According to Theorem 1, (36) naturally holds.\n\nAccording to the Kolmogorov forward equation,\nMeanwhile, according to Proposition 2,\nTherefore, the following equation holds\n\u220e"
100
+ }
101
+ ],
102
+ "appendix": [],
103
+ "tables": {
104
+ "1": {
105
+ "table_html": "<figure class=\"ltx_table\" id=\"S6.T1\">\n<figcaption class=\"ltx_caption\"><span class=\"ltx_tag ltx_tag_table\">Table 1: </span>SPECIFICATION OF THE RL DESIGN IN LINEAR NUMERICAL EXAMPLE</figcaption>\n<table class=\"ltx_tabular ltx_guessed_headers ltx_align_middle\" id=\"S6.T1.14\">\n<thead class=\"ltx_thead\">\n<tr class=\"ltx_tr\" id=\"S6.T1.14.15.1\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_l ltx_border_r ltx_border_t\" colspan=\"4\" id=\"S6.T1.14.15.1.1\" style=\"padding:1pt 27.9pt;\">RL Design</th>\n</tr>\n</thead>\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S6.T1.3.3\">\n<td class=\"ltx_td ltx_align_center ltx_border_l ltx_border_r ltx_border_t\" id=\"S6.T1.3.3.4\" style=\"padding:1pt 27.9pt;\">Actor Network</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S6.T1.1.1.1\" style=\"padding:1pt 27.9pt;\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S6.T1.2.2.2\" style=\"padding:1pt 27.9pt;\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S6.T1.3.3.3\" style=\"padding:1pt 27.9pt;\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S6.T1.6.6\">\n<td class=\"ltx_td ltx_align_center ltx_border_l ltx_border_r ltx_border_t\" id=\"S6.T1.6.6.4\" style=\"padding:1pt 27.9pt;\">Critic Network</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S6.T1.4.4.1\" style=\"padding:1pt 27.9pt;\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S6.T1.5.5.2\" style=\"padding:1pt 27.9pt;\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S6.T1.6.6.3\" style=\"padding:1pt 27.9pt;\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S6.T1.10.10\">\n<td class=\"ltx_td ltx_align_center ltx_border_l ltx_border_r ltx_border_t\" id=\"S6.T1.7.7.1\" style=\"padding:1pt 27.9pt;\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S6.T1.8.8.2\" style=\"padding:1pt 27.9pt;\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S6.T1.9.9.3\" style=\"padding:1pt 27.9pt;\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S6.T1.10.10.4\" style=\"padding:1pt 27.9pt;\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S6.T1.14.14\">\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_l ltx_border_r ltx_border_t\" id=\"S6.T1.11.11.1\" style=\"padding:1pt 27.9pt;\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r ltx_border_t\" id=\"S6.T1.12.12.2\" style=\"padding:1pt 27.9pt;\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r ltx_border_t\" id=\"S6.T1.13.13.3\" style=\"padding:1pt 27.9pt;\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r ltx_border_t\" id=\"S6.T1.14.14.4\" style=\"padding:1pt 27.9pt;\"></td>\n</tr>\n</tbody>\n</table>\n</figure>",
106
+ "capture": "Table 1: SPECIFICATION OF THE RL DESIGN IN LINEAR NUMERICAL EXAMPLE"
107
+ }
108
+ },
109
+ "image_paths": {
110
+ "1": {
111
+ "figure_path": "2311.04014v3_figure_1.png",
112
+ "caption": "Figure 1: In subfigure (a), the hidden layer dimension of both the Actor and Critic networks for both TSRL and YORL is 32 and the activation function used is the Sigmoid function. In subfigure (b), the hidden layer dimension is 128 and the activation function used is Sigmoid. In subfigure (c), the hidden layer dimension is 32 and the activation function used is ReLU. In subfigure (d), the hidden layer dimension is 32 and the activation function used is tanh.",
113
+ "url": "http://arxiv.org/html/2311.04014v3/extracted/5324992/pic/YORLvsPPOLinear.png"
114
+ },
115
+ "2": {
116
+ "figure_path": "2311.04014v3_figure_2.png",
117
+ "caption": "Figure 2: In subfigure (a), the hidden layer dimension of both the Actor and Critic networks for both TSRL and YORL is 32 and the activation function used is the Sigmoid function. In subfigure (b), the hidden layer dimension is 128 and the activation function used is Sigmoid. In subfigure (c), the hidden layer dimension is 32 and the activation function used is ReLU. In subfigure (d), the hidden layer dimension is 32 and the activation function used is tanh.",
118
+ "url": "http://arxiv.org/html/2311.04014v3/extracted/5324992/pic/YORLvsPPONonLinear.png"
119
+ }
120
+ },
121
+ "validation": true,
122
+ "references": [
123
+ {
124
+ "1": {
125
+ "title": "Stochastic lateral noise and movement by Brownian differential models",
126
+ "author": "H. Qi, Y. Ying, J. Zhang",
127
+ "venue": "2022 IEEE Intelligent Vehicles Symposium (IV), IEEE, Aachen, Germany, 2022, pp. 98\u2013103",
128
+ "url": "https://doi.org/10.1109/IV51971.2022.9827388"
129
+ }
130
+ }
131
+ ],
132
+ "url": "http://arxiv.org/html/2311.04014v3"
133
+ }
20240101/2312.01324v2.json ADDED
@@ -0,0 +1,369 @@
1
+ {
2
+ "title": "MABViT - Modified Attention Block Enhances Vision Transformers",
3
+ "abstract": "Recent studies have demonstrated the effectiveness of Gated Linear Units (GLU) in enhancing transformer models, particularly in Large Language Models (LLMs). Additionally, utilizing a parallel configuration within each Transformer block rather than the conventional serialized method has been revealed to accelerate the training of LLMs without significantly impacting performance. However, when the MLP and attention block were run in parallel for the image classification task, we observed a noticeable decline in performance. We propose a novel transformer variant that integrates non-linearity within the attention block to tackle this problem. We implemented the GLU-based activation function on the Value tensor, and this new technique surpasses the current state-of-the-art S/16 variant of Vision Transformers by 0.6% on the ImageNet-1K dataset while utilizing fewer parameters. It also supersedes the B/16 variant while using only half the parameters. Furthermore, we provide results with the GELU activation function variant to confirm our assertions. Lastly, we showcase that the MABViT variants exhibit greater potential when utilized in deep transformers compared to the standard architecture.",
4
+ "sections": [
5
+ {
6
+ "section_id": "1",
7
+ "parent_section_id": null,
8
+ "section_name": "Introduction",
9
+ "text": "The Transformer model (Vaswani et al. 2017) is a widely adopted neural network architecture across multiple domains, including machine translation, image classification, and speech synthesis. Using a parallel configuration (Wang and Komatsuzaki 2021) within each Transformer block rather than the conventional sequential structure has been shown to accelerate the training of Large Language Models (LLMs) by 15% (Chowdhery et al. 2022) without significantly compromising the results. This prompted our investigation into its application in Vision Transformers.\nHowever, a disparity emerges between the parallel and standard formulations when training vision models, potentially attributable to the difference in scale between vision models (ranging from 5M to 100M parameters) and the considerably larger Language Models (greater than 1B parameters). PaLM (Chowdhery et al. 2022) noted a minor decline in quality at the 8B scale with the parallel formulation but observed no such impact at the 62B scale compared to the standard Transformer. This observation regarding the success and limitations of the parallel formulation in models of varying scales motivated our exploration into integrating non-linearity within the attention block.\nIn this work, we aimed to identify the underlying reasons behind the comparable performance of parallel and standard structures at large scale. Building on this understanding, we have developed a novel architecture that surpasses the performance of traditional transformer structures, achieving superior results with fewer parameters.\nThe key contributions outlined in this paper are as follows:\nTo the best of our knowledge, we are the first to analyze the difference in the performance of parallel structures at different scales.\nWe hypothesize that representation collapse is the cause of the similarity in performance of parallel and standard transformer architectures and experimentally attempt to verify this claim.\nWe incorporate GLU-based activation functions within the attention block of the Transformer architecture to partially overcome the representation collapse issue.\nWe demonstrate that MABViT-GLU variants outperform the standard architectures with fewer parameters.\nWe also provide results for the MABViT-GELU variant to reinforce our assertion that applying an activation function to the Value tensor within the attention module enhances the Vision Transformer.\nFinally, we exhibit that MABViT variants possess greater potential in deep transformers compared to the standard architecture."
10
+ },
11
+ {
12
+ "section_id": "2",
13
+ "parent_section_id": null,
14
+ "section_name": "Background",
15
+ "text": ""
16
+ },
17
+ {
18
+ "section_id": "2.1",
19
+ "parent_section_id": "2",
20
+ "section_name": "Transformers",
21
+ "text": "The Transformer architecture is comprised of two key sub-modules:\n1) Multi-Head Attention Module: This module facilitates focusing on different positions within the input sequence to compute representations. It involves splitting the input into multiple heads to perform parallel self-attention before combining the outcomes.\n2) MLP Module: Also known as the position-wise feed-forward network, this module processes the output from the attention mechanism through a series of fully connected layers independently at each position in the sequence."
22
+ },
23
+ {
24
+ "section_id": "2.1.1",
25
+ "parent_section_id": "2.1",
26
+ "section_name": "Pre-LayerNormalization Transformer",
27
+ "text": "The computation process in the Pre-LN transformer architecture can be represented as:\nHere, denotes the input, and the operations involving the Multi-Head Attention and MLP (Multi-Layer Perceptron) modules are executed successively within the Transformer block."
28
+ },
29
+ {
30
+ "section_id": "2.1.2",
31
+ "parent_section_id": "2.1",
32
+ "section_name": "Post-LayerNormalization Transformer",
33
+ "text": "The computations in the Post-LN transformer architecture are:\nNote: LN represents LayerNorm, and there is a difference in its position in the two architectures."
34
+ },
35
+ {
36
+ "section_id": "2.2",
37
+ "parent_section_id": "2",
38
+ "section_name": "Vision Transformers",
39
+ "text": "In Vision Transformers (ViT) (Dosovitskiy et al. 2020), the architecture begins with a Patch Embedding (PE) layer that restructures the image into a sequence of patches. The PE layer first rearranges the image, denoted as , into patches , where determines the patch dimensions. Each of these patches then undergoes an independent dense transformation, generating the visual tokens . After this layer, a series of Transformer blocks are applied, operating with self-attention, feed-forward layers, and residual connections similar to typical Transformer architectures."
40
+ },
41
+ {
42
+ "section_id": "2.3",
43
+ "parent_section_id": "2",
44
+ "section_name": "Parallel Structure",
45
+ "text": "The computation process in the Parallel Pre-LN transformer architecture is:\nHere, the operations involving the Multi-Head Attention and MLP (Multi-Layer Perceptron) modules are performed in parallel within the Transformer block instead of sequentially."
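For concreteness, a minimal sketch of such a parallel Pre-LN block (one shared LayerNorm feeding both branches, in the style of GPT-J-like parallel blocks; some parallel variants use separate norms):

```python
import torch.nn as nn

class ParallelBlock(nn.Module):
    """y = x + MHA(LN(x)) + MLP(LN(x)), versus the serial
    x = x + MHA(LN(x)); x = x + MLP(LN(x)) of a standard Pre-LN block."""
    def __init__(self, dim, heads, mlp_dim):
        super().__init__()
        self.ln = nn.LayerNorm(dim)
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.mlp = nn.Sequential(nn.Linear(dim, mlp_dim), nn.GELU(),
                                 nn.Linear(mlp_dim, dim))

    def forward(self, x):
        h = self.ln(x)  # one shared LayerNorm feeds both branches
        return x + self.attn(h, h, h, need_weights=False)[0] + self.mlp(h)
```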
46
+ },
47
+ {
48
+ "section_id": "2.4",
49
+ "parent_section_id": "2",
50
+ "section_name": "Representational Collapse",
51
+ "text": "The challenge with the representation capability of Pre-LN transformers was first identified by Admin (Liu et al. 2020). As the number of layers increases, the term grows, making the output from the Multi-Head Attention or MLP blocks relatively insignificant.\nFor instance, here, as we move to deeper layers, the magnitude and variance of the term greatly exceed the output of the Multi-Head Attention block. This suggests that the input and output values in the later blocks are likely to converge or become increasingly similar."
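A toy numerical illustration of this effect (assuming, purely for intuition, that each block adds an independent unit-variance output to the residual stream):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=512)                      # residual stream after the embedding
for layer in range(24):
    block_out = rng.normal(size=512)          # stand-in for an MHA/MLP output
    ratio = np.linalg.norm(block_out) / np.linalg.norm(x)
    x = x + block_out                         # stream norm grows roughly like sqrt(depth)
# the block's relative contribution shrinks toward ~1/sqrt(depth) in later layers
print(f"block/stream norm ratio at the last layer: {ratio:.3f}")
```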
52
+ },
53
+ {
54
+ "section_id": "2.5",
55
+ "parent_section_id": "2",
56
+ "section_name": "Gated Linear Units",
57
+ "text": "In their work, (Dauphin et al. 2017) presented Gated Linear Units (GLU), a neural network layer created by combining two linear transformations of the input using element-wise multiplication, one of which employs a sigmoid activation function.\nThis formulation of the GLU equation uses the input , weight matrices and , bias vectors and , the sigmoid function and element-wise multiplication .\n(Shazeer 2020) introduced further variants of the GLU and demonstrated that substituting the initial linear transformation and activation function of the MLP layer in the transformer architecture with GLU or one of its variants enhances the Transformer\u2019s performance.\nEach of these layers involves 3 matrices instead of 2, so the hidden dimension is reduced to 2/3 of its original size to maintain the number of parameters."
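A sketch of such a GLU-style feed-forward layer (sigmoid gives the classic GLU; swapping in GELU or SiLU yields the GEGLU/SwiGLU variants from Shazeer's paper; the 2/3 width scaling is left to the caller):

```python
import torch
import torch.nn as nn

class GLUFeedForward(nn.Module):
    """Two parallel projections combined element-wise; the gate branch carries
    the activation (sigmoid = GLU, GELU = GEGLU, SiLU = SwiGLU)."""
    def __init__(self, dim, hidden, act=torch.sigmoid):
        super().__init__()
        self.gate = nn.Linear(dim, hidden)   # gated branch
        self.up = nn.Linear(dim, hidden)     # linear branch
        self.down = nn.Linear(hidden, dim)   # output projection (the 3rd matrix)
        self.act = act

    def forward(self, x):
        return self.down(self.act(self.gate(x)) * self.up(x))
```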
58
+ },
59
+ {
60
+ "section_id": "3",
61
+ "parent_section_id": null,
62
+ "section_name": "Related Work",
63
+ "text": "Several approaches have been proposed to overcome the representation collapse issue in Pre-LN transformers. Techniques like Admin (Liu et al. 2020) and DeepNet (Wang et al. 2022) add different weights to the residuals. Furthermore,\nDeepNet (Wang et al. 2022) also modified the initialization to reduce training instability in Post-LN transformers.\nA few others, like Resi-Dual (Xie et al. 2023), proposed modifications to the architecture to address this problem.\nHowever, when applied to Vision Transformers, all these techniques either result in training instability or exhibit considerably inferior performance compared to standard Pre-LN transformer architectures. This motivated us to develop a novel architecture that addresses the representation collapse problem and achieves improved results on vision tasks over standard Vision Transformers."
64
+ },
65
+ {
66
+ "section_id": "4",
67
+ "parent_section_id": null,
68
+ "section_name": "Methodology",
69
+ "text": "We hypothesize that representation collapse is the underlying cause for the comparable performance of parallel and standard transformer architectures at a large scale."
70
+ },
71
+ {
72
+ "section_id": "4.1",
73
+ "parent_section_id": "4",
74
+ "section_name": "Standard Attention Block",
75
+ "text": "The computations inside a regular attention block are:\nGiven the input features or embeddings , these are transformed into the , , and matrices via learned linear projections. Here, , , and represent the weight matrices while , , and denote the bias terms for the linear projections.\nEquations for computing the attention scores (), attention distribution (), and weighted values ():\nThe attention scores () are determined by taking the dot product of the query () and key () matrices. To scale these attention scores, we divide by the square root of the dimensionality .\nSubsequently, the softmax function is applied to these scores to produce the attention distribution () that provides the weight or significance assigned to each token in the input sequence. is then multiplied with the value matrix () to obtain the weighted values () capturing the importance of the individual elements.\n[Figure 1]"
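A compact single-head sketch consistent with the equations above (biases omitted; illustrative, not the big-vision implementation):

```python
import torch

def attention(x, Wq, Wk, Wv):
    q, k, v = x @ Wq, x @ Wk, x @ Wv                         # learned projections
    scores = q @ k.transpose(-2, -1) / (q.shape[-1] ** 0.5)  # A = QK^T / sqrt(d)
    return torch.softmax(scores, dim=-1) @ v                 # Z = softmax(A) V
```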
76
+ },
77
+ {
78
+ "section_id": "4.2",
79
+ "parent_section_id": "4",
80
+ "section_name": "Modified Attention Block",
81
+ "text": "As in the standard attention block, we transform the input into the Query, Key and Value tensors. Following this, we apply the activation function to the Value tensor. Otherwise, the standard processes within the attention block remain unchanged.\n[Figure 2]"
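The modification is a one-line change relative to the standard block, sketched here with GELU standing in for the chosen activation:

```python
import torch
import torch.nn.functional as F

def mab_attention(x, Wq, Wk, Wv, act=F.gelu):
    q, k, v = x @ Wq, x @ Wk, x @ Wv
    v = act(v)  # the only change: non-linearity on the Value tensor
    scores = q @ k.transpose(-2, -1) / (q.shape[-1] ** 0.5)
    return torch.softmax(scores, dim=-1) @ v
```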
82
+ },
83
+ {
84
+ "section_id": "4.3",
85
+ "parent_section_id": "4",
86
+ "section_name": "Variant",
87
+ "text": "For the GLU-based activation function, we increased the dimension of the value tensor to twice its original size, dividing it into two halves. We applied the activation to one half and performed element-wise multiplication with the other half:\nTo counterbalance the additional parameters introduced by using the GLU activation, we reduced the number of parameters in the MLP block.\nWhile the proposed architecture does not provide an exhaustive solution to the representation collapse problem, it partially addresses it by giving significance to the output of the Multi-head Attention block."
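A sketch of this GLU variant, assuming the value projection is simply made twice as wide and then split (the compensating reduction of MLP parameters is omitted):

```python
import torch

def glu_value_attention(x, Wq, Wk, Wv2, act=torch.sigmoid):
    q, k = x @ Wq, x @ Wk
    v_gate, v_lin = (x @ Wv2).chunk(2, dim=-1)  # Wv2 projects to twice the width
    v = act(v_gate) * v_lin                     # element-wise gating of the Value
    scores = q @ k.transpose(-2, -1) / (q.shape[-1] ** 0.5)
    return torch.softmax(scores, dim=-1) @ v
```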
88
+ },
89
+ {
90
+ "section_id": "5",
91
+ "parent_section_id": null,
92
+ "section_name": "Experiments",
93
+ "text": ""
94
+ },
95
+ {
96
+ "section_id": "5.1",
97
+ "parent_section_id": "5",
98
+ "section_name": "Setup",
99
+ "text": "We utilized the Vision Transformers (ViT) model (Dosovitskiy et al. 2020), which has demonstrated impressive performance across diverse visual tasks. Seven variants of ViT were trained on three different architectures (Ti/16, S/16, and B/16):\n1) Standard ViT: We implemented the standard Vision Transformer using big-vision (Beyer, Zhai, and Kolesnikov 2022).\n2) Standard Parallel Structure: We modified the standard architecture to perform Multi-Head Attention and MLP computations in parallel instead of sequentially.\n3) GLU-based variant: We applied the GLU-based activation within the standard ViT attention module on the Value tensor without any additional hyperparameter tuning.\n4) GLU Parallel Variant: Same as above but applied to the Parallel variant.\n5) Parameter Reduced GLU Variant: To compensate for the extra parameters from GLU, we reduced MLP dimensions from 4x to 3x embedding size.\n6) Parameter Reduced GLU Parallel Variant: As above but on the Parallel ViT.\n7) GELU Variant: GELU activation applied to standard ViT Value tensor.\nThe training was conducted following the AugReg methodology (Steiner et al. 2022) for 300 epochs with a batch size of 4096 for Ti/16 and S/16 architectures and 2048 for B/16. We evaluated the top-1 accuracy on the ImageNet-1K validation set per common practices. A dropout of 0.1 was implemented for the B/16 architecture, while other architectures were trained without dropout. The training process encompassed the standard ViT, Parallel ViT, standard and Parallel variants of the GLU-based MABViT architecture (with and without reducing MLP parameters), and the GELU-based MABViT architecture to provide a comprehensive and detailed analysis."
100
+ },
101
+ {
102
+ "section_id": "5.2",
103
+ "parent_section_id": "5",
104
+ "section_name": "Architectures",
105
+ "text": "Note: The MLP dimensions in brackets correspond to the parameter-reduced variants."
106
+ },
107
+ {
108
+ "section_id": "5.3",
109
+ "parent_section_id": "5",
110
+ "section_name": "Number of Parameters",
111
+ "text": "For each architecture, the standard ViT and Parallel ViT pairs possess identical parameters."
112
+ },
113
+ {
114
+ "section_id": "5.4",
115
+ "parent_section_id": "5",
116
+ "section_name": "Comparison Between Standard and Parallel Variants",
117
+ "text": "There is a noticeable difference between the validation accuracy of the standard and parallel architectures across all the models. With smaller-scale models, the magnitude and variance of the term are reduced. Consequently, the standard architectures attain superior performance compared to the parallel formulations. We observe that the difference in results reduces when we increase the width of the layers. We hypothesize that the magnitude of the term rises with greater width, causing the parallel and standard structures to perform more similarly."
118
+ },
119
+ {
120
+ "section_id": "5.5",
121
+ "parent_section_id": "5",
122
+ "section_name": "Comparison Between Standard and Parallel GLU variants",
123
+ "text": "The standard and parallel GLU-based variants demonstrate a steady enhancement of over 1% compared to their conventional counterparts on the S/16 and Ti/16 architectures. However, on the B/16 architecture, both variants exhibit overfitting and underperformance.\nAs expected, the parallel GLU variant surpasses the standard GLU on B/16 since it is less prone to overfitting."
124
+ },
125
+ {
126
+ "section_id": "5.6",
127
+ "parent_section_id": "5",
128
+ "section_name": "Comparison Between Parameter Reduced GLU and Parameter Reduced Parallel GLU variants",
129
+ "text": "As described earlier in the architecture specifications, the GLU-PR-base and GLU-PR-Parallel variants possess fewer parameters compared to the standard ViT architecture. Both the PR-GLU S/16 and Ti/16 variants surpass their traditional counterparts. The PR-GLU base variant exhibits a 0.6 % improvement over the standard ViT on S/16. However, as with the previous GLU variants, overfitting occurs again on the B/16 architecture for both formulations."
130
+ },
131
+ {
132
+ "section_id": "5.7",
133
+ "parent_section_id": "5",
134
+ "section_name": "GELU Variant",
135
+ "text": "Our experiments with the GELU variants reveal that the GELU S/16 and Ti/16 variants outperform the standard ViT but fall short compared to the GLU-base and PR-GLU-base variants. This reaffirms our assertion that integrating an activation function within the attention module enhances the Vision Transformer\u2019s performance."
136
+ },
137
+ {
138
+ "section_id": "5.8",
139
+ "parent_section_id": "5",
140
+ "section_name": "Experiments on M/16 Architecture",
141
+ "text": "Since the B/16 MABViT variants exhibited overfitting, we conducted additional experiments on an intermediate architecture between S/16 and B/16. The M/16 architecture has 12 layers, 576 dimensions in the MHA, 2304 dimensions in MLP and consists of 8 heads.\nWe evaluated all four standard (not parallel) variants with the same hyperparameters.\nAll the M/16 MABViT variants surpass the baseline architecture. Furthermore, they outperform the base B/16 variant while utilizing only half the number of parameters. This demonstrates the ability of MABViT variants to capture complex patterns with fewer parameters efficiently."
142
+ },
143
+ {
144
+ "section_id": "5.9",
145
+ "parent_section_id": "5",
146
+ "section_name": "Experiments on S/16 with 18 Layers",
147
+ "text": "In our final experiments, we increased the number of layers in the S/16 architecture to 18 and evaluated the standard variants:\nAs anticipated, with greater depth, the increasing magnitude of the term impedes the performance of the baseline model. However, the MABViT variants are able to partially overcome the representation collapse issue by providing significance to the output of the MHA block. Evidently, the MABViT variant continues to improve as we raise the number of layers.\nOverall, applying an activation function to the Value tensor boosts the performance of the Ti/16, S/16 and M/16 variants for both standard and parallel formulations. The overfitting exhibited by the GLU B/16 variants indicates the ability of the new architecture to capture complex patterns present in the dataset. The results also accentuate the importance of the standard structure for smaller-scale models."
148
+ },
149
+ {
150
+ "section_id": "6",
151
+ "parent_section_id": null,
152
+ "section_name": "Discussion",
153
+ "text": "Although the suggested modification provides significance to the Multi-Head Attention block\u2019s output, the representational collapse problem is not entirely resolved. The linear growth of the term continues, resulting in convergence between the input and output of deeper layers. However, the proposed architecture improves performance in initial layers and assigns greater weight to the Multi-Head Attention output in later layers compared to the standard ViT. Additional research could provide potential solutions to tackle representation collapse fully or refine this technique.\n###figure_3### ###figure_4### ###figure_5### Our experiments demonstrate that the MABViT variants converge substantially faster than the standard architectures. From Figure 3 ###reference_### and Figure 4 ###reference_###, we can see that the MABViT PR-GLU variant achieves 78 % validation accuracy at the step, whereas baseline ViT attains 78 % at the step. Across all experiments, the MABViT models consistently exhibit superior performance throughout training, barring cases of overfitting. The new model illustrates an ability to swiftly recognize complex patterns in the dataset compared to the standard variant, which is especially relevant as most Large Language Models are trained on massive data. Our work also emphasizes the significance of the Value tensor projection layer, motivating prospective research into utilizing it as a Mixture of Experts.\nNote: The validation accuracy reported in the tables corresponds to the conclusion of 300 epochs (93,000 steps). It is important to note that the figures were generated for 90,000 steps for clearer visualization."
154
+ },
155
+ {
156
+ "section_id": "7",
157
+ "parent_section_id": null,
158
+ "section_name": "Conclusion",
159
+ "text": "In this work, we successfully show that representation collapse causes the comparable performance of parallel and standard transformer architectures at scale. We also demonstrate that current transformer architectures can be improved through partially resolving representational collapse by effectively integrating non-linearity inside the attention module using GLU variants. The PR-SwiGLU S/16 variant enhances performance by 0.6% with fewer parameters, and all the MABViT M/16 variants surpass the standard B/16 architecture, utilizing only half the parameters. Additionally, we provided analysis with GELU activation and substantiated that activation inside the attention module benefits Vision Transformer. Furthermore, we exhibited that MABViT variants possess greater potential in deep transformers compared to the standard architectures."
160
+ },
161
+ {
162
+ "section_id": "8",
163
+ "parent_section_id": null,
164
+ "section_name": "Acknowledgement",
165
+ "text": "We extend our sincere thanks to Bharat Kumar Sharma and NVIDIA for their crucial support in the project.\nc:35\n\\bibentryc:36\n\\bibentryc:37\n\\bibentryc:38\n\\bibentryc:30\n\\bibentryc:39\n\\bibentryc:41\n\\bibentryc:40\n\\bibentryc:42\n\\bibentryc:44"
166
+ }
167
+ ],
168
+ "appendix": [],
169
+ "tables": {},
170
+ "image_paths": {
171
+ "1": {
172
+ "figure_path": "2312.01324v2_figure_1.png",
173
+ "caption": "Figure 1: Scaled Dot Product Attention",
174
+ "url": "http://arxiv.org/html/2312.01324v2/extracted/5325282/MABViT1.jpg"
175
+ },
176
+ "2": {
177
+ "figure_path": "2312.01324v2_figure_2.png",
178
+ "caption": "Figure 2: Modified Scaled Dot Product Attention",
179
+ "url": "http://arxiv.org/html/2312.01324v2/extracted/5325282/MABViT2.jpg"
180
+ },
181
+ "3": {
182
+ "figure_path": "2312.01324v2_figure_3.png",
183
+ "caption": "Figure 3: Validation accuracy progression of the Baseline S/16 18 Layers variant over 90,000 training steps.",
184
+ "url": "http://arxiv.org/html/2312.01324v2/extracted/5325282/MABVit4.png"
185
+ },
186
+ "4": {
187
+ "figure_path": "2312.01324v2_figure_4.png",
188
+ "caption": "Figure 4: Validation accuracy trajectory of the MABViT PR-GLU S/16 18 Layers variant over 90,000 training steps.",
189
+ "url": "http://arxiv.org/html/2312.01324v2/extracted/5325282/MABViT5.png"
190
+ },
191
+ "5": {
192
+ "figure_path": "2312.01324v2_figure_5.png",
193
+ "caption": "Figure 5: Difference between validation accuracy of MABViT PR-GLU-Base S/16 18L vs Base S/16 18L over 90,000 training steps",
194
+ "url": "http://arxiv.org/html/2312.01324v2/extracted/5325282/MABVit3.png"
195
+ }
196
+ },
197
+ "validation": true,
198
+ "references": [
199
+ {
200
+ "1": {
201
+ "title": "Big Vision.",
202
+ "author": "Beyer, L.; Zhai, X.; and Kolesnikov, A. 2022.",
203
+ "venue": "https://github.com/google-research/big\u02d9vision.",
204
+ "url": null
205
+ }
206
+ },
207
+ {
208
+ "2": {
209
+ "title": "PaLM: Scaling Language Modeling with Pathways.",
210
+ "author": "Chowdhery, A.; Narang, S.; Devlin, J.; Bosma, M.; Mishra, G.; Roberts, A.; Barham, P.; Chung, H. W.; Sutton, C.; Gehrmann, S.; Schuh, P.; Shi, K.; Tsvyashchenko, S.; Maynez, J.; Rao, A.; Barnes, P.; Tay, Y.; Shazeer, N.; Prabhakaran, V.; Reif, E.; Du, N.; Hutchinson, B.; Pope, R.; Bradbury, J.; Austin, J.; Isard, M.; Gur-Ari, G.; Yin, P.; Duke, T.; Levskaya, A.; Ghemawat, S.; Dev, S.; Michalewski, H.; Garcia, X.; Misra, V.; Robinson, K.; Fedus, L.; Zhou, D.; Ippolito, D.; Luan, D.; Lim, H.; Zoph, B.; Spiridonov, A.; Sepassi, R.; Dohan, D.; Agrawal, S.; Omernick, M.; Dai, A. M.; Pillai, T. S.; Pellat, M.; Lewkowycz, A.; Moreira, E.; Child, R.; Polozov, O.; Lee, K.; Zhou, Z.; Wang, X.; Saeta, B.; Diaz, M.; Firat, O.; Catasta, M.; Wei, J.; Meier-Hellstern, K.; Eck, D.; Dean, J.; Petrov, S.; and Fiedel, N. 2022.",
211
+ "venue": "arXiv:2204.02311v5.",
212
+ "url": null
213
+ }
214
+ },
215
+ {
216
+ "3": {
217
+ "title": "Randaugment: Practical Automated Data Augmentation with a Reduced Search Space.",
218
+ "author": "Cubuk, E. D.; Zoph, B.; Shlens, J.; and Le, Q. 2020.",
219
+ "venue": "In Advances in Neural Information Processing Systems, volume 33, 18613\u201318624.",
220
+ "url": null
221
+ }
222
+ },
223
+ {
224
+ "4": {
225
+ "title": "Language Modeling with Gated Convolutional Networks.",
226
+ "author": "Dauphin, Y. N.; Fan, A.; Auli, M.; and Grangier, D. 2017.",
227
+ "venue": null,
228
+ "url": null
229
+ }
230
+ },
231
+ {
232
+ "5": {
233
+ "title": "An image is worth 16x16 words: Transformers for image recognition at scale.",
234
+ "author": "Dosovitskiy, A.; Beyer, L.; Kolesnikov, A.; Weissenborn, D.; Zhai, X.; Unterthiner, T.; Dehghani, M.; Minderer, M.; Heigold, G.; Gelly, S.; Uszkoreit, J.; and Houlsby, N. 2020.",
235
+ "venue": "arXiv:2010.11929.",
236
+ "url": null
237
+ }
238
+ },
239
+ {
240
+ "6": {
241
+ "title": "Deep residual learning for image recognition.",
242
+ "author": "He, K.; Zhang, X.; Ren, S.; and Sun, J. 2015.",
243
+ "venue": "arXiv preprint arXiv:1512.03385.",
244
+ "url": null
245
+ }
246
+ },
247
+ {
248
+ "7": {
249
+ "title": "Improving Transformer Optimization Through Better Initialization.",
250
+ "author": "Huang, X. S.; Perez, F.; Ba, J.; and Volkovs, M. 2020.",
251
+ "venue": "In International Conference on Machine Learning, 4475\u20134483. PMLR.",
252
+ "url": null
253
+ }
254
+ },
255
+ {
256
+ "8": {
257
+ "title": "Dual PatchNorm.",
258
+ "author": "kumar, M.; Dehghani, M.; and Houlsby, N. 2023.",
259
+ "venue": "arXiv, 2302: 2302.01327.",
260
+ "url": null
261
+ }
262
+ },
263
+ {
264
+ "9": {
265
+ "title": "Understanding the Difficulty of Training Transformers.",
266
+ "author": "Liu, L.; Liu, X.; Gao, J.; Chen, W.; and Han, J. 2020.",
267
+ "venue": null,
268
+ "url": null
269
+ }
270
+ },
271
+ {
272
+ "10": {
273
+ "title": "GLU Variants Improve Transformer.",
274
+ "author": "Shazeer, N. 2020.",
275
+ "venue": null,
276
+ "url": null
277
+ }
278
+ },
279
+ {
280
+ "11": {
281
+ "title": "Talking-Heads Attention.",
282
+ "author": "Shazeer, N.; Lan, Z.; Cheng, Y.; Ding, N.; and Hou, L. 2020.",
283
+ "venue": null,
284
+ "url": null
285
+ }
286
+ },
287
+ {
288
+ "12": {
289
+ "title": "NormFormer: Improved Transformer Pretraining with Extra Normalization.",
290
+ "author": "Shleifer, S.; Weston, J.; and Ott, M. 2021.",
291
+ "venue": "arXiv, 2110: 2110.09456.",
292
+ "url": null
293
+ }
294
+ },
295
+ {
296
+ "13": {
297
+ "title": "How to Train Your ViT? Data, Augmentation, and Regularization in Vision Transformers.",
298
+ "author": "Steiner, A. P.; Kolesnikov, A.; Zhai, X.; Wightman, R.; Uszkoreit, J.; and Beyer, L. 2022.",
299
+ "venue": "Transactions on Machine Learning Research.",
300
+ "url": null
301
+ }
302
+ },
303
+ {
304
+ "14": {
305
+ "title": "Going Deeper with Image Transformers.",
306
+ "author": "Touvron, H.; Cord, M.; Sablayrolles, A.; Synnaeve, G.; and J\u00e9gou, H. 2021.",
307
+ "venue": "In Proceedings of the IEEE/CVF International Conference on Computer Vision, 6144\u20136153. IEEE.",
308
+ "url": null
309
+ }
310
+ },
311
+ {
312
+ "15": {
313
+ "title": "Attention Is All You Need.",
314
+ "author": "Vaswani, A.; Shazeer, N.; Parmar, N.; Uszkoreit, J.; Jones, L.; Gomez, A. N.; Kaiser, L.; and Polosukhin, I. 2017.",
315
+ "venue": "arXiv:1706.03762.",
316
+ "url": null
317
+ }
318
+ },
319
+ {
320
+ "16": {
321
+ "title": "Deepnet: Scaling Transformers to 1,000 Layers.",
322
+ "author": "Wang, H.; Ma, S.; Dong, L.; Huang, S.; Zhang, D.; and Wei, F. 2022.",
323
+ "venue": null,
324
+ "url": null
325
+ }
326
+ },
327
+ {
328
+ "17": {
329
+ "title": "GPT-J-6B: A 6 Billion Parameter Autoregressive Language Model.",
330
+ "author": "Wang, B. and Komatsuzaki, A. 2021.",
331
+ "venue": "https://github.com/kingoflolz/mesh-transformer-jax.",
332
+ "url": null
333
+ }
334
+ },
335
+ {
336
+ "18": {
337
+ "title": "ResiDual: Transformer with Dual Residual Connections.",
338
+ "author": "Xie, S.; Zhang, H.; Guo, J.; Tan, X.; Bian, J.; Awadalla, H. H.; Menezes, A.; Qin, T.; and Yan, R. 2023.",
339
+ "venue": "arXiv:2304.14802.",
340
+ "url": null
341
+ }
342
+ },
343
+ {
344
+ "19": {
345
+ "title": "On Layer Normalization in the Transformer Architecture.",
346
+ "author": "Xiong, R.; Yang, Y.; He, D.; Zheng, K.; Zheng, S.; Xing, C.; Zhang, H.; Lan, Y.; Wang, L.; and Liu, T.-Y. 2020.",
347
+ "venue": "In Proceedings of the 35th International Conference on Machine Learning, 12126\u201312135. PMLR.",
348
+ "url": null
349
+ }
350
+ },
351
+ {
352
+ "20": {
353
+ "title": "Scaling Vision Transformers.",
354
+ "author": "Zhai, X.; Kolesnikov, A.; Houlsby, N.; and Beyer, L. 2022.",
355
+ "venue": "Conference on Computer Vision and Pattern Recognition (CVPR).",
356
+ "url": null
357
+ }
358
+ },
359
+ {
360
+ "21": {
361
+ "title": "Mixup: Beyond Empirical Risk Minimization.",
362
+ "author": "Zhang, H.; Cisse, M.; Dauphin, Y. N.; and Lopez-Paz, D. 2017.",
363
+ "venue": null,
364
+ "url": null
365
+ }
366
+ }
367
+ ],
368
+ "url": "http://arxiv.org/html/2312.01324v2"
369
+ }
20240101/2312.09086v2.json ADDED
The diff for this file is too large to render. See raw diff
 
20240101/2312.10661v2.json ADDED
@@ -0,0 +1,583 @@
1
+ {
2
+ "title": "Wikiformer: Pre-training with Structured Information of Wikipedia for Ad-hoc Retrieval",
3
+ "abstract": "With the development of deep learning and natural language processing techniques, pre-trained language models have been widely used to solve information retrieval (IR) problems. Benefiting from the pre-training and fine-tuning paradigm, these models achieve state-of-the-art performance. In previous works, plain texts in Wikipedia have been widely used in the pre-training stage. However, the rich structured information in Wikipedia, such as the titles, abstracts, hierarchical heading (multi-level title) structure, relationship between articles, references, hyperlink structures, and the writing organizations, has not been fully explored. In this paper, we devise four pre-training objectives tailored for IR tasks based on the structured knowledge of Wikipedia. Compared to existing pre-training methods, our approach can better capture the semantic knowledge in the training corpus by leveraging the human-edited structured data from Wikipedia. Experimental results on multiple IR benchmark datasets show the superior performance of our model in both zero-shot and fine-tuning settings compared to existing strong retrieval baselines. Besides, experimental results in biomedical and legal domains demonstrate that our approach achieves better performance in vertical domains compared to previous models, especially in scenarios where long text similarity matching is needed. The code is available at https://github.com/oneal2000/Wikiformer.",
4
+ "sections": [
5
+ {
6
+ "section_id": "1",
7
+ "parent_section_id": null,
8
+ "section_name": "Introduction",
9
+ "text": "Pre-trained Language Models (PLMs) have achieved great success in the field of Natural Language Processing (NLP)(Devlin et al. 2018 ###reference_13###; Vaswani et al. 2017 ###reference_41###; Yang et al. 2019 ###reference_44###; Liu et al. 2019 ###reference_25###; Yasunaga, Leskovec, and Liang 2022 ###reference_45###). These models are firstly pre-trained on a large-scale unlabeled text corpus and then fine-tuned on certain downstream tasks. The pre-training and fine-tuning paradigm have achieved state-of-the-art performance in many downstream NLP tasks. Recently, it has also attracted the attention of the Information Retrieval (IR) community. Besides directly applying PLMs to solve downstream IR tasks(Nogueira and Cho 2019 ###reference_34###), IR researchers have also developed several pre-training methods tailored for IR tasks, especially ad-hoc search(Ma et al. 2021a ###reference_26###, d ###reference_31###, b ###reference_27###; Chang et al. 2020 ###reference_4###). These studies have shown promising results in conducting IR-specific pre-trained models for downstream tasks.\nAs one of the largest online knowledge bases, Wikipedia has been widely used as the pre-training corpus. In previous works, IR researchers have devised several pre-training tasks by leveraging the rich textual contents in Wikipedia. For example, PROP(Ma et al. 2021a ###reference_26###) utilizes pure texts in Wikipedia, while HARP(Ma et al. 2021d ###reference_31###) utilizes hyperlinks and anchor texts in the web pages. However, as shown in Figure 1 ###reference_###, there\u2019s more rich knowledge brought by the structured information of Wikipedia, which, to the best of our knowledge, has not been exploited in existing studies. For example, the abstract section of Wikipedia is the summarization of an article. When the user\u2019s query is the title of an article, the abstract section is more likely to match the user\u2019s information needs compared to other sections within the same article. In addition, every article on Wikipedia has a hierarchical heading (multi-level title) structure, the subtitle is always the representative words or summarization of the corresponding section. Besides, different subsections of the same section share similar ideas. The relationship between different articles also contains rich information, e.g., the See Also section links one article to other articles that contain additional or similar information. Whether this structured knowledge could benefit the pre-trained models for IR remains mostly unknown.\nTo better incorporate the knowledge of Wikipedia into the pre-training stage, we propose a framework named Wikiformer that fully utilizes the structured information of Wikipedia in the pre-training stage. Wikiformer mainly includes four pre-training tasks: 1) Simulated Re-ranking (SRR), 2) Representative Words Identification (RWI), 3) Abstract Texts Identification (ATI), and 4) Long Texts Matching (LTM). These tasks use the title, subtitles, abstract, hyperlinks, and heading hierarchies to construct pseudo query-document pairs for the pre-training of the retrieval model. Each of them captures the needs of retrieval and ranking in different granularities from different angles. To evaluate the effectiveness of the above pre-training tasks, we test the performance of our model on several IR benchmarks in zero-shot and fine-tuning settings. In the zero-shot setting, no supervised data is used for fine-tuning. 
Since the fine-tuning process gradually updates the parameters of PLMs, zero-shot performance is a more direct metric to evaluate the effectiveness of pre-training methods. The experimental results show that Wikiformer can significantly outperform traditional methods, state-of-the-art neural ranking models, and existing pre-trained models for IR in multiple domains with or without human-annotated data.\nIn summary, the contributions of our work are three folds:\nWe propose a novel pre-training framework, i.e., Wikiformer, that makes full use of the structured knowledge of Wikipedia.\nWe propose four learning objectives based on pseudo query-document pair sampling during pre-training. Tailored for IR tasks such as retrieval and document re-ranking, these objectives can better help the model analyze the relevance between queries and documents.\nWe evaluate Wikiformer on multiple IR benchmark datasets, and the experimental results show that Wikiformer outperforms state-of-the-art methods in both zero-shot and fine-tuning settings in multiple domains.\n###figure_1###"
10
+ },
11
+ {
12
+ "section_id": "2",
13
+ "parent_section_id": null,
14
+ "section_name": "Related Work",
15
+ "text": ""
16
+ },
17
+ {
18
+ "section_id": "2.1",
19
+ "parent_section_id": "2",
20
+ "section_name": "Pre-trained Language Models",
21
+ "text": "Pre-trained Language Models (PLMs) have achieved great success in recent years(Devlin et al. 2018 ###reference_13###; Vaswani et al. 2017 ###reference_41###; Yang et al. 2019 ###reference_44###; Liu et al. 2019 ###reference_25###; Yasunaga, Leskovec, and Liang 2022 ###reference_45###). These models are firstly trained on large-scale unlabeled text corpora and then fine-tuned on certain downstream tasks with labeled data. Benefiting from the self-supervised learning on a large-scale pre-training corpus, these models own a powerful ability on contextual text representation.\nAmong these PLMs, Transformer based models(Vaswani et al. 2017 ###reference_41###) show great performance in most downstream NLP tasks. One of the remarkable examples is the BERT model(Devlin et al. 2018 ###reference_13###), a bi-directional Transformer based pre-trained language model. BERT has two self-supervised tasks in the pre-training stage: Masked Language Modeling (MLM) and Next Sentence Prediction (NSP). Following BERT, researchers redesign and optimize the pre-training tasks of PLMs. For example, Roberta(Liu et al. 2019 ###reference_25###) uses a dynamic masking strategy and is trained on a larger text corpus. In addition, some researchers explore the integration of structured information into PLMs(Yasunaga, Leskovec, and Liang 2022 ###reference_45###; Colon-Hernandez et al. 2021 ###reference_9###; Kaur et al. 2022 ###reference_20###; Zhang et al. 2019 ###reference_48###). For example, LinkBERT(Yasunaga, Leskovec, and Liang 2022 ###reference_45###) replaces the NSP task of BERT with the Document Relation Prediction (DRP) task, which enables the model to learn cross-document knowledge from hyperlinks among web pages. ERNIE(Zhang et al. 2019 ###reference_48###) utilizes both textual corpora and Knowledge Graphs to train an enhanced PLM."
22
+ },
23
+ {
24
+ "section_id": "2.2",
25
+ "parent_section_id": "2",
26
+ "section_name": "Pre-training Methods Tailored for IR",
27
+ "text": "Considering the great success that PTMs have achieved in NLP tasks, the IR community begins to apply PLMs to solve IR tasks (Li et al. 2023d ###reference_24###, c ###reference_23###; Chen et al. 2023 ###reference_6###; Li et al. 2023b ###reference_22###; Ye et al. 2023 ###reference_46###), and devise pre-training methods tailored for IR(Ma et al. 2023 ###reference_30###, 2021a ###reference_26###, 2021d ###reference_31###, 2021b ###reference_27###; Chang et al. 2020 ###reference_4###; Fan et al. 2021 ###reference_14###; Guo et al. 2022 ###reference_19###; Chen et al. 2022 ###reference_7###; Su et al. 2023a ###reference_39###, b ###reference_40###; Li et al. 2023a ###reference_21###). For example, HARP(Ma et al. 2021d ###reference_31###) utilizes hyperlinks and anchor texts in the pre-training stage. As the anchor texts are edited by humans, constructing pseudo query-document pairs from them may be more reliable than an algorithm. Webformer(Guo et al. 2022 ###reference_19###) is a pre-trained language model based on large-scale web pages, HTML tags, and the DOM (Document Object Model) tree structures of web pages. Ma et al. ###reference_26###(Ma et al. 2021a ###reference_26###) devised a self-supervised learning task Representative Words Prediction (ROP) based on the Query Likelihood model and train the Transformer encoder with a self-supervised contrastive learning strategy. From another angle, ARES(Chen et al. 2022 ###reference_7###) propose several pre-training objectives based on Axiomatic Regularization. Experimental results on several IR benchmarks show that ARES, PROP, Webformer, and HARP perform significantly better than traditional methods such as BM25 after fine-tuning. Also, some researchers explore incorporating structure information for entity retrieval(Gerritse, Hasibi, and de Vries 2020 ###reference_17###; Nikolaev and Kotov 2020 ###reference_33###; Chatterjee and Dietz 2022 ###reference_5###; Gerritse, Hasibi, and de Vries 2022 ###reference_18###).\nDifferent from the above approaches, we propose four new pre-training objectives using the titles, abstracts, hierarchical heading (multi-level title) structure, relationship between articles, references, hyperlink structures, and the writing organizations of Wikipedia to leverage the wisdom of crowds brought by Wikipedia editors. Compared to previous work, Wikiformer captures more internal relationships between the paragraph structure in Wikipedia web pages, which helps it better model relevance matching."
28
+ },
29
+ {
30
+ "section_id": "3",
31
+ "parent_section_id": null,
32
+ "section_name": "Methodology",
33
+ "text": "The main objective of our pre-training method is to leverage the structured knowledge and writing organization of Wikipedia for designing better pre-training tasks tailored for information retrieval. To achieve this, we propose four pre-training tasks based on the titles, abstracts, hierarchical heading (multi-level title) structure, the relationship between articles, and the writing organizations of Wikipedia.\nIn this section, we introduce the details of the pre-training tasks of our proposed model Wikiformer, including Simulated Re-ranking (SRR), Representative Words Identification (RWI), Abstract Texts Identification (ATI), and Long Texts Matching (LTM) tasks."
34
+ },
35
+ {
36
+ "section_id": "3.1",
37
+ "parent_section_id": "3",
38
+ "section_name": "Simulated Re-ranking (SRR)",
39
+ "text": "###figure_2### The SRR task is inspired by an important IR problem: document re-ranking. In general, the goal of the document re-ranking task is to sort a series of documents that are highly related to the query, and then select the ones that are most related to the query. According to the characteristics of this task, we aim to design a self-supervised learning task to select the most relevant document from a series of documents with similar contents. In the SRR task, we make full use of the hierarchical heading (multi-level title) structure of Wikipedia to achieve the above objective. Every article on Wikipedia is organized by the hierarchical heading (multi-level title) structure, the subtitle corresponding to a certain section tends to be the representative words or summarization of the text. Besides, different subsections of the same section share similar semantics. As a result, through this structure, we can obtain a series of texts that are highly similar but slightly different in content and generate the query through the multi-level titles as shown in Figure 2 ###reference_###.\nTo be specific, we modeled each Wikipedia article into a tree structure namely Wiki Structure Tree (WST) based on the hierarchical heading structure. It can be defined as:\n, where is a finite set containing nodes, and is the root node of . Each node in consists of two parts: the subtitle and its corresponding content. The root node contains the main title and the abstract of this article. Starting from the root node , recursively take all the corresponding lower-level sections as its child nodes until every section in this article is added to the .\nAfter building , we use a contrastive sampling strategy to construct pseudo query-document pairs based on the tree. For a non-leaf node in the , we add all its child nodes to the set . A node is randomly selected from . Traversing from the root node to node , all the titles on the path are put together to form a query . This process is shown in Figure 2 ###reference_###. The content of the node is defined as , and the content of the other nodes in is defined as . We use a Transformer based PLM to compute the relevance score of a pseudo query-document pair:\nwhere is the vector representation of the \u201d[CLS]\u201d token. is a multi-layer perceptron that projects the [CLS] vector to a relevance score. For the loss function, we use the Softmax Cross Entropy Loss(Cao et al. 2007 ###reference_3###; Ai et al. 2018 ###reference_1###; Gao, Dai, and Callan 2021 ###reference_16###) to optimize the Transformer based model, which is defined as:\nwhere , and are defined above and is the set of all negative passages generated from ."
40
+ },
41
+ {
42
+ "section_id": "3.2",
43
+ "parent_section_id": "3",
44
+ "section_name": "Representative Words Identification (RWI)",
45
+ "text": "###figure_3### RWI task is inspired by an IR axiom which assumes that the user\u2019s query is the representative words extracted from the relevant documents. According to the Wikipedia structure, we regard the subtitle of each section as representative words, and then we sample pseudo query-document pair via a simple strategy based on the hierarchical heading (multi-level title) structure, as shown in Figure 3 ###reference_###.\nSpecifically, pseudo query-document pairs are organized as follows: for each Wikipedia article, we first model it as the structure. Then we add all nodes of except the root node to the set . A node is randomly selected from , and we define the depth of this node in as . Traversing from the root node to node , all the titles on the path are put together to form a query . The content of the node is defined as . For the negative queries, we randomly select nodes from , and concatenate the main title and subtitles of the selected nodes to define it as . The relevance score is defined in Equation2 ###reference_###. The loss function of the RWI task is defined as:\nwhere is the title, is the content of that article, and is the set of all negative queries generated from that article.\nIn this task, although both positive and negative queries contain the subtitles of the document, the positive query is more representative compared to the negative query. The model gives higher scores to the positive query through contrastive learning, so that the model can recognize the representative words in the text, and assign higher weights to these words if they are matched to the query. Therefore, through the RWI task, the model can learn how to identify the keywords in the text, which further leads to a better performance in the IR downstream task."
46
+ },
47
+ {
48
+ "section_id": "3.3",
49
+ "parent_section_id": "3",
50
+ "section_name": "Abstract Texts Identification (ATI)",
51
+ "text": "In the ATI task, we utilize the abstract and the inner structure of Wikipedia. The abstract (the first section) of Wikipedia is regarded as the summarization of the whole article. Compared with other sections of the same article, the abstract is more likely to meet the user\u2019s information needs when the query is the title. Therefore, we extract the title from the Wikipedia article as the query (denoted as ). Then the abstract of the same article is regarded as a positive document (denoted as ). For the negative ones, we use the other sections of the same article (denoted as ). The relevance score of a pseudo query-document pair is defined in Equation 2 ###reference_###. The loss function of the ATI task is defined as:\nwhere is the title of the article, is the abstract of the article, and is the set of all negative documents generated from that article."
52
+ },
53
+ {
54
+ "section_id": "3.4",
55
+ "parent_section_id": "3",
56
+ "section_name": "Long Texts Matching (LTM)",
57
+ "text": "After pre-training with RWI, ATI, and SRR tasks, Wikiformer acquires the ability to measure the relevance between a short text (query) and a long text. This can help the model better handle the vast majority of ad-hoc retrieval tasks. However, there are also scenarios involving \u201dlong queries\u201d, such as legal case retrieval and document-to-document search. In these scenarios, the model is required to match the relevance between two long texts. Fortunately, with the structured information of Wikipedia, especially hyperlinks, we can build a series of informative pseudo long query-document pairs. To be specific, we utilize the See Also section of Wikipedia which consists of hyperlinks that link to the other articles related to or comparable to this article. The See Also section is mainly written manually, based on the judgment and common sense of the authors and editors. Thus, we can obtain a series of reliable web pages that are highly related to the content of this page.\nTo this end, we design the Long Texts Matching (LTM) task to encourage the Wikiformer to learn the relevance matching ability between two long documents. Initially, we transformed the complete Wikipedia corpus into a graph structure by leveraging the interconnections provided by the \u2019See Also\u2019 links. This graph is designated as the See Also Graph (SAG). Each hyperlink in the See Also section can be formally represented as , which means that appears in the See Also section of . Consequently, can be defined as a directed graph: , where is the above-mentioned set of ordered pairs and is a set of Wikipedia articles. The order of an edge indicates the direction of hyperlinks. After building , we use a contrastive sampling strategy based on the graph. For each node in , we define its content as query and define all its adjacent nodes as positive documents . We randomly select other documents as . The relevance score of a pseudo query-document pair is defined in Equation 2 ###reference_###. The loss function of the LTM task is defined as:\nwhere is the adjacent articles, is the content of the original article, and is the set of all negative articles."
58
+ },
59
+ {
60
+ "section_id": "3.5",
61
+ "parent_section_id": "3",
62
+ "section_name": "Final Training Objective",
63
+ "text": "We add the loss of the proposed four tasks together as the overall loss of the model:"
64
+ },
65
+ {
66
+ "section_id": "4",
67
+ "parent_section_id": null,
68
+ "section_name": "Experiments",
69
+ "text": ""
70
+ },
71
+ {
72
+ "section_id": "4.1",
73
+ "parent_section_id": "4",
74
+ "section_name": "Dataset Description",
75
+ "text": "For the pre-training dataset, we use the English Wikipedia (version 20220101).\nFor the downstream datasets, we evaluate the performance of Wikiformer on five IR benchmarks. The basic statistics are shown in Table 1 ###reference_###. MS MARCO Document Re-ranking (Nguyen et al. 2016 ###reference_32###) is a large-scale ad-hoc retrieval dataset with 0.37M queries and 3.2M documents. TREC DL 2019 (Craswell et al. 2020 ###reference_10###) shares the same document collection with MS MARCO but collects finer-grained human labels for 43 queries in the test set. TREC Covid Round2 (Roberts et al. 2021 ###reference_36###) is an ad-hoc retrieval dataset consisting of biomedical articles. It contains the May 1, 2020 version of the CORD-19 (Wang et al. 2020 ###reference_42###) document set and 35 queries written by biomedical professionals. LeCaRD (Ma et al. 2021c ###reference_29###) is a legal case retrieval dataset, consisting of 107 query cases and 10700 candidate cases. The queries in the LeCaRD dataset are the factual description part of a legal case, while the candidate documents are complete legal cases. CAIL-LCR(Ma 2022 ###reference_28###) is a case retrieval dataset (document-to-document search) provided by CAIL 2022 consisting of 130 query cases and 100 candidate cases for each query case."
76
+ },
77
+ {
78
+ "section_id": "4.2",
79
+ "parent_section_id": "4",
80
+ "section_name": "Baselines for Comparison",
81
+ "text": "We consider three types of IR baselines for comparison, including traditional IR methods, Neural IR models, and pre-trained language models:\nQuery Likelihood (Zhai 2008 ###reference_47###) is a language model based on Dirichlet smoothing.\nBM25 (Robertson, Zaragoza et al. 2009 ###reference_37###) is a highly effective retrieval model based on lexical matching.\nKNRM(Xiong et al. 2017 ###reference_43###) is an Interactive-based Neural Ranking Model that uses kernel-pooling to provide matching signals for each query-document pair.\nConv-KNRM(Dai et al. 2018 ###reference_12###) is a Convolutional Kernel-based Neural Ranking Model that fuses the contextual information of the surrounding words for relevance matching.\nBERT (Devlin et al. 2018 ###reference_13###) is a bi-directional Transformer based Pre-trained Language Model that has a powerful ability on contextual text representations.\nPROP_MS (Ma et al. 2021a ###reference_26###) adopts the Representative Words Prediction (ROP) task to learn relevance matching from the pseudo query-document pairs. It is pre-trained on MS MARCO.111As PROP and B-PROP have similar performance, and B-PROP does not have a publicly available model checkpoint, therefore we only choose PROP as the baseline instead of selecting both of them.\nPROP_WIKI (Ma et al. 2021a ###reference_26###) adopts the same pre-training task as PROP_MS. The only difference is that PROP_WIKI is pre-trained on Wikipedia.\nHARP (Ma et al. 2021d ###reference_31###) utilizes the hyperlinks and anchor texts to generate pseudo query-document pairs and achieves state-of-the-art performance on ad-hoc retrieval.\nARES (Chen et al. 2022 ###reference_7###) is a pre-trained language model with Axiomatic Regularization for ad hoc Search.\nWebformer (Guo et al. 2022 ###reference_19###) is a pre-trained language model based on large-scale web pages and their DOM (Document Object Model) tree structures."
82
+ },
83
+ {
84
+ "section_id": "4.3",
85
+ "parent_section_id": "4",
86
+ "section_name": "Implementation Details",
87
+ "text": "For the implementation of KNRM and Conv-KNRM, we use the OpenMatch222https://github.com/thunlp/OpenMatch toolkit, and the 300d GloVe (Pennington, Socher, and Manning 2014 ###reference_35###) vectors are used to initialize the word embeddings. For the implementation of BM25 and QL, we use the pyserini toolkit333https://github.com/castorini/pyserini. For the hyperparameter of BM25, we set and 444This is the best hyperparameter we got after parameter searching.. Note that in our experiments, we use the scores of the BM25 and QL models to re-rank the candidate documents, rather than re-ranking the whole corpus. For the implementation of BERT, we use the Pytorch version BERT-base released by Google555https://github.com/google-research/bert. For the implementation of ARES, PROP_MS, and PROP_WIKI, we directly use the checkpoints released by the original paper. Since the original paper of Webformer and HARP did not release any checkpoints, we reproduce them on the same dataset based on their code and the details provided in their paper.\nTo facilitate comparison with previous baselines, we adopted the same architecture as BERT-base. This aligns with the settings of previous works such as ARES, HARP, PROP, B-PROP, and Webformer. To save computational resources during training, we initialized our model with BERT-base, following the same setting as previous works such as ARES and HARP. We use the AdamW optimizer with a learning rate of 1e-5 in the first 50k steps and 5e-6 in the following steps. We set the warm-up ratio to 0.1. In the RWI, ATI, and SRR tasks, we set the maximum length of the query as 30 and the maximum length of the documents as 480. In the LTM task, we set the maximum length of both documents as 255. We trained our model on four Nvidia GeForce RTX 3090 GPUs for 60 hours. After training for 50k steps, we save the checkpoint every 5k steps and evaluate the zero-shot performance of each checkpoint on a subset of the MS MARCO training set which has no overlap with our test set. We select the best zero-shot performance checkpoint as the final model."
88
+ },
89
+ {
90
+ "section_id": "4.4",
91
+ "parent_section_id": "4",
92
+ "section_name": "Evaluation Methodology",
93
+ "text": "For the two large-scale datasets MS MARCO and TREC DL 2019, we use Mean Reciprocal Rank at 10 and 100 (MRR@10 and MRR@100) for MS MARCO and normalized discounted cumulative gain at 10 and 100 (nDCG@10 and nDCG@100) for TREC DL 2019 as the evaluation metrics. For TREC Covid, we follow the setting of OpenMatch which re-ranks the top 60 candidates provided by the BM25-fusion method. We use precision at rank 5 (P@50) and nDCG@10 as the evaluation metrics for TREC Covid. For LeCaRD and CAIL-LCR datasets, we re-rank the candidate documents provided by the original dataset and use nDCG@5 and nDCG@15 as the evaluation metrics.\nFor the significance test, we adopt Fisher\u2019s randomization test (Fisher 1936 ###reference_15###; Cohen 1995 ###reference_8###; Box et al. 1978 ###reference_2###) which is recommended for IR evaluation by previous work (Smucker, Allan, and Carterette 2007 ###reference_38###)."
94
+ },
95
+ {
96
+ "section_id": "4.5",
97
+ "parent_section_id": "4",
98
+ "section_name": "Experimental Results",
99
+ "text": ""
100
+ },
101
+ {
102
+ "section_id": "4.5.1",
103
+ "parent_section_id": "4.5",
104
+ "section_name": "Zero-shot Performance",
105
+ "text": "Zero-shot performance is the performance of the model without any supervised data for fine-tuning. Thus, it directly reflects the effectiveness of the pre-training tasks. The experimental results are shown in Table 2 ###reference_###. We can see that Wikiformer outperforms all baselines on all evaluation metrics which shows the superiority of Wikiformer in the zero-shot setting. Based on the results, we also have the following findings:\nPre-trained models tailored for IR such as PROP, ARES, and Wikiformer perform significantly better than BERT in zero-shot settings. This shows the effectiveness of the pre-training tasks tailored for IR and that these models have indeed learned useful knowledge for relevance matching. Wikiformer performs the best among all the baselines in both benchmarks in zero-shot settings. Since the model architecture and parameter size of Wikiformer are the same as the other pre-trained models, this shows the effectiveness of our pre-training method. Besides, Wikiformer, Webformer, and PROP-Wiki are all pre-trained on the Wikipedia corpus. The superior performance of Wikiformer shows that it has made better use of Wikipedia and learned the rich knowledge that is helpful to solve IR problems through structured information on Wikipedia."
106
+ },
107
+ {
108
+ "section_id": "4.5.2",
109
+ "parent_section_id": "4.5",
110
+ "section_name": "Fine-tuned Performance",
111
+ "text": "Table2 ###reference_### reports the performance of Wikiformer and other baselines after fine-tuning. Through the experimental results, we have the following findings:\n(1) Although the performance of most pre-trained language models (PLMs) is inferior to traditional methods like BM25 and QL in the zero-shot setting, they surpass BM25 and QL significantly after fine-tuning. However, even after fine-tuning, Neural IR Models still underperform BM25 and QL. (2) On the MS MARCO dataset, IR PLMs consistently outperform BERT under the fine-tuning setting. This indicates that the knowledge acquired by IR PLMs during the pre-training stage remains valuable even after fine-tuning. HARP and Webformer, due to the incorporation of external knowledge such as hyperlinks, DOM Tree, and HTML tags, exhibit better performance than PROP-WIKI and PROP-MS. (3) Wikiformer significantly outperforms other baselines on both datasets. Note that the model structure and fine-tuning dataset for Wikiformer are the same as other baselines. Therefore, these experimental results indicate that Wikiformer has acquired more information retrieval knowledge during the pre-training stage compared to other baselines. This demonstrates the value of our pre-training task."
112
+ },
113
+ {
114
+ "section_id": "4.5.3",
115
+ "parent_section_id": "4.5",
116
+ "section_name": "Performance on Vertical Domains",
117
+ "text": "We conducted experiments on the legal domain dataset LeCaRD and CAIL-SCR as well as the biomedical domain dataset TREC Covid to explore the performance of Wikiformer in vertical domains. The experimental results are presented in Tables4 ###reference_### and Table3 ###reference_###. The experimental results indicate that Wikiformer outperforms previous pre-trained models significantly in both the legal and biomedical domains. This suggests that Wikiformer possesses a domain-specific adaptability and effectiveness that allows it to excel in information retrieval tasks within these specialized fields. Its superior highlights the potential of utilizing Wikiformer for improving search and retrieval tasks across diverse domains."
118
+ },
119
+ {
120
+ "section_id": "4.5.4",
121
+ "parent_section_id": "4.5",
122
+ "section_name": "Long Text Matching Performance",
123
+ "text": "The performance of Wikiformer and other baselines on LeCaRD and CAIL-LCR are reported in Table 4 ###reference_###. LeCaRD and CAIL-LCR are Chinese legal case retrieval tasks that have relatively long queries and candidate documents. Thus, experiments on these datasets can evaluate the long text-matching performance of Wikiformer and the baselines. Since there is no Chinese-centric pre-trained model tailored for IR so far, we only use traditional methods and a Chinese version of the BERT model (Cui et al. 2021 ###reference_11###) as baselines.\nThe experimental results show that Wikiformer achieves better performance than traditional statistic methods BM25 and QL but also pre-trained language model BERT in long text-matching tasks. These experimental results highlight the potential of Wikiformer in effectively evaluating long-text similarity and also underscore the effectiveness of the proposed Long Text Matching (LTM) task."
124
+ },
125
+ {
126
+ "section_id": "4.6",
127
+ "parent_section_id": "4",
128
+ "section_name": "Impact of the Training Data Size",
129
+ "text": "To investigate whether a larger training dataset enhances the performance of the pre-training phase, we evaluate the performance of Wikiformer on different sizes of training data varying from 100 to 1,000,000 pseudo query-document pairs. As shown in Figure 4 ###reference_###, Wikiformer surpasses the Query Likelihood model by pre-training with only 100 pseudo query-document pairs in the SRR task. This experimental result shows the effectiveness of our pre-training task and our proposed pseudo query-document pair sampling strategy.\n###figure_4###"
130
+ },
131
+ {
132
+ "section_id": "4.7",
133
+ "parent_section_id": "4",
134
+ "section_name": "Ablation Study",
135
+ "text": "To further analyze the effectiveness of each pre-training task, we conduct ablation experiments on MS MARCO (zero-shot) and LeCaRD (fine-tuned). The experimental results in table 5 ###reference_### show that removing any pre-training tasks will lead to a drop in performance, indicating the effectiveness of each pre-training task on downstream IR tasks. On MS MARCO, among the four tasks, removing the SRR task leads to the largest performance degradation, which reveals that the hierarchical heading structure and the writing organization of Wikipedia contain valuable knowledge for ad-hoc retrieval which helps Wikiformer better at handling relevance matching. On LeCaRD, removing the LTM task leads to the largest performance degradation, which reveals that the LTM task is critical for improving the model\u2019s ability on long text-matching tasks."
136
+ },
137
+ {
138
+ "section_id": "5",
139
+ "parent_section_id": null,
140
+ "section_name": "Conclusions",
141
+ "text": "In this paper, we propose Wikiformer, a pre-trained language model tailored for IR that achieves state-of-the-art performance. We propose several pseudo query-document pair sampling strategies based on the structured information on Wikipedia to leverage the wisdom of crowds brought by Wikipedia editors. Extensive experimental results and case studies verify the effectiveness of our pre-training methods. Results of the ablation study have also implied the effectiveness of all pre-training tasks."
142
+ },
143
+ {
144
+ "section_id": "6",
145
+ "parent_section_id": null,
146
+ "section_name": "Acknowledgements",
147
+ "text": "This work is supported by Quan Cheng Laboratory (Grant No. QCLZD202301), the Natural Science Foundation of China (Grant No. 62002194), and Huawei Poisson Lab."
148
+ }
149
+ ],
150
+ "appendix": [],
151
+ "tables": {
152
+ "1": {
153
+ "table_html": "<figure class=\"ltx_table\" id=\"Sx4.T1\">\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\">Table 1: </span>Basic statistics of our benchmark datasets</figcaption>\n<table class=\"ltx_tabular ltx_centering ltx_guessed_headers ltx_align_middle\" id=\"Sx4.T1.1\">\n<thead class=\"ltx_thead\">\n<tr class=\"ltx_tr\" id=\"Sx4.T1.1.1.1\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"Sx4.T1.1.1.1.1\"><span class=\"ltx_text ltx_font_bold\" id=\"Sx4.T1.1.1.1.1.1\">Dataset</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"Sx4.T1.1.1.1.2\"><span class=\"ltx_text ltx_font_bold\" id=\"Sx4.T1.1.1.1.2.1\">Genre</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"Sx4.T1.1.1.1.3\"><span class=\"ltx_text ltx_font_bold\" id=\"Sx4.T1.1.1.1.3.1\">#Queries</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"Sx4.T1.1.1.1.4\"><span class=\"ltx_text ltx_font_bold\" id=\"Sx4.T1.1.1.1.4.1\">#Documents</span></th>\n</tr>\n</thead>\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"Sx4.T1.1.2.1\">\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"Sx4.T1.1.2.1.1\"><span class=\"ltx_text ltx_font_bold\" id=\"Sx4.T1.1.2.1.1.1\">MS MARCO</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"Sx4.T1.1.2.1.2\">web pages</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"Sx4.T1.1.2.1.3\">0.37M</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"Sx4.T1.1.2.1.4\">3.2M</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"Sx4.T1.1.3.2\">\n<td class=\"ltx_td ltx_align_center\" id=\"Sx4.T1.1.3.2.1\"><span class=\"ltx_text ltx_font_bold\" id=\"Sx4.T1.1.3.2.1.1\">TREC DL 2019</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"Sx4.T1.1.3.2.2\">web pages</td>\n<td class=\"ltx_td ltx_align_center\" id=\"Sx4.T1.1.3.2.3\">43</td>\n<td class=\"ltx_td ltx_align_center\" id=\"Sx4.T1.1.3.2.4\">3.2M</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"Sx4.T1.1.4.3\">\n<td class=\"ltx_td ltx_align_center\" id=\"Sx4.T1.1.4.3.1\"><span class=\"ltx_text ltx_font_bold\" id=\"Sx4.T1.1.4.3.1.1\">TREC Covid</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"Sx4.T1.1.4.3.2\">biomedical</td>\n<td class=\"ltx_td ltx_align_center\" id=\"Sx4.T1.1.4.3.3\">35</td>\n<td class=\"ltx_td ltx_align_center\" id=\"Sx4.T1.1.4.3.4\">59,851</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"Sx4.T1.1.5.4\">\n<td class=\"ltx_td ltx_align_center\" id=\"Sx4.T1.1.5.4.1\"><span class=\"ltx_text ltx_font_bold\" id=\"Sx4.T1.1.5.4.1.1\">LeCaRD</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"Sx4.T1.1.5.4.2\">legal</td>\n<td class=\"ltx_td ltx_align_center\" id=\"Sx4.T1.1.5.4.3\">107</td>\n<td class=\"ltx_td ltx_align_center\" id=\"Sx4.T1.1.5.4.4\">10,700</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"Sx4.T1.1.6.5\">\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"Sx4.T1.1.6.5.1\"><span class=\"ltx_text ltx_font_bold\" id=\"Sx4.T1.1.6.5.1.1\">CAIL-LCR</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"Sx4.T1.1.6.5.2\">legal</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"Sx4.T1.1.6.5.3\">130</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"Sx4.T1.1.6.5.4\">13,000</td>\n</tr>\n</tbody>\n</table>\n</figure>",
154
+ "capture": "Table 1: Basic statistics of our benchmark datasets"
155
+ },
156
+ "2": {
157
+ "table_html": "<figure class=\"ltx_table\" id=\"Sx4.T2\">\n<figcaption class=\"ltx_caption\"><span class=\"ltx_tag ltx_tag_table\">Table 2: </span>The experimental results of Wikiformer and other baselines on three datasets in the zero-shot and fine-tuning setting. \u201c*\u201d denotes the result is significantly worse than Wikiformer with level. The best results are in bold. \u201cN\u201d stands for nDCG. The zero-shot performance of both KNRM and Conv-KNRM methods is the same as randomized ranking. Therefore, their zero-shot performance is not shown in the table.</figcaption>\n<table class=\"ltx_tabular ltx_align_middle\" id=\"Sx4.T2.3\">\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"Sx4.T2.3.1.1\">\n<td class=\"ltx_td ltx_border_tt\" id=\"Sx4.T2.3.1.1.1\"></td>\n<td class=\"ltx_td ltx_border_tt\" id=\"Sx4.T2.3.1.1.2\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" colspan=\"4\" id=\"Sx4.T2.3.1.1.3\">\n<span class=\"ltx_text ltx_font_bold\" id=\"Sx4.T2.3.1.1.3.1\">Zero-shot</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" colspan=\"4\" id=\"Sx4.T2.3.1.1.4\">\n<span class=\"ltx_text ltx_font_bold\" id=\"Sx4.T2.3.1.1.4.1\">Fine-tuned</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"Sx4.T2.3.2.2\">\n<td class=\"ltx_td ltx_border_t\" id=\"Sx4.T2.3.2.2.1\"></td>\n<td class=\"ltx_td ltx_border_t\" id=\"Sx4.T2.3.2.2.2\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" colspan=\"2\" id=\"Sx4.T2.3.2.2.3\">\n<span class=\"ltx_text ltx_font_bold\" id=\"Sx4.T2.3.2.2.3.1\">MS MARCO</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" colspan=\"2\" id=\"Sx4.T2.3.2.2.4\">\n<span class=\"ltx_text ltx_font_bold\" id=\"Sx4.T2.3.2.2.4.1\">TREC DL 2019</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" colspan=\"2\" id=\"Sx4.T2.3.2.2.5\">\n<span class=\"ltx_text ltx_font_bold\" id=\"Sx4.T2.3.2.2.5.1\">MS MARCO</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" colspan=\"2\" id=\"Sx4.T2.3.2.2.6\">\n<span class=\"ltx_text ltx_font_bold\" id=\"Sx4.T2.3.2.2.6.1\">TREC DL 2019</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"Sx4.T2.3.3.3\">\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"Sx4.T2.3.3.3.1\">Model Type</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"Sx4.T2.3.3.3.2\">Model Name</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"Sx4.T2.3.3.3.3\">MRR@10</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"Sx4.T2.3.3.3.4\">MRR@100</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"Sx4.T2.3.3.3.5\">N@10</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"Sx4.T2.3.3.3.6\">N@100</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"Sx4.T2.3.3.3.7\">MRR@10</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"Sx4.T2.3.3.3.8\">MRR@100</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"Sx4.T2.3.3.3.9\">N@10</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"Sx4.T2.3.3.3.10\">N@100</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"Sx4.T2.3.4.4\">\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"Sx4.T2.3.4.4.1\">Traditional Models</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"Sx4.T2.3.4.4.2\">BM25</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"Sx4.T2.3.4.4.3\">0.2656*</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"Sx4.T2.3.4.4.4\">0.2767*</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"Sx4.T2.3.4.4.5\">0.5315*</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" 
id=\"Sx4.T2.3.4.4.6\">0.4996*</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"Sx4.T2.3.4.4.7\">0.2656*</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"Sx4.T2.3.4.4.8\">0.2767*</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"Sx4.T2.3.4.4.9\">0.5315*</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"Sx4.T2.3.4.4.10\">0.4996*</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"Sx4.T2.3.5.5\">\n<td class=\"ltx_td\" id=\"Sx4.T2.3.5.5.1\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"Sx4.T2.3.5.5.2\">QL</td>\n<td class=\"ltx_td ltx_align_center\" id=\"Sx4.T2.3.5.5.3\">0.2143*</td>\n<td class=\"ltx_td ltx_align_center\" id=\"Sx4.T2.3.5.5.4\">0.2268*</td>\n<td class=\"ltx_td ltx_align_center\" id=\"Sx4.T2.3.5.5.5\">0.5234*</td>\n<td class=\"ltx_td ltx_align_center\" id=\"Sx4.T2.3.5.5.6\">0.4983*</td>\n<td class=\"ltx_td ltx_align_center\" id=\"Sx4.T2.3.5.5.7\">0.2143*</td>\n<td class=\"ltx_td ltx_align_center\" id=\"Sx4.T2.3.5.5.8\">0.2268*</td>\n<td class=\"ltx_td ltx_align_center\" id=\"Sx4.T2.3.5.5.9\">0.5234*</td>\n<td class=\"ltx_td ltx_align_center\" id=\"Sx4.T2.3.5.5.10\">0.4983*</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"Sx4.T2.3.6.6\">\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"Sx4.T2.3.6.6.1\">Neural IR Models</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"Sx4.T2.3.6.6.2\">KNRM</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"Sx4.T2.3.6.6.3\">NA</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"Sx4.T2.3.6.6.4\">NA</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"Sx4.T2.3.6.6.5\">NA</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"Sx4.T2.3.6.6.6\">NA</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"Sx4.T2.3.6.6.7\">0.1526*</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"Sx4.T2.3.6.6.8\">0.1685*</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"Sx4.T2.3.6.6.9\">0.3071*</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"Sx4.T2.3.6.6.10\">0.4591*</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"Sx4.T2.3.7.7\">\n<td class=\"ltx_td\" id=\"Sx4.T2.3.7.7.1\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"Sx4.T2.3.7.7.2\">Conv-KNRM</td>\n<td class=\"ltx_td ltx_align_center\" id=\"Sx4.T2.3.7.7.3\">NA</td>\n<td class=\"ltx_td ltx_align_center\" id=\"Sx4.T2.3.7.7.4\">NA</td>\n<td class=\"ltx_td ltx_align_center\" id=\"Sx4.T2.3.7.7.5\">NA</td>\n<td class=\"ltx_td ltx_align_center\" id=\"Sx4.T2.3.7.7.6\">NA</td>\n<td class=\"ltx_td ltx_align_center\" id=\"Sx4.T2.3.7.7.7\">0.1554*</td>\n<td class=\"ltx_td ltx_align_center\" id=\"Sx4.T2.3.7.7.8\">0.1792*</td>\n<td class=\"ltx_td ltx_align_center\" id=\"Sx4.T2.3.7.7.9\">0.3112*</td>\n<td class=\"ltx_td ltx_align_center\" id=\"Sx4.T2.3.7.7.10\">0.4762*</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"Sx4.T2.3.8.8\">\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"Sx4.T2.3.8.8.1\">Pre-trained Models</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"Sx4.T2.3.8.8.2\">BERT</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"Sx4.T2.3.8.8.3\">0.1684*</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"Sx4.T2.3.8.8.4\">0.1811*</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"Sx4.T2.3.8.8.5\">0.3407**</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"Sx4.T2.3.8.8.6\">0.4316*</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"Sx4.T2.3.8.8.7\">0.3826*</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" 
id=\"Sx4.T2.3.8.8.8\">0.3881*</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"Sx4.T2.3.8.8.9\">0.6540</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"Sx4.T2.3.8.8.10\">0.5325*</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"Sx4.T2.3.9.9\">\n<td class=\"ltx_td\" id=\"Sx4.T2.3.9.9.1\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"Sx4.T2.3.9.9.2\">PROP_WIKI</td>\n<td class=\"ltx_td ltx_align_center\" id=\"Sx4.T2.3.9.9.3\">0.2205*</td>\n<td class=\"ltx_td ltx_align_center\" id=\"Sx4.T2.3.9.9.4\">0.2321*</td>\n<td class=\"ltx_td ltx_align_center\" id=\"Sx4.T2.3.9.9.5\">0.4712*</td>\n<td class=\"ltx_td ltx_align_center\" id=\"Sx4.T2.3.9.9.6\">0.4709*</td>\n<td class=\"ltx_td ltx_align_center\" id=\"Sx4.T2.3.9.9.7\">0.3866*</td>\n<td class=\"ltx_td ltx_align_center\" id=\"Sx4.T2.3.9.9.8\">0.3922*</td>\n<td class=\"ltx_td ltx_align_center\" id=\"Sx4.T2.3.9.9.9\">0.6399*</td>\n<td class=\"ltx_td ltx_align_center\" id=\"Sx4.T2.3.9.9.10\">0.5311*</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"Sx4.T2.3.10.10\">\n<td class=\"ltx_td\" id=\"Sx4.T2.3.10.10.1\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"Sx4.T2.3.10.10.2\">PROP_MS</td>\n<td class=\"ltx_td ltx_align_center\" id=\"Sx4.T2.3.10.10.3\">0.2585*</td>\n<td class=\"ltx_td ltx_align_center\" id=\"Sx4.T2.3.10.10.4\">0.2696*</td>\n<td class=\"ltx_td ltx_align_center\" id=\"Sx4.T2.3.10.10.5\">0.5203*</td>\n<td class=\"ltx_td ltx_align_center\" id=\"Sx4.T2.3.10.10.6\">0.4810*</td>\n<td class=\"ltx_td ltx_align_center\" id=\"Sx4.T2.3.10.10.7\">0.3930*</td>\n<td class=\"ltx_td ltx_align_center\" id=\"Sx4.T2.3.10.10.8\">0.3980*</td>\n<td class=\"ltx_td ltx_align_center\" id=\"Sx4.T2.3.10.10.9\">0.6425*</td>\n<td class=\"ltx_td ltx_align_center\" id=\"Sx4.T2.3.10.10.10\">0.5318*</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"Sx4.T2.3.11.11\">\n<td class=\"ltx_td\" id=\"Sx4.T2.3.11.11.1\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"Sx4.T2.3.11.11.2\">Webformer</td>\n<td class=\"ltx_td ltx_align_center\" id=\"Sx4.T2.3.11.11.3\">0.1664*</td>\n<td class=\"ltx_td ltx_align_center\" id=\"Sx4.T2.3.11.11.4\">0.1756*</td>\n<td class=\"ltx_td ltx_align_center\" id=\"Sx4.T2.3.11.11.5\">0.3758*</td>\n<td class=\"ltx_td ltx_align_center\" id=\"Sx4.T2.3.11.11.6\">0.4550*</td>\n<td class=\"ltx_td ltx_align_center\" id=\"Sx4.T2.3.11.11.7\">0.3984*</td>\n<td class=\"ltx_td ltx_align_center\" id=\"Sx4.T2.3.11.11.8\">0.4036*</td>\n<td class=\"ltx_td ltx_align_center\" id=\"Sx4.T2.3.11.11.9\">0.6479*</td>\n<td class=\"ltx_td ltx_align_center\" id=\"Sx4.T2.3.11.11.10\">0.5335</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"Sx4.T2.3.12.12\">\n<td class=\"ltx_td\" id=\"Sx4.T2.3.12.12.1\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"Sx4.T2.3.12.12.2\">HARP</td>\n<td class=\"ltx_td ltx_align_center\" id=\"Sx4.T2.3.12.12.3\">0.2372*</td>\n<td class=\"ltx_td ltx_align_center\" id=\"Sx4.T2.3.12.12.4\">0.2465*</td>\n<td class=\"ltx_td ltx_align_center\" id=\"Sx4.T2.3.12.12.5\">0.5244*</td>\n<td class=\"ltx_td ltx_align_center\" id=\"Sx4.T2.3.12.12.6\">0.4721*</td>\n<td class=\"ltx_td ltx_align_center\" id=\"Sx4.T2.3.12.12.7\">0.3961*</td>\n<td class=\"ltx_td ltx_align_center\" id=\"Sx4.T2.3.12.12.8\">0.4012*</td>\n<td class=\"ltx_td ltx_align_center\" id=\"Sx4.T2.3.12.12.9\">0.6562</td>\n<td class=\"ltx_td ltx_align_center\" id=\"Sx4.T2.3.12.12.10\">0.5337</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"Sx4.T2.3.13.13\">\n<td class=\"ltx_td\" id=\"Sx4.T2.3.13.13.1\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"Sx4.T2.3.13.13.2\">ARES</td>\n<td class=\"ltx_td 
ltx_align_center\" id=\"Sx4.T2.3.13.13.3\">0.2736*</td>\n<td class=\"ltx_td ltx_align_center\" id=\"Sx4.T2.3.13.13.4\">0.2851*</td>\n<td class=\"ltx_td ltx_align_center\" id=\"Sx4.T2.3.13.13.5\">0.5736*</td>\n<td class=\"ltx_td ltx_align_center\" id=\"Sx4.T2.3.13.13.6\">0.4752*</td>\n<td class=\"ltx_td ltx_align_center\" id=\"Sx4.T2.3.13.13.7\">0.3995*</td>\n<td class=\"ltx_td ltx_align_center\" id=\"Sx4.T2.3.13.13.8\">0.4041*</td>\n<td class=\"ltx_td ltx_align_center\" id=\"Sx4.T2.3.13.13.9\">0.6505*</td>\n<td class=\"ltx_td ltx_align_center\" id=\"Sx4.T2.3.13.13.10\">0.5353</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"Sx4.T2.3.14.14\">\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_t\" id=\"Sx4.T2.3.14.14.1\">Our Approach</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_t\" id=\"Sx4.T2.3.14.14.2\">Wikiformer</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_t\" id=\"Sx4.T2.3.14.14.3\"><span class=\"ltx_text ltx_font_bold\" id=\"Sx4.T2.3.14.14.3.1\">0.2844</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_t\" id=\"Sx4.T2.3.14.14.4\"><span class=\"ltx_text ltx_font_bold\" id=\"Sx4.T2.3.14.14.4.1\">0.2911</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_t\" id=\"Sx4.T2.3.14.14.5\"><span class=\"ltx_text ltx_font_bold\" id=\"Sx4.T2.3.14.14.5.1\">0.5907</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_t\" id=\"Sx4.T2.3.14.14.6\"><span class=\"ltx_text ltx_font_bold\" id=\"Sx4.T2.3.14.14.6.1\">0.5143</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_t\" id=\"Sx4.T2.3.14.14.7\"><span class=\"ltx_text ltx_font_bold\" id=\"Sx4.T2.3.14.14.7.1\">0.4085</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_t\" id=\"Sx4.T2.3.14.14.8\"><span class=\"ltx_text ltx_font_bold\" id=\"Sx4.T2.3.14.14.8.1\">0.4136</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_t\" id=\"Sx4.T2.3.14.14.9\"><span class=\"ltx_text ltx_font_bold\" id=\"Sx4.T2.3.14.14.9.1\">0.6587</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_t\" id=\"Sx4.T2.3.14.14.10\"><span class=\"ltx_text ltx_font_bold\" id=\"Sx4.T2.3.14.14.10.1\">0.5392</span></td>\n</tr>\n</tbody>\n</table>\n</figure>",
158
+ "capture": "Table 2: The experimental results of Wikiformer and other baselines on three datasets in the zero-shot and fine-tuning setting. \u201c*\u201d denotes the result is significantly worse than Wikiformer with level. The best results are in bold. \u201cN\u201d stands for nDCG. The zero-shot performance of both KNRM and Conv-KNRM methods is the same as randomized ranking. Therefore, their zero-shot performance is not shown in the table."
159
+ },
160
+ "3": {
161
+ "table_html": "<figure class=\"ltx_table\" id=\"Sx4.T3\">\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\">Table 3: </span>The experimental results of Wikiformer and other baselines on the TREC Covid rnd2 dataset. The best results are in bold. \u201c*\u201d denotes the result is significantly worse than Wikiformer with level. \u201cN\u201d stands for nDCG.</figcaption>\n<table class=\"ltx_tabular ltx_centering ltx_guessed_headers ltx_align_middle\" id=\"Sx4.T3.3\">\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"Sx4.T3.3.1.1\">\n<th class=\"ltx_td ltx_th ltx_th_row ltx_border_tt\" id=\"Sx4.T3.3.1.1.1\"></th>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" colspan=\"2\" id=\"Sx4.T3.3.1.1.2\">\n<span class=\"ltx_text ltx_font_bold\" id=\"Sx4.T3.3.1.1.2.1\">TREC Covid rnd2</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"Sx4.T3.3.2.2\">\n<th class=\"ltx_td ltx_th ltx_th_row ltx_border_t\" id=\"Sx4.T3.3.2.2.1\"></th>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"Sx4.T3.3.2.2.2\"><span class=\"ltx_text ltx_font_bold\" id=\"Sx4.T3.3.2.2.2.1\">Zero-shot N@10</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"Sx4.T3.3.2.2.3\"><span class=\"ltx_text ltx_font_bold\" id=\"Sx4.T3.3.2.2.3.1\">Fine-tuned N@10</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"Sx4.T3.3.3.3\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_t\" id=\"Sx4.T3.3.3.3.1\"><span class=\"ltx_text ltx_font_bold\" id=\"Sx4.T3.3.3.3.1.1\">QL</span></th>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"Sx4.T3.3.3.3.2\">0.4683*</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"Sx4.T3.3.3.3.3\">0.4683*</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"Sx4.T3.3.4.4\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"Sx4.T3.3.4.4.1\"><span class=\"ltx_text ltx_font_bold\" id=\"Sx4.T3.3.4.4.1.1\">BM25</span></th>\n<td class=\"ltx_td ltx_align_center\" id=\"Sx4.T3.3.4.4.2\">0.4792*</td>\n<td class=\"ltx_td ltx_align_center\" id=\"Sx4.T3.3.4.4.3\">0.4792*</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"Sx4.T3.3.5.5\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_t\" id=\"Sx4.T3.3.5.5.1\"><span class=\"ltx_text ltx_font_bold\" id=\"Sx4.T3.3.5.5.1.1\">BERT</span></th>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"Sx4.T3.3.5.5.2\">0.4018*</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"Sx4.T3.3.5.5.3\">0.5580*</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"Sx4.T3.3.6.6\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"Sx4.T3.3.6.6.1\"><span class=\"ltx_text ltx_font_bold\" id=\"Sx4.T3.3.6.6.1.1\">PROP_MS</span></th>\n<td class=\"ltx_td ltx_align_center\" id=\"Sx4.T3.3.6.6.2\">0.4994*</td>\n<td class=\"ltx_td ltx_align_center\" id=\"Sx4.T3.3.6.6.3\">0.5944*</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"Sx4.T3.3.7.7\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"Sx4.T3.3.7.7.1\"><span class=\"ltx_text ltx_font_bold\" id=\"Sx4.T3.3.7.7.1.1\">PROP_WIKI</span></th>\n<td class=\"ltx_td ltx_align_center\" id=\"Sx4.T3.3.7.7.2\">0.4137*</td>\n<td class=\"ltx_td ltx_align_center\" id=\"Sx4.T3.3.7.7.3\">0.6104*</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"Sx4.T3.3.8.8\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"Sx4.T3.3.8.8.1\"><span class=\"ltx_text ltx_font_bold\" id=\"Sx4.T3.3.8.8.1.1\">Webformer</span></th>\n<td class=\"ltx_td ltx_align_center\" id=\"Sx4.T3.3.8.8.2\">0.3845*</td>\n<td class=\"ltx_td ltx_align_center\" id=\"Sx4.T3.3.8.8.3\">0.6032*</td>\n</tr>\n<tr class=\"ltx_tr\" 
id=\"Sx4.T3.3.9.9\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"Sx4.T3.3.9.9.1\"><span class=\"ltx_text ltx_font_bold\" id=\"Sx4.T3.3.9.9.1.1\">HARP</span></th>\n<td class=\"ltx_td ltx_align_center\" id=\"Sx4.T3.3.9.9.2\">0.4027*</td>\n<td class=\"ltx_td ltx_align_center\" id=\"Sx4.T3.3.9.9.3\">0.5832*</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"Sx4.T3.3.10.10\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"Sx4.T3.3.10.10.1\"><span class=\"ltx_text ltx_font_bold\" id=\"Sx4.T3.3.10.10.1.1\">ARES</span></th>\n<td class=\"ltx_td ltx_align_center\" id=\"Sx4.T3.3.10.10.2\">0.4993*</td>\n<td class=\"ltx_td ltx_align_center\" id=\"Sx4.T3.3.10.10.3\">0.5969*</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"Sx4.T3.3.11.11\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_bb ltx_border_t\" id=\"Sx4.T3.3.11.11.1\"><span class=\"ltx_text ltx_font_bold\" id=\"Sx4.T3.3.11.11.1.1\">Wikiformer</span></th>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_t\" id=\"Sx4.T3.3.11.11.2\"><span class=\"ltx_text ltx_font_bold\" id=\"Sx4.T3.3.11.11.2.1\">0.5449</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_t\" id=\"Sx4.T3.3.11.11.3\"><span class=\"ltx_text ltx_font_bold\" id=\"Sx4.T3.3.11.11.3.1\">0.6197</span></td>\n</tr>\n</tbody>\n</table>\n</figure>",
162
+ "capture": "Table 3: The experimental results of Wikiformer and other baselines on the TREC Covid rnd2 dataset. The best results are in bold. \u201c*\u201d denotes the result is significantly worse than Wikiformer with level. \u201cN\u201d stands for nDCG."
163
+ },
164
+ "4": {
165
+ "table_html": "<figure class=\"ltx_table\" id=\"Sx4.T4\">\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\">Table 4: </span>The experimental results of Wikiformer and other baselines on LeCaRD and CAIL-LCR in fine-tuning setting. The best results are in bold. \u201c*\u201d denotes the result is significantly worse than Wikiformer with level. N@5 and N@15 respectively represent nDCG@5 and nDCG@15.</figcaption>\n<table class=\"ltx_tabular ltx_centering ltx_guessed_headers ltx_align_middle\" id=\"Sx4.T4.3\">\n<thead class=\"ltx_thead\">\n<tr class=\"ltx_tr\" id=\"Sx4.T4.3.1.1\">\n<th class=\"ltx_td ltx_th ltx_th_row ltx_border_tt\" id=\"Sx4.T4.3.1.1.1\"></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" colspan=\"2\" id=\"Sx4.T4.3.1.1.2\">\n<span class=\"ltx_text ltx_font_bold\" id=\"Sx4.T4.3.1.1.2.1\">LeCaRD</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" colspan=\"2\" id=\"Sx4.T4.3.1.1.3\">\n<span class=\"ltx_text ltx_font_bold\" id=\"Sx4.T4.3.1.1.3.1\">CAIL-LCR</span></th>\n</tr>\n<tr class=\"ltx_tr\" id=\"Sx4.T4.3.2.2\">\n<th class=\"ltx_td ltx_th ltx_th_row ltx_border_t\" id=\"Sx4.T4.3.2.2.1\"></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"Sx4.T4.3.2.2.2\"><span class=\"ltx_text ltx_font_bold\" id=\"Sx4.T4.3.2.2.2.1\">N@5</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"Sx4.T4.3.2.2.3\"><span class=\"ltx_text ltx_font_bold\" id=\"Sx4.T4.3.2.2.3.1\">N@15</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"Sx4.T4.3.2.2.4\"><span class=\"ltx_text ltx_font_bold\" id=\"Sx4.T4.3.2.2.4.1\">N@5</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"Sx4.T4.3.2.2.5\"><span class=\"ltx_text ltx_font_bold\" id=\"Sx4.T4.3.2.2.5.1\">N@15</span></th>\n</tr>\n</thead>\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"Sx4.T4.3.3.1\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_t\" id=\"Sx4.T4.3.3.1.1\"><span class=\"ltx_text ltx_font_bold\" id=\"Sx4.T4.3.3.1.1.1\">BM25</span></th>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"Sx4.T4.3.3.1.2\">0.6843*</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"Sx4.T4.3.3.1.3\">0.7303*</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"Sx4.T4.3.3.1.4\">0.7105*</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"Sx4.T4.3.3.1.5\">0.7490*</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"Sx4.T4.3.4.2\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"Sx4.T4.3.4.2.1\"><span class=\"ltx_text ltx_font_bold\" id=\"Sx4.T4.3.4.2.1.1\">QL</span></th>\n<td class=\"ltx_td ltx_align_center\" id=\"Sx4.T4.3.4.2.2\">0.6906*</td>\n<td class=\"ltx_td ltx_align_center\" id=\"Sx4.T4.3.4.2.3\">0.7411*</td>\n<td class=\"ltx_td ltx_align_center\" id=\"Sx4.T4.3.4.2.4\">0.7389*</td>\n<td class=\"ltx_td ltx_align_center\" id=\"Sx4.T4.3.4.2.5\">0.7756*</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"Sx4.T4.3.5.3\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"Sx4.T4.3.5.3.1\"><span class=\"ltx_text ltx_font_bold\" id=\"Sx4.T4.3.5.3.1.1\">BERT</span></th>\n<td class=\"ltx_td ltx_align_center\" id=\"Sx4.T4.3.5.3.2\">0.7553*</td>\n<td class=\"ltx_td ltx_align_center\" id=\"Sx4.T4.3.5.3.3\">0.7966*</td>\n<td class=\"ltx_td ltx_align_center\" id=\"Sx4.T4.3.5.3.4\">0.7993*</td>\n<td class=\"ltx_td ltx_align_center\" id=\"Sx4.T4.3.5.3.5\">0.8085</td>\n</tr>\n<tr 
class=\"ltx_tr\" id=\"Sx4.T4.3.6.4\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_bb ltx_border_t\" id=\"Sx4.T4.3.6.4.1\"><span class=\"ltx_text ltx_font_bold\" id=\"Sx4.T4.3.6.4.1.1\">Wikiformer</span></th>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_t\" id=\"Sx4.T4.3.6.4.2\"><span class=\"ltx_text ltx_font_bold\" id=\"Sx4.T4.3.6.4.2.1\">0.7722</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_t\" id=\"Sx4.T4.3.6.4.3\"><span class=\"ltx_text ltx_font_bold\" id=\"Sx4.T4.3.6.4.3.1\">0.8073</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_t\" id=\"Sx4.T4.3.6.4.4\"><span class=\"ltx_text ltx_font_bold\" id=\"Sx4.T4.3.6.4.4.1\">0.8095</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_t\" id=\"Sx4.T4.3.6.4.5\"><span class=\"ltx_text ltx_font_bold\" id=\"Sx4.T4.3.6.4.5.1\">0.8134</span></td>\n</tr>\n</tbody>\n</table>\n</figure>",
166
+ "capture": "Table 4: The experimental results of Wikiformer and other baselines on LeCaRD and CAIL-LCR in fine-tuning setting. The best results are in bold. \u201c*\u201d denotes the result is significantly worse than Wikiformer with level. N@5 and N@15 respectively represent nDCG@5 and nDCG@15."
167
+ },
168
+ "5": {
169
+ "table_html": "<figure class=\"ltx_table\" id=\"Sx4.T5\">\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\">Table 5: </span>Ablation study results. The best results are in bold and the worst results are underlined.\u201c*\u201d denotes the performance is significantly better than the backbone model (BERT) with level.</figcaption>\n<table class=\"ltx_tabular ltx_centering ltx_guessed_headers ltx_align_middle\" id=\"Sx4.T5.6\">\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"Sx4.T5.6.5.1\">\n<th class=\"ltx_td ltx_th ltx_th_row ltx_border_tt\" id=\"Sx4.T5.6.5.1.1\"></th>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" colspan=\"2\" id=\"Sx4.T5.6.5.1.2\">\n<span class=\"ltx_text ltx_font_bold\" id=\"Sx4.T5.6.5.1.2.1\">MS MARCO</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"Sx4.T5.6.5.1.3\"><span class=\"ltx_text ltx_font_bold\" id=\"Sx4.T5.6.5.1.3.1\">LeCaRD</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"Sx4.T5.6.6.2\">\n<th class=\"ltx_td ltx_th ltx_th_row ltx_border_tt\" id=\"Sx4.T5.6.6.2.1\"></th>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"Sx4.T5.6.6.2.2\"><span class=\"ltx_text ltx_font_bold\" id=\"Sx4.T5.6.6.2.2.1\">MRR@10</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"Sx4.T5.6.6.2.3\"><span class=\"ltx_text ltx_font_bold\" id=\"Sx4.T5.6.6.2.3.1\">MRR@100</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"Sx4.T5.6.6.2.4\"><span class=\"ltx_text ltx_font_bold\" id=\"Sx4.T5.6.6.2.4.1\">nDCG@5</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"Sx4.T5.3.1\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_tt\" id=\"Sx4.T5.3.1.1\">\n <span class=\"ltx_text ltx_font_bold\" id=\"Sx4.T5.3.1.1.1\">SRR</span>\n</th>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"Sx4.T5.3.1.2\"><span class=\"ltx_text ltx_framed_underline\" id=\"Sx4.T5.3.1.2.1\">0.2334*</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"Sx4.T5.3.1.3\"><span class=\"ltx_text ltx_framed_underline\" id=\"Sx4.T5.3.1.3.1\">0.2441*</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"Sx4.T5.3.1.4\">0.7613*</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"Sx4.T5.4.2\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"Sx4.T5.4.2.1\">\n <span class=\"ltx_text ltx_font_bold\" id=\"Sx4.T5.4.2.1.1\">RWI</span>\n</th>\n<td class=\"ltx_td ltx_align_center\" id=\"Sx4.T5.4.2.2\">0.2596*</td>\n<td class=\"ltx_td ltx_align_center\" id=\"Sx4.T5.4.2.3\">0.2712*</td>\n<td class=\"ltx_td ltx_align_center\" id=\"Sx4.T5.4.2.4\">0.7685*</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"Sx4.T5.5.3\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"Sx4.T5.5.3.1\">\n <span class=\"ltx_text ltx_font_bold\" id=\"Sx4.T5.5.3.1.1\">ATI</span>\n</th>\n<td class=\"ltx_td ltx_align_center\" id=\"Sx4.T5.5.3.2\">0.2641*</td>\n<td class=\"ltx_td ltx_align_center\" id=\"Sx4.T5.5.3.3\">0.2751*</td>\n<td class=\"ltx_td ltx_align_center\" id=\"Sx4.T5.5.3.4\">0.7627*</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"Sx4.T5.6.4\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"Sx4.T5.6.4.1\">\n <span class=\"ltx_text ltx_font_bold\" id=\"Sx4.T5.6.4.1.1\">LTM</span>\n</th>\n<td class=\"ltx_td ltx_align_center\" id=\"Sx4.T5.6.4.2\">0.2726*</td>\n<td class=\"ltx_td ltx_align_center\" id=\"Sx4.T5.6.4.3\">0.2835*</td>\n<td class=\"ltx_td ltx_align_center\" id=\"Sx4.T5.6.4.4\"><span class=\"ltx_text ltx_framed_underline\" id=\"Sx4.T5.6.4.4.1\">0.7574*</span></td>\n</tr>\n<tr class=\"ltx_tr\" 
id=\"Sx4.T5.6.7.3\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_t\" id=\"Sx4.T5.6.7.3.1\"><span class=\"ltx_text ltx_font_bold\" id=\"Sx4.T5.6.7.3.1.1\">Before Pre-training</span></th>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"Sx4.T5.6.7.3.2\">0.1684</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"Sx4.T5.6.7.3.3\">0.1811</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"Sx4.T5.6.7.3.4\">0.7553</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"Sx4.T5.6.8.4\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_bb\" id=\"Sx4.T5.6.8.4.1\"><span class=\"ltx_text ltx_font_bold\" id=\"Sx4.T5.6.8.4.1.1\">All Four Tasks</span></th>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"Sx4.T5.6.8.4.2\"><span class=\"ltx_text ltx_font_bold\" id=\"Sx4.T5.6.8.4.2.1\">0.2844*</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"Sx4.T5.6.8.4.3\"><span class=\"ltx_text ltx_font_bold\" id=\"Sx4.T5.6.8.4.3.1\">0.2911*</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"Sx4.T5.6.8.4.4\"><span class=\"ltx_text ltx_font_bold\" id=\"Sx4.T5.6.8.4.4.1\">0.7722*</span></td>\n</tr>\n</tbody>\n</table>\n</figure>",
170
+ "capture": "Table 5: Ablation study results. The best results are in bold and the worst results are underlined.\u201c*\u201d denotes the performance is significantly better than the backbone model (BERT) with level."
171
+ }
172
+ },
173
+ "image_paths": {
174
+ "1": {
175
+ "figure_path": "2312.10661v2_figure_1.png",
176
+ "caption": "Figure 1: Rich structured information of Wikipedia.",
177
+ "url": "http://arxiv.org/html/2312.10661v2/extracted/5325006/pic/structure.png"
178
+ },
179
+ "2": {
180
+ "figure_path": "2312.10661v2_figure_2.png",
181
+ "caption": "Figure 2: Pseudo query-document pairs generated from the tree structure of a Wikipedia article, where q\ud835\udc5eqitalic_q is the query, d+superscript\ud835\udc51d^{+}italic_d start_POSTSUPERSCRIPT + end_POSTSUPERSCRIPT is the positive document and d\u2212superscript\ud835\udc51d^{-}italic_d start_POSTSUPERSCRIPT - end_POSTSUPERSCRIPT are negative documents.",
182
+ "url": "http://arxiv.org/html/2312.10661v2/extracted/5325006/pic/srr.png"
183
+ },
184
+ "3": {
185
+ "figure_path": "2312.10661v2_figure_3.png",
186
+ "caption": "Figure 3: The contrastive sampling strategy of the RWI task, where D\ud835\udc37Ditalic_D is the document, q+limit-from\ud835\udc5eq+italic_q + is the positive query and q\u2212superscript\ud835\udc5eq^{-}italic_q start_POSTSUPERSCRIPT - end_POSTSUPERSCRIPT is the negative query.",
187
+ "url": "http://arxiv.org/html/2312.10661v2/extracted/5325006/pic/RWI_f.png"
188
+ },
189
+ "4": {
190
+ "figure_path": "2312.10661v2_figure_4.png",
191
+ "caption": "Figure 4: The performance of Wikiformer at different sizes of pre-training data sampled from the SRR task. The red dotted line shows the performance of Query Likelihood.",
192
+ "url": "http://arxiv.org/html/2312.10661v2/extracted/5325006/pic/scale.png"
193
+ }
194
+ },
195
+ "validation": true,
196
+ "references": [
197
+ {
198
+ "1": {
199
+ "title": "Learning a deep listwise context model for ranking refinement.",
200
+ "author": "Ai, Q.; Bi, K.; Guo, J.; and Croft, W. B. 2018.",
201
+ "venue": "In The 41st international ACM SIGIR conference on research & development in information retrieval, 135\u2013144.",
202
+ "url": null
203
+ }
204
+ },
205
+ {
206
+ "2": {
207
+ "title": "Statistics for experimenters, volume 664.",
208
+ "author": "Box, G. E.; Hunter, W. H.; Hunter, S.; et al. 1978.",
209
+ "venue": "John Wiley and sons New York.",
210
+ "url": null
211
+ }
212
+ },
213
+ {
214
+ "3": {
215
+ "title": "Learning to rank: from pairwise approach to listwise approach.",
216
+ "author": "Cao, Z.; Qin, T.; Liu, T.-Y.; Tsai, M.-F.; and Li, H. 2007.",
217
+ "venue": "In Proceedings of the 24th international conference on Machine learning, 129\u2013136.",
218
+ "url": null
219
+ }
220
+ },
221
+ {
222
+ "4": {
223
+ "title": "Pre-training tasks for embedding-based large-scale retrieval.",
224
+ "author": "Chang, W.-C.; Yu, F. X.; Chang, Y.-W.; Yang, Y.; and Kumar, S. 2020.",
225
+ "venue": "arXiv preprint arXiv:2002.03932.",
226
+ "url": null
227
+ }
228
+ },
229
+ {
230
+ "5": {
231
+ "title": "BERT-ER: Query-specific BERT Entity Representations for Entity Ranking.",
232
+ "author": "Chatterjee, S.; and Dietz, L. 2022.",
233
+ "venue": "In Proceedings of the 45th International ACM SIGIR Conference on Research and Development in Information Retrieval, 1466\u20131477.",
234
+ "url": null
235
+ }
236
+ },
237
+ {
238
+ "6": {
239
+ "title": "THUIR at WSDM Cup 2023 Task 1: Unbiased Learning to Rank.",
240
+ "author": "Chen, J.; Li, H.; Su, W.; Ai, Q.; and Liu, Y. 2023.",
241
+ "venue": "arXiv preprint arXiv:2304.12650.",
242
+ "url": null
243
+ }
244
+ },
245
+ {
246
+ "7": {
247
+ "title": "Axiomatically Regularized Pre-training for Ad hoc Search.",
248
+ "author": "Chen, J.; Liu, Y.; Fang, Y.; Mao, J.; Fang, H.; Yang, S.; Xie, X.; Zhang, M.; and Ma, S. 2022.",
249
+ "venue": null,
250
+ "url": null
251
+ }
252
+ },
253
+ {
254
+ "8": {
255
+ "title": "Empirical methods for artificial intelligence, volume 139.",
256
+ "author": "Cohen, P. R. 1995.",
257
+ "venue": "MIT press Cambridge, MA.",
258
+ "url": null
259
+ }
260
+ },
261
+ {
262
+ "9": {
263
+ "title": "Combining pre-trained language models and structured knowledge.",
264
+ "author": "Colon-Hernandez, P.; Havasi, C.; Alonso, J.; Huggins, M.; and Breazeal, C. 2021.",
265
+ "venue": "arXiv preprint arXiv:2101.12294.",
266
+ "url": null
267
+ }
268
+ },
269
+ {
270
+ "10": {
271
+ "title": "Overview of the TREC 2019 deep learning track.",
272
+ "author": "Craswell, N.; Mitra, B.; Yilmaz, E.; Campos, D.; and Voorhees, E. M. 2020.",
273
+ "venue": "arXiv preprint arXiv:2003.07820.",
274
+ "url": null
275
+ }
276
+ },
277
+ {
278
+ "11": {
279
+ "title": "Pre-training with whole word masking for chinese bert.",
280
+ "author": "Cui, Y.; Che, W.; Liu, T.; Qin, B.; and Yang, Z. 2021.",
281
+ "venue": "IEEE/ACM Transactions on Audio, Speech, and Language Processing, 29: 3504\u20133514.",
282
+ "url": null
283
+ }
284
+ },
285
+ {
286
+ "12": {
287
+ "title": "Convolutional neural networks for soft-matching n-grams in ad-hoc search.",
288
+ "author": "Dai, Z.; Xiong, C.; Callan, J.; and Liu, Z. 2018.",
289
+ "venue": "In Proceedings of the eleventh ACM international conference on web search and data mining, 126\u2013134.",
290
+ "url": null
291
+ }
292
+ },
293
+ {
294
+ "13": {
295
+ "title": "Bert: Pre-training of deep bidirectional transformers for language understanding.",
296
+ "author": "Devlin, J.; Chang, M.-W.; Lee, K.; and Toutanova, K. 2018.",
297
+ "venue": "arXiv preprint arXiv:1810.04805.",
298
+ "url": null
299
+ }
300
+ },
301
+ {
302
+ "14": {
303
+ "title": "Pre-training Methods in Information Retrieval.",
304
+ "author": "Fan, Y.; Xie, X.; Cai, Y.; Chen, J.; Ma, X.; Li, X.; Zhang, R.; Guo, J.; and Liu, Y. 2021.",
305
+ "venue": "arXiv preprint arXiv:2111.13853.",
306
+ "url": null
307
+ }
308
+ },
309
+ {
310
+ "15": {
311
+ "title": "Design of experiments.",
312
+ "author": "Fisher, R. A. 1936.",
313
+ "venue": "British Medical Journal, 1(3923): 554.",
314
+ "url": null
315
+ }
316
+ },
317
+ {
318
+ "16": {
319
+ "title": "Rethink training of BERT rerankers in multi-stage retrieval pipeline.",
320
+ "author": "Gao, L.; Dai, Z.; and Callan, J. 2021.",
321
+ "venue": "In European Conference on Information Retrieval, 280\u2013286. Springer.",
322
+ "url": null
323
+ }
324
+ },
325
+ {
326
+ "17": {
327
+ "title": "Graph-embedding empowered entity retrieval.",
328
+ "author": "Gerritse, E. J.; Hasibi, F.; and de Vries, A. P. 2020.",
329
+ "venue": "In Advances in Information Retrieval: 42nd European Conference on IR Research, ECIR 2020, Lisbon, Portugal, April 14\u201317, 2020, Proceedings, Part I 42, 97\u2013110. Springer.",
330
+ "url": null
331
+ }
332
+ },
333
+ {
334
+ "18": {
335
+ "title": "Entity-aware Transformers for Entity Search.",
336
+ "author": "Gerritse, E. J.; Hasibi, F.; and de Vries, A. P. 2022.",
337
+ "venue": "arXiv preprint arXiv:2205.00820.",
338
+ "url": null
339
+ }
340
+ },
341
+ {
342
+ "19": {
343
+ "title": "Webformer: Pre-training with Web Pages for Information Retrieval.",
344
+ "author": "Guo, Y.; Ma, Z.; Mao, J.; Qian, H.; Zhang, X.; Jiang, H.; Cao, Z.; and Dou, Z. 2022.",
345
+ "venue": "In Proceedings of the 45th International ACM SIGIR Conference on Research and Development in Information Retrieval, 1502\u20131512.",
346
+ "url": null
347
+ }
348
+ },
349
+ {
350
+ "20": {
351
+ "title": "LM-CORE: Language models with contextually relevant external knowledge.",
352
+ "author": "Kaur, J. N.; Bhatia, S.; Aggarwal, M.; Bansal, R.; and Krishnamurthy, B. 2022.",
353
+ "venue": "arXiv preprint arXiv:2208.06458.",
354
+ "url": null
355
+ }
356
+ },
357
+ {
358
+ "21": {
359
+ "title": "SAILER: Structure-aware Pre-trained Language Model for Legal Case Retrieval.",
360
+ "author": "Li, H.; Ai, Q.; Chen, J.; Dong, Q.; Wu, Y.; Liu, Y.; Chen, C.; and Tian, Q. 2023a.",
361
+ "venue": "arXiv preprint arXiv:2304.11370.",
362
+ "url": null
363
+ }
364
+ },
365
+ {
366
+ "22": {
367
+ "title": "Constructing Tree-based Index for Efficient and Effective Dense Retrieval.",
368
+ "author": "Li, H.; Ai, Q.; Zhan, J.; Mao, J.; Liu, Y.; Liu, Z.; and Cao, Z. 2023b.",
369
+ "venue": "arXiv preprint arXiv:2304.11943.",
370
+ "url": null
371
+ }
372
+ },
373
+ {
374
+ "23": {
375
+ "title": "Towards Better Web Search Performance: Pre-training, Fine-tuning and Learning to Rank.",
376
+ "author": "Li, H.; Chen, J.; Su, W.; Ai, Q.; and Liu, Y. 2023c.",
377
+ "venue": "arXiv preprint arXiv:2303.04710.",
378
+ "url": null
379
+ }
380
+ },
381
+ {
382
+ "24": {
383
+ "title": "THUIR@ COLIEE 2023: Incorporating Structural Knowledge into Pre-trained Language Models for Legal Case Retrieval.",
384
+ "author": "Li, H.; Su, W.; Wang, C.; Wu, Y.; Ai, Q.; and Liu, Y. 2023d.",
385
+ "venue": "arXiv preprint arXiv:2305.06812.",
386
+ "url": null
387
+ }
388
+ },
389
+ {
390
+ "25": {
391
+ "title": "Roberta: A robustly optimized bert pretraining approach.",
392
+ "author": "Liu, Y.; Ott, M.; Goyal, N.; Du, J.; Joshi, M.; Chen, D.; Levy, O.; Lewis, M.; Zettlemoyer, L.; and Stoyanov, V. 2019.",
393
+ "venue": "arXiv preprint arXiv:1907.11692.",
394
+ "url": null
395
+ }
396
+ },
397
+ {
398
+ "26": {
399
+ "title": "Prop: Pre-training with representative words prediction for ad-hoc retrieval.",
400
+ "author": "Ma, X.; Guo, J.; Zhang, R.; Fan, Y.; Ji, X.; and Cheng, X. 2021a.",
401
+ "venue": "In Proceedings of the 14th ACM International Conference on Web Search and Data Mining, 283\u2013291.",
402
+ "url": null
403
+ }
404
+ },
405
+ {
406
+ "27": {
407
+ "title": "B-PROP: bootstrapped pre-training with representative words prediction for ad-hoc retrieval.",
408
+ "author": "Ma, X.; Guo, J.; Zhang, R.; Fan, Y.; Li, Y.; and Cheng, X. 2021b.",
409
+ "venue": "In Proceedings of the 44th International ACM SIGIR Conference on Research and Development in Information Retrieval, 1513\u20131522.",
410
+ "url": null
411
+ }
412
+ },
413
+ {
414
+ "28": {
415
+ "title": "CAIL-SCR.",
416
+ "author": "Ma, Y. 2022.",
417
+ "venue": "https://github.com/china-ai-law-challenge/CAIL2022/tree/main/lajs.",
418
+ "url": null
419
+ }
420
+ },
421
+ {
422
+ "29": {
423
+ "title": "LeCaRD: a legal case retrieval dataset for Chinese law system.",
424
+ "author": "Ma, Y.; Shao, Y.; Wu, Y.; Liu, Y.; Zhang, R.; Zhang, M.; and Ma, S. 2021c.",
425
+ "venue": "In Proceedings of the 44th International ACM SIGIR Conference on Research and Development in Information Retrieval, 2342\u20132348.",
426
+ "url": null
427
+ }
428
+ },
429
+ {
430
+ "30": {
431
+ "title": "CaseEncoder: A Knowledge-enhanced Pre-trained Model for Legal Case Encoding.",
432
+ "author": "Ma, Y.; Wu, Y.; Su, W.; Ai, Q.; and Liu, Y. 2023.",
433
+ "venue": "arXiv preprint arXiv:2305.05393.",
434
+ "url": null
435
+ }
436
+ },
437
+ {
438
+ "31": {
439
+ "title": "Pre-training for Ad-hoc Retrieval: Hyperlink is Also You Need.",
440
+ "author": "Ma, Z.; Dou, Z.; Xu, W.; Zhang, X.; Jiang, H.; Cao, Z.; and Wen, J.-R. 2021d.",
441
+ "venue": "In Proceedings of the 30th ACM International Conference on Information & Knowledge Management, 1212\u20131221.",
442
+ "url": null
443
+ }
444
+ },
445
+ {
446
+ "32": {
447
+ "title": "MS MARCO: A human generated machine reading comprehension dataset.",
448
+ "author": "Nguyen, T.; Rosenberg, M.; Song, X.; Gao, J.; Tiwary, S.; Majumder, R.; and Deng, L. 2016.",
449
+ "venue": "In CoCo@ NIPs.",
450
+ "url": null
451
+ }
452
+ },
453
+ {
454
+ "33": {
455
+ "title": "Joint word and entity embeddings for entity retrieval from a knowledge graph.",
456
+ "author": "Nikolaev, F.; and Kotov, A. 2020.",
457
+ "venue": "In Advances in Information Retrieval: 42nd European Conference on IR Research, ECIR 2020, Lisbon, Portugal, April 14\u201317, 2020, Proceedings, Part I 42, 141\u2013155. Springer.",
458
+ "url": null
459
+ }
460
+ },
461
+ {
462
+ "34": {
463
+ "title": "Passage Re-ranking with BERT.",
464
+ "author": "Nogueira, R.; and Cho, K. 2019.",
465
+ "venue": "arXiv preprint arXiv:1901.04085.",
466
+ "url": null
467
+ }
468
+ },
469
+ {
470
+ "35": {
471
+ "title": "Glove: Global vectors for word representation.",
472
+ "author": "Pennington, J.; Socher, R.; and Manning, C. D. 2014.",
473
+ "venue": "In Proceedings of the 2014 conference on empirical methods in natural language processing (EMNLP), 1532\u20131543.",
474
+ "url": null
475
+ }
476
+ },
477
+ {
478
+ "36": {
479
+ "title": "Searching for scientific evidence in a pandemic: An overview of TREC-COVID.",
480
+ "author": "Roberts, K.; Alam, T.; Bedrick, S.; Demner-Fushman, D.; Lo, K.; Soboroff, I.; Voorhees, E.; Wang, L. L.; and Hersh, W. R. 2021.",
481
+ "venue": "Journal of Biomedical Informatics, 121: 103865.",
482
+ "url": null
483
+ }
484
+ },
485
+ {
486
+ "37": {
487
+ "title": "The probabilistic relevance framework: BM25 and beyond.",
488
+ "author": "Robertson, S.; Zaragoza, H.; et al. 2009.",
489
+ "venue": "Foundations and Trends\u00ae in Information Retrieval, 3(4): 333\u2013389.",
490
+ "url": null
491
+ }
492
+ },
493
+ {
494
+ "38": {
495
+ "title": "A comparison of statistical significance tests for information retrieval evaluation.",
496
+ "author": "Smucker, M. D.; Allan, J.; and Carterette, B. 2007.",
497
+ "venue": "In Proceedings of the sixteenth ACM conference on Conference on information and knowledge management, 623\u2013632.",
498
+ "url": null
499
+ }
500
+ },
501
+ {
502
+ "39": {
503
+ "title": "Caseformer: Pre-training for Legal Case Retrieval.",
504
+ "author": "Su, W.; Ai, Q.; Wu, Y.; Ma, Y.; Li, H.; and Liu, Y. 2023a.",
505
+ "venue": "arXiv preprint arXiv:2311.00333.",
506
+ "url": null
507
+ }
508
+ },
509
+ {
510
+ "40": {
511
+ "title": "THUIR2 at NTCIR-16 Session Search (SS) Task.",
512
+ "author": "Su, W.; Li, X.; Liu, Y.; Zhang, M.; and Ma, S. 2023b.",
513
+ "venue": "arXiv preprint arXiv:2307.00250.",
514
+ "url": null
515
+ }
516
+ },
517
+ {
518
+ "41": {
519
+ "title": "Attention is all you need.",
520
+ "author": "Vaswani, A.; Shazeer, N.; Parmar, N.; Uszkoreit, J.; Jones, L.; Gomez, A. N.; Kaiser, \u0141.; and Polosukhin, I. 2017.",
521
+ "venue": "Advances in neural information processing systems, 30.",
522
+ "url": null
523
+ }
524
+ },
525
+ {
526
+ "42": {
527
+ "title": "Cord-19: The covid-19 open research dataset.",
528
+ "author": "Wang, L. L.; Lo, K.; Chandrasekhar, Y.; Reas, R.; Yang, J.; Eide, D.; Funk, K.; Kinney, R.; Liu, Z.; Merrill, W.; et al. 2020.",
529
+ "venue": "ArXiv.",
530
+ "url": null
531
+ }
532
+ },
533
+ {
534
+ "43": {
535
+ "title": "End-to-end neural ad-hoc ranking with kernel pooling.",
536
+ "author": "Xiong, C.; Dai, Z.; Callan, J.; Liu, Z.; and Power, R. 2017.",
537
+ "venue": "In Proceedings of the 40th International ACM SIGIR conference on research and development in information retrieval, 55\u201364.",
538
+ "url": null
539
+ }
540
+ },
541
+ {
542
+ "44": {
543
+ "title": "Xlnet: Generalized autoregressive pretraining for language understanding.",
544
+ "author": "Yang, Z.; Dai, Z.; Yang, Y.; Carbonell, J.; Salakhutdinov, R. R.; and Le, Q. V. 2019.",
545
+ "venue": "Advances in neural information processing systems, 32.",
546
+ "url": null
547
+ }
548
+ },
549
+ {
550
+ "45": {
551
+ "title": "LinkBERT: Pretraining Language Models with Document Links.",
552
+ "author": "Yasunaga, M.; Leskovec, J.; and Liang, P. 2022.",
553
+ "venue": "arXiv preprint arXiv:2203.15827.",
554
+ "url": null
555
+ }
556
+ },
557
+ {
558
+ "46": {
559
+ "title": "Relevance Feedback with Brain Signals.",
560
+ "author": "Ye, Z.; Xie, X.; Ai, Q.; Liu, Y.; Wang, Z.; Su, W.; and Zhang, M. 2023.",
561
+ "venue": "ACM Transactions on Information Systems.",
562
+ "url": null
563
+ }
564
+ },
565
+ {
566
+ "47": {
567
+ "title": "Statistical language models for information retrieval.",
568
+ "author": "Zhai, C. 2008.",
569
+ "venue": "Synthesis lectures on human language technologies, 1(1): 1\u2013141.",
570
+ "url": null
571
+ }
572
+ },
573
+ {
574
+ "48": {
575
+ "title": "ERNIE: Enhanced language representation with informative entities.",
576
+ "author": "Zhang, Z.; Han, X.; Liu, Z.; Jiang, X.; Sun, M.; and Liu, Q. 2019.",
577
+ "venue": "arXiv preprint arXiv:1905.07129.",
578
+ "url": null
579
+ }
580
+ }
581
+ ],
582
+ "url": "http://arxiv.org/html/2312.10661v2"
583
+ }
20240101/2312.10841v2.json ADDED
@@ -0,0 +1,490 @@
1
+ {
2
+ "title": "Online Boosting Adaptive Learning under Concept Drift for Multistream Classification",
3
+ "abstract": "Multistream classification poses significant challenges due to the necessity for rapid adaptation in dynamic streaming processes with concept drift. Despite the growing research outcomes in this area, there has been a notable oversight regarding the temporal dynamic relationships between these streams, leading to the issue of negative transfer arising from irrelevant data.\nIn this paper, we propose a novel Online Boosting Adaptive Learning (OBAL) method that effectively addresses this limitation by adaptively learning the dynamic correlation among different streams. Specifically, OBAL operates in a dual-phase mechanism, in the first of which we design an Adaptive COvariate Shift Adaptation (AdaCOSA) algorithm to construct an initialized ensemble model using archived data from various source streams, thus mitigating the covariate shift while learning the dynamic correlations via an adaptive re-weighting strategy. During the online process, we employ a Gaussian Mixture Model-based weighting mechanism, which is seamlessly integrated with the acquired correlations via AdaCOSA to effectively handle asynchronous drift. This approach significantly improves the predictive performance and stability of the target stream.\nWe conduct comprehensive experiments on several synthetic and real-world data streams, encompassing various drifting scenarios and types. The results clearly demonstrate that OBAL achieves remarkable advancements in addressing multistream classification problems by effectively leveraging positive knowledge derived from multiple sources.",
4
+ "sections": [
5
+ {
6
+ "section_id": "1",
7
+ "parent_section_id": null,
8
+ "section_name": "Introduction",
9
+ "text": "In various real-world scenarios, such as auto-driving systems, weather forecasts, and industrial production, data is continuously and sequentially generated over time, which is referred to as data streams or streaming data (Lu et al. 2018 ###reference_17###; Zhou et al. 2023b ###reference_38###; Wang et al. 2022a ###reference_29###). These data streams are susceptible to changes in their underlying distribution, resulting in concept drift. Consequently, classifiers trained on historical data may fail to predict subsequent samples, leading to a performance decrease (Li et al. 2022 ###reference_15###; Xu et al. 2023 ###reference_32###). Thus, it attracts many researchers to develop efficient learning techniques capable of analyzing streaming data with concept drift in non-stationary environments. To date, prior studies have provided empirical evidence of the efficacy of concept drift adaptation methods in effectively addressing data streams with dynamic distributions. It is worth noting that the majority of existing techniques have been tailored specifically for a single stream with delayed labels (Yu et al. 2022b ###reference_36###; Song et al. 2021b ###reference_24###). However, it is common to encounter scenarios where multiple data streams are generated simultaneously in real-world intelligent systems. For example, data samples continuously stream from sensors in manufacturing systems. These data streams, despite being associated with the same task, often exhibit distinct distributions due to varying data sources (Zhou et al. 2023a ###reference_37###). In addition, while data collection is straightforward, the labeling process incurs high time and labor costs, leading to the hybrid multiple streams where massive labeled and unlabeled streams arrive simultaneously (Yu et al. 2022a ###reference_35###).\nTo tackle this scenario, multistream classification has been proposed, in which a model can be flexibly transferred from labeled source streams to the unlabeled target stream while employing online detection and adaptation working principles. This not only enables the model to adapt to new and unlabeled data streams but also mitigates the expenses and logistical challenges. The multistream classification problem features three major challenges that have to be tackled simultaneously: 1) Scarcity of labels: this arises from the absence of labels specifically for the target stream, while the source streams possess labeled data; 2) Covariate shift: this implies that any two data streams exhibit distinct distributions, whether they are different source streams or a source stream and a target stream; and 3) Asynchronous drift: the source and target streams are susceptible to independent concept drift, which occurs at varying time periods and results in unique effects on the model performance.\nIn recent years, several approaches have been proposed to address the multistream classification problem by using online domain adaptation and drift handling techniques (Chandra et al. 2016 ###reference_3###; Haque et al. 2017 ###reference_10###; Pratama et al. 2019 ###reference_21###; Wang et al. 2021 ###reference_28###). However, many of these methods have primarily focused on the single-source stream, potentially impeding model performance due to limitations in the quality of the source data. Furthermore, such single-source-based approaches may be prone to overfitting issues. 
Accordingly, the multi-source configuration is introduced, which enables the acquisition of supplementary information from different source streams, thereby providing more valuable information to build a more accurate and robust model (Wang et al. 2022b ###reference_31###; Yang et al. 2021 ###reference_33###). However, leveraging the information from each individual source stream exposes a new challenge: 4) Temporal dynamic correlations between the source and target streams. In other words, any drift occurring within each stream has the potential to alter the correlation between the source and target streams. It is crucial for the predictive model to adapt promptly, extracting valuable insights from relevant source streams while avoiding the assimilation of irrelevant knowledge from other source streams.\nTo address all issues in the multi-stream classification task, we propose the Online Boosting Adaptive Learning (OBAL) method. As shown in Figure S1 ###reference_###, OBAL consists of two stages, the first of which is the initialization phase, where we propose the AdaCOSA algorithm. The fundamental principle of AdaCOSA involves an adaptive interaction between models learned in the original source space and those acquired in the target space, aiming to align the temporal covariate shift and explore the dynamic relationships between different data streams based on feedback from the target domain. This process reinforces positive knowledge transfer, leading to optimal model migration.\nThe second stage involves the online processing phase, during which our primary aim is to detect and adapt to the asynchronous drift in each data stream in real-time. To achieve this, we employ the Drift Detection Method (DDM) (Gama et al. 2004 ###reference_7###) for labeled source streams, as it offers a stable and accurate detection approach. Simultaneously, we utilize the Gaussian Mixture Model (GMM) (Oliveira, Minku, and Oliveira 2021 ###reference_20###) based weighting strategy for asynchronous drift adaptation in these streams. For the unlabeled target stream, we design two sliding windows and continuously monitor their distribution changes to effectively detect drift occurrences. Once a drift is detected in the target stream, it signifies that the dynamic relationships learned in the first stage are no longer applicable, necessitating a return to the first stage for reinitialization. The main contributions of our work can be summarized as follows:\nThis paper presents a new online ensemble approach (OBAL) for multi-source data stream classification. With the capability to dynamically detect and adapt to concept drift, OBAL demonstrates enhanced effectiveness and stability. Moreover, it offers effortless extensibility in managing diverse data streams.\nA novel algorithm (AdaCOSA) is proposed to align the covariate shift as well as investigate a new dynamic correlation issue between source and target streams. It further enhances positive knowledge transfer and prevents negative transfer effects.\nWe design a simple yet effective GMM-based module to adapt the asynchronous drift. It orchestrates an ensemble of both historical classifiers and newly trained classifiers on weighted source samples. By accumulating abundant source knowledge, the proposed approach achieves improved prediction accuracy for the target stream."
10
+ },
11
+ {
12
+ "section_id": "2",
13
+ "parent_section_id": null,
14
+ "section_name": "Related Works",
15
+ "text": "Date stream classification has become an increasingly critical area of research due to the dynamic nature of real-world data streams, i.e., concept drift. Concept drift refers to the underlying data distribution changing over time, which occurs as time if joint distribution . It poses significant challenges for classifiers to maintain accuracy and adapt promptly. To tackle the concept drift problem, many works have been proposed to ensure the effectiveness and reliability of models (Gomes et al. 2017 ###reference_9###; Miyaguchi and Kajino 2019 ###reference_18###; Chiu and Minku 2020 ###reference_4###; Jothimurugesan et al. 2023 ###reference_13###).\nHowever, most methods are designed for single-labeled stream, which is not suitable for the multi-stream scenario. To fill this research blank, Chandra et al. (Chandra et al. 2016 ###reference_3###) introduce a multi-stream classification framework that utilizes ensemble classifiers for each data stream and incorporates Kernel Mean Matching to reduce the disparity between source and target streams. They further propose the FUSION algorithm (Haque et al. 2017 ###reference_10###) to leverage the Kullback Leibler Importance Estimation Procedure for density ratio estimation and covariate shift handling. In addition, some neural-network-based models are proposed to deal with high-dimensional data (Yoon et al. 2022 ###reference_34###). For example, Autonomous Transfer Learning (ATL) (Pratama et al. 2019 ###reference_21###) is an online domain adaptation strategy that employs both generative and discriminative phases, combined with Kullback Leibler divergence-based optimization. Moreover, Yu et al. (Yu et al. 2022b ###reference_36###) propose a meta-learning-based framework to learn the invariant features of drifting data streams and then update the meta model in an online fashion.\nIn addition, multi-source stream classification is proposed to enhance the robustness by considering the complementary information from different source streams simultaneously. For example, Du et al. (Du, Minku, and Zhou 2019 ###reference_6###) introduced Melanie, which employs a weighted ensemble classifier to transfer knowledge from multiple source streams. It is the first approach capable of simultaneously transferring knowledge from various source streams with concept drift. However, Melanie is a supervised method, which cannot be used for unlabeled data prediction.\nHence, the AutOmatic Multi-Source Domain Adaptation (AOMSDA) (Renchunzi and Pratama 2022 ###reference_22###) incorporates a central moment discrepancy-based regularizer to leverage the complementary information from multi-source streams, and employs a node weighting strategy to tackle the covariate shift. AOMSDA is a chunk-based method, which means it lacks the ability to dynamically detect the changes in data streams. To address this limitation, Jiao et al. (Jiao et al. 2022 ###reference_12###) propose a reduced-space Multistream Classification based on Multi-objective Optimization (MCMO). It seeks a common feature subset to minimize the distribution shift and then uses a GMM to detect and adapt asynchronous drift. However, all these methods determine the correlation between each individual source and target stream as fixed, which does not fully exploit temporal dynamic correlations."
16
+ },
17
+ {
18
+ "section_id": "3",
19
+ "parent_section_id": null,
20
+ "section_name": "Proposed Method",
21
+ "text": ""
22
+ },
23
+ {
24
+ "section_id": "3.1",
25
+ "parent_section_id": "3",
26
+ "section_name": "Problem Definition",
27
+ "text": "Multi-source-stream classification involves the presence of multiple labeled source streams and one unlabeled target stream. These streams possess interconnected internal representations and share a common label space. The objective of this task is to predict the labels of the target stream by effectively transferring knowledge from the labeled source to the target stream, and it can be defined as follows.\nMulti-source-stream Classification. It involves labeled source streams and one unlabeled target stream . Each arrived data sample at time is represented by , where is the -dimensional features, and is the true label of the instance which can only be observed in , . It aims to build a classification model to predict the class label of using the and .\nAs mentioned before, four main challenges must be addressed simultaneously in the multistream classification problem, i.e., scarcity of labels, covariate shift, asynchronous drift and dynamic correlation. These challenges are defined as follows,\nScarcity of Labels. This is a major issue in the multistream classification problem. Labeled samples are provided only to the source streams , leaving the target stream entirely unlabelled . Consequently, the challenge lies in achieving accurate predictions in the target stream, where no labeled samples are available.\nCovariate Shift. Denoting and as the distributions from and , all streams at the same time step are related but with covariate shift, i.e., while\nAsynchronous Drift. This refers to the observation of the effect of drift at different times on different independent non-stationary processes that continuously generate data from and .\nSource Drift: if but , the drift only occurs in the source stream.\nTarget Drift: if but , the drift only occurs in the target stream.\nConcurrent Drifts: if and , it means drift occurs in both source and target streams.\nTemporal Dynamic Correlation. The dynamic interplay between source and target streams leads to varying relevance, expressed as . At the time , some source streams may possess complementary information , while others may contain negative information . The complexity arises as may change over time, such as , disrupting the inherent relationship between the streams.\nTo address all challenges, we propose the OBAL method which comprises two stages: initialization (AdaCOSA) and online processing. Next, we will provide a detailed description of these two stages."
28
+ },
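To make the definitions above concrete, here is a minimal Python sketch of a synthetic two-stream generator under the stated assumptions: a shared labeling concept P(y|x), differing marginals P(x) (covariate shift), and a drift point in the source only (asynchronous drift). All names and constants are illustrative, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def label(x):
    # Shared concept P(y|x): identical across all streams.
    return int(x[0] + x[1] > 1.0)

def source_sample(t, drift_at=500):
    # Source marginal P(x) shifts at t = drift_at (source-only drift);
    # labels are observed, as in any source stream S_i.
    mean = 0.0 if t < drift_at else 2.0
    x = rng.normal(mean, 1.0, size=2)
    return x, label(x)

def target_sample(t):
    # Target marginal differs from the source (covariate shift);
    # labels exist conceptually but are never observed online.
    return rng.normal(1.0, 1.5, size=2)
```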
29
+ {
30
+ "section_id": "3.2",
31
+ "parent_section_id": "3",
32
+ "section_name": "Adaptive Covariate Shift Adaptation (AdaCOSA)",
33
+ "text": "To align the covariate shift as well as to explore the temporal dynamic relationship between source and target streams, we propose an AdaCOSA algorithm. Inspired by the CORrelation ALignment (CORAL) method (Sun, Feng, and Saenko 2016 ###reference_26###), the covariance between shifting domains can be aligned by minimizing the distance between the second-order statistics, which provides a stable and effective solution. However, the standard CORAL method is incapable of identifying source instances that are irrelevant to the target, thereby leading to negative transfer effects (Wang et al. 2019 ###reference_30###; Yang et al. 2021 ###reference_33###). Furthermore, it fails to address the dynamic relationship between the data streams. As a solution, we propose an adaptive re-weighting strategy to dynamically and iteratively adjust the weights of the source data based on their relevance to the target domain.\nSpecifically, given any archived source data batch and the target data batch , we first assign a correlation weight vector to each source stream, where is the instance number of each archived data batch. Then we can align the shifting covariance by mapping each weighted source data to the target domain using a transformation matrix , and the objective can be formulated as,\nwhere is the Frobenius norm. and are the covariance matrices of and , respectively. is the covariance matrix of transformed source features , and\nThen the aligned source data can be obtained by the classical whitening and re-coloring strategy (Sun, Feng, and Saenko 2016 ###reference_26###) (Please refer to Supplementary S1 for the detailed theoretical analysis),\nNext, we use a supervised method to train the source classifiers using raw source data . In addition, the covariate-adopted target classifiers can be learned by using the transformed .\nFinally, we can employ an average ensemble that combines models derived from each original source space with those learned in the target space to re-evaluate the source data iteratively.\nOnce the predicted label is obtained, it can be used to re-estimate the correlation weights of the source instances because it contains reliable responses from the target domain. In each iteration, if the source instance is predicted mistakenly, this instance may likely conflict with the target stream. Then the effect of this irrelevant data will be diminished in the next iteration by decreasing its training weight.\nIn contrast, accurate predictions indicate a minimal distance or positive correlation between the source and target domains, resulting in increased training weights to enhance learning. Here, the weight can be updated by,\nwhere is a hyper-parameter defined as . is the total number of samples of the archived data batch , and is the maximum iterations for adaptive re-weighting.\nAfter several iterations, the instances that exhibit a positive correlation with the target stream will be assigned higher training weights, whereas the training instances that diverge from the target stream will receive lower weights. The detailed process is presented in Algorithm 1 ###reference_###. After that, the weight of each target base classifier can be assigned based on the learned correlation weight and it is calculated by . Therefore, the final ensemble for the target stream can be formulated as follows:"
34
+ },
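A rough, self-contained sketch of the AdaCOSA loop as reconstructed above. scikit-learn's LogisticRegression stands in for the unspecified base learner, and the TrAdaBoost-style rate eta and the final weight normalization are assumptions rather than the paper's exact formulas.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def coral_align(Xs, Xt, w, eps=1e-6):
    """Whiten the weighted source batch, then re-color it with the
    target covariance (classical CORAL closed form)."""
    def mat_pow(C, p):
        vals, vecs = np.linalg.eigh(C)
        return vecs @ np.diag(np.maximum(vals, eps) ** p) @ vecs.T
    Xw = Xs * w[:, None]
    Cs = np.cov(Xw, rowvar=False) + eps * np.eye(Xs.shape[1])
    Ct = np.cov(Xt, rowvar=False) + eps * np.eye(Xt.shape[1])
    return Xw @ mat_pow(Cs, -0.5) @ mat_pow(Ct, 0.5)

def adacosa(Xs, ys, Xt, iters=10):
    m = len(Xs)
    w = np.ones(m)
    eta = 1.0 / (1.0 + np.sqrt(2.0 * np.log(m) / iters))  # assumed rate
    for _ in range(iters):
        Xs_hat = coral_align(Xs, Xt, w)
        f_src = LogisticRegression().fit(Xs, ys)                       # source space
        f_tgt = LogisticRegression().fit(Xs_hat, ys, sample_weight=w)  # target space
        # Average ensemble re-evaluates the source batch.
        proba = (f_src.predict_proba(Xs) + f_tgt.predict_proba(Xs_hat)) / 2
        y_hat = proba.argmax(axis=1)
        # Boost instances the ensemble gets right, damp conflicting ones.
        w = np.where(y_hat == ys, w / eta, w * eta)
        w *= m / w.sum()  # keep the total weight mass constant (assumption)
    return w, f_tgt
```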
35
+ {
36
+ "section_id": "3.3",
37
+ "parent_section_id": "3",
38
+ "section_name": "Online Detection and Adaptation",
39
+ "text": "As stated in Challenge 3, asynchronous concept drifts may occur in either the source or target streams over time. Therefore, for any given stream, it is necessary to continuously monitor its drifting situation in real-time and promptly perform drift adaptation to accommodate the new concept."
40
+ },
41
+ {
42
+ "section_id": "3.3.1",
43
+ "parent_section_id": "3.3",
44
+ "section_name": "Source Stream Processing.",
45
+ "text": "For scenarios involving source drift, existing supervised drift detectors such as DDM can be employed, which offers more accurate drift detection because of the leveraging of labels. As a new source sample arrives, the source classifier predicts its label, and then the drift detector is updated based on the prediction error. If no drift is detected, we will incrementally train the target classifier using the weighted mapped with its corresponding weight . Since we have obtained the optimal weights for the archived data batch during the initialization stage, we can retrieve the most relevant data from the archived data batch and assign its weights to the new coming data by indexing the minimum distance between new coming and archived data instances.\nHowever, once a drift is detected within each source stream, an adaptation module should be deployed to handle new concepts. Here, we utilize the GMM to evaluate the distributions of the old and new concepts. GMM assumes several mixture components can model all real-world data, and it is formulated as follows:\nwhere represents the total number of Gaussians or mixture components, and is the observed multivariate. is a weight that is determined by the observations that constitute , and . represents the likelihood of observation being assigned to mixture component . It can be calculated by using the mean and the covariance of each mixture component :\nAccording to the Expectation-Maximization (EM) algorithm, all the parameters of different mixture components are randomly initialized using the archived data batch . Subsequently, it iteratively adjusts the mean and covariance of the mixture component to maximize the likelihood of each mixture component. For a newly incoming instance , its importance weight can be calculated by maximizing the conditional probability of GMM as follows:\nThen, the new coming concept in any source stream can be adapted to the old concept by multiplying . Thus, its optimal correlation weight with the target stream can also be obtained from the learned . Finally, a new target base classifier will be created and trained by using weighted mapped with its corresponding weight . Note that old base classifiers are no longer trained with new samples but are instead preserved within a base classifier pool denoted as , allowing for their retention. Finally, the joint predictive probability can be ensembled as,\nwhere is the weight of -th classifer in , and ."
46
+ },
47
+ {
48
+ "section_id": "3.3.2",
49
+ "parent_section_id": "3.3",
50
+ "section_name": "Target Stream Processing.",
51
+ "text": "To detect the drift in the target stream without utilizing labels, we use the archived target data batch to initialize a GMM model and deploy two sliding windows to detect the changes over time.\nSpecifically, we design two sliding windows, i.e., Reference Window and Detect Window , where is the instance number within the window and it is set as . Then, the average conditional probability of the reference window can be calculated by a point estimation of the mean for the normal distribution,\nThe confidence interval estimation of the is known to be , where is the standard deviation and is the significance level which is set as 3 (Kim and Park 2017 ###reference_14###).\nThe decision is made that the change has occurred when the point estimation by the mean in the detection window satisfies,\nOtherwise, and move step by step to receive new incoming data, i.e., and .\nOnce a change is detected, the historical base classifier becomes ineffective for classifying target samples. Consequently, all base classifiers are eliminated from the classifier pool, and the model undergoes re-initialization to adapt to the new concepts. The learning process is summarized in Algorithm 2 ###reference_###."
52
+ },
53
+ {
54
+ "section_id": "4",
55
+ "parent_section_id": null,
56
+ "section_name": "Experiments",
57
+ "text": "In the experiment, we first empirically demonstrated that OBAL consistently outperforms current methods, highlighting both robustness and superiority. Second, we validated the substantial impact of dynamic inter-stream relationships on prediction, emphasizing the effectiveness of the AdaCOSA by ablation study. Additionally, we confirmed OBAL\u2019s scalability across various data streams, validating its consistent predictive performance. Finally, we assessed parameter sensitivity, time complexity, and algorithmic cost."
58
+ },
59
+ {
60
+ "section_id": "4.1",
61
+ "parent_section_id": "4",
62
+ "section_name": "Experiment Settings",
63
+ "text": ""
64
+ },
65
+ {
66
+ "section_id": "4.1.1",
67
+ "parent_section_id": "4.1",
68
+ "section_name": "Benchmark Datasets.",
69
+ "text": "We conduct the experiment on four synthetic datasets (i.e., SEA (Street and Kim 2001 ###reference_25###), Tree (Liu, Lu, and Zhang 2020 ###reference_16###), RBF (Song et al. 2021a ###reference_23###), and Hyperplane (Bifet and Gavalda 2007 ###reference_1###) ) and four popular real-world datasets (Weather (Ditzler and Polikar 2012 ###reference_5###), Kitti (Geiger, Lenz, and Urtasun 2012 ###reference_8###), CNNIBN (Vyas et al. 2014 ###reference_27###), and BBC (Vyas et al. 2014 ###reference_27###)), and more detailed descriptions of each dataset and multistream scenario simulation can be found in Supplementary S3 and Table S1."
70
+ },
71
+ {
72
+ "section_id": "4.1.2",
73
+ "parent_section_id": "4.1",
74
+ "section_name": "Baselines.",
75
+ "text": "To demonstrate the superiority of our proposed method, we conducted experiments comparing it with five state-of-the-art methods. Among them, the FUSION (Haque et al. 2017 ###reference_10###) and ATL (Pratama et al. 2019 ###reference_21###) algorithms are based on single-source streams, while the Melanie (Du, Minku, and Zhou 2019 ###reference_6###), AOMSDA (Renchunzi and Pratama 2022 ###reference_22###), and MCMO (Jiao et al. 2022 ###reference_12###) are\nspecifically designed for the multi-source classification scenario. For FUSION and ATL, we pair each source stream with the target stream, resulting in three distinct groups denoted as {FUSIONs1, FUSIONs2, and FUSIONs3} and {ATLs1, ATLs2, ATLs3}, respectively."
76
+ },
77
+ {
78
+ "section_id": "4.2",
79
+ "parent_section_id": "4",
80
+ "section_name": "Results Analysis",
81
+ "text": ""
82
+ },
83
+ {
84
+ "section_id": "4.2.1",
85
+ "parent_section_id": "4.2",
86
+ "section_name": "Overall Performance.",
87
+ "text": "Table 1 ###reference_### compares the classification accuracy of OBAL against all baselines on four synthetic and four real-world datasets. Overall, OBAL outperforms all other unsupervised multistream classification methods on both synthetic and real-world datasets, while it performs better than the supervised method (Melanie) on six out of eight datasets. First, compared to single-source-based methods (Fusion and ATL), all multi-source-based methods demonstrate significant improvement. This proves that multiple labeled source streams can provide more discriminative and complementary information, resulting in more accurate and robust predictions. Compared with Melanie, OBAL performs remarkably close to or even surpasses without considering the target labels.\nThis is because we not only mitigate the covariate shift but also adaptively adjust sample weights based on the feedback from the target domain. This effectively avoids negative transfer from irrelevant data, thereby ensuring better prediction accuracy. Although AOMSDA and MCMO also consider exploiting the complementary information among multiple source data streams, they ignore the underlying correlation between various streams. In contrast, OBAL employs an adaptive re-weighting approach to iteratively decrease the weights of negative transfer samples and strengthen the weights of positive transfer samples based on the predictive feedback from the target domain. As a result, OBAL achieves the best predictive performance.\n###figure_1### ###figure_2###"
88
+ },
89
+ {
90
+ "section_id": "4.2.2",
91
+ "parent_section_id": "4.2",
92
+ "section_name": "Ablation Study.",
93
+ "text": "To validate the rationality of each component and its impact on the overall classification results, we designed three variants of OBAL. As shown in Table 2 ###reference_###, OBAL as a baseline design does not consider the synchronous drift and covariate shift adaptations. In this situation, each stream is assigned a base classifier and it is updated incrementally. Thus, the performance of OBAL is the worst, and significantly lower than that of OBAL on all datasets. This phenomenon highlights the crucial role of concept drift adaptation in dynamic environment learning. OBAL considers the synchronous drift in each stream while ignoring the covariate shift alignment.\nOBAL further employs the traditional CORAL strategy to align the covariate shift which does not explore the dynamic correlation.\nBy comparing OBAL and OBAL, it can be seen aligning the covariate shift can effectively enhance the performance of the target prediction. Furthermore, the final OBAL highlights the significance of appropriate weights in mitigating the influence of irrelevant source samples and effectively addressing the problem of covariance shift."
94
+ },
95
+ {
96
+ "section_id": "4.2.3",
97
+ "parent_section_id": "4.2",
98
+ "section_name": "Influence of Source Numbers.",
99
+ "text": "In this section, we examine the impact of the number of source streams. To ensure a fair comparison with a fixed target stream, we initially sample seven streams from all datasets and vary the number of source streams. Specifically, we evaluate the performance of OBAL using 1, 3, 5, and 7 source streams, respectively.\nThis experiment first investigates whether using multi-source streams improves predictive capability compared to a single-source stream. From Figure 2 ###reference_###, we can observe that the performance of multi-source streams outperforms single-stream performance on all datasets. This indicates that multi-source streams can provide additional complementary information to enhance predictive performance.\nHowever, as the number of source streams increases, there may be a decline in performance. For example, the performance with five source streams is better than that with seven source streams on the Tree dataset. This may be because as the number of source streams increases, the complexity of the model also increases, which affects its performance. Overall, the performance of OBAL is stable across various sources, which demonstrates that our proposed method can easily adapt to different numbers of data streams."
100
+ },
101
+ {
102
+ "section_id": "4.2.4",
103
+ "parent_section_id": "4.2",
104
+ "section_name": "Parameter Sensitivity.",
105
+ "text": "In the proposed OBAL, there are three main parameters affecting the classification performance, including the window size of the initialization stage , the re-weighting steps , and the maximum classifier pool size . To analyze their impact on the overall performance, we carry out experiments under various values of all parameters on all datasets. Here, we set , and . During the experiment, each parameter is tuned while others are kept fixed, and the various predictive performances are shown in Figure 3 ###reference_###.\nDifferent datasets display varying optimal window sizes due to their unique drift frequencies and periods. For those with frequent drifts, a larger window might encompass multiple concepts, complicating accurate covariate adaptation. Hence, matching the window size to the dataset\u2019s drift characteristics is crucial for effective prediction. In the re-weighting phase, the optimal number of iterations for most datasets is three. This is because the algorithm tends to overfit during the initialization phase with an increasing number of iterations. Additionally, as the classifier pool size grows, predictive performance generally improves across datasets, underscoring the importance of retaining historical data. However, after a certain threshold, this performance enhancement plateaus. Detailed parameter settings are shown in Table S2 in the supplementary."
106
+ },
107
+ {
108
+ "section_id": "4.2.5",
109
+ "parent_section_id": "4.2",
110
+ "section_name": "Time Complexity and Execution Time.",
111
+ "text": "As detailed in Supplementary S4, we analyze the time complexity of OBAL, where the overall complexity is given by . Since and are both quite small, the complexity of OBAL primarily depends on the size of . Therefore, we can adjust the value of to execute OBAL efficiently within the available resources. Moreover, Table S3 in the Supplementary compares execution times, revealing that OBAL ranks second after Melanie, underscoring its competitive runtime."
112
+ },
113
+ {
114
+ "section_id": "5",
115
+ "parent_section_id": null,
116
+ "section_name": "Conclusion",
117
+ "text": "In this work, we have addressed a significant gap in multistream classification, where the dynamic relationships between streams have largely been overlooked. This oversight can often result in the issue of negative transfer stemming from irrelevant data. To overcome this challenge, we introduced the Online Boosting Adaptive Learning (OBAL) method, coupled with the proposed AdaCOSA algorithm, effectively exploring the dynamic correlation among various streams. The experiments performed on several synthetic and real-world data streams have shown that our method effectively navigates the dynamic correlations between streams, mitigates covariate shifts, and adeptly handles asynchronous drift using a GMM-based weighting mechanism. The insights gained from this study not only advance the field of multistream classification but also provide a promising direction for future research in adaptive learning across various dynamic data environments."
118
+ },
119
+ {
120
+ "section_id": "6",
121
+ "parent_section_id": null,
122
+ "section_name": "Acknowledgments",
123
+ "text": "The work presented in this paper was supported by the Australian Research Council (ARC) under Laureate project FL190100149 and discovery project DP200100700."
124
+ }
125
+ ],
126
+ "appendix": [
127
+ {
128
+ "section_id": "Appendix 1",
129
+ "parent_section_id": null,
130
+ "section_name": "Appendix A Supplementary",
131
+ "text": "To derive the solution for Eq.1 presented in this paper, we invoke the subsequent lemma.\n(Cai, Cand\u00e8s, and Shen 2010 ###reference_2###) Let be a real matrix of rank and be a real matrix of rank at most , where . let be the SVD of , and be the largest singular values and the corresponding left and right singular vectors of respectively. Then is the optimal solution to the problem of .\n(Sun, Feng, and Saenko 2016 ###reference_26###) Let be the Moore-Penrose pseudoinverse of and denote the rank of and respectively. Then, is the optimal solution of Eq.1 with .\nProof. Since is a linear transformation, will not increase the rank of , i.e., . Conducting SVD on and , we can get and , respectively. In order to get the optimal value of , we consider the following two cases:\ncase 1: . The optimal solution is . Thus, is the optimal solution of Eq.1 with .\ncase 2: . Then, according to Lemma 1 ###reference_a1###, is the optimal solution of Eq 1 where .\nTherefore, the optimal solution of Eq.1 can be derived as with . Then, to obtain based on the above analysis, let and we can get:\nSince , we have\nIt can be re-written as\nAssuming , then the right side of the above equation can be simplified as . This gives\nTherefore, we can get , and the optimal solution of can be calculated by\nFinally, as analyzed in (Sun, Feng, and Saenko 2016 ###reference_26###), the first part whitens the source data while the second part re-colors it with the target covariance.\nTo provide a clearer demonstration of the OBAL\u2019s learning process, we present a more detailed algorithmic procedure in Algorithm S1 ###reference_###.\nSEA (Street and Kim 2001 ###reference_25###) is a synthetic dataset with two classes consisting of abrupt and recurring drifts. There are three features and the feature\u2019s values range from 0 to 10. When , the data belongs to class 1. Here, and represent the first and second features, respectively. And denotes the threshold for binary classification, which changes from .\nTree (Liu, Lu, and Zhang 2020 ###reference_16###) is generated based on a tree structure where features are randomly split, and labels are assigned to the tree leaves. Each attribute is assigned a random value from a uniform distribution to create a new sample, while new concepts are generated by constructing new trees.\nRBF (Song et al. 2021a ###reference_23###) generator generates data instances using a radial basis function. Centroids are created randomly and assigned a standard deviation value, a weight, and a class label. Incremental drifts are simulated by continuously moving the centroids.\nHyperplane (Bifet and Gavalda 2007 ###reference_1###) is also a synthetic dataset based\non a rotating hyperplane explained in (Hulten, Spencer, and Domingos 2001 ###reference_11###). Positive labels are assigned to examples where , while negative labels are assigned to examples where . Concept drifts can be simulated by adjusting the relative weights.\nWeather (Ditzler and Polikar 2012 ###reference_5###) is a real-world dataset, which pertains to the task of one-step-ahead prediction for determining the occurrence of rainfall. It encompasses weather data spanning a period of 50 years, capturing both the annual seasonal variations and the long-term climate changes.\nKitti (Geiger, Lenz, and Urtasun 2012 ###reference_8###) presents a real-world computer vision challenge that stems from the autonomous driving scenario. 
The primary objective is to accomplish 3D object detection, employing two high-resolution video cameras\u2014one capturing color images and the other grayscale images\u2014to capture the objects of interest.\nTV News Channel Commercial Detection Dataset111https://archive.ics.uci.edu/dataset/326/tv+news+channel \n+commercial+detection+dataset.\n(Vyas et al. 2014 ###reference_27###) is a real-world multistream dataset. It comprises of prominent audio-visual features collected from 150 hours of television news broadcasts, including 30 hours each from five news channels (i.e., BBC, CNNIB, CNN, NDTV, and TIMESNOW). All the video shots are recorded in a sequential way and used for commercial or non-commercial detection. In this paper, we designate CNNIBN and BCC as the target streams, while treating the remaining streams as source streams to simulate a multistream classification task. Each individual data stream comprises 30,000 samples, thus providing two substantial benchmarks (CNNIBN and BBC) for analysis and evaluation.\nSpecifically, the original dataset is multimodal and contains 5 sets of video features (i.e., video shot length, screen text distribution, motion distribution, frame difference distribution, and edge change ratio) and 7 sets of audio features (i.e., short-term energy, zero crossing rate, spectral centroid, spectral flux, spectral roll-off frequency, fundamental frequency and bag of audio words), totally for 4125 dimensions. In this experiment, we remove the bag of audio words feature and just use the other 11 sets of features. In addition, to retain as much of the original data as possible, we re-sampled all data streams to 30,000 samples.\nTo simulate the multistream classification scenario, we first sort all samples in descending order according to the probability of each sample in a Gaussian distribution, which induces the problem of covariate shift. And then the construction of source streams follows a sequential order, with the first source stream being built upon the top samples, followed by the second source stream, the third source stream, and so on up to -th source stream. The remaining data samples are then assigned to the target stream. All samples selected in each stream will be recovered to the original chronological order to maintain the raw temporal relationship (i.e., Asynchronous drift). Only source streams exclusively consist of labels, whereas the target stream lacks labels, resulting in the scarcity of labels problem.\n###figure_3###"
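The stream-construction recipe just described is easy to reproduce. The sketch below ranks all samples by their density under a single Gaussian fitted to the pooled data (inducing covariate shift between consecutive blocks), slices the ranked blocks into labeled source streams plus one unlabeled target stream, and restores chronological order within each stream; all names are illustrative and the single-Gaussian density is an assumption.

```python
import numpy as np
from scipy.stats import multivariate_normal

def make_multistream(X, y, n_sources):
    """Split (X, y) into n_sources labeled source streams and one
    unlabeled target stream with covariate shift between them."""
    mvn = multivariate_normal(mean=X.mean(axis=0), cov=np.cov(X.T),
                              allow_singular=True)
    order = np.argsort(-mvn.pdf(X))                       # descending density rank
    block = len(X) // (n_sources + 1)
    sources = []
    for s in range(n_sources):
        idx = np.sort(order[s * block:(s + 1) * block])   # chronological again
        sources.append((X[idx], y[idx]))                  # labeled source stream
    idx = np.sort(order[n_sources * block:])
    target = (X[idx], None)                               # labels withheld
    return sources, target
```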
132
+ }
133
+ ],
134
+ "tables": {
135
+ "1": {
136
+ "table_html": "<figure class=\"ltx_table\" id=\"Sx3.T1\">\n<table class=\"ltx_tabular ltx_centering ltx_guessed_headers ltx_align_middle\" id=\"Sx3.T1.80\">\n<thead class=\"ltx_thead\">\n<tr class=\"ltx_tr\" id=\"Sx3.T1.80.81.1\">\n<th class=\"ltx_td ltx_th ltx_th_row ltx_border_tt\" id=\"Sx3.T1.80.81.1.1\"></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"Sx3.T1.80.81.1.2\">SEA</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"Sx3.T1.80.81.1.3\">Tree</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"Sx3.T1.80.81.1.4\">RBF</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"Sx3.T1.80.81.1.5\">Hyperplane</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"Sx3.T1.80.81.1.6\">Weather</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"Sx3.T1.80.81.1.7\">Kitti</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"Sx3.T1.80.81.1.8\">CNNIBN</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"Sx3.T1.80.81.1.9\">BBC</th>\n</tr>\n</thead>\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"Sx3.T1.8.8\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_t\" id=\"Sx3.T1.8.8.9\">FUSIONs1</th>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"Sx3.T1.1.1.1\">85.04\n</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"Sx3.T1.2.2.2\">76.98\n</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"Sx3.T1.3.3.3\">82.03\n</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"Sx3.T1.4.4.4\">83.29\n</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"Sx3.T1.5.5.5\">71.04\n</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"Sx3.T1.6.6.6\">54.21\n</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"Sx3.T1.7.7.7\">66.76\n</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"Sx3.T1.8.8.8\">61.76\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"Sx3.T1.16.16\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"Sx3.T1.16.16.9\">FUSIONs2</th>\n<td class=\"ltx_td ltx_align_center\" id=\"Sx3.T1.9.9.1\">85.78\n</td>\n<td class=\"ltx_td ltx_align_center\" id=\"Sx3.T1.10.10.2\">76.74\n</td>\n<td class=\"ltx_td ltx_align_center\" id=\"Sx3.T1.11.11.3\">83.46\n</td>\n<td class=\"ltx_td ltx_align_center\" id=\"Sx3.T1.12.12.4\">84.05\n</td>\n<td class=\"ltx_td ltx_align_center\" id=\"Sx3.T1.13.13.5\">70.65\n</td>\n<td class=\"ltx_td ltx_align_center\" id=\"Sx3.T1.14.14.6\">52.36\n</td>\n<td class=\"ltx_td ltx_align_center\" id=\"Sx3.T1.15.15.7\">67.54\n</td>\n<td class=\"ltx_td ltx_align_center\" id=\"Sx3.T1.16.16.8\">61.26\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"Sx3.T1.24.24\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"Sx3.T1.24.24.9\">FUSIONs3</th>\n<td class=\"ltx_td ltx_align_center\" id=\"Sx3.T1.17.17.1\">84.31\n</td>\n<td class=\"ltx_td ltx_align_center\" id=\"Sx3.T1.18.18.2\">75.21\n</td>\n<td class=\"ltx_td ltx_align_center\" id=\"Sx3.T1.19.19.3\">81.03\n</td>\n<td class=\"ltx_td ltx_align_center\" id=\"Sx3.T1.20.20.4\">82.17\n</td>\n<td class=\"ltx_td ltx_align_center\" id=\"Sx3.T1.21.21.5\">72.17\n</td>\n<td class=\"ltx_td ltx_align_center\" id=\"Sx3.T1.22.22.6\">50.38\n</td>\n<td class=\"ltx_td ltx_align_center\" id=\"Sx3.T1.23.23.7\">65.34\n</td>\n<td class=\"ltx_td ltx_align_center\" id=\"Sx3.T1.24.24.8\">59.86\n</td>\n</tr>\n<tr class=\"ltx_tr\" 
id=\"Sx3.T1.32.32\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"Sx3.T1.32.32.9\">ATLs1</th>\n<td class=\"ltx_td ltx_align_center\" id=\"Sx3.T1.25.25.1\">88.42\n</td>\n<td class=\"ltx_td ltx_align_center\" id=\"Sx3.T1.26.26.2\">76.43\n</td>\n<td class=\"ltx_td ltx_align_center\" id=\"Sx3.T1.27.27.3\">84.53\n</td>\n<td class=\"ltx_td ltx_align_center\" id=\"Sx3.T1.28.28.4\">86.17\n</td>\n<td class=\"ltx_td ltx_align_center\" id=\"Sx3.T1.29.29.5\">74.57\n</td>\n<td class=\"ltx_td ltx_align_center\" id=\"Sx3.T1.30.30.6\">52.78\n</td>\n<td class=\"ltx_td ltx_align_center\" id=\"Sx3.T1.31.31.7\">62.78\n</td>\n<td class=\"ltx_td ltx_align_center\" id=\"Sx3.T1.32.32.8\">62.78\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"Sx3.T1.40.40\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"Sx3.T1.40.40.9\">ATLs2</th>\n<td class=\"ltx_td ltx_align_center\" id=\"Sx3.T1.33.33.1\">88.74\n</td>\n<td class=\"ltx_td ltx_align_center\" id=\"Sx3.T1.34.34.2\">76.71\n</td>\n<td class=\"ltx_td ltx_align_center\" id=\"Sx3.T1.35.35.3\">85.21\n</td>\n<td class=\"ltx_td ltx_align_center\" id=\"Sx3.T1.36.36.4\">87.07\n</td>\n<td class=\"ltx_td ltx_align_center\" id=\"Sx3.T1.37.37.5\">75.03\n</td>\n<td class=\"ltx_td ltx_align_center\" id=\"Sx3.T1.38.38.6\">54.01\n</td>\n<td class=\"ltx_td ltx_align_center\" id=\"Sx3.T1.39.39.7\">65.74\n</td>\n<td class=\"ltx_td ltx_align_center\" id=\"Sx3.T1.40.40.8\">62.34\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"Sx3.T1.48.48\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"Sx3.T1.48.48.9\">ATLs3</th>\n<td class=\"ltx_td ltx_align_center\" id=\"Sx3.T1.41.41.1\">87.62\n</td>\n<td class=\"ltx_td ltx_align_center\" id=\"Sx3.T1.42.42.2\">76.07\n</td>\n<td class=\"ltx_td ltx_align_center\" id=\"Sx3.T1.43.43.3\">83.16\n</td>\n<td class=\"ltx_td ltx_align_center\" id=\"Sx3.T1.44.44.4\">86.01\n</td>\n<td class=\"ltx_td ltx_align_center\" id=\"Sx3.T1.45.45.5\">74.62\n</td>\n<td class=\"ltx_td ltx_align_center\" id=\"Sx3.T1.46.46.6\">53.26\n</td>\n<td class=\"ltx_td ltx_align_center\" id=\"Sx3.T1.47.47.7\">62.65\n</td>\n<td class=\"ltx_td ltx_align_center\" id=\"Sx3.T1.48.48.8\">60.76\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"Sx3.T1.56.56\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"Sx3.T1.56.56.9\">Melanie</th>\n<td class=\"ltx_td ltx_align_center\" id=\"Sx3.T1.49.49.1\">89.18\n</td>\n<td class=\"ltx_td ltx_align_center\" id=\"Sx3.T1.50.50.2\"><span class=\"ltx_text ltx_font_bold\" id=\"Sx3.T1.50.50.2.1\">78.93</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"Sx3.T1.51.51.3\">86.04\n</td>\n<td class=\"ltx_td ltx_align_center\" id=\"Sx3.T1.52.52.4\">86.38\n</td>\n<td class=\"ltx_td ltx_align_center\" id=\"Sx3.T1.53.53.5\">77.74\n</td>\n<td class=\"ltx_td ltx_align_center\" id=\"Sx3.T1.54.54.6\">50.29\n</td>\n<td class=\"ltx_td ltx_align_center\" id=\"Sx3.T1.55.55.7\">68.79\n</td>\n<td class=\"ltx_td ltx_align_center\" id=\"Sx3.T1.56.56.8\"><span class=\"ltx_text ltx_font_bold\" id=\"Sx3.T1.56.56.8.1\">68.04</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"Sx3.T1.64.64\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"Sx3.T1.64.64.9\">AOMSDA</th>\n<td class=\"ltx_td ltx_align_center\" id=\"Sx3.T1.57.57.1\">90.23\n</td>\n<td class=\"ltx_td ltx_align_center\" id=\"Sx3.T1.58.58.2\">76.87\n</td>\n<td class=\"ltx_td ltx_align_center\" id=\"Sx3.T1.59.59.3\">85.26\n</td>\n<td class=\"ltx_td ltx_align_center\" id=\"Sx3.T1.60.60.4\">87.66\n</td>\n<td class=\"ltx_td ltx_align_center\" id=\"Sx3.T1.61.61.5\">76.55\n</td>\n<td 
class=\"ltx_td ltx_align_center\" id=\"Sx3.T1.62.62.6\">67.79\n</td>\n<td class=\"ltx_td ltx_align_center\" id=\"Sx3.T1.63.63.7\">69.07\n</td>\n<td class=\"ltx_td ltx_align_center\" id=\"Sx3.T1.64.64.8\">63.36\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"Sx3.T1.72.72\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"Sx3.T1.72.72.9\">MCMO</th>\n<td class=\"ltx_td ltx_align_center\" id=\"Sx3.T1.65.65.1\">87.46\n</td>\n<td class=\"ltx_td ltx_align_center\" id=\"Sx3.T1.66.66.2\">77.64\n</td>\n<td class=\"ltx_td ltx_align_center\" id=\"Sx3.T1.67.67.3\">86.26\n</td>\n<td class=\"ltx_td ltx_align_center\" id=\"Sx3.T1.68.68.4\">84.04\n</td>\n<td class=\"ltx_td ltx_align_center\" id=\"Sx3.T1.69.69.5\">76.02\n</td>\n<td class=\"ltx_td ltx_align_center\" id=\"Sx3.T1.70.70.6\">64.82\n</td>\n<td class=\"ltx_td ltx_align_center\" id=\"Sx3.T1.71.71.7\">68.83\n</td>\n<td class=\"ltx_td ltx_align_center\" id=\"Sx3.T1.72.72.8\">60.12\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"Sx3.T1.80.80\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_bb\" id=\"Sx3.T1.80.80.9\">OBAL (ours)</th>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"Sx3.T1.73.73.1\"><span class=\"ltx_text ltx_font_bold\" id=\"Sx3.T1.73.73.1.1\">90.98</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"Sx3.T1.74.74.2\">78.45\n</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"Sx3.T1.75.75.3\"><span class=\"ltx_text ltx_font_bold\" id=\"Sx3.T1.75.75.3.1\">86.78</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"Sx3.T1.76.76.4\"><span class=\"ltx_text ltx_font_bold\" id=\"Sx3.T1.76.76.4.1\">88.01</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"Sx3.T1.77.77.5\"><span class=\"ltx_text ltx_font_bold\" id=\"Sx3.T1.77.77.5.1\">79.22</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"Sx3.T1.78.78.6\"><span class=\"ltx_text ltx_font_bold\" id=\"Sx3.T1.78.78.6.1\">70.29</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"Sx3.T1.79.79.7\"><span class=\"ltx_text ltx_font_bold\" id=\"Sx3.T1.79.79.7.1\">70.71</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"Sx3.T1.80.80.8\">66.43\n</td>\n</tr>\n</tbody>\n</table>\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\">Table 1: </span>Classification accuracy (%) with the variance of various methods on all benchmarks.</figcaption>\n</figure>",
137
+ "capture": "Table 1: Classification accuracy (%) with the variance of various methods on all benchmarks."
138
+ },
139
+ "2": {
140
+ "table_html": "<figure class=\"ltx_table\" id=\"Sx4.T2\">\n<table class=\"ltx_tabular ltx_centering ltx_guessed_headers ltx_align_middle\" id=\"Sx4.T2.3\">\n<thead class=\"ltx_thead\">\n<tr class=\"ltx_tr\" id=\"Sx4.T2.3.3\">\n<th class=\"ltx_td ltx_th ltx_th_row ltx_border_tt\" id=\"Sx4.T2.3.3.4\"></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"Sx4.T2.1.1.1\">OBAL\n</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"Sx4.T2.2.2.2\">OBAL\n</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"Sx4.T2.3.3.3\">OBAL\n</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"Sx4.T2.3.3.5\">OBAL</th>\n</tr>\n</thead>\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"Sx4.T2.3.4.1\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_t\" id=\"Sx4.T2.3.4.1.1\">SEA</th>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"Sx4.T2.3.4.1.2\">79.54</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"Sx4.T2.3.4.1.3\">82.48</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"Sx4.T2.3.4.1.4\">88.76</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"Sx4.T2.3.4.1.5\"><span class=\"ltx_text ltx_font_bold\" id=\"Sx4.T2.3.4.1.5.1\">90.98</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"Sx4.T2.3.5.2\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"Sx4.T2.3.5.2.1\">Tree</th>\n<td class=\"ltx_td ltx_align_center\" id=\"Sx4.T2.3.5.2.2\">72.42</td>\n<td class=\"ltx_td ltx_align_center\" id=\"Sx4.T2.3.5.2.3\">74.74</td>\n<td class=\"ltx_td ltx_align_center\" id=\"Sx4.T2.3.5.2.4\">77.01</td>\n<td class=\"ltx_td ltx_align_center\" id=\"Sx4.T2.3.5.2.5\"><span class=\"ltx_text ltx_font_bold\" id=\"Sx4.T2.3.5.2.5.1\">78.45</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"Sx4.T2.3.6.3\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"Sx4.T2.3.6.3.1\">RBF</th>\n<td class=\"ltx_td ltx_align_center\" id=\"Sx4.T2.3.6.3.2\">79.78</td>\n<td class=\"ltx_td ltx_align_center\" id=\"Sx4.T2.3.6.3.3\">81.41</td>\n<td class=\"ltx_td ltx_align_center\" id=\"Sx4.T2.3.6.3.4\">84.23</td>\n<td class=\"ltx_td ltx_align_center\" id=\"Sx4.T2.3.6.3.5\"><span class=\"ltx_text ltx_font_bold\" id=\"Sx4.T2.3.6.3.5.1\">86.78</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"Sx4.T2.3.7.4\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"Sx4.T2.3.7.4.1\">Hyperplane</th>\n<td class=\"ltx_td ltx_align_center\" id=\"Sx4.T2.3.7.4.2\">81.42</td>\n<td class=\"ltx_td ltx_align_center\" id=\"Sx4.T2.3.7.4.3\">82.35</td>\n<td class=\"ltx_td ltx_align_center\" id=\"Sx4.T2.3.7.4.4\">86.34</td>\n<td class=\"ltx_td ltx_align_center\" id=\"Sx4.T2.3.7.4.5\"><span class=\"ltx_text ltx_font_bold\" id=\"Sx4.T2.3.7.4.5.1\">88.01</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"Sx4.T2.3.8.5\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"Sx4.T2.3.8.5.1\">Weather</th>\n<td class=\"ltx_td ltx_align_center\" id=\"Sx4.T2.3.8.5.2\">72.25</td>\n<td class=\"ltx_td ltx_align_center\" id=\"Sx4.T2.3.8.5.3\">74.18</td>\n<td class=\"ltx_td ltx_align_center\" id=\"Sx4.T2.3.8.5.4\">77.43</td>\n<td class=\"ltx_td ltx_align_center\" id=\"Sx4.T2.3.8.5.5\"><span class=\"ltx_text ltx_font_bold\" id=\"Sx4.T2.3.8.5.5.1\">79.22</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"Sx4.T2.3.9.6\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"Sx4.T2.3.9.6.1\">Kitti</th>\n<td class=\"ltx_td ltx_align_center\" id=\"Sx4.T2.3.9.6.2\">62.14</td>\n<td class=\"ltx_td 
ltx_align_center\" id=\"Sx4.T2.3.9.6.3\">64.09</td>\n<td class=\"ltx_td ltx_align_center\" id=\"Sx4.T2.3.9.6.4\">68.24</td>\n<td class=\"ltx_td ltx_align_center\" id=\"Sx4.T2.3.9.6.5\"><span class=\"ltx_text ltx_font_bold\" id=\"Sx4.T2.3.9.6.5.1\">70.29</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"Sx4.T2.3.10.7\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"Sx4.T2.3.10.7.1\">CNNIBN</th>\n<td class=\"ltx_td ltx_align_center\" id=\"Sx4.T2.3.10.7.2\">63.12</td>\n<td class=\"ltx_td ltx_align_center\" id=\"Sx4.T2.3.10.7.3\">67.44</td>\n<td class=\"ltx_td ltx_align_center\" id=\"Sx4.T2.3.10.7.4\">69.01</td>\n<td class=\"ltx_td ltx_align_center\" id=\"Sx4.T2.3.10.7.5\"><span class=\"ltx_text ltx_font_bold\" id=\"Sx4.T2.3.10.7.5.1\">70.71</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"Sx4.T2.3.11.8\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_bb\" id=\"Sx4.T2.3.11.8.1\">BBC</th>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"Sx4.T2.3.11.8.2\">58.02</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"Sx4.T2.3.11.8.3\">62.77</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"Sx4.T2.3.11.8.4\">64.12</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"Sx4.T2.3.11.8.5\"><span class=\"ltx_text ltx_font_bold\" id=\"Sx4.T2.3.11.8.5.1\">66.43</span></td>\n</tr>\n</tbody>\n</table>\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\">Table 2: </span>Classification accuracy (%) of OBAL variants.</figcaption>\n</figure>",
141
+ "capture": "Table 2: Classification accuracy (%) of OBAL variants."
142
+ },
143
+ "3": {
144
+ "table_html": "<figure class=\"ltx_table\" id=\"A1.T1\">\n<table class=\"ltx_tabular ltx_centering ltx_guessed_headers ltx_align_middle\" id=\"A1.T1.1\">\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"A1.T1.1.1.1\">\n<td class=\"ltx_td ltx_border_t\" id=\"A1.T1.1.1.1.1\"></td>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"A1.T1.1.1.1.2\">Datasets</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"A1.T1.1.1.1.3\">Drift types</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"A1.T1.1.1.1.4\">Type</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"A1.T1.1.1.1.5\">#Instances</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"A1.T1.1.1.1.6\">#Features</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"A1.T1.1.1.1.7\">#Classes</th>\n</tr>\n<tr class=\"ltx_tr\" id=\"A1.T1.1.2.2\">\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"A1.T1.1.2.2.1\" rowspan=\"4\"><span class=\"ltx_text\" id=\"A1.T1.1.2.2.1.1\">Synthetic</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"A1.T1.1.2.2.2\">SEA</td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"A1.T1.1.2.2.3\">Sudden/recurring</td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"A1.T1.1.2.2.4\">Single</td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"A1.T1.1.2.2.5\">25K * 4</td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"A1.T1.1.2.2.6\">3</td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"A1.T1.1.2.2.7\">2</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A1.T1.1.3.3\">\n<td class=\"ltx_td ltx_align_center\" id=\"A1.T1.1.3.3.1\">Tree</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A1.T1.1.3.3.2\">Sudden/gradual</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A1.T1.1.3.3.3\">Single</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A1.T1.1.3.3.4\">5K * 4</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A1.T1.1.3.3.5\">20</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A1.T1.1.3.3.6\">2</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A1.T1.1.4.4\">\n<td class=\"ltx_td ltx_align_center\" id=\"A1.T1.1.4.4.1\">RBF</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A1.T1.1.4.4.2\">Incremental</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A1.T1.1.4.4.3\">Single</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A1.T1.1.4.4.4\">5K * 4</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A1.T1.1.4.4.5\">10</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A1.T1.1.4.4.6\">2</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A1.T1.1.5.5\">\n<td class=\"ltx_td ltx_align_center\" id=\"A1.T1.1.5.5.1\">Hyperplane</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A1.T1.1.5.5.2\">Incremental</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A1.T1.1.5.5.3\">Single</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A1.T1.1.5.5.4\">30K* 4</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A1.T1.1.5.5.5\">4</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A1.T1.1.5.5.6\">2</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A1.T1.1.6.6\">\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_tt\" id=\"A1.T1.1.6.6.1\" rowspan=\"4\"><span class=\"ltx_text\" id=\"A1.T1.1.6.6.1.1\">Real-world</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"A1.T1.1.6.6.2\">Weather</td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"A1.T1.1.6.6.3\">Unknown</td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" 
id=\"A1.T1.1.6.6.4\">Single</td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"A1.T1.1.6.6.5\">4.5K* 4</td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"A1.T1.1.6.6.6\">8</td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"A1.T1.1.6.6.7\">2</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A1.T1.1.7.7\">\n<td class=\"ltx_td ltx_align_center\" id=\"A1.T1.1.7.7.1\">Kitti</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A1.T1.1.7.7.2\">Unknown</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A1.T1.1.7.7.3\">Single</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A1.T1.1.7.7.4\">6.25K * 4</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A1.T1.1.7.7.5\">55</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A1.T1.1.7.7.6\">8</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A1.T1.1.8.8\">\n<td class=\"ltx_td ltx_align_center\" id=\"A1.T1.1.8.8.1\">CNNIBN</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A1.T1.1.8.8.2\">Unknown</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A1.T1.1.8.8.3\">Multistream</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A1.T1.1.8.8.4\">30K * 4</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A1.T1.1.8.8.5\">124</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A1.T1.1.8.8.6\">2</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A1.T1.1.9.9\">\n<td class=\"ltx_td ltx_align_center ltx_border_b\" id=\"A1.T1.1.9.9.1\">BBC</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b\" id=\"A1.T1.1.9.9.2\">Unknown</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b\" id=\"A1.T1.1.9.9.3\">Multistream</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b\" id=\"A1.T1.1.9.9.4\">30K * 4</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b\" id=\"A1.T1.1.9.9.5\">124</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b\" id=\"A1.T1.1.9.9.6\">2</td>\n</tr>\n</tbody>\n</table>\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\">Table S1: </span>Characteristics of all datasets including 3 sources and 1 target stream.</figcaption>\n</figure>",
145
+ "capture": "Table S1: Characteristics of all datasets including 3 sources and 1 target stream."
146
+ },
147
+ "4": {
148
+ "table_html": "<figure class=\"ltx_table\" id=\"A1.T2\">\n<table class=\"ltx_tabular ltx_centering ltx_guessed_headers ltx_align_middle\" id=\"A1.T2.3\">\n<thead class=\"ltx_thead\">\n<tr class=\"ltx_tr\" id=\"A1.T2.3.3\">\n<th class=\"ltx_td ltx_th ltx_th_row ltx_border_tt\" id=\"A1.T2.3.3.4\" style=\"padding:-2pt 0.0pt;\"></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"A1.T2.1.1.1\" style=\"padding:-2pt 0.0pt;\"></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"A1.T2.2.2.2\" style=\"padding:-2pt 0.0pt;\"></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"A1.T2.3.3.3\" style=\"padding:-2pt 0.0pt;\"></th>\n</tr>\n</thead>\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"A1.T2.3.4.1\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_t\" id=\"A1.T2.3.4.1.1\" style=\"padding:-2pt 0.0pt;\">SEA</th>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"A1.T2.3.4.1.2\" style=\"padding:-2pt 0.0pt;\">200</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"A1.T2.3.4.1.3\" style=\"padding:-2pt 0.0pt;\">3</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"A1.T2.3.4.1.4\" style=\"padding:-2pt 0.0pt;\">5</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A1.T2.3.5.2\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"A1.T2.3.5.2.1\" style=\"padding:-2pt 0.0pt;\">Tree</th>\n<td class=\"ltx_td ltx_align_center\" id=\"A1.T2.3.5.2.2\" style=\"padding:-2pt 0.0pt;\">200</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A1.T2.3.5.2.3\" style=\"padding:-2pt 0.0pt;\">3</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A1.T2.3.5.2.4\" style=\"padding:-2pt 0.0pt;\">5</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A1.T2.3.6.3\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"A1.T2.3.6.3.1\" style=\"padding:-2pt 0.0pt;\">RBF</th>\n<td class=\"ltx_td ltx_align_center\" id=\"A1.T2.3.6.3.2\" style=\"padding:-2pt 0.0pt;\">300</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A1.T2.3.6.3.3\" style=\"padding:-2pt 0.0pt;\">4</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A1.T2.3.6.3.4\" style=\"padding:-2pt 0.0pt;\">10</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A1.T2.3.7.4\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"A1.T2.3.7.4.1\" style=\"padding:-2pt 0.0pt;\">Hyperplane</th>\n<td class=\"ltx_td ltx_align_center\" id=\"A1.T2.3.7.4.2\" style=\"padding:-2pt 0.0pt;\">400</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A1.T2.3.7.4.3\" style=\"padding:-2pt 0.0pt;\">3</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A1.T2.3.7.4.4\" style=\"padding:-2pt 0.0pt;\">5</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A1.T2.3.8.5\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"A1.T2.3.8.5.1\" style=\"padding:-2pt 0.0pt;\">Weather</th>\n<td class=\"ltx_td ltx_align_center\" id=\"A1.T2.3.8.5.2\" style=\"padding:-2pt 0.0pt;\">100</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A1.T2.3.8.5.3\" style=\"padding:-2pt 0.0pt;\">4</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A1.T2.3.8.5.4\" style=\"padding:-2pt 0.0pt;\">5</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A1.T2.3.9.6\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"A1.T2.3.9.6.1\" style=\"padding:-2pt 0.0pt;\">Kitti</th>\n<td class=\"ltx_td ltx_align_center\" id=\"A1.T2.3.9.6.2\" style=\"padding:-2pt 0.0pt;\">100</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A1.T2.3.9.6.3\" style=\"padding:-2pt 0.0pt;\">5</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A1.T2.3.9.6.4\" style=\"padding:-2pt 
0.0pt;\">2</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A1.T2.3.10.7\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"A1.T2.3.10.7.1\" style=\"padding:-2pt 0.0pt;\">CNNIBN</th>\n<td class=\"ltx_td ltx_align_center\" id=\"A1.T2.3.10.7.2\" style=\"padding:-2pt 0.0pt;\">200</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A1.T2.3.10.7.3\" style=\"padding:-2pt 0.0pt;\">3</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A1.T2.3.10.7.4\" style=\"padding:-2pt 0.0pt;\">5</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A1.T2.3.11.8\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_bb\" id=\"A1.T2.3.11.8.1\" style=\"padding:-2pt 0.0pt;\">BBC</th>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"A1.T2.3.11.8.2\" style=\"padding:-2pt 0.0pt;\">300</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"A1.T2.3.11.8.3\" style=\"padding:-2pt 0.0pt;\">3</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"A1.T2.3.11.8.4\" style=\"padding:-2pt 0.0pt;\">10</td>\n</tr>\n</tbody>\n</table>\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\">Table S2: </span>Parameter settings on different datasets.</figcaption>\n</figure>",
149
+ "capture": "Table S2: Parameter settings on different datasets."
150
+ },
151
+ "5": {
152
+ "table_html": "<figure class=\"ltx_table\" id=\"A1.T3\">\n<div class=\"ltx_inline-block ltx_align_center ltx_transformed_outer\" id=\"A1.T3.1\" style=\"width:497.9pt;height:49.8pt;vertical-align:-0.0pt;\"><span class=\"ltx_transformed_inner\" style=\"transform:translate(22.3pt,56.1pt) scale(1.09829047876523,0.307359954457224) ;\">\n<table class=\"ltx_tabular ltx_guessed_headers ltx_align_middle\" id=\"A1.T3.1.1\">\n<thead class=\"ltx_thead\">\n<tr class=\"ltx_tr\" id=\"A1.T3.1.1.1.1\">\n<th class=\"ltx_td ltx_th ltx_th_row ltx_border_tt\" id=\"A1.T3.1.1.1.1.1\"></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"A1.T3.1.1.1.1.2\">FUSIONs1</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"A1.T3.1.1.1.1.3\">FUSIONs2</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"A1.T3.1.1.1.1.4\">FUSIONs3</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"A1.T3.1.1.1.1.5\">ATLs1</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"A1.T3.1.1.1.1.6\">ATLs2</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"A1.T3.1.1.1.1.7\">ATLs3</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"A1.T3.1.1.1.1.8\">Melanie</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"A1.T3.1.1.1.1.9\">AOMSDA</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"A1.T3.1.1.1.1.10\">MCMO</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"A1.T3.1.1.1.1.11\">OBAL (ours)</th>\n</tr>\n</thead>\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"A1.T3.1.1.2.1\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_t\" id=\"A1.T3.1.1.2.1.1\">SEA</th>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"A1.T3.1.1.2.1.2\">50.07</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"A1.T3.1.1.2.1.3\">52.45</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"A1.T3.1.1.2.1.4\">49.83</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"A1.T3.1.1.2.1.5\">52.31</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"A1.T3.1.1.2.1.6\">51.47</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"A1.T3.1.1.2.1.7\">50.07</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"A1.T3.1.1.2.1.8\">3.01</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"A1.T3.1.1.2.1.9\">51.30</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"A1.T3.1.1.2.1.10\">43.79</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"A1.T3.1.1.2.1.11\">16.67</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A1.T3.1.1.3.2\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"A1.T3.1.1.3.2.1\">Tree</th>\n<td class=\"ltx_td ltx_align_center\" id=\"A1.T3.1.1.3.2.2\">37.76</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A1.T3.1.1.3.2.3\">36.13</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A1.T3.1.1.3.2.4\">35.87</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A1.T3.1.1.3.2.5\">40.42</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A1.T3.1.1.3.2.6\">39.65</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A1.T3.1.1.3.2.7\">39.43</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A1.T3.1.1.3.2.8\">4.17</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A1.T3.1.1.3.2.9\">37.79</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A1.T3.1.1.3.2.10\">41.04</td>\n<td class=\"ltx_td 
ltx_align_center\" id=\"A1.T3.1.1.3.2.11\">14.93</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A1.T3.1.1.4.3\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"A1.T3.1.1.4.3.1\">RBF</th>\n<td class=\"ltx_td ltx_align_center\" id=\"A1.T3.1.1.4.3.2\">32.57</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A1.T3.1.1.4.3.3\">33.14</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A1.T3.1.1.4.3.4\">33.76</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A1.T3.1.1.4.3.5\">43.01</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A1.T3.1.1.4.3.6\">42.35</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A1.T3.1.1.4.3.7\">42.50</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A1.T3.1.1.4.3.8\">3.04</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A1.T3.1.1.4.3.9\">43.10</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A1.T3.1.1.4.3.10\">35.04</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A1.T3.1.1.4.3.11\">16.87</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A1.T3.1.1.5.4\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"A1.T3.1.1.5.4.1\">Hyperplane</th>\n<td class=\"ltx_td ltx_align_center\" id=\"A1.T3.1.1.5.4.2\">57.41</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A1.T3.1.1.5.4.3\">55.73</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A1.T3.1.1.5.4.4\">56.20</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A1.T3.1.1.5.4.5\">52.38</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A1.T3.1.1.5.4.6\">53.09</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A1.T3.1.1.5.4.7\">53.42</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A1.T3.1.1.5.4.8\">4.10</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A1.T3.1.1.5.4.9\">54.75</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A1.T3.1.1.5.4.10\">57.17</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A1.T3.1.1.5.4.11\">19.37</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A1.T3.1.1.6.5\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"A1.T3.1.1.6.5.1\">Weather</th>\n<td class=\"ltx_td ltx_align_center\" id=\"A1.T3.1.1.6.5.2\">60.41</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A1.T3.1.1.6.5.3\">59.14</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A1.T3.1.1.6.5.4\">57.23</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A1.T3.1.1.6.5.5\">52.01</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A1.T3.1.1.6.5.6\">51.49</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A1.T3.1.1.6.5.7\">52.37</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A1.T3.1.1.6.5.8\">4.97</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A1.T3.1.1.6.5.9\">51.70</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A1.T3.1.1.6.5.10\">89.43</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A1.T3.1.1.6.5.11\">10.71</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A1.T3.1.1.7.6\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"A1.T3.1.1.7.6.1\">Kitti</th>\n<td class=\"ltx_td ltx_align_center\" id=\"A1.T3.1.1.7.6.2\">71.23</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A1.T3.1.1.7.6.3\">71.75</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A1.T3.1.1.7.6.4\">70.93</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A1.T3.1.1.7.6.5\">94.87</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A1.T3.1.1.7.6.6\">96.21</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A1.T3.1.1.7.6.7\">95.43</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A1.T3.1.1.7.6.8\">10.43</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A1.T3.1.1.7.6.9\">48.42</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A1.T3.1.1.7.6.10\">77.85</td>\n<td class=\"ltx_td ltx_align_center\" 
id=\"A1.T3.1.1.7.6.11\">45.11</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A1.T3.1.1.8.7\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"A1.T3.1.1.8.7.1\">CNNIBN</th>\n<td class=\"ltx_td ltx_align_center\" id=\"A1.T3.1.1.8.7.2\">409.58</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A1.T3.1.1.8.7.3\">412.37</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A1.T3.1.1.8.7.4\">408.60</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A1.T3.1.1.8.7.5\">367.54</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A1.T3.1.1.8.7.6\">359.40</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A1.T3.1.1.8.7.7\">355.90</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A1.T3.1.1.8.7.8\">53.15</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A1.T3.1.1.8.7.9\">354.56</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A1.T3.1.1.8.7.10\">532.94</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A1.T3.1.1.8.7.11\">233.25</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A1.T3.1.1.9.8\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_bb\" id=\"A1.T3.1.1.9.8.1\">BBC</th>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"A1.T3.1.1.9.8.2\">424.20</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"A1.T3.1.1.9.8.3\">407.18</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"A1.T3.1.1.9.8.4\">431.47</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"A1.T3.1.1.9.8.5\">419.87</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"A1.T3.1.1.9.8.6\">417.76</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"A1.T3.1.1.9.8.7\">420.39</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"A1.T3.1.1.9.8.8\">53.01</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"A1.T3.1.1.9.8.9\">424.15</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"A1.T3.1.1.9.8.10\">541.02</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"A1.T3.1.1.9.8.11\">287.93</td>\n</tr>\n</tbody>\n</table>\n</span></div>\n<figcaption class=\"ltx_caption\"><span class=\"ltx_tag ltx_tag_table\">Table S3: </span>Execution time (s) of Various Methods on All Benchmarks.</figcaption>\n</figure>",
153
+ "capture": "Table S3: Execution time (s) of Various Methods on All Benchmarks."
154
+ },
155
+ "6": {
156
+ "table_html": "<figure class=\"ltx_table\" id=\"A1.T4\">\n<table class=\"ltx_tabular ltx_centering ltx_guessed_headers ltx_align_middle\" id=\"A1.T4.1\">\n<thead class=\"ltx_thead\">\n<tr class=\"ltx_tr\" id=\"A1.T4.1.2.1\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"A1.T4.1.2.1.1\">T-test for OBAL</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"A1.T4.1.2.1.2\">Melanie</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"A1.T4.1.2.1.3\">AOMSDA</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"A1.T4.1.2.1.4\">MCMO</th>\n</tr>\n</thead>\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"A1.T4.1.1\">\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"A1.T4.1.1.2\">p-value</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"A1.T4.1.1.3\">0.0039</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"A1.T4.1.1.4\">0.6045</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"A1.T4.1.1.1\">2.6769e\n</td>\n</tr>\n</tbody>\n</table>\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\">Table S4: </span>Statistical test on SEA dataset under 10 runs</figcaption>\n</figure>",
157
+ "capture": "Table S4: Statistical test on SEA dataset under 10 runs"
158
+ }
159
+ },
160
+ "image_paths": {
161
+ "1": {
162
+ "figure_path": "2312.10841v2_figure_1.png",
163
+ "caption": "Figure 1: Framework of OBAL. The initialization stage is principally devoted to mitigating the problem of covariate shift, along with learning the intricate dynamic correlations that exist between various data streams. In the online phase, the core focus is on the detection and adaptation of asynchronous drift. This stage further integrates the covariate shift alignment and correlation matrices learned during the initial phase, facilitating a seamless ensemble prediction from the source to the target stream.",
164
+ "url": "http://arxiv.org/html/2312.10841v2/x1.png"
165
+ },
166
+ "2": {
167
+ "figure_path": "2312.10841v2_figure_2.png",
168
+ "caption": "Figure 2: The influence of the different number of sources.",
169
+ "url": "http://arxiv.org/html/2312.10841v2/extracted/5325119/different_sources.png"
170
+ },
171
+ "3": {
172
+ "figure_path": "2312.10841v2_figure_3.png",
173
+ "caption": "Figure 3: The effect of different parameters on classification accuracy.",
174
+ "url": "http://arxiv.org/html/2312.10841v2/extracted/5325119/parameters.png"
175
+ },
176
+ "4": {
177
+ "figure_path": "2312.10841v2_figure_4.png",
178
+ "caption": "Figure S1: High-level illustration of OBAL. The initialization stage is principally devoted to mitigating the problem of covariate shift, along with learning the intricate dynamic correlations that exist between various data streams. In the online phase, as new source samples arrive, we will incrementally train the base classifiers if no drift is detected. Once a drift is detected within each source stream, a new base classifier will be created and trained. Note that old base classifiers are no longer trained with new samples but are instead preserved within a base classifier allowing for their retention. Furthermore, once the target drift is detected, the historical base classifier becomes ineffective for classifying the target samples. Consequently, all base classifiers are eliminated from the base classifier pool, and the model undergoes re-initialization to adapt to the new concepts.",
179
+ "url": "http://arxiv.org/html/2312.10841v2/x2.png"
180
+ }
181
+ },
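The Figure S1 caption above describes OBAL's online adaptation loop in prose: train the current base classifier incrementally while no drift is flagged, freeze it and start a fresh one on source drift, and drop the whole pool on target drift. Below is a minimal sketch of that loop only, to make the freeze/reset logic concrete; it is not the authors' OBAL implementation, and the choice of `SGDClassifier` and the `update` interface are assumptions made for the example.

```python
# Illustrative sketch of the adaptation loop in the Figure S1 caption;
# NOT the authors' OBAL code. Classifier choice and interface are assumed.
from sklearn.linear_model import SGDClassifier

class BaseClassifierPool:
    def __init__(self, classes):
        self.classes = list(classes)
        self.pool = []                        # frozen historical classifiers
        self.current = self._new_classifier()

    def _new_classifier(self):
        # Any incremental learner works; use loss="log" on older scikit-learn.
        return SGDClassifier(loss="log_loss")

    def update(self, X, y, source_drift=False, target_drift=False):
        if target_drift:
            # Target concept changed: historical classifiers are stale,
            # so the pool is emptied and the model re-initialized.
            self.pool.clear()
            self.current = self._new_classifier()
        elif source_drift:
            # Source drift: freeze the old classifier (kept for ensembling)
            # and start training a fresh one on the new concept.
            self.pool.append(self.current)
            self.current = self._new_classifier()
        self.current.partial_fit(X, y, classes=self.classes)
```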
182
+ "validation": true,
183
+ "references": [
184
+ {
185
+ "1": {
186
+ "title": "Learning from time-changing data with adaptive windowing.",
187
+ "author": "Bifet, A.; and Gavalda, R. 2007.",
188
+ "venue": "In Proceedings of the 2007 SIAM international conference on data mining, 443\u2013448. SIAM.",
189
+ "url": null
190
+ }
191
+ },
192
+ {
193
+ "2": {
194
+ "title": "A singular value thresholding algorithm for matrix completion.",
195
+ "author": "Cai, J.-F.; Cand\u00e8s, E. J.; and Shen, Z. 2010.",
196
+ "venue": "SIAM Journal on optimization, 20(4): 1956\u20131982.",
197
+ "url": null
198
+ }
199
+ },
200
+ {
201
+ "3": {
202
+ "title": "An adaptive framework for multistream classification.",
203
+ "author": "Chandra, S.; Haque, A.; Khan, L.; and Aggarwal, C. 2016.",
204
+ "venue": "In Proceedings of the 25th ACM International on Conference on Information and Knowledge Management, 1181\u20131190.",
205
+ "url": null
206
+ }
207
+ },
208
+ {
209
+ "4": {
210
+ "title": "A diversity framework for dealing with multiple types of concept drift based on clustering in the model space.",
211
+ "author": "Chiu, C. W.; and Minku, L. L. 2020.",
212
+ "venue": "IEEE Transactions on Neural Networks and Learning Systems, 33(3): 1299\u20131309.",
213
+ "url": null
214
+ }
215
+ },
216
+ {
217
+ "5": {
218
+ "title": "Incremental learning of concept drift from streaming imbalanced data.",
219
+ "author": "Ditzler, G.; and Polikar, R. 2012.",
220
+ "venue": "IEEE transactions on knowledge and data engineering, 25(10): 2283\u20132301.",
221
+ "url": null
222
+ }
223
+ },
224
+ {
225
+ "6": {
226
+ "title": "Multi-source transfer learning for non-stationary environments.",
227
+ "author": "Du, H.; Minku, L. L.; and Zhou, H. 2019.",
228
+ "venue": "In 2019 International Joint Conference on Neural Networks (IJCNN), 1\u20138. IEEE.",
229
+ "url": null
230
+ }
231
+ },
232
+ {
233
+ "7": {
234
+ "title": "Learning with drift detection.",
235
+ "author": "Gama, J.; Medas, P.; Castillo, G.; and Rodrigues, P. 2004.",
236
+ "venue": "In Advances in Artificial Intelligence\u2013SBIA 2004: 17th Brazilian Symposium on Artificial Intelligence, Sao Luis, Maranhao, Brazil, September 29-Ocotber 1, 2004. Proceedings 17, 286\u2013295. Springer.",
237
+ "url": null
238
+ }
239
+ },
240
+ {
241
+ "8": {
242
+ "title": "Are we ready for autonomous driving? the kitti vision benchmark suite.",
243
+ "author": "Geiger, A.; Lenz, P.; and Urtasun, R. 2012.",
244
+ "venue": "In 2012 IEEE conference on computer vision and pattern recognition, 3354\u20133361. IEEE.",
245
+ "url": null
246
+ }
247
+ },
248
+ {
249
+ "9": {
250
+ "title": "Adaptive random forests for evolving data stream classification.",
251
+ "author": "Gomes, H. M.; Bifet, A.; Read, J.; Barddal, J. P.; Enembreck, F.; Pfharinger, B.; Holmes, G.; and Abdessalem, T. 2017.",
252
+ "venue": "Machine Learning, 106(9): 1469\u20131495.",
253
+ "url": null
254
+ }
255
+ },
256
+ {
257
+ "10": {
258
+ "title": "Fusion: An online method for multistream classification.",
259
+ "author": "Haque, A.; Wang, Z.; Chandra, S.; Dong, B.; Khan, L.; and Hamlen, K. W. 2017.",
260
+ "venue": "In Proceedings of the 2017 ACM on Conference on Information and Knowledge Management, 919\u2013928.",
261
+ "url": null
262
+ }
263
+ },
264
+ {
265
+ "11": {
266
+ "title": "Mining time-changing data streams.",
267
+ "author": "Hulten, G.; Spencer, L.; and Domingos, P. 2001.",
268
+ "venue": "In Proceedings of the seventh ACM SIGKDD international conference on Knowledge discovery and data mining, 97\u2013106.",
269
+ "url": null
270
+ }
271
+ },
272
+ {
273
+ "12": {
274
+ "title": "Reduced-space Multistream Classification based on Multi-objective Evolutionary Optimization.",
275
+ "author": "Jiao, B.; Guo, Y.; Yang, S.; Pu, J.; and Gong, D. 2022.",
276
+ "venue": "IEEE Transactions on Evolutionary Computation.",
277
+ "url": null
278
+ }
279
+ },
280
+ {
281
+ "13": {
282
+ "title": "Federated learning under distributed concept drift.",
283
+ "author": "Jothimurugesan, E.; Hsieh, K.; Wang, J.; Joshi, G.; and Gibbons, P. B. 2023.",
284
+ "venue": "In International Conference on Artificial Intelligence and Statistics, 5834\u20135853. PMLR.",
285
+ "url": null
286
+ }
287
+ },
288
+ {
289
+ "14": {
290
+ "title": "An efficient concept drift detection method for streaming data under limited labeling.",
291
+ "author": "Kim, Y.; and Park, C. H. 2017.",
292
+ "venue": "IEICE Transactions on Information and systems, 100(10): 2537\u20132546.",
293
+ "url": null
294
+ }
295
+ },
296
+ {
297
+ "15": {
298
+ "title": "DDG-DA: Data Distribution Generation for Predictable Concept Drift Adaptation.",
299
+ "author": "Li, W.; Yang, X.; Liu, W.; Xia, Y.; and Bian, J. 2022.",
300
+ "venue": "In Proceedings of the AAAI Conference on Artificial Intelligence, volume 36, 4092\u20134100.",
301
+ "url": null
302
+ }
303
+ },
304
+ {
305
+ "16": {
306
+ "title": "Diverse instance-weighting ensemble based on region drift disagreement for concept drift adaptation.",
307
+ "author": "Liu, A.; Lu, J.; and Zhang, G. 2020.",
308
+ "venue": "IEEE transactions on neural networks and learning systems, 32(1): 293\u2013307.",
309
+ "url": null
310
+ }
311
+ },
312
+ {
313
+ "17": {
314
+ "title": "Learning under concept drift: A review.",
315
+ "author": "Lu, J.; Liu, A.; Dong, F.; Gu, F.; Gama, J.; and Zhang, G. 2018.",
316
+ "venue": "IEEE transactions on knowledge and data engineering, 31(12): 2346\u20132363.",
317
+ "url": null
318
+ }
319
+ },
320
+ {
321
+ "18": {
322
+ "title": "Cogra: Concept-drift-aware stochastic gradient descent for time-series forecasting.",
323
+ "author": "Miyaguchi, K.; and Kajino, H. 2019.",
324
+ "venue": "In Proceedings of the AAAI Conference on Artificial Intelligence, volume 33, 4594\u20134601.",
325
+ "url": null
326
+ }
327
+ },
328
+ {
329
+ "19": {
330
+ "title": "Scikit-Multiflow: A Multi-output Streaming Framework.",
331
+ "author": "Montiel, J.; Read, J.; Bifet, A.; and Abdessalem, T. 2018.",
332
+ "venue": "Journal of Machine Learning Research, 19(72): 1\u20135.",
333
+ "url": null
334
+ }
335
+ },
336
+ {
337
+ "20": {
338
+ "title": "Tackling virtual and real concept drifts: An adaptive gaussian mixture model approach.",
339
+ "author": "Oliveira, G.; Minku, L. L.; and Oliveira, A. L. 2021.",
340
+ "venue": "IEEE Transactions on Knowledge and Data Engineering.",
341
+ "url": null
342
+ }
343
+ },
344
+ {
345
+ "21": {
346
+ "title": "ATL: Autonomous knowledge transfer from many streaming processes.",
347
+ "author": "Pratama, M.; de Carvalho, M.; Xie, R.; Lughofer, E.; and Lu, J. 2019.",
348
+ "venue": "In Proceedings of the 28th ACM International Conference on Information and Knowledge Management, 269\u2013278.",
349
+ "url": null
350
+ }
351
+ },
352
+ {
353
+ "22": {
354
+ "title": "Automatic online multi-source domain adaptation.",
355
+ "author": "Renchunzi, X.; and Pratama, M. 2022.",
356
+ "venue": "Information Sciences, 582: 480\u2013494.",
357
+ "url": null
358
+ }
359
+ },
360
+ {
361
+ "23": {
362
+ "title": "A segment-based drift adaptation method for data streams.",
363
+ "author": "Song, Y.; Lu, J.; Liu, A.; Lu, H.; and Zhang, G. 2021a.",
364
+ "venue": "IEEE transactions on neural networks and learning systems, 33(9): 4876\u20134889.",
365
+ "url": null
366
+ }
367
+ },
368
+ {
369
+ "24": {
370
+ "title": "Learning data streams with changing distributions and temporal dependency.",
371
+ "author": "Song, Y.; Lu, J.; Lu, H.; and Zhang, G. 2021b.",
372
+ "venue": "IEEE Transactions on Neural Networks and Learning Systems.",
373
+ "url": null
374
+ }
375
+ },
376
+ {
377
+ "25": {
378
+ "title": "A streaming ensemble algorithm (SEA) for large-scale classification.",
379
+ "author": "Street, W. N.; and Kim, Y. 2001.",
380
+ "venue": "In Proceedings of the seventh ACM SIGKDD international conference on Knowledge discovery and data mining, 377\u2013382.",
381
+ "url": null
382
+ }
383
+ },
384
+ {
385
+ "26": {
386
+ "title": "Return of frustratingly easy domain adaptation.",
387
+ "author": "Sun, B.; Feng, J.; and Saenko, K. 2016.",
388
+ "venue": "In Proceedings of the AAAI conference on artificial intelligence, volume 30.",
389
+ "url": null
390
+ }
391
+ },
392
+ {
393
+ "27": {
394
+ "title": "Commercial block detection in broadcast news videos.",
395
+ "author": "Vyas, A.; Kannao, R.; Bhargava, V.; and Guha, P. 2014.",
396
+ "venue": "In Proceedings of the 2014 Indian Conference on Computer Vision Graphics and Image Processing, 1\u20137.",
397
+ "url": null
398
+ }
399
+ },
400
+ {
401
+ "28": {
402
+ "title": "Evolving gradient boost: A pruning scheme based on loss improvement ratio for learning under concept drift.",
403
+ "author": "Wang, K.; Lu, J.; Liu, A.; Zhang, G.; and Xiong, L. 2021.",
404
+ "venue": "IEEE Transactions on Cybernetics.",
405
+ "url": null
406
+ }
407
+ },
408
+ {
409
+ "29": {
410
+ "title": "Learngene: From open-world to your learning task.",
411
+ "author": "Wang, Q.-F.; Geng, X.; Lin, S.-X.; Xia, S.-Y.; Qi, L.; and Xu, N. 2022a.",
412
+ "venue": "In Proceedings of the AAAI Conference on Artificial Intelligence, volume 36, 8557\u20138565.",
413
+ "url": null
414
+ }
415
+ },
416
+ {
417
+ "30": {
418
+ "title": "Characterizing and avoiding negative transfer.",
419
+ "author": "Wang, Z.; Dai, Z.; P\u00f3czos, B.; and Carbonell, J. 2019.",
420
+ "venue": "In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, 11293\u201311302.",
421
+ "url": null
422
+ }
423
+ },
424
+ {
425
+ "31": {
426
+ "title": "Self-paced Supervision for Multi-Source Domain Adaptation.",
427
+ "author": "Wang, Z.; Zhou, C.; Du, B.; and He, F. 2022b.",
428
+ "venue": "In Proceedings of the Thirty-First International Joint Conference on Artificial Intelligence, IJCAI-22.",
429
+ "url": null
430
+ }
431
+ },
432
+ {
433
+ "32": {
434
+ "title": "Open-Ended Diverse Solution Discovery with Regulated Behavior Patterns for Cross-Domain Adaptation.",
435
+ "author": "Xu, K.; Ma, Y.; Wei, B.; and Li, W. 2023.",
436
+ "venue": "In Proceedings of the AAAI Conference on Artificial Intelligence, volume 37, 10585\u201310593.",
437
+ "url": null
438
+ }
439
+ },
440
+ {
441
+ "33": {
442
+ "title": "Concept drift-tolerant transfer learning in dynamic environments.",
443
+ "author": "Yang, C.; Cheung, Y.-m.; Ding, J.; and Tan, K. C. 2021.",
444
+ "venue": "IEEE Transactions on Neural Networks and Learning Systems, 33(8): 3857\u20133871.",
445
+ "url": null
446
+ }
447
+ },
448
+ {
449
+ "34": {
450
+ "title": "Adaptive Model Pooling for Online Deep Anomaly Detection from a Complex Evolving Data Stream.",
451
+ "author": "Yoon, S.; Lee, Y.; Lee, J.-G.; and Lee, B. S. 2022.",
452
+ "venue": "In Proceedings of the 28th ACM SIGKDD Conference on Knowledge Discovery and Data Mining, 2347\u20132357.",
453
+ "url": null
454
+ }
455
+ },
456
+ {
457
+ "35": {
458
+ "title": "Learn-to-adapt: Concept drift adaptation for hybrid multiple streams.",
459
+ "author": "Yu, E.; Song, Y.; Zhang, G.; and Lu, J. 2022a.",
460
+ "venue": "Neurocomputing, 496: 121\u2013130.",
461
+ "url": null
462
+ }
463
+ },
464
+ {
465
+ "36": {
466
+ "title": "Meta-ADD: A meta-learning based pre-trained model for concept drift active detection.",
467
+ "author": "Yu, H.; Zhang, Q.; Liu, T.; Lu, J.; Wen, Y.; and Zhang, G. 2022b.",
468
+ "venue": "Information Sciences, 608: 996\u20131009.",
469
+ "url": null
470
+ }
471
+ },
472
+ {
473
+ "37": {
474
+ "title": "Multi-Stream Concept Drift Self-Adaptation Using Graph Neural Network.",
475
+ "author": "Zhou, M.; Lu, J.; Song, Y.; and Zhang, G. 2023a.",
476
+ "venue": "IEEE Transactions on Knowledge and Data Engineering.",
477
+ "url": null
478
+ }
479
+ },
480
+ {
481
+ "38": {
482
+ "title": "ODS: Test-Time Adaptation in the Presence of Open-World Data Shift.",
483
+ "author": "Zhou, Z.; Guo, L.-Z.; Jia, L.-H.; Zhang, D.; and Li, Y.-F. 2023b.",
484
+ "venue": null,
485
+ "url": null
486
+ }
487
+ }
488
+ ],
489
+ "url": "http://arxiv.org/html/2312.10841v2"
490
+ }
20240101/2312.11706v3.json ADDED
@@ -0,0 +1,250 @@
1
+ {
2
+ "title": "Some Fibonacci-Related Sequences",
3
+ "abstract": "We discuss an interesting sequence defined recursively;\nnamely, sequence A105774 from the\nOn-Line Encyclopedia of Integer Sequences, and study some\nof its properties. Our main tools are Fibonacci representation,\nfinite automata, and the Walnut theorem-prover.\nWe also prove two new results about synchronized sequences.",
4
+ "sections": [
5
+ {
6
+ "section_id": "1",
7
+ "parent_section_id": null,
8
+ "section_name": "Introduction",
9
+ "text": "Define the Fibonacci numbers as usual by\nthe initial values , , and \nfor . In 2005, the first author defined an interesting sequence,\nA105774 ###reference_oeis.org/A105774###, as follows:\nThe first few values of are given in Table 1 ###reference_###.\nAlthough strictly speaking the sequence was originally defined only\nfor , it makes sense to set , which we have done.\nThe sequence has an intricate fractal structure, which is depicted\nin Figure 1 ###reference_###.\n###figure_1### A number of properties of this sequence were stated, without proof, in the\nindex entry for A105774 ###reference_oeis.org/A105774### in the On-Line Encyclopedia of\nInteger Sequences (OEIS) [11 ###reference_11###].\nIn this paper we prove these properties,\nand many new ones, with the aid of finite automata and\nthe Walnut theorem-prover\n[8 ###reference_8###, 10 ###reference_10###].\nWe will need the Lucas numbers, , defined\nby , , and for .\nRecall the Binet forms:\nIn addition to studying A105774 ###reference_oeis.org/A105774###, we also prove two new\nresults about synchronized sequences, in Theorems 10 ###reference_rem10### and\n18 ###reference_rem18###."
10
+ },
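The display after "Recall the Binet forms:" in the Introduction text above was lost in extraction. The standard Binet forms are a well-known identity, reproduced here in LaTeX; whether the paper displayed exactly this variant is an assumption.

```latex
% Standard Binet forms, with \varphi = (1+\sqrt{5})/2 and \psi = (1-\sqrt{5})/2:
F_n = \frac{\varphi^n - \psi^n}{\sqrt{5}}, \qquad L_n = \varphi^n + \psi^n .
```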
11
+ {
12
+ "section_id": "2",
13
+ "parent_section_id": null,
14
+ "section_name": "Fibonacci representations",
15
+ "text": "In this paper we represent the natural\nnumbers in the Fibonacci (aka Zeckendorf)\nnumeration system [7 ###reference_7###, 12 ###reference_12###].\nIn this system we write as the sum of\ndistinct Fibonacci numbers for , subject to the\ncondition that no two consecutive Fibonacci numbers appear in the sum.\nFor example, . Usually we write\nthis representation as a bit string ,\nwith , , and\n, such that .\nFor example, . The inverse function, mapping\nan arbitrary bit string (with no conditions) \nto the number it represents is defined\nto be ."
16
+ },
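The Zeckendorf numeration system described in Section 2 above translates directly into a greedy routine: repeatedly subtract the largest Fibonacci number not exceeding n. The following Python sketch is an illustration (the function name and rendering are not from the paper); greediness guarantees that no two consecutive Fibonacci numbers are used.

```python
# Greedy Zeckendorf representation: every n >= 1 is a sum of
# non-consecutive Fibonacci numbers F_2 = 1, F_3 = 2, F_4 = 3, F_5 = 5, ...
def zeckendorf(n):
    if n == 0:
        return "0"
    fibs = [1, 2]                 # F_2, F_3, ...
    while fibs[-1] <= n:
        fibs.append(fibs[-1] + fibs[-2])
    fibs.pop()                    # last entry exceeds n
    bits = []
    for f in reversed(fibs):      # most-significant digit first
        if f <= n:
            bits.append("1")
            n -= f
        else:
            bits.append("0")
    return "".join(bits)

assert zeckendorf(43) == "10010001"   # 43 = 34 + 8 + 1
```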
17
+ {
18
+ "section_id": "3",
19
+ "parent_section_id": null,
20
+ "section_name": "Finite automata and regular expressions",
21
+ "text": "A finite automaton is a simple model of a computer. In its simplest\nincarnation, a deterministic finite automaton (DFA),\nthe automaton can be\nin any one of a finite number of different states. It takes\nas input a finite string over a finite alphabet, and processes its\ninput symbol-by-symbol, choosing the next state through a lookup\ntable (the transition function)\nbased only on the current state and the next input symbol.\nSome of the states are designated as accepting states; an\ninput string is accepted if the automaton is in an accepting state after all\nits symbols are processed. In this paper we represent integers in\nFibonacci representation, so we can think of the DFA as processing integers\ninstead of strings. Thus, a DFA is a good model to represent a set\nof natural numbers; namely, the set of all integers whose representations\nare accepted. In this sense, a DFA decides membership for\na set (or sequence). DFA\u2019s are typically drawn as transition diagrams,\nwhere the arrows represent transitions between states, double circles\ndenote accepting states, and single circles denote nonaccepting states.\nAs an example, consider the automaton in Figure 2 ###reference_###. It takes\nthe Fibonacci representation of a natural number as input, and accepts\nif and only if is even.\n###figure_2### A slightly more complicated model is the deterministic finite automaton\nwith output (DFAO). In this model, an output is associated with every\nstate, and the output on an input string is the output associated with\nthe last state reached after processing . This model is adequate for\nrepresenting sequences over a finite alphabet; since there are only finitely\nmany states, only finitely many different outputs are possible.\nIf a sequence is computed by a DFAO in the following way\u2014the input\nis the representation of and the output is \u2014then we say\n is automatic. Again, a DFAO is often displayed as a transition\ndiagram, with the notation in a state meaning that the state is\nnumbered and has output .\nFor example, the transition diagram in Figure 3 ###reference_### illustrates\na DFAO computing the sequence , where the input\nis in Fibonacci representation.\n###figure_3### We note in passing that, due to a nice result of\nCharlier et al. [4 ###reference_4###],\nthe minimal DFAO computing\n for in Fibonacci representation is\nknown to have states.\nFinally, there is the notion of synchronized automaton that computes\na function . In this case, we use a DFA, but\nwith two integer inputs and read in parallel,\nwhere the shorter input is padded with leading zeros, if necessary,\nand the DFA accepts if and only if . This model is suited\nto representing sequences of natural numbers. It has the important advantage\nthat if a sequence is represented in this way, then there is a decision\nprocedure to answer first-order queries about the values of the sequence,\nin the logical structure .\nSee [9 ###reference_9###] for more details.\nAs an example, the synchronized automaton in Figure 4 ###reference_### computes\nthe function in Fibonacci representation.\n###figure_4### It is quite useful to adopt the convention that in all three models, the result\nis not sensitive to the presence or absence of leading zeros in the\nrepresentation of numbers.\nThe most powerful of all three representations for a sequence is the synchronized automaton, since one can obtain the other two from it easily. 
The downside\nis that it is quite possible that there exists a DFA to decide membership\nin a sequence ,\nbut there is no corresponding synchronized\nautomaton computing the \u2019th term of .\nIn this paper, we sometimes use the notation called a regular expression.\nFor our purposes, the main thing to know\nis that , where is a single string or set of strings,\ndenotes the set of all strings consisting of zero or more\nconcatenations of elements chosen from the set specifies.\nFor more about automata and regular expressions,\nsee [6 ###reference_6###]."
22
+ },
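The evenness automaton of Figure 2, referenced in Section 3 above, is not visible here, but such a DFA can be derived from first principles: reading a Zeckendorf string most-significant digit first, track mod 2 the value of the prefix under two weightings (lowest digit weighted F_2 and F_3); appending a digit shifts every weight up one Fibonacci index. The sketch below is my reconstruction and may label states differently from the paper's figure.

```python
# DFA accepting exactly the Zeckendorf strings of even numbers (msd first).
# State (p, q) holds, mod 2, the prefix value with lowest digit weighted
# F_2 (p) and F_3 (q). Appending digit d gives p' = q + d*F_2 and
# q' = p + q + d*F_3; with F_2 = 1, F_3 = 2 this is the update below.
def accepts_even(bits):
    p, q = 0, 0
    for c in bits:
        d = int(c)
        p, q = (q + d) % 2, (p + q + 2 * d) % 2
    return p == 0                 # accept iff the represented number is even

assert accepts_even("0")          # 0
assert not accepts_even("1")      # 1
assert accepts_even("10")         # 2
assert not accepts_even("100")    # 3
assert accepts_even("101")        # 4
assert not accepts_even("1000")   # 5
```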
23
+ {
24
+ "section_id": "4",
25
+ "parent_section_id": null,
26
+ "section_name": "Walnut",
27
+ "text": "Walnut is a theorem-prover for automatic sequences. It can answer\nfirst-order queries about automatic and synchronized sequences defined\nover the natural numbers .\nThe syntax is basically first-order logic, with the following\nmeanings of symbols:\n& is logical AND, | is logical OR,\n~ is logical NOT, => is implication, and\n<=> means IFF.\n!= means and x/y denotes integer division, that\nis, .\nA is Walnut\u2019s representation for the universal quantifier\n and E is the representation for .\nreg defines a regular expression.\ndef defines an automaton for later use.\neval evaluates a first-order logic expression with no free\nvariables, and returns either TRUE or FALSE.\n?msd_fib is a bit of jargon telling Walnut to represent\nall numbers in the Fibonacci numeration system.\nWhen Walnut returns TRUE (resp., FALSE) to an eval\nquery, the result is a rigorous proof\nthat the particular logical statement holds (resp., does not hold).\nAs examples, here is the Walnut code for generating the automata\nin Figures 2 ###reference_### and 4 ###reference_###:\nWe will need some Walnut code for\nthe functions and\n, where\n, the golden ratio.\nThe synchronized automata for these can be found in [10 ###reference_10###], under\nthe names phin and phi2n, respectively."
28
+ },
29
+ {
30
+ "section_id": "5",
31
+ "parent_section_id": null,
32
+ "section_name": "Guessing and checking",
33
+ "text": "One of the principal tools we use is the \u201cguess-and-check\u201d method. Here we\nguess a finite automaton computing a sequence based on empirical data,\nand then use Walnut to verify that it is correct, usually by checking\nthat the defining equation holds. The guessing procedure is\ndescribed in [10 ###reference_10###, \u00a75.7].\nWe can carry out this guessing procedure\nfor the sequence A105774 ###reference_oeis.org/A105774###, which we call\n. It produces a synchronized automaton that\ncomputes the sequence in the following sense: the input is a pair\n in Fibonacci representation, and the automaton\naccepts if and only if .\nThe automaton a105774 that we guess is displayed in Figure 5 ###reference_###.\n###figure_5### The next step, which is crucial, is to\nverify that our guessed automaton is actually correct. First we\nmust verify that the automaton really does compute a function.\nThis means that for each there is exactly one such that\nthe pair is accepted. We can verify this as follows:\nand Walnut returns TRUE for both.\nNext we must verify that our automaton obeys\nthe defining recurrence (1 ###reference_###).\nand Walnut returns TRUE. At this point we know that\nour guessed automaton is correct.\nLet\u2019s start by proving a basic property of A105774 ###reference_oeis.org/A105774###.\nNo natural number appears three or more times in A105774 ###reference_oeis.org/A105774###.\nWe use the following Walnut code.\nand Walnut returns TRUE.\n\u220e\nNow let\u2019s create a DFAO computing , the number of times each natural\nnumber appears in A105774 ###reference_oeis.org/A105774###.\nWe can do this with the following Walnut commands:\nThis creates the DFAO in Figure 6 ###reference_###.\n###figure_6### The first few values of the sequence are given in\nTable 5 ###reference_###.\nIt is sequence A368199 ###reference_oeis.org/A368199### in the OEIS.\nIf appears twice in A105774 ###reference_oeis.org/A105774###, the two occurrences are\nconsecutive.\nWe use the following Walnut code:\nand Walnut returns TRUE.\n\u220e\nThe positions of the \u2019th \n(respectively, the \u2019th and the \u2019th ) in A368199 ###reference_oeis.org/A368199### are also\nFibonacci-synchronized. Note that we index starting at .\nOnce again, we can prove this by guessing and checking. We omit\nthe checks that our guessed automata p0,p1,p2 compute\nsynchronized functions.\nThe first few values of , , and are given in Table 5 ###reference_###.\nEach of these sequences is already in the OEIS. We now prove the\ncharacterizations.\nThe numbers are precisely those in\nA007064 ###reference_oeis.org/A007064###, that is, the natural numbers not of the form\n for .\nThe sequence A007067 ###reference_oeis.org/A007067### is defined\nto be . Now\nwhere we have used the fact that\n\nfor real numbers and integers .\nHence A007067 ###reference_oeis.org/A007067### can be computed by the following Walnut code:\nThe sequence A007064 ###reference_oeis.org/A007064### consists of those integers not in\nA007067 ###reference_oeis.org/A007067###. It is not hard to see that A007064 ###reference_oeis.org/A007064### is given\n(after a change of index) by for . 
Indeed, we can verify this as follows:\nFinally we can prove the result about as follows:\nand Walnut returns TRUE.\n\u220e\nThe numbers , , are precisely those integers in\nA035487 ###reference_oeis.org/A035487###.\nWe use the following Walnut command:\nand Walnut returns TRUE.\n\u220e\nFinally, we prove the result for :\nThe natural numbers , i.e., the numbers\nthat do not appear in A105774 ###reference_oeis.org/A105774### are\nthose given by the sequence\nA004937 ###reference_oeis.org/A004937###, that is,\n.\nWe have, using the same idea as in the proof of\nProposition 3 ###reference_rem3###, that\nWe can now use Walnut to define a synchronized automaton\nfor A004937 ###reference_oeis.org/A004937###:\nThe automaton for A004937 ###reference_oeis.org/A004937### is given in Figure 7 ###reference_###.\n###figure_7### We can now complete the proof as follows:\nand Walnut returns TRUE.\n\u220e"
34
+ },
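The "is it a function?" verification in Section 5 above (for each n, exactly one m with the pair accepted) has a simple finite analogue. The Python sketch below is my illustration of what the two eval queries check, not Walnut output; here the synchronized relation is just a finite set of accepted pairs.

```python
# Finite analogue of Section 5's functionality check: a synchronized
# relation R computes a function iff every n in the domain is paired
# with exactly one m.
from collections import Counter

def is_function(R, domain):
    counts = Counter(n for n, _ in R)
    return all(counts[n] == 1 for n in domain)

assert is_function({(0, 0), (1, 1), (2, 1)}, range(3))
assert not is_function({(0, 0), (0, 1), (1, 1)}, range(2))  # 0 has two images
```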
35
+ {
36
+ "section_id": "6",
37
+ "parent_section_id": null,
38
+ "section_name": "Inequalities",
39
+ "text": "For all we have .\nFor the first inequality we write\nFor the second inequality we use the following Walnut code:\nand Walnut returns TRUE.\n\u220e\nThe bounds in Proposition 6 ###reference_rem6### are tight, as the following\ntheorem shows.\nWe have \nand .\nFor the first claim, using the well-known Binet formulas\nfor the Fibonacci and Lucas numbers, it suffices to show that\n for all .\nFor the second result it suffices to show that\n for all . This follows directly\nfrom the defining recurrence for . Or one can use Walnut:\n\u220e\nThe graph in Figure 1 ###reference_### suggests studying what the positions\nof what might be called \u201csuffix minima\u201d: those for which\n for all .\nThe suffix minima of for occur precisely when\n.\nWe use the following Walnut code:\nThe resulting automaton is depicted in Figure 8 ###reference_###, from\nwhich the regular expression is easily read off.\n###figure_8### \u220e"
40
+ },
41
+ {
42
+ "section_id": "7",
43
+ "parent_section_id": null,
44
+ "section_name": "Consecutive identical or different terms",
45
+ "text": "We have\nif and only if\n;\nif and only if\n.\nif and only\nif .\nWe use the following Walnut command:\nWe use the following Walnut command:\nWe use the following Walnut commands:\n\u220e\nThe automaton computing A003623 ###reference_oeis.org/A003623### is given in Figure 9 ###reference_###.\n###figure_9###"
46
+ },
47
+ {
48
+ "section_id": "8",
49
+ "parent_section_id": null,
50
+ "section_name": "Sorting the terms",
51
+ "text": "We can consider sorting the terms of A105774 ###reference_oeis.org/A105774### in ascending order.\nThe resulting sequence is A368200 ###reference_oeis.org/A368200###. We can now guess a Fibonacci\nautomaton for this sequence; it is displayed in Figure 10 ###reference_###.\n###figure_10### It now remains to verify that this is indeed A105774 ###reference_oeis.org/A105774### in ascending\norder. This is a consequence of a more general (and new) result, as\nfollows.\nSuppose and \nare synchronized sequences. Then the following property is\ndecidable: is a permutation of .\nWe just sketch the proof, which depends on linear representations.\nFor more about these, see [10 ###reference_10###, Chap. 9] and\n[2 ###reference_2###].\nGiven automata for and ,\nwe can find linear representations counting the number of occurrences\nof each natural number in each sequence. Then we can form the\nlinear representation for the difference of these two. Then we\nuse the fact that whether a linear representation is identically \nis decidable.\n\u220e\nWhen we perform this calculation\nfor the sequences A105774 ###reference_oeis.org/A105774### and A368200 ###reference_oeis.org/A368200###,\nwe find that the linear representation for the difference is indeed\nzero. It now simply remains to check\nthat the numbers computed by the automaton are in ascending order:\nLetting , we have\n.\nWe use the following Walnut code.\n\u220e\nIt now follows that the first difference sequence \ndefined by is Fibonacci-automatic.\nIn fact, it is in disguise, as the next theorem shows.\nFor we have .\nWe use the following Walnut commands:\nand Walnut returns TRUE each time.\n\u220e"
52
+ },
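Theorem 10 in Section 8 above decides whether one synchronized sequence is a permutation of another by comparing, for each value, its number of occurrences in both sequences (via linear representations). On finite prefixes this reduces to a multiset comparison plus a sortedness check, sketched below; this is only an illustration of the idea, since the theorem's actual proof works on infinite sequences through linear representations.

```python
# Finite-prefix analogue of the Theorem 10 check used for A368200:
# b is the sorted rearrangement of a iff the value multisets agree
# and b is nondecreasing.
from collections import Counter

def is_sorted_permutation(a, b):
    return Counter(a) == Counter(b) and all(x <= y for x, y in zip(b, b[1:]))

assert is_sorted_permutation([3, 1, 2, 1], [1, 1, 2, 3])
assert not is_sorted_permutation([3, 1, 2], [1, 2, 2])
```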
53
+ {
54
+ "section_id": "9",
55
+ "parent_section_id": null,
56
+ "section_name": "Special values",
57
+ "text": "In this section we compute the values of when is a Fibonacci\nor Lucas number.\nLet .\nThe Fibonacci representation of\n for is\n where\nis the OEIS sequence\nA006498 ###reference_oeis.org/A006498###, satisfying the recurrence \nfor .\nA closed form is as follows:\nFor part (a) we create a synchronized automaton that\naccepts the Fibonacci representation of ,\nwhich is , in parallel with .\nIt is displayed in Figure 11 ###reference_###, from which\nthe result immediately follows by inspection.\n###figure_11### For part (b) we first create a DFA that accepts, in\nparallel, the Fibonacci representations of\n.\nThen we check the recurrence.\nand Walnut returns TRUE.\n\u220e\nSuppose .\nOver the range , the\nsequence achieves its minimum uniquely at .\nSuppose . Over the range\n, the sequence\n achieves its maximum only at and .\nWe use the following Walnut code:\nand Walnut returns TRUE for both.\n\u220e\nSuppose . Then\n.\nDefine for . Then\n for .\nA closed form for is given by\nSame ideas as in Theorem 13 ###reference_rem13###. We omit the details.\n\u220e"
58
+ }
59
+ ],
60
+ "appendix": [],
61
+ "tables": {
62
+ "1": {
63
+ "table_html": "<figure class=\"ltx_table\" id=\"S1.2\">\n<table class=\"ltx_tabular ltx_centering ltx_guessed_headers ltx_align_middle\" id=\"S1.2.2\">\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S1.1.1.1\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_r\" id=\"S1.1.1.1.1\"></th>\n<td class=\"ltx_td ltx_align_center\" id=\"S1.1.1.1.2\">0</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S1.1.1.1.3\">1</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S1.1.1.1.4\">2</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S1.1.1.1.5\">3</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S1.1.1.1.6\">4</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S1.1.1.1.7\">5</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S1.1.1.1.8\">6</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S1.1.1.1.9\">7</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S1.1.1.1.10\">8</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S1.1.1.1.11\">9</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S1.1.1.1.12\">10</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S1.1.1.1.13\">11</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S1.1.1.1.14\">12</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S1.1.1.1.15\">13</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S1.1.1.1.16\">14</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S1.1.1.1.17\">15</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S1.1.1.1.18\">16</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S1.1.1.1.19\">17</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S1.1.1.1.20\">18</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S1.1.1.1.21\">19</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S1.1.1.1.22\">20</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S1.2.2.2\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_r ltx_border_t\" id=\"S1.2.2.2.1\"></th>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S1.2.2.2.2\">0</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S1.2.2.2.3\">1</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S1.2.2.2.4\">1</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S1.2.2.2.5\">2</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S1.2.2.2.6\">4</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S1.2.2.2.7\">4</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S1.2.2.2.8\">7</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S1.2.2.2.9\">7</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S1.2.2.2.10\">6</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S1.2.2.2.11\">12</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S1.2.2.2.12\">12</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S1.2.2.2.13\">11</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S1.2.2.2.14\">9</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S1.2.2.2.15\">9</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S1.2.2.2.16\">20</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S1.2.2.2.17\">20</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S1.2.2.2.18\">19</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S1.2.2.2.19\">17</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S1.2.2.2.20\">17</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S1.2.2.2.21\">14</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S1.2.2.2.22\">14</td>\n</tr>\n</tbody>\n</table>\n</figure>",
64
+ "capture": "Figure 1: Graph of ."
65
+ },
66
+ "2": {
67
+ "table_html": "<figure class=\"ltx_table\" id=\"S5.3\">\n<table class=\"ltx_tabular ltx_centering ltx_guessed_headers ltx_align_middle\" id=\"S5.3.2\">\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S5.2.1.1\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_r\" id=\"S5.2.1.1.1\"></th>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.2.1.1.2\">0</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.2.1.1.3\">1</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.2.1.1.4\">2</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.2.1.1.5\">3</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.2.1.1.6\">4</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.2.1.1.7\">5</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.2.1.1.8\">6</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.2.1.1.9\">7</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.2.1.1.10\">8</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.2.1.1.11\">9</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.2.1.1.12\">10</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.2.1.1.13\">11</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.2.1.1.14\">12</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.2.1.1.15\">13</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.2.1.1.16\">14</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.2.1.1.17\">15</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.2.1.1.18\">16</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.2.1.1.19\">17</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.2.1.1.20\">18</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.2.1.1.21\">19</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.2.1.1.22\">20</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.3.2.2\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_r ltx_border_t\" id=\"S5.3.2.2.1\"></th>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.3.2.2.2\">1</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.3.2.2.3\">2</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.3.2.2.4\">1</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.3.2.2.5\">0</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.3.2.2.6\">2</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.3.2.2.7\">0</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.3.2.2.8\">1</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.3.2.2.9\">2</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.3.2.2.10\">0</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.3.2.2.11\">2</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.3.2.2.12\">0</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.3.2.2.13\">1</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.3.2.2.14\">2</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.3.2.2.15\">0</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.3.2.2.16\">2</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.3.2.2.17\">1</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.3.2.2.18\">0</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.3.2.2.19\">2</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.3.2.2.20\">0</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.3.2.2.21\">1</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.3.2.2.22\">2</td>\n</tr>\n</tbody>\n</table>\n</figure>",
68
+ "capture": "Figure 7: Synchronized Fibonacci DFAO for A004937."
69
+ },
70
+ "3": {
71
+ "table_html": "<figure class=\"ltx_table\" id=\"S5.8\">\n<table class=\"ltx_tabular ltx_centering ltx_guessed_headers ltx_align_middle\" id=\"S5.8.4\">\n<thead class=\"ltx_thead\">\n<tr class=\"ltx_tr\" id=\"S5.5.1.1\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_th_row ltx_border_r\" id=\"S5.5.1.1.1\"></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column\" id=\"S5.5.1.1.2\">0</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column\" id=\"S5.5.1.1.3\">1</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column\" id=\"S5.5.1.1.4\">2</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column\" id=\"S5.5.1.1.5\">3</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column\" id=\"S5.5.1.1.6\">4</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column\" id=\"S5.5.1.1.7\">5</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column\" id=\"S5.5.1.1.8\">6</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column\" id=\"S5.5.1.1.9\">7</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column\" id=\"S5.5.1.1.10\">8</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column\" id=\"S5.5.1.1.11\">9</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column\" id=\"S5.5.1.1.12\">10</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column\" id=\"S5.5.1.1.13\">11</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column\" id=\"S5.5.1.1.14\">12</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column\" id=\"S5.5.1.1.15\">13</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column\" id=\"S5.5.1.1.16\">14</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column\" id=\"S5.5.1.1.17\">15</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column\" id=\"S5.5.1.1.18\">16</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column\" id=\"S5.5.1.1.19\">17</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column\" id=\"S5.5.1.1.20\">18</th>\n</tr>\n</thead>\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S5.6.2.2\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_r ltx_border_t\" id=\"S5.6.2.2.1\"></th>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.6.2.2.2\">3</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.6.2.2.3\">5</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.6.2.2.4\">8</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.6.2.2.5\">10</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.6.2.2.6\">13</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.6.2.2.7\">16</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.6.2.2.8\">18</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.6.2.2.9\">21</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.6.2.2.10\">24</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.6.2.2.11\">26</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.6.2.2.12\">29</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.6.2.2.13\">31</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.6.2.2.14\">34</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.6.2.2.15\">37</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.6.2.2.16\">39</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.6.2.2.17\">42</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.6.2.2.18\">45</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" 
id=\"S5.6.2.2.19\">47</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.6.2.2.20\">50</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.7.3.3\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_r\" id=\"S5.7.3.3.1\"></th>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.7.3.3.2\">0</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.7.3.3.3\">2</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.7.3.3.4\">6</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.7.3.3.5\">11</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.7.3.3.6\">15</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.7.3.3.7\">19</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.7.3.3.8\">23</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.7.3.3.9\">28</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.7.3.3.10\">32</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.7.3.3.11\">36</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.7.3.3.12\">40</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.7.3.3.13\">44</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.7.3.3.14\">49</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.7.3.3.15\">53</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.7.3.3.16\">57</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.7.3.3.17\">61</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.7.3.3.18\">66</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.7.3.3.19\">70</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.7.3.3.20\">74</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.8.4.4\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_r\" id=\"S5.8.4.4.1\"></th>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.8.4.4.2\">1</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.8.4.4.3\">4</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.8.4.4.4\">7</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.8.4.4.5\">9</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.8.4.4.6\">12</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.8.4.4.7\">14</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.8.4.4.8\">17</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.8.4.4.9\">20</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.8.4.4.10\">22</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.8.4.4.11\">25</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.8.4.4.12\">27</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.8.4.4.13\">30</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.8.4.4.14\">33</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.8.4.4.15\">35</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.8.4.4.16\">38</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.8.4.4.17\">41</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.8.4.4.18\">43</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.8.4.4.19\">46</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.8.4.4.20\">48</td>\n</tr>\n</tbody>\n</table>\n</figure>",
72
+ "capture": "Figure 7: Synchronized Fibonacci DFAO for A004937."
73
+ },
74
+ "4": {
75
+ "table_html": "<figure class=\"ltx_table\" id=\"S11.4\">\n<table class=\"ltx_tabular ltx_centering ltx_guessed_headers ltx_align_middle\" id=\"S11.4.2\">\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S11.3.1.1\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_r\" id=\"S11.3.1.1.1\"></th>\n<td class=\"ltx_td ltx_align_center\" id=\"S11.3.1.1.2\">0</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S11.3.1.1.3\">1</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S11.3.1.1.4\">2</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S11.3.1.1.5\">3</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S11.3.1.1.6\">4</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S11.3.1.1.7\">5</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S11.3.1.1.8\">6</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S11.3.1.1.9\">7</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S11.3.1.1.10\">8</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S11.3.1.1.11\">9</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S11.3.1.1.12\">10</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S11.3.1.1.13\">11</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S11.3.1.1.14\">12</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S11.3.1.1.15\">13</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S11.3.1.1.16\">14</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S11.3.1.1.17\">15</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S11.3.1.1.18\">16</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S11.3.1.1.19\">17</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S11.3.1.1.20\">18</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S11.3.1.1.21\">19</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S11.4.2.2\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_r ltx_border_t\" id=\"S11.4.2.2.1\"></th>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S11.4.2.2.2\">0</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S11.4.2.2.3\">1</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S11.4.2.2.4\">2</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S11.4.2.2.5\">4</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S11.4.2.2.6\">7</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S11.4.2.2.7\">6</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S11.4.2.2.8\">12</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S11.4.2.2.9\">11</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S11.4.2.2.10\">9</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S11.4.2.2.11\">20</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S11.4.2.2.12\">19</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S11.4.2.2.13\">17</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S11.4.2.2.14\">14</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S11.4.2.2.15\">15</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S11.4.2.2.16\">33</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S11.4.2.2.17\">32</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S11.4.2.2.18\">30</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S11.4.2.2.19\">27</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S11.4.2.2.20\">28</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S11.4.2.2.21\">22</td>\n</tr>\n</tbody>\n</table>\n</figure>",
76
+ "capture": "Figure 12: Synchronized Fibonacci automaton for ."
77
+ },
78
+ "5": {
79
+ "table_html": "<figure class=\"ltx_table\" id=\"S12.2\">\n<table class=\"ltx_tabular ltx_centering ltx_guessed_headers ltx_align_middle\" id=\"S12.2.2\">\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S12.1.1.1\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_r\" id=\"S12.1.1.1.1\"></th>\n<td class=\"ltx_td ltx_align_center\" id=\"S12.1.1.1.2\">0</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S12.1.1.1.3\">1</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S12.1.1.1.4\">2</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S12.1.1.1.5\">3</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S12.1.1.1.6\">4</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S12.1.1.1.7\">5</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S12.1.1.1.8\">6</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S12.1.1.1.9\">7</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S12.1.1.1.10\">8</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S12.1.1.1.11\">9</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S12.1.1.1.12\">10</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S12.1.1.1.13\">11</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S12.1.1.1.14\">12</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S12.1.1.1.15\">13</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S12.1.1.1.16\">14</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S12.1.1.1.17\">15</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S12.1.1.1.18\">16</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S12.1.1.1.19\">17</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S12.1.1.1.20\">18</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S12.1.1.1.21\">19</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S12.1.1.1.22\">20</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S12.2.2.2\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_r ltx_border_t\" id=\"S12.2.2.2.1\"></th>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S12.2.2.2.2\">0</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S12.2.2.2.3\">1</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S12.2.2.2.4\">3</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S12.2.2.2.5\">4</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S12.2.2.2.6\">4</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S12.2.2.2.7\">6</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S12.2.2.2.8\">6</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S12.2.2.2.9\">6</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S12.2.2.2.10\">9</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S12.2.2.2.11\">9</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S12.2.2.2.12\">9</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S12.2.2.2.13\">9</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S12.2.2.2.14\">9</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S12.2.2.2.15\">14</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S12.2.2.2.16\">14</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S12.2.2.2.17\">14</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S12.2.2.2.18\">14</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S12.2.2.2.19\">14</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S12.2.2.2.20\">14</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S12.2.2.2.21\">14</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" 
id=\"S12.2.2.2.22\">14</td>\n</tr>\n</tbody>\n</table>\n</figure>",
80
+ "capture": "Figure 13: Solutions to ."
81
+ }
82
+ },
83
+ "image_paths": {
84
+ "1": {
85
+ "figure_path": "2312.11706v3_figure_1.png",
86
+ "caption": "Figure 1: Graph of a\u2062(n)\ud835\udc4e\ud835\udc5ba(n)italic_a ( italic_n ).",
87
+ "url": "http://arxiv.org/html/2312.11706v3/extracted/5323794/a105774-graph.png"
88
+ },
89
+ "2": {
90
+ "figure_path": "2312.11706v3_figure_2.png",
91
+ "caption": "Figure 2: Fibonacci automaton for the set of even numbers.",
92
+ "url": "http://arxiv.org/html/2312.11706v3/x1.png"
93
+ },
94
+ "3": {
95
+ "figure_path": "2312.11706v3_figure_3.png",
96
+ "caption": "Figure 3: Fibonacci DFAO computing nmod3modulo\ud835\udc5b3n\\bmod 3italic_n roman_mod 3.",
97
+ "url": "http://arxiv.org/html/2312.11706v3/x2.png"
98
+ },
99
+ "4": {
100
+ "figure_path": "2312.11706v3_figure_4.png",
101
+ "caption": "Figure 4: Synchronized Fibonacci automaton computing \u230an/2\u230b\ud835\udc5b2\\lfloor n/2\\rfloor\u230a italic_n / 2 \u230b.",
102
+ "url": "http://arxiv.org/html/2312.11706v3/x3.png"
103
+ },
104
+ "5": {
105
+ "figure_path": "2312.11706v3_figure_5.png",
106
+ "caption": "Figure 5: Synchronized Fibonacci automaton for a\u2062(n)\ud835\udc4e\ud835\udc5ba(n)italic_a ( italic_n ).",
107
+ "url": "http://arxiv.org/html/2312.11706v3/x4.png"
108
+ },
109
+ "6": {
110
+ "figure_path": "2312.11706v3_figure_6.png",
111
+ "caption": "Figure 6: Fibonacci DFAO for c\u2062(n)\ud835\udc50\ud835\udc5bc(n)italic_c ( italic_n ).",
112
+ "url": "http://arxiv.org/html/2312.11706v3/x5.png"
113
+ },
114
+ "7": {
115
+ "figure_path": "2312.11706v3_figure_7.png",
116
+ "caption": "Figure 7: Synchronized Fibonacci DFAO for A004937.",
117
+ "url": "http://arxiv.org/html/2312.11706v3/x6.png"
118
+ },
119
+ "8": {
120
+ "figure_path": "2312.11706v3_figure_8.png",
121
+ "caption": "Figure 8: Fibonacci representation of positions of suffix minima.",
122
+ "url": "http://arxiv.org/html/2312.11706v3/x7.png"
123
+ },
124
+ "9": {
125
+ "figure_path": "2312.11706v3_figure_9.png",
126
+ "caption": "Figure 9: Synchronized Fibonacci DFAO for A003623.",
127
+ "url": "http://arxiv.org/html/2312.11706v3/x8.png"
128
+ },
129
+ "10": {
130
+ "figure_path": "2312.11706v3_figure_10.png",
131
+ "caption": "Figure 10: Synchronized Fibonacci DFAO for A368200.",
132
+ "url": "http://arxiv.org/html/2312.11706v3/x9.png"
133
+ },
134
+ "11": {
135
+ "figure_path": "2312.11706v3_figure_11.png",
136
+ "caption": "Figure 11: Synchronized Fibonacci DFAO for s\u2062(n)\ud835\udc60\ud835\udc5bs(n)italic_s ( italic_n ).",
137
+ "url": "http://arxiv.org/html/2312.11706v3/x10.png"
138
+ },
139
+ "12": {
140
+ "figure_path": "2312.11706v3_figure_12.png",
141
+ "caption": "Figure 12: Synchronized Fibonacci automaton for a\u2032\u2062(n)superscript\ud835\udc4e\u2032\ud835\udc5ba^{\\prime}(n)italic_a start_POSTSUPERSCRIPT \u2032 end_POSTSUPERSCRIPT ( italic_n ).",
142
+ "url": "http://arxiv.org/html/2312.11706v3/x11.png"
143
+ },
144
+ "13": {
145
+ "figure_path": "2312.11706v3_figure_13.png",
146
+ "caption": "Figure 13: Solutions to a\u2062(n)=n\ud835\udc4e\ud835\udc5b\ud835\udc5ba(n)=nitalic_a ( italic_n ) = italic_n.",
147
+ "url": "http://arxiv.org/html/2312.11706v3/x12.png"
148
+ }
149
+ },
150
+ "validation": true,
151
+ "references": [
152
+ {
153
+ "1": {
154
+ "title": "Fibonacci words\u2014a survey.",
155
+ "author": "J. Berstel.",
156
+ "venue": "In G. Rozenberg and A. Salomaa, editors, The Book of L, pp.\n13\u201327. Springer-Verlag, 1986.",
157
+ "url": null
158
+ }
159
+ },
160
+ {
161
+ "2": {
162
+ "title": "Noncommutative Rational Series With Applications, Vol. 137 of\nEncyclopedia of Mathematics and Its Applications.",
163
+ "author": "J. Berstel and C. Reutenauer.",
164
+ "venue": "Cambridge University Press, 2011.",
165
+ "url": null
166
+ }
167
+ },
168
+ {
169
+ "3": {
170
+ "title": "Fibonacci representations.",
171
+ "author": "L. Carlitz, R. Scoville, and V. E. Hoggatt, Jr.",
172
+ "venue": "Fibonacci Quart. 10 (1972), 1\u201328.",
173
+ "url": null
174
+ }
175
+ },
176
+ {
177
+ "4": {
178
+ "title": "The minimal automaton recognizing in a linear numeration\nsystem.",
179
+ "author": "\u00c9. Charlier, N. Rampersad, M. Rigo, and L. Waxweiler.",
180
+ "venue": "INTEGERS 11B (2011), Paper #A4.",
181
+ "url": null
182
+ }
183
+ },
184
+ {
185
+ "5": {
186
+ "title": "Problem 5407.",
187
+ "author": "S. W. Golomb.",
188
+ "venue": "Amer. Math. Monthly 73 (1966), 674.",
189
+ "url": null
190
+ }
191
+ },
192
+ {
193
+ "6": {
194
+ "title": "Introduction to Automata Theory, Languages, and Computation.",
195
+ "author": "J. E. Hopcroft and J. D. Ullman.",
196
+ "venue": "Addison-Wesley, 1979.",
197
+ "url": null
198
+ }
199
+ },
200
+ {
201
+ "7": {
202
+ "title": "Voorstelling van natuurlijke getallen door een som van getallen van\nFibonacci.",
203
+ "author": "C. G. Lekkerkerker.",
204
+ "venue": "Simon Stevin 29 (1952), 190\u2013195.",
205
+ "url": null
206
+ }
207
+ },
208
+ {
209
+ "8": {
210
+ "title": "Automatic theorem proving in Walnut.",
211
+ "author": "H. Mousavi.",
212
+ "venue": "Arxiv preprint arXiv:1603.06017 [cs.FL], available at\nhttp://arxiv.org/abs/1603.06017, 2016.",
213
+ "url": null
214
+ }
215
+ },
216
+ {
217
+ "9": {
218
+ "title": "Synchronized sequences.",
219
+ "author": "J. Shallit.",
220
+ "venue": "In T. Lecroq and S. Puzynina, editors, WORDS 2021, Vol. 12847\nof Lecture Notes in Computer Science, pp. 1\u201319. Springer-Verlag,\n2021.",
221
+ "url": null
222
+ }
223
+ },
224
+ {
225
+ "10": {
226
+ "title": "The Logical Approach To Automatic Sequences: Exploring\nCombinatorics on Words with Walnut, Vol. 482 of London Math. Soc.\nLecture Note Series.",
227
+ "author": "J. Shallit.",
228
+ "venue": "Cambridge University Press, 2022.",
229
+ "url": null
230
+ }
231
+ },
232
+ {
233
+ "11": {
234
+ "title": "The on-line encyclopedia of integer sequences, 2023.",
235
+ "author": "N. J. A. Sloane et al.",
236
+ "venue": "Available at https://oeis.org.",
237
+ "url": null
238
+ }
239
+ },
240
+ {
241
+ "12": {
242
+ "title": "Repr\u00e9sentation des nombres naturels par une somme de nombres de\nFibonacci ou de nombres de Lucas.",
243
+ "author": "E. Zeckendorf.",
244
+ "venue": "Bull. Soc. Roy. Li\u00e8ge 41 (1972), 179\u2013182.",
245
+ "url": null
246
+ }
247
+ }
248
+ ],
249
+ "url": "http://arxiv.org/html/2312.11706v3"
250
+ }
20240101/2312.13108v2.json ADDED
@@ -0,0 +1,613 @@
1
+ {
2
+ "title": "AssistGUI: Task-Oriented Desktop Graphical User Interface Automation",
3
+ "abstract": "Graphical User Interface (GUI) automation holds significant promise for assisting users with complex tasks, thereby boosting human productivity. Existing works leveraging Large Language Model (LLM) or LLM-based AI agents have shown capabilities in automating tasks on Android and Web platforms. However, these tasks are primarily aimed at simple device usage and entertainment operations. This paper presents a novel benchmark, AssistGUI, to evaluate whether models are capable of manipulating the mouse and keyboard on the Windows platform in response to user-requested tasks. We carefully collected a set of 100 tasks from nine widely-used software applications, such as, After Effects and MS Word, each accompanied by the necessary project files for better evaluation. Moreover, we propose an advanced Actor-Critic Embodied Agent framework, which incorporates a sophisticated GUI parser driven by an LLM-agent and an enhanced reasoning mechanism adept at handling lengthy procedural tasks. Our experimental results reveal that our GUI Parser and Reasoning mechanism outshine existing methods in performance. Nevertheless, the potential remains substantial, with the best model attaining only a 46% success rate on our benchmark. We conclude with a thorough analysis of the current methods\u2019 limitations, setting the stage for future breakthroughs in this domain.",
4
+ "sections": [
5
+ {
6
+ "section_id": "1",
7
+ "parent_section_id": null,
8
+ "section_name": "Introduction",
9
+ "text": "Novices often face a steep learning curve when acquainting themselves with complex desktop applications. For instance, software like After Effects and Premiere Pro offer a suite of advanced functions for video editing, yet its richness in features sets a high entry barrier for new users. An AI Assistant with the capacity to comprehend GUI interfaces, grasp software usage methodologies, and manipulate applications would significantly expedite the learning and operating processes. As such an Assistant evolves, it will liberate humans from the tedious complexities that currently impede their creativity and productivity.\nEarly software automation methods, exemplified by voice assistants such as Siri or Alexa, rely on predefined intents and the extraction of parameters from user queries to execute functions, lacking the flexibility required for complex operations. With the advent of generative models, e.g., GPT [21 ###reference_21###], there has been a shift towards using Large Language Models (LLMs) [38 ###reference_38###, 39 ###reference_39###, 37 ###reference_37###] or LLM agents [51 ###reference_51###, 49 ###reference_49###] to formulate interactive tasks as a text-to-text generation.\nSeveral benchmarks [19 ###reference_19###] are proposed to evaluate their performances on using an Ubuntu bash terminal, using a database, or engaging in card games, with some recent works [22 ###reference_22###, 21 ###reference_21###] demonstrating impressive results.\nMoreover, some benchmarks [26 ###reference_26###, 42 ###reference_42###, 50 ###reference_50###, 33 ###reference_33###] are proposed to evaluate Web navigation and Smartphone manipulation. Some work has proposed methods based on HTML [42 ###reference_42###, 26 ###reference_26###] and pure vision [31 ###reference_31###, 47 ###reference_47###]. [47 ###reference_47###] utilized GPT-4V-SoM [48 ###reference_48###] for Smartphone GUI Navigation, which has achieved promising results. While these studies are indeed exciting, these tasks are primarily centered around entertainment scenarios. Consequently, an agent\u2019s proficiency in these tasks may not necessarily lead to a substantial increase in human productivity.\nTherefore, this paper aims to evaluate the model on task-oriented Desktop Graphical User Interface Automation, aimed at assessing model performance in utilizing productivity software. This task poses unique challenges compared to previous Web and Android Automation:\nDense GUI Understanding: This involves interpreting various forms of information, not only salient texts on the screen but also various visual elements like icons and footage in the office or design software.\nComplex Operations: Desktop operations demand more sophisticated actions than those on the Web or Smartphone, extending beyond basic tapping, typing, etc. to include operations like dragging files or drawing masks on footage.\nLong Procedure: Executing a task in productivity software can involve a sequence of complex steps. For example, creating a single effect in AE will include layer creation, media import, effect adding, animation creation, etc.\nIn order to better research this important but still largely unexplored domain, we introduce AssistGUI, a benchmark designed for Desktop GUI Automation. As illustrated in Figure 1 ###reference_###, the model receives an instructional video demonstrating a specific function of an application, along with a user query pertinent to the video\u2019s content. 
The model\u2019s objective is to interact with the software to fulfill the task specified in the query. The inclusion of instructional videos is crucial, particularly for tools like After Effects, which have a vast array of user-developed customized features. This design aims to make the model adaptable and efficient at acquiring new usage techniques.\nCorrespondingly, we constructed a benchmark that spans 5 major categories of desktop tasks: office work, design, widget usage, system setting, and file manipulation, covering 9 popular applications, such as Premiere Pro, After Effects, PowerPoint, etc. In total, 100 specific tasks are provided, each accompanied by a textual query, an instructional video, and carefully created project files. In addition to the data, we have developed a system that enables a local Windows environment to be presented as an interactive platform to a remote server, facilitating model development and testing.\nIn addition, we introduce a robust baseline, an embodied agent framework named Actor-Critic Embodied Agent (ACE), as depicted in Figure 4 ###reference_###. Specifically, drawing from the concept of LLM-based Agents [49 ###reference_49###, 32 ###reference_32###, 7 ###reference_7###], we develop an advanced GUI parser that can identify a variety of UI elements. Moreover, we propose a novel reasoning approach that allows for the hierarchical decomposition of tasks and dynamically adjusts future steps by evaluating the results of each step, sharing the spirit of the Actor-Critic algorithm [13 ###reference_13###]. Our experiments on the AssistGUI benchmark revealed that while the proposed model demonstrates promising potential, it also underscores the task\u2019s inherent complexity. Subsequent ablation analysis of different components within our agent framework revealed limitations in current LLMs, LMMs, and LLM-based agents when it comes to intricate GUI automation tasks. These insights lead us to suggest future directions for improvement in GUI understanding and action generation for desktop GUI applications.\nIn summary, our work makes the following contributions:\nWe introduce, to the best of our knowledge, the first task specifically designed for desktop software automation.\nWe have created a comprehensive benchmark featuring a carefully selected collection of samples and developed environments that aid in evaluation.\nWe present a strong baseline equipped with advanced GUI perception capability and a new planning mechanism.\nExtensive experimentation assesses our approach\u2019s effectiveness and highlights the challenges in desktop GUI automation for existing models."
10
+ },
11
+ {
12
+ "section_id": "2",
13
+ "parent_section_id": null,
14
+ "section_name": "Related Work",
15
+ "text": "UI Task Automation Benchmark.\nUI automation tasks mainly focus on mobile or web applications with both environment development and benchmark construction. The mobile scenarios are widely studied with open-source environments built on top of the Android ecosystem. The environments [40 ###reference_40###, 29 ###reference_29###] provide an interactive way for reinforcement learning for relatively simple tasks.\nThe benchmarks [16 ###reference_16###, 5 ###reference_5###, 26 ###reference_26###, 42 ###reference_42###] further extend to more diverse and complex low-level or high-level tasks.\nAdditionally, there are several simulated web environments developed for agents to learn in an interactive way[33 ###reference_33###, 25 ###reference_25###, 50 ###reference_50###, 6 ###reference_6###, 54 ###reference_54###].\nRegarding further computer tasks, NL2Bash[17 ###reference_17###] and agentbench[19 ###reference_19###] provide interaction with the terminal systems taking language as inputs and outputs. Different from them, ours is more challenging to handle graphical interaction within a real-world Desktop environment for complex UI and diverse tasks.\nLLM-as-Agent.\nRecent studies present promising research directions prompting LLM for multi-step reasoning and invoking application-specific APIs, external tools, or domain-specific models.\nSome works [41 ###reference_41###, 51 ###reference_51###, 28 ###reference_28###, 51 ###reference_51###, 34 ###reference_34###, 28 ###reference_28###, 24 ###reference_24###], such as CoT, and ReAct, enhance the model\u2019s capability for better conversation by logical reasoning. There is also a growing body of work [49 ###reference_49###, 36 ###reference_36###, 45 ###reference_45###, 32 ###reference_32###, 20 ###reference_20###, 7 ###reference_7###] focusing on using LLMs in conjunction with visual tools to perform multimodal tasks, such as visual question answering and video summarization. Some research [45 ###reference_45###] even proposes LLM-based agents for image editing. We introduce a specialized LMM-based agent tailored for Desktop GUI Automation, aiming to provide a powerful baseline for this task.\nEmbodied AI for UI Task Automation.\nThe significant challenges of GUI task automation are the understanding of the complex graphical UI observation and the planning to achieve various tasks, leading to end-to-end supervised approaches or LLM-based zero-shot two-stage solutions. Previous end-to-end methods adopt reinforcement learning[8 ###reference_8###] or imitation learning[10 ###reference_10###]. [35 ###reference_35###, 52 ###reference_52###, 31 ###reference_31###, 4 ###reference_4###, 53 ###reference_53###] rely on vision-language-action pretraining to learn to directly map visual observation to actions. However, these methods usually require a significant number of human expert demonstrations, which are still hard to generalize to the general applications. With the advent of LLM, there are some LLM-based two-stage methods. 
The first stage is to semantically understand the elements of the observed UI by either off-the-shelf models like OCR or learnable vision-language models [14 ###reference_14###, 3 ###reference_3###, 9 ###reference_9###, 1 ###reference_1###].\nFor example, [43 ###reference_43###, 42 ###reference_42###, 47 ###reference_47###] propose to convert GUI into HTML representation or natural language sentence.\nConsequently, the second stage is to generate executable steps given the UI elements [50 ###reference_50###, 54 ###reference_54###, 11 ###reference_11###, 15 ###reference_15###] usually with LLM. However, single OCR and vision-language models are limited to simple GUIs and fail to capture the full complexity of Desktop GUIs. They also struggle with long processes due to their single-step generation approach. To address these limitations, we\u2019ve developed an LLM-based agent equipped with diverse tools for parsing various UI elements and a new hierarchical planning and critic mechanism for handling extended procedures."
16
+ },
17
+ {
18
+ "section_id": "3",
19
+ "parent_section_id": null,
20
+ "section_name": "AssistGUI Benchmark",
21
+ "text": "AssistGUI benchmark provides real-world interactive environment, dataset across broad tasks and goal-oriented evaluation."
22
+ },
23
+ {
24
+ "section_id": "3.1",
25
+ "parent_section_id": "3",
26
+ "section_name": "Task Formulation",
27
+ "text": "Desktop task automation in AssistGUI can be formulated as follows: given a natural language query that briefly describes a specific task, along with an instructional video as a supplement that more detailed illustrates how to complete it, and the relevant application, the output is a sequence of UI actions to fulfill the user\u2019s query.\nTask description. To describe the task, a textual request is provided by the user, which describes the functionality of an application to be accomplished, e.g., Center align the text \u201dAssistGUI\u201d in my opened After Effect project. For some functions of productivity tools, there might be multiple user-developed implementations. We aim for the model to generate actions based on the given references. Thus, an instructional video, denoted as , is also provided.\nState observation. The state of the environment is composed of two types of information. The first type stems from the operating system\u2019s textual metadata about the software being used. In contrast to web pages, where HTML offers comprehensive information, much of this metadata in desktop applications is internal and thus not readily accessible. As a result, the metadata mainly includes the layout of panels and pop-up windows. The second type of information consists of screen captures, which offer a more holistic view by providing visual context.\nIn Figure 2 ###reference_###, we present an example of the metadata and screenshot. It\u2019s worth mentioning that for software like Premier Pro, it is challenging to obtain meta-data that encompasses all information of the software. The main information obtainable is about large panels, while specific texts and buttons are almost impossible to extract from the meta-data. Therefore, the model must rely on visual perception capabilities to process screenshots.\n###figure_1### Action space. Our action space consists of all the raw mouse and keyboard actions, including left-click, right-click, double-click, drag, keystrokes, and combinations of keys for shortcuts, among others. Mouse-related operations also include the target position at the pixel space of the observed screenshot. To construct a universal and complete representation of actions, we exactly followed a widely utilized Python library for controlling the mouse and keyboard, PyAutoGUI. One action is denoted by the syntax action_type(arguments). 
Here are some examples of actions that are supported in AssistGUI:\nMouse Movement:\nMove the mouse cursor to a specific position on the screen.\nExample: moveTo(100, 150)\nMouse Clicks:\nAutomate mouse clicks at a specified location.\nExample: click(200, 220)\nTyping and Sending Keystrokes:\nSimulate typing text or pressing keys.\nExample: write(\u2019Hello, world!\u2019)\nKeyboard Hotkey Combinations:\nPress and release keyboard shortcuts or hotkeys.\nExample: hotkey(\u2019ctrl\u2019, \u2019c\u2019)\nScrolling the Mouse:\nAutomate mouse scrolling up or down.\nExample: scroll(-200) for scrolling down.\nDrag and Drop:\nAutomate drag and drop actions.\nExample: dragTo(100, 200, duration=2)\nMouse Down and Mouse Up:\nHold down and release the mouse button.\nExamples: mouseDown(); mouseUp()\nPress and Release Keys:\nPress and release individual keyboard keys.\nExamples: press(\u2019enter\u2019)\nKey Down and Key Up:\nHold down and release a keyboard key.\nExamples: keyDown(\u2019shift\u2019)\nEnvironment Implementation.\nRecognizing that productivity tools usually only support Windows or Mac systems, while AI models are often deployed on Linux, we\u2019ve created a Python library to expose a local Windows environment as an interactive platform to a remote server. This is done using PyWinAuto API to collect metadata and screenshots from Windows. A communication system sends data to the server, and let server then sends predicted actions back to the local client for execution on the productivity tools. This setup allows remote control of the software by the server-based model through specific action commands."
28
+ },
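The action space above maps one-to-one onto PyAutoGUI calls. As a minimal sketch (not part of the dataset or the paper's released code), here is how a predicted action string could be executed on the Windows client; the `execute_action` helper and its whitelist are our own assumptions:

```python
import pyautogui

# Whitelist mirroring the AssistGUI action space listed above.
ALLOWED = {"moveTo", "click", "write", "hotkey", "scroll", "dragTo",
           "mouseDown", "mouseUp", "press", "keyDown", "keyUp"}

def execute_action(action: str) -> None:
    """Execute one action_type(arguments) command via PyAutoGUI."""
    name = action.split("(", 1)[0].strip()
    if name not in ALLOWED:
        raise ValueError(f"action {name!r} is outside the action space")
    # Evaluation is confined to the PyAutoGUI functions above; a production
    # system would parse the arguments explicitly instead of using eval.
    eval(action, {"__builtins__": {}}, {n: getattr(pyautogui, n) for n in ALLOWED})

# Example usage (moves the real mouse / keyboard):
# execute_action("click(200, 220)")
# execute_action("hotkey('ctrl', 'c')")
```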
29
+ {
30
+ "section_id": "3.2",
31
+ "parent_section_id": "3",
32
+ "section_name": "Data Collection",
33
+ "text": "Our benchmark is designed to include a broad spectrum of desktop tasks, systematically segmented into five major categories that are indicative of routine computer-based work. These categories include design, Office work (Office), system settings (Sys. Set.), widget usage (Widget), and file management (File Mani.).\nThe collection of task data within AssistGUI is achieved by the following steps:\n###figure_2### ###figure_3### Task Collection.\nDue to the complexity of GUI operations, at this early stage, we are primarily focusing on relatively basic tasks. We carefully select some popular instructional videos and those duration do not exceed five minutes from official software websites and video-sharing platforms.\nWe also manually crafted one query for each instructional video. These queries illustrate the tasks that the model is expected to complete. It is important to note that the task indicated by the query may not always align exactly with the operations shown in the video; it could include some user-customized requirements. Therefore, the model needs to modify the steps based on the instructional video, e.g., type in a different text, which aligns more closely with real-world scenarios.\nProject File Preparation.\nTo make the results in the environment to be reproducible, we provide project files for all editing-related tasks. This ensures that all models initiate their tasks from an identical starting state. The project files included in our benchmark stem from two primary sources: A portion of the project files is directly sourced from the official tutorials available on the software\u2019s website. These files are typically crafted by the software providers to accompany their instructional materials. The remaining project files are meticulously prepared by annotators. We have also documented the version of each project file. The tested models are expected to modify this file using applications of the same version for fair comparison.\nQuality Checking. To guarantee the correctness of our benchmark, each task has undergone a quality check by letting our annotators complete the tasks within the software to verify if they yield accurate outputs. The quality check focuses on two main aspects: Firstly, it verifies the correctness of the content in the instructional video, ensuring that the demonstrated steps are accurate and lead to the anticipated outcome. Secondly, it confirms that the project files are correct and fully functional.\nAssistGUI finally collects 100 specific tasks from 9 commonly used applications like Premiere Pro, After Effects, and PowerPoint. We present the distribution of collected over software and show one example query for each software AssistGUI task in Figure 3 ###reference_###."
34
+ },
35
+ {
36
+ "section_id": "3.3",
37
+ "parent_section_id": "3",
38
+ "section_name": "Evaluation",
39
+ "text": "AssistGUI adopts an outcome-oriented evaluation approach to determine the success rate of models. Since AssistGUI yields several types of outputs: video output (Design), document output (Office), the final state of the software (Widget), system settings (Sys. Set.), and folder structure (File Mani.), it is hard to construct one general metric to fit all tasks, thus, we design specific metrics to calculate the success rate tailored to each type of task.\nFor the Design and Office tasks, we compare the similarity of the model\u2019s results with the ground truth at a pixel granularity. If it exceeds a certain threshold, it is considered successful and scores 1 point; otherwise, it scores 0. The threshold varies slightly for different tasks, depending on whether the task inherently includes a certain level of randomness. We did not adopt CLIP-Sim [44 ###reference_44###], commonly used in video generation, because video editing often involves animation changes rather than semantic changes, making it difficult for CLIP to discern subtle differences. For Widget tasks, we compare the final screenshot with the ground truth, if the same in the display region (obtained by metadata), then consider it a success. For the Sys. Set. and File Mani., we write scripts to automatically determine whether the system settings and folder structure meet the expected criteria. We also standardized the specific version numbers and languages for each software."
40
+ },
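A minimal sketch of the pixel-granularity success check described above; the 0.95 threshold and the per-channel tolerance of 8 are illustrative values of our own, not the benchmark's actual scoring script:

```python
import numpy as np
from PIL import Image

def pixel_success(result_path: str, gt_path: str, threshold: float = 0.95) -> int:
    """Return 1 if the rendered result matches the ground truth closely enough."""
    result = np.asarray(Image.open(result_path).convert("RGB"), dtype=np.float32)
    gt = np.asarray(Image.open(gt_path).convert("RGB"), dtype=np.float32)
    if result.shape != gt.shape:
        return 0  # a mismatched render size counts as failure
    # Fraction of pixels whose three channels all lie within a small tolerance.
    similarity = float(np.mean(np.all(np.abs(result - gt) < 8, axis=-1)))
    return 1 if similarity >= threshold else 0
```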
41
+ {
42
+ "section_id": "4",
43
+ "parent_section_id": null,
44
+ "section_name": "Method",
45
+ "text": "Overview.\nWe introduce an Actor-Critic Embodied agent, \\textcinzelACE, based on LLMs that possesses the capabilities to perceive the software environment, plan actions, and execute them, as shown in Figure 4 ###reference_###. Specifically, the agent works in two stages: In the first stage, given a query and a video, the Planner creates a high-level plan outlining the key milestones and subtasks of the task. The second stage involves the collaborative work of three modules to sequentially accomplish these subtasks. The GUI Parser observes the GUI environment, the Critic module assesses the quality of the previous action, and the Actor then adjusts the plan based on this assessment and generates code to control the desktop."
46
+ },
47
+ {
48
+ "section_id": "4.1",
49
+ "parent_section_id": "4",
50
+ "section_name": "Planner",
51
+ "text": "For a given query and instructional video , the Planner aims to output a hierarchical task tree , where is a text string describe the -th milestone of the task. And each corresponds to a list of subtasks , is also a text string, indicating the -th subtask for -th milestone. This is achieved in the following steps. First, the LLM is prompted to extract hierarchical steps based on the subtitles of the video. Subsequently, the LLM is requested to modify the extracted steps in accordance with the user\u2019s query, as shown in Fig 5 ###reference_###. Finally, we design a specific traversal algorithm that will only traverse the leaf nodes in order and send the corresponding subtask to the following modules.\n###figure_4###"
52
+ },
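A minimal sketch of the hierarchical task tree T and the leaf-only traversal; the class and field names are our own assumptions, and the example milestones are invented:

```python
from dataclasses import dataclass, field

@dataclass
class Milestone:
    description: str                                   # m_i
    subtasks: list[str] = field(default_factory=list)  # S_i = [s_{i,1}, ...]

def traverse_leaves(tree: list[Milestone]):
    """Yield (milestone, subtask) pairs in order; only leaf nodes are executed."""
    for milestone in tree:
        for subtask in milestone.subtasks or [milestone.description]:
            yield milestone.description, subtask

tree = [
    Milestone("Create a text layer", ["Open the Text tool", "Type 'AssistGUI'"]),
    Milestone("Center align the text", ["Open the Paragraph panel", "Click center align"]),
]
for m, s in traverse_leaves(tree):
    print(f"{m} -> {s}")
```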
53
+ {
54
+ "section_id": "4.2",
55
+ "parent_section_id": "4",
56
+ "section_name": "GUI Parser",
57
+ "text": "The goal of the GUI Parser is to convert an observed screenshot into a structured textual representation like the Document Object Model (DOM). Given that desktop software typically comprises a wide variety of UI elements, it is hard for one model to extract all information, thus, we adopt approaches similar to MMReAct [49 ###reference_49###] and VisualClues [46 ###reference_46###], invoking multiple tools to extract information, as shown in Figure 6 ###reference_###. Specifically, we utilize metadata from the system for panel segmentation, employ the OCR model to extract text from images and develop a pattern-matching method to identify icons. Additionally, some vision models, including a detector, and a segmentation model, are used to localize the objects in footage, and we have designed simple algorithms to extract specific elements such as scrolls and reference lines, etc. The GUI information is represented panel by panel, including the meanings of UI elements and their spatial position coordinates.\n###figure_5###"
58
+ },
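To make the panel-by-panel representation concrete, here is an invented example of what the parser's output might look like and how it could be flattened into LLM-readable text; all element names and coordinates are illustrative, not taken from the paper:

```python
gui_state = {
    "Timeline Panel": {
        "bbox": [0, 720, 1280, 1040],                                      # system metadata
        "texts": [{"label": "0:00:00:00", "center": [60, 745]}],          # OCR
        "icons": [{"label": "keyframe stopwatch", "center": [305, 782]}],  # pattern matching
    },
    "Composition Panel": {
        "bbox": [300, 60, 1280, 700],
        "objects": [{"label": "footage: person", "center": [700, 380]}],   # detector + segmentation
    },
}

def to_prompt(state: dict) -> str:
    """Flatten the parsed GUI into a panel-by-panel text block for the LLM."""
    lines = []
    for panel, info in state.items():
        lines.append(f"[{panel}] bbox={info['bbox']}")
        for kind in ("texts", "icons", "objects"):
            for element in info.get(kind, []):
                lines.append(f"  {kind[:-1]}: {element['label']} at {element['center']}")
    return "\n".join(lines)
```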
59
+ {
60
+ "section_id": "4.3",
61
+ "parent_section_id": "4",
62
+ "section_name": "Critic",
63
+ "text": "The Critic utilizes an LLM to evaluate the success of the executed action by analyzing the screenshots taken before and after the execution of the action , where is a function for identifying differences.\nIt outputs four kinds of information: whether the previous action was executed correctly (a Boolean Success Flag), and if not, it provides an explanation; whether the current subtask is completed (Boolean Finish Flag), and if not, it offers an explanation, as shown in the Top of Figure 7 ###reference_###. The two flags and explanations, denoted as will feed to the Actor."
64
+ },
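A sketch of the Critic's structured verdict, assuming an LLM wrapper that returns the two Boolean flags plus explanations; the dataclass and prompt wording are our assumptions, not the authors' code:

```python
from dataclasses import dataclass

@dataclass
class CriticFeedback:          # f_t
    success: bool              # was the previous action executed correctly?
    success_reason: str
    finished: bool             # is the current subtask complete?
    finished_reason: str

def run_critic(diff_summary: str, subtask: str, llm) -> CriticFeedback:
    prompt = (
        f"Subtask: {subtask}\n"
        f"Observed GUI change after the last action: {diff_summary}\n"
        "Report: success (bool) with a reason, finished (bool) with a reason."
    )
    # `llm` is assumed to be a wrapper returning the four structured fields.
    success, success_reason, finished, finished_reason = llm(prompt)
    return CriticFeedback(success, success_reason, finished, finished_reason)
```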
65
+ {
66
+ "section_id": "4.4",
67
+ "parent_section_id": "4",
68
+ "section_name": "Actor",
69
+ "text": "The Actor is built upon an LLM, aiming to generate actions within the action space of the AssistGUI benchmark. Specifically, given the Finish Flag provided by the Critic, the mode first plans what should be done next, as shown in Figure 7 ###reference_###. If the Finished Flag is False, the subtask at time will still be , otherwise, , where indicates moving to next subtask by using our designed traverse method illustrated in Sec. 4.1 ###reference_###.\n###figure_6### Then, the Actor generates an output action by considering various factors: the current state of the observed software , the previous action , the current subtask , and its corresponding milestone (which indicates the milestone associated with the current subtask). Additionally, the Actor takes into account the Critic\u2019s feedback on the performance of the previous action.\nFormally,\nwhere denotes the action space, comprised of Python code. It\u2019s important to note that the output action can either be a single action or a sequence of actions. This is implemented by prompting an LLM to process all the aforementioned information as input and subsequently generate the code for the next step, as illustrated at the bottom of Figure 7 ###reference_###."
70
+ },
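Putting the pieces together, a single Actor step might look like the following sketch; `next_subtask` stands in for the traversal of Sec. 4.1, and every name here is our own assumption rather than the authors' implementation:

```python
def actor_step(gui_text: str, prev_action: str, subtask: str,
               milestone: str, feedback, llm, next_subtask):
    """One Actor step: advance the subtask if needed, then ask the LLM for code."""
    if feedback.finished:
        subtask = next_subtask(subtask)          # move to the next leaf node
    prompt = (
        f"Milestone: {milestone}\nSubtask: {subtask}\n"
        f"Parsed GUI:\n{gui_text}\n"
        f"Previous action: {prev_action}\n"
        f"Critic feedback: {feedback}\n"
        "Output only the next PyAutoGUI code (one or more lines)."
    )
    return subtask, llm(prompt)                  # e.g. "click(305, 782)"
```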
71
+ {
72
+ "section_id": "5",
73
+ "parent_section_id": null,
74
+ "section_name": "Experiments",
75
+ "text": "Implementation Details. In the following experiments, we use gpt-4-0613 [22 ###reference_22###] provided by OpenAI as the default LLM. In the GUI parser, we use Google OCR for extracting text, Yolo-v8 [27 ###reference_27###] to coarsely localize objects, and LangSAM [18 ###reference_18###, 12 ###reference_12###] to obtain the precise object contours. The difference module is implemented by using DeepDiff [30 ###reference_30###]."
76
+ },
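As a usage note, DeepDiff's top-level API makes the difference module straightforward: comparing the parsed GUI state before and after an action yields exactly the changed fields. A small illustration with invented example data, not the authors' code:

```python
from deepdiff import DeepDiff

before = {"Effects Panel": {"texts": ["Opacity: 100%"]}}
after = {"Effects Panel": {"texts": ["Opacity: 50%"]}}

print(DeepDiff(before, after))
# {'values_changed': {"root['Effects Panel']['texts'][0]":
#     {'new_value': 'Opacity: 50%', 'old_value': 'Opacity: 100%'}}}
```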
77
+ {
78
+ "section_id": "5.1",
79
+ "parent_section_id": "5",
80
+ "section_name": "Quantitative Results",
81
+ "text": "As AssistGUI is a novel Task that requires planning with an instructional video and processing Desktop environments (previous works mostly focus on Web or Android), there are no ready-made state-of-the-art models available for evaluation. Thus, we construct various variants based on our approach, to contrast some core concepts from previous works, thereby showing the effectiveness of our method and the challenges of AssistGUI.\nComparision with SOTA Planning Method.\nIn Table 1 ###reference_###, we compare the planning approaches that have recently demonstrated exceptional performance in other environments. Specifically, we retained the GUI Parser and removed both the Planning and Actor-Critic modules. The subtitle of the instructional video is simply put into the prompt. Then, the model plans the steps in the following methods:\nCoT [41 ###reference_41###]: The CoT generates all the steps at once, which cannot obtain information from the environment.\nReAct [51 ###reference_51###]: It iteratively interacts with the environment through a cycle of thought, action, and observation.\nThe experimental results demonstrate that our model significantly surpasses previous planning methods. The result of CoT reveals that Desktop GUI Automation tasks often entail screen chanthus, thus, it is unable to cope effectively. Regarding ReAct, since it does not convert lengthy videos into discrete steps, it can operate on the finest granularity of action plans. Additionally, ReAct\u2019s absence of a dedicated module for evaluating and adjusting the planning path becomes a shortcoming, especially for complex tasks in office and design environments. The overall results indicate that AssistGUI poses significant challenges, especially for complex productivity tools. This difficulty arises from the intricacies involved in understanding and navigating sophisticated software interfaces, which require nuanced interpretation of visual elements and context-aware decision-making.\nAblation on Planner, Actor and Critic.\nWe also conducted ablation studies on our Critic and Planner, as shown in Table 2 ###reference_###, where the w/o Planner method directly feeds the whole subtitle into Actor, instead of the parsed subtask. For simple tasks, the impact of these components was not particularly significant. However, their influence becomes much more apparent in complex Office and Design tasks. On another note, while the Critic appears to be a very important module, its performance enhancement was not as large as we initially expected. This is primarily because the Critic\u2019s judgments in complex tasks are not always accurate. It requires a high level of action-vision alignment, which still remains a relatively underexplored area, but we believe it is a direction worth exploring. Additionally, we constructed a variant that does not take into account the subtitles of videos. Instead, it utilizes GPT-4 to plan milestones and subtasks, denoted as w/o Ins. Video. This approach showed almost no significant performance loss in simple tasks because there weren\u2019t many alternative solutions. However, for the use of complex software like After Effects and Premiere Pro, instructional Videos proved to be very helpful.\nAblation on GUI Parser.\nCorrectly parsing UI Elements is essential for generating actions. Here, we eliminate different UI elements in parsed GUI data to observe their impact. 
Table 3 ###reference_### shows that removing OCR had the most significant impact since text often contains crucial information in a GUI. Icons also led to notable performance loss, especially in Design and Office software, where many icons lack corresponding textual descriptions and are essential for specific functions. Interestingly, Panel Layout had minimal impact on performance, indicating GPT-4 can recognize the button without panel information, though it\u2019s still necessary for operations like clicking in a blank or margin of the panel area. The Others category, including footage content, scrolls, and similar elements, also had little effect. This is due to the model\u2019s current limitations in handling complex footage operations, even though they correctly recognize but still fail to complete the task. We also try to replace Qwen-VL-Chat [2 ###reference_2###] to replace the GUI Parser, allowing GPT-4 to plan button interactions and Qwen-VL-Chat to determine their positions. However, the results were not very satisfactory, as there may not be particular training for GUI button grounding.\nImpact of Large Language Model.\nWe also experimented with different language models, gpt-3.5-turbo, and Llama2-7B [39 ###reference_39###], in various modules, but found the results to be generally unsatisfactory, as shown in Table 4 ###reference_###. There are two main reasons for this: 1) The requirement for specific output formats. For instance, an action must be in the form of current step code and can only output code; any other content would render it non-executable. Similarly, the results from planning need to adhere to a certain format, which other language models sometimes fail to follow. 2) The issue of model hallucination. For the generation of actions, the model needs to stop at appropriate times, using updated GUI information to continue generating actions. However, non-GPT-4 models often hallucinate or invent too much information, leading to an incorrect code. For these relatively lightweight models to perform such customized functions effectively, they may require fine-tuning with specific datasets."
82
+ },
83
+ {
84
+ "section_id": "5.2",
85
+ "parent_section_id": "5",
86
+ "section_name": "Qualitative Results",
87
+ "text": "In Figure 8 ###reference_###, we showcase some visualized results. Firstly, we present a successful prediction example, demonstrating that the model can effectively plan each step for relatively long processes, accurately perceive specific elements in the GUI, and convert them into the correct action code. Additionally, we display the performance of our designed Multi-modal LLM Agent, which can accurately identify most content, including small icons such as a clock-shaped keyframe button, checkboxes, and expand buttons. In contrast, although GPT-4V [23 ###reference_23###] possesses robust OCR capabilities, it fails to output button positions, rendering it unable to execute operations. The current best method to modify GPT-4V for button grounding is GPT-4V-SoM [48 ###reference_48###], which uses semantic-SAM to segment the image first, then label it, and finally input it to GPT-4V. This approach achieves remarkable results in Web and Android Navigation tasks. However, as seen, for desktop GUI understanding, the performance of GPT-4V-SoM is almost nullified due to the limitations of Semantic-SAM\u2019s segmentation capabilities in productivity software.\nWe also highlight some common errors encountered. 1) The model struggles with complex operations on footage, which can be highly intricate. For instance, Query 1 requires using a roto brush to select an object, necessitating continuous adjustments based on the generated edges, a capability our model currently lacks. Achieving this function might require training with specific samples or a more powerful Agent framework. 2) The model has difficulty understanding blurred areas, such as the edges of documents, blank spaces in Panels, or determining which area to select when multiple files are involved. 3) The spatial relationship in dense text. The granularity of OCR output bounding boxes is uncontrollable. Selecting a specific word or character in a text segment is not straightforward with the current OCR predictions. This may require a highly versatile text grounding model to address effectively.\n###figure_7### In Figure 9 ###reference_###, we present an example of the Planner prediction. The results show that, despite the strength of GPT-4, the predictions still have some flaws, such as including redundant operations. For instance, Task 6 does not actually correspond to any specific action. This issue mainly arises from the fact that these steps are included in the instructional video, and GPT-4 cannot definitively determine whether to exclude them.\n###figure_8### ###figure_9### ###figure_10### In Figure 10 ###reference_###, we show one example of the outputs of GUI Parser. The model can detect most UI elements, but there are still some flaws. 1) There are still errors in text detection. For example, there are some issues with the detection of timestamps. The timestamp in the lower-left corner should be 0:00:00:00, but it is detected as : 00 : 00 : 00. The numbers in the Timeline in the lower-left corner are not detected. 2) Some visual elements are still difficult to recognize, such as the long bars on the timeline corresponding to each layer. Additionally, the current method is unable to understand some curves and figures, and it might be necessary to leverage the capabilities of GPT-4V in the future.\nIn Figure 11 ###reference_### we showcase prediction examples from the Actor and Critic modules. 
It is evident that the model is capable of not only producing individual step actions but also generating a continuous action sequence. Additionally, for the Critic module, current models can effectively judge the outcomes of some simple actions, such as clicking action, as demonstrated in the left and right examples. However, for more complex scenarios, such as determining whether an object has been completely cropped out, as seen in the middle case, the model still lacks the capability to perceive this accurately."
88
+ },
89
+ {
90
+ "section_id": "6",
91
+ "parent_section_id": null,
92
+ "section_name": "Comparison with Previous Benchmarks",
93
+ "text": "We discuss the differences between our approach and existing benchmarks in the following aspects, as shown in Table 5 ###reference_###:\nPlatform: Previous methods [40 ###reference_40###, 50 ###reference_50###, 42 ###reference_42###] mainly focused on Web and SmartPhone platforms, such as AndroidEnv, AutoDroid, and WebShop. AssistGUI, however, concentrates on desktop operations. This distinction primarily brings about differences in GUI complexity. The complexity on desktops is significantly higher than on other platforms, mainly reflected in the density of information, the diversity of visual elements, and the diversity of panel layouts.\nTask Focus: Exisiting methods [40 ###reference_40###, 50 ###reference_50###, 42 ###reference_42###] primarily study two types of tasks. One category is games, for instance, the majority of tasks in AndroidEnv are games, such as FlappyDroid, and Pong. The characteristic of game tasks is that the environment has a clear reward, making it easy to measure the performance of the model. Additionally, for most games, the types of operations are relatively limited. The other category includes web navigation and basic smartphone operations. These tasks have relatively simple operational patterns. For example, web navigation mainly involves buying a series of items according to requirements, with the difficulty lying in planning what to buy. The operations are relatively limited in type.\nThe distinguishing feature of AssistGUI is its focus on the use of productivity tools. The challenge of this category of tasks lies in the possibility of encountering new types of operations with different software. For instance, with After Effects, one might need to perform some drawing on the material. This presents a more formidable challenge for the model\u2019s understanding of the GUI and the generation of actions.\nDataset Scale and Annotation: Previous benchmarks [40 ###reference_40###, 50 ###reference_50###, 42 ###reference_42###] mainly involved about a hundred tasks. WebShop is somewhat unique; it primarily consists of one task, which is purchasing items, but it comes with different instructions specifying various purchasing requirements. The dataset scale of our benchmark is similar. However, a distinctive feature of our tasks is the use of professional software to modify documents or materials. Therefore, we also provide some project files to ensure that all methods start from the same initial state."
94
+ },
95
+ {
96
+ "section_id": "7",
97
+ "parent_section_id": null,
98
+ "section_name": "Conclusion",
99
+ "text": "This paper introduced AssistGUI, a novel benchmark for assessing the capability of models to manipulate the mouse and keyboard on the Windows platform in response to user requests. To this end, we collected a diverse set of 100 tasks across 9 widely-used applications, ensuring each task was supplemented with the necessary project files for a fair evaluation. We also presented our Actor-Critic Embodied Agent framework, a significant step forward in the realm of GUI automation. This framework is anchored by a GUI parser driven by an LLM-agent, coupled with an enhanced reasoning mechanism. This design is particularly adept at handling complex, lengthy procedural tasks that are commonplace in professional software environments. Our experimental results were promising, demonstrating that our approach notably outperforms existing methods in GUI automation. However, despite these advancements, our findings also highlight the considerable challenges that remain in this field."
100
+ }
101
+ ],
102
+ "appendix": [],
103
+ "tables": {
104
+ "1": {
105
+ "table_html": "<figure class=\"ltx_table\" id=\"S5.T1\">\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\"><span class=\"ltx_text\" id=\"S5.T1.4.1.1\" style=\"font-size:90%;\">Table 1</span>: </span><span class=\"ltx_text\" id=\"S5.T1.5.2\" style=\"font-size:90%;\">Success rate (%) of agents with different planning methods on <span class=\"ltx_text ltx_font_smallcaps\" id=\"S5.T1.5.2.1\">AssistGUI</span>.\nHuman* represents the average performance of three non-expert humans who have viewed the instructional video only once, like how the model does. These results are a reference to better sense the extent of the model\u2019s capabilities.</span></figcaption>\n<div class=\"ltx_inline-block ltx_align_center ltx_transformed_outer\" id=\"S5.T1.1\" style=\"width:213.6pt;height:81.3pt;vertical-align:-0.0pt;\"><span class=\"ltx_transformed_inner\" style=\"transform:translate(-11.5pt,4.4pt) scale(0.903166667472984,0.903166667472984) ;\">\n<table class=\"ltx_tabular ltx_guessed_headers ltx_align_middle\" id=\"S5.T1.1.1\">\n<thead class=\"ltx_thead\">\n<tr class=\"ltx_tr\" id=\"S5.T1.1.1.2.1\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_th_row ltx_border_r ltx_border_tt\" id=\"S5.T1.1.1.2.1.1\">Method</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"S5.T1.1.1.2.1.2\">Design</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"S5.T1.1.1.2.1.3\">Office</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"S5.T1.1.1.2.1.4\">Widget</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"S5.T1.1.1.2.1.5\">Sys. Set</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r ltx_border_tt\" id=\"S5.T1.1.1.2.1.6\">File Mani.</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"S5.T1.1.1.2.1.7\">Overall</th>\n</tr>\n</thead>\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S5.T1.1.1.3.1\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_r ltx_border_t\" id=\"S5.T1.1.1.3.1.1\">CoT</th>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T1.1.1.3.1.2\">5.9</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T1.1.1.3.1.3\">10.8</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T1.1.1.3.1.4\">20.0</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T1.1.1.3.1.5\">0.00</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S5.T1.1.1.3.1.6\">36.7</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T1.1.1.3.1.7\">12.0</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T1.1.1.4.2\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_r\" id=\"S5.T1.1.1.4.2.1\">ReAct</th>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T1.1.1.4.2.2\">14.7</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T1.1.1.4.2.3\">27.0</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T1.1.1.4.2.4\">50.0</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T1.1.1.4.2.5\">62.5</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S5.T1.1.1.4.2.6\">63.6</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T1.1.1.4.2.7\">32.0</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T1.1.1.5.3\" style=\"background-color:#DFDFDF;\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_r\" id=\"S5.T1.1.1.5.3.1\"><span class=\"ltx_text\" id=\"S5.T1.1.1.5.3.1.1\" style=\"background-color:#DFDFDF;\">Ours</span></th>\n<td 
class=\"ltx_td ltx_align_center\" id=\"S5.T1.1.1.5.3.2\"><span class=\"ltx_text\" id=\"S5.T1.1.1.5.3.2.1\" style=\"background-color:#DFDFDF;\">32.4</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T1.1.1.5.3.3\"><span class=\"ltx_text\" id=\"S5.T1.1.1.5.3.3.1\" style=\"background-color:#DFDFDF;\">40.5</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T1.1.1.5.3.4\"><span class=\"ltx_text\" id=\"S5.T1.1.1.5.3.4.1\" style=\"background-color:#DFDFDF;\">60.0</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T1.1.1.5.3.5\"><span class=\"ltx_text\" id=\"S5.T1.1.1.5.3.5.1\" style=\"background-color:#DFDFDF;\">75.0</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S5.T1.1.1.5.3.6\"><span class=\"ltx_text\" id=\"S5.T1.1.1.5.3.6.1\" style=\"background-color:#DFDFDF;\">72.7</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T1.1.1.5.3.7\"><span class=\"ltx_text\" id=\"S5.T1.1.1.5.3.7.1\" style=\"background-color:#DFDFDF;\">46.0</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T1.1.1.1\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_bb ltx_border_r ltx_border_t\" id=\"S5.T1.1.1.1.1\">Human\n</th>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_t\" id=\"S5.T1.1.1.1.2\">73.5</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_t\" id=\"S5.T1.1.1.1.3\">83.7</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_t\" id=\"S5.T1.1.1.1.4\">100.0</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_t\" id=\"S5.T1.1.1.1.5\">100.0</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_r ltx_border_t\" id=\"S5.T1.1.1.1.6\">100.0</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_t\" id=\"S5.T1.1.1.1.7\">85.0</td>\n</tr>\n</tbody>\n</table>\n</span></div>\n</figure>",
106
+ "capture": "Table 1: Success rate (%) of agents with different planning methods on AssistGUI.\nHuman* represents the average performance of three non-expert humans who have viewed the instructional video only once, like how the model does. These results are a reference to better sense the extent of the model\u2019s capabilities."
107
+ },
108
+ "2": {
109
+ "table_html": "<figure class=\"ltx_table\" id=\"S5.T2\">\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\"><span class=\"ltx_text\" id=\"S5.T2.2.1.1\" style=\"font-size:90%;\">Table 2</span>: </span><span class=\"ltx_text\" id=\"S5.T2.3.2\" style=\"font-size:90%;\">Success rate (%) of agents with ablation of reasoning module.</span></figcaption>\n<div class=\"ltx_inline-block ltx_align_center ltx_transformed_outer\" id=\"S5.T2.4\" style=\"width:243.5pt;height:81.3pt;vertical-align:-0.0pt;\"><span class=\"ltx_transformed_inner\" style=\"transform:translate(-13.1pt,4.4pt) scale(0.902917929793804,0.902917929793804) ;\">\n<table class=\"ltx_tabular ltx_guessed_headers ltx_align_middle\" id=\"S5.T2.4.1\">\n<thead class=\"ltx_thead\">\n<tr class=\"ltx_tr\" id=\"S5.T2.4.1.1.1\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_th_row ltx_border_r ltx_border_tt\" id=\"S5.T2.4.1.1.1.1\">Method</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"S5.T2.4.1.1.1.2\">Design</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"S5.T2.4.1.1.1.3\">Office</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"S5.T2.4.1.1.1.4\">Widget</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"S5.T2.4.1.1.1.5\">Sys. Set.</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r ltx_border_tt\" id=\"S5.T2.4.1.1.1.6\">File Mani.</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"S5.T2.4.1.1.1.7\">Overall</th>\n</tr>\n</thead>\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S5.T2.4.1.2.1\" style=\"background-color:#DFDFDF;\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_r ltx_border_t\" id=\"S5.T2.4.1.2.1.1\"><span class=\"ltx_text\" id=\"S5.T2.4.1.2.1.1.1\" style=\"background-color:#DFDFDF;\">Full Model</span></th>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T2.4.1.2.1.2\"><span class=\"ltx_text\" id=\"S5.T2.4.1.2.1.2.1\" style=\"background-color:#DFDFDF;\">32.4</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T2.4.1.2.1.3\"><span class=\"ltx_text\" id=\"S5.T2.4.1.2.1.3.1\" style=\"background-color:#DFDFDF;\">40.5</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T2.4.1.2.1.4\"><span class=\"ltx_text\" id=\"S5.T2.4.1.2.1.4.1\" style=\"background-color:#DFDFDF;\">60.0</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T2.4.1.2.1.5\"><span class=\"ltx_text\" id=\"S5.T2.4.1.2.1.5.1\" style=\"background-color:#DFDFDF;\">75.0</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S5.T2.4.1.2.1.6\"><span class=\"ltx_text\" id=\"S5.T2.4.1.2.1.6.1\" style=\"background-color:#DFDFDF;\">72.7</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T2.4.1.2.1.7\"><span class=\"ltx_text\" id=\"S5.T2.4.1.2.1.7.1\" style=\"background-color:#DFDFDF;\">46.0</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T2.4.1.3.2\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_r\" id=\"S5.T2.4.1.3.2.1\">w/o Planning</th>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.4.1.3.2.2\">20.6</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.4.1.3.2.3\">27.0</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.4.1.3.2.4\">50.0</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.4.1.3.2.5\">75.0</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" 
id=\"S5.T2.4.1.3.2.6\">63.6</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.4.1.3.2.7\">35.0</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T2.4.1.4.3\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_r\" id=\"S5.T2.4.1.4.3.1\">w/o Critic</th>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.4.1.4.3.2\">26.5</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.4.1.4.3.3\">32.4</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.4.1.4.3.4\">60.0</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.4.1.4.3.5\">75.0</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S5.T2.4.1.4.3.6\">72.7</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.4.1.4.3.7\">41.0</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T2.4.1.5.4\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_bb ltx_border_r\" id=\"S5.T2.4.1.5.4.1\">w/o Ins. Video</th>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S5.T2.4.1.5.4.2\">11.8</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S5.T2.4.1.5.4.3\">37.8</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S5.T2.4.1.5.4.4\">60.0</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S5.T2.4.1.5.4.5\">62.5</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_r\" id=\"S5.T2.4.1.5.4.6\">72.7</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S5.T2.4.1.5.4.7\">37.0</td>\n</tr>\n</tbody>\n</table>\n</span></div>\n</figure>",
110
+ "capture": "Table 2: Success rate (%) of agents with ablation of reasoning module."
111
+ },
112
+ "3": {
113
+ "table_html": "<figure class=\"ltx_table\" id=\"S5.T3\">\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\"><span class=\"ltx_text\" id=\"S5.T3.2.1.1\" style=\"font-size:90%;\">Table 3</span>: </span><span class=\"ltx_text\" id=\"S5.T3.3.2\" style=\"font-size:90%;\">Success rate (%) of agents with ablation of GUI Parser.</span></figcaption>\n<div class=\"ltx_inline-block ltx_align_center ltx_transformed_outer\" id=\"S5.T3.4\" style=\"width:149.1pt;height:133.9pt;vertical-align:-0.0pt;\"><span class=\"ltx_transformed_inner\" style=\"transform:translate(-5.6pt,5.0pt) scale(0.930159638512631,0.930159638512631) ;\">\n<table class=\"ltx_tabular ltx_guessed_headers ltx_align_middle\" id=\"S5.T3.4.1\">\n<thead class=\"ltx_thead\">\n<tr class=\"ltx_tr\" id=\"S5.T3.4.1.1.1\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r ltx_border_tt\" colspan=\"4\" id=\"S5.T3.4.1.1.1.1\">UI Elements</th>\n<th class=\"ltx_td ltx_th ltx_th_column ltx_border_tt\" id=\"S5.T3.4.1.1.1.2\"></th>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T3.4.1.2.2\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column\" id=\"S5.T3.4.1.2.2.1\">Panel Layout</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column\" id=\"S5.T3.4.1.2.2.2\">Icon</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column\" id=\"S5.T3.4.1.2.2.3\">OCR</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r\" id=\"S5.T3.4.1.2.2.4\">Others</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column\" id=\"S5.T3.4.1.2.2.5\">Overall</th>\n</tr>\n</thead>\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S5.T3.4.1.3.1\" style=\"background-color:#DFDFDF;\">\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T3.4.1.3.1.1\"><span class=\"ltx_text\" id=\"S5.T3.4.1.3.1.1.1\" style=\"background-color:#DFDFDF;\">\u2713</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T3.4.1.3.1.2\"><span class=\"ltx_text\" id=\"S5.T3.4.1.3.1.2.1\" style=\"background-color:#DFDFDF;\">\u2713</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T3.4.1.3.1.3\"><span class=\"ltx_text\" id=\"S5.T3.4.1.3.1.3.1\" style=\"background-color:#DFDFDF;\">\u2713</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S5.T3.4.1.3.1.4\"><span class=\"ltx_text\" id=\"S5.T3.4.1.3.1.4.1\" style=\"background-color:#DFDFDF;\">\u2713</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T3.4.1.3.1.5\"><span class=\"ltx_text\" id=\"S5.T3.4.1.3.1.5.1\" style=\"background-color:#DFDFDF;\">46.0</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T3.4.1.4.2\">\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T3.4.1.4.2.1\"><span class=\"ltx_text\" id=\"S5.T3.4.1.4.2.1.1\" style=\"color:#FF0000;\">\u2717</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T3.4.1.4.2.2\">\u2713</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T3.4.1.4.2.3\">\u2713</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S5.T3.4.1.4.2.4\">\u2713</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T3.4.1.4.2.5\">44.0</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T3.4.1.5.3\">\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T3.4.1.5.3.1\">\u2713</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T3.4.1.5.3.2\"><span class=\"ltx_text\" id=\"S5.T3.4.1.5.3.2.1\" style=\"color:#FF0000;\">\u2717</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T3.4.1.5.3.3\">\u2713</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" 
id=\"S5.T3.4.1.5.3.4\">\u2713</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T3.4.1.5.3.5\">13.0</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T3.4.1.6.4\">\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T3.4.1.6.4.1\">\u2713</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T3.4.1.6.4.2\">\u2713</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T3.4.1.6.4.3\"><span class=\"ltx_text\" id=\"S5.T3.4.1.6.4.3.1\" style=\"color:#FF0000;\">\u2717</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S5.T3.4.1.6.4.4\">\u2713</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T3.4.1.6.4.5\">4.0</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T3.4.1.7.5\">\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T3.4.1.7.5.1\">\u2713</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T3.4.1.7.5.2\">\u2713</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T3.4.1.7.5.3\">\u2713</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S5.T3.4.1.7.5.4\"><span class=\"ltx_text\" id=\"S5.T3.4.1.7.5.4.1\" style=\"color:#FF0000;\">\u2717</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T3.4.1.7.5.5\">43.0</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T3.4.1.8.6\">\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_r ltx_border_t\" colspan=\"4\" id=\"S5.T3.4.1.8.6.1\">Qwen-VL-Chat\u00a0<cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"#bib.bib2\" title=\"\"><span class=\"ltx_text\" style=\"font-size:90%;\">2</span></a>]</cite></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_t\" id=\"S5.T3.4.1.8.6.2\">5.0</td>\n</tr>\n</tbody>\n</table>\n</span></div>\n</figure>",
114
+ "capture": "Table 3: Success rate (%) of agents with ablation of GUI Parser."
115
+ },
116
+ "4": {
117
+ "table_html": "<figure class=\"ltx_table\" id=\"S5.T4\">\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\"><span class=\"ltx_text\" id=\"S5.T4.2.1.1\" style=\"font-size:90%;\">Table 4</span>: </span><span class=\"ltx_text\" id=\"S5.T4.3.2\" style=\"font-size:90%;\">Success rate (%) of agents with ablation on LLM.</span></figcaption>\n<div class=\"ltx_inline-block ltx_align_center ltx_transformed_outer\" id=\"S5.T4.4\" style=\"width:154.0pt;height:101.5pt;vertical-align:-0.0pt;\"><span class=\"ltx_transformed_inner\" style=\"transform:translate(-5.0pt,3.3pt) scale(0.939568403582861,0.939568403582861) ;\">\n<table class=\"ltx_tabular ltx_align_middle\" id=\"S5.T4.4.1\">\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S5.T4.4.1.1.1\">\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S5.T4.4.1.1.1.1\">Planner</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_tt\" id=\"S5.T4.4.1.1.1.2\">Actor &amp; Critic</td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S5.T4.4.1.1.1.3\">Overall Score</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T4.4.1.2.2\" style=\"background-color:#DFDFDF;\">\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" colspan=\"2\" id=\"S5.T4.4.1.2.2.1\"><span class=\"ltx_text\" id=\"S5.T4.4.1.2.2.1.1\" style=\"background-color:#DFDFDF;\">GPT-4</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T4.4.1.2.2.2\"><span class=\"ltx_text\" id=\"S5.T4.4.1.2.2.2.1\" style=\"background-color:#DFDFDF;\">46.0</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T4.4.1.3.3\">\n<td class=\"ltx_td ltx_border_t\" id=\"S5.T4.4.1.3.3.1\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S5.T4.4.1.3.3.2\">GPT-3.5</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T4.4.1.3.3.3\">12.0</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T4.4.1.4.4\">\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T4.4.1.4.4.1\" rowspan=\"-2\" style=\"background-color:#DFDFDF;\"><span class=\"ltx_text\" id=\"S5.T4.4.1.4.4.1.1\" style=\"background-color:#DFDFDF;\">GPT-4</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S5.T4.4.1.4.4.2\">Llama2</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T4.4.1.4.4.3\">1.0</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T4.4.1.5.5\">\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T4.4.1.5.5.1\">GPT-3.5</td>\n<td class=\"ltx_td ltx_border_r\" id=\"S5.T4.4.1.5.5.2\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T4.4.1.5.5.3\">19.0</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T4.4.1.6.6\">\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S5.T4.4.1.6.6.1\">Llama2</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_r\" id=\"S5.T4.4.1.6.6.2\" rowspan=\"-2\" style=\"background-color:#DFDFDF;\"><span class=\"ltx_text\" id=\"S5.T4.4.1.6.6.2.1\" style=\"background-color:#DFDFDF;\">GPT-4</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S5.T4.4.1.6.6.3\">5.0</td>\n</tr>\n</tbody>\n</table>\n</span></div>\n</figure>",
118
+ "capture": "Table 4: Success rate (%) of agents with ablation on LLM."
119
+ },
120
+ "5": {
121
+ "table_html": "<figure class=\"ltx_table\" id=\"S6.T5\">\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\"><span class=\"ltx_text\" id=\"S6.T5.4.1.1\" style=\"font-size:90%;\">Table 5</span>: </span><span class=\"ltx_text ltx_font_bold\" id=\"S6.T5.5.2\" style=\"font-size:90%;\">Comparison of related benchmarks.<span class=\"ltx_text ltx_font_medium\" id=\"S6.T5.5.2.1\"> <span class=\"ltx_text ltx_font_smallcaps\" id=\"S6.T5.5.2.1.1\">AssistGUI</span> is unique in its platform and task focus. It additionally provides project files for each task.</span></span></figcaption>\n<div class=\"ltx_inline-block ltx_align_center ltx_transformed_outer\" id=\"S6.T5.6\" style=\"width:397.5pt;height:61.2pt;vertical-align:-0.7pt;\"><span class=\"ltx_transformed_inner\" style=\"transform:translate(-97.1pt,14.8pt) scale(0.671826339938195,0.671826339938195) ;\">\n<table class=\"ltx_tabular ltx_guessed_headers ltx_align_middle\" id=\"S6.T5.6.1\">\n<thead class=\"ltx_thead\">\n<tr class=\"ltx_tr\" id=\"S6.T5.6.1.1.1\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_th_row ltx_border_r ltx_border_tt\" id=\"S6.T5.6.1.1.1.1\">Benchmark</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"S6.T5.6.1.1.1.2\"># APPs</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"S6.T5.6.1.1.1.3\"># Tasks</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"S6.T5.6.1.1.1.4\">Platform</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"S6.T5.6.1.1.1.5\">Task Focus</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"S6.T5.6.1.1.1.6\">Project File</th>\n</tr>\n</thead>\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S6.T5.6.1.2.1\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_r ltx_border_t\" id=\"S6.T5.6.1.2.1.1\">AndroidEnv\u00a0<cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"#bib.bib40\" title=\"\"><span class=\"ltx_text\" style=\"font-size:90%;\">40</span></a>]</cite>\n</th>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S6.T5.6.1.2.1.2\">~30</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S6.T5.6.1.2.1.3\">\u00bf100</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S6.T5.6.1.2.1.4\">Android OS</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S6.T5.6.1.2.1.5\">Game &amp; App Usage</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S6.T5.6.1.2.1.6\">\u2717</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S6.T5.6.1.3.2\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_r\" id=\"S6.T5.6.1.3.2.1\">WebShop\u00a0<cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"#bib.bib50\" title=\"\"><span class=\"ltx_text\" style=\"font-size:90%;\">50</span></a>]</cite>\n</th>\n<td class=\"ltx_td ltx_align_center\" id=\"S6.T5.6.1.3.2.2\">1</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S6.T5.6.1.3.2.3\">1 task, 12K instructions</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S6.T5.6.1.3.2.4\">OpenAI Gym</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S6.T5.6.1.3.2.5\">Web-based e-commerce</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S6.T5.6.1.3.2.6\">\u2717</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S6.T5.6.1.4.3\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_r\" id=\"S6.T5.6.1.4.3.1\">AutoDroid\u00a0<cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" 
href=\"#bib.bib42\" title=\"\"><span class=\"ltx_text\" style=\"font-size:90%;\">42</span></a>]</cite>\n</th>\n<td class=\"ltx_td ltx_align_center\" id=\"S6.T5.6.1.4.3.2\">13</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S6.T5.6.1.4.3.3\">158</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S6.T5.6.1.4.3.4\">Android OS</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S6.T5.6.1.4.3.5\">App Usage</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S6.T5.6.1.4.3.6\">\u2717</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S6.T5.6.1.5.4\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_bb ltx_border_r ltx_border_t\" id=\"S6.T5.6.1.5.4.1\">AssistGUI</th>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_t\" id=\"S6.T5.6.1.5.4.2\">9</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_t\" id=\"S6.T5.6.1.5.4.3\">100</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_t\" id=\"S6.T5.6.1.5.4.4\">Windows</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_t\" id=\"S6.T5.6.1.5.4.5\">Productivity Software Usage</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_t\" id=\"S6.T5.6.1.5.4.6\">\u2713</td>\n</tr>\n</tbody>\n</table>\n</span></div>\n</figure>",
122
+ "capture": "Table 5: Comparison of related benchmarks. AssistGUI is unique in its platform and task focus. It additionally provides project files for each task."
123
+ }
124
+ },
125
+ "image_paths": {
126
+ "2": {
127
+ "figure_path": "2312.13108v2_figure_2.png",
128
+ "caption": "Figure 2: One example of screenshot and metadata.",
129
+ "url": "http://arxiv.org/html/2312.13108v2/x2.png"
130
+ },
131
+ "3": {
132
+ "figure_path": "2312.13108v2_figure_3.png",
133
+ "caption": "Figure 3: Distribution of collect tasks and one example query for each task. We have gathered tasks across 9 applications, focusing on the use of productivity software as well as fundamental computer operations and settings.",
134
+ "url": "http://arxiv.org/html/2312.13108v2/x3.png"
135
+ },
136
+ "4": {
137
+ "figure_path": "2312.13108v2_figure_4.png",
138
+ "caption": "Figure 4: Diagram Illustration of \\textcinzelACE. It first outlines key milestones and subtasks, then iteratively employs a GUI Parser, a Critic module for action assessment, and an Actor module for adjusting the plan and generating code for controlling the desktop, sequentially completing subtasks until the task is finished.",
139
+ "url": "http://arxiv.org/html/2312.13108v2/x4.png"
140
+ },
141
+ "5": {
142
+ "figure_path": "2312.13108v2_figure_5.png",
143
+ "caption": "Figure 5: Diagram Illustration of Planner. The Planner first translates video subtitles into a structured raw plan with milestones and subtasks. It then refines this plan by specifying the user-provided query.",
144
+ "url": "http://arxiv.org/html/2312.13108v2/x5.png"
145
+ },
146
+ "6": {
147
+ "figure_path": "2312.13108v2_figure_6.png",
148
+ "caption": "Figure 6: Diagram Illustration of GUI Parser. An LLM invokes different vision tools to parse various UI elements.",
149
+ "url": "http://arxiv.org/html/2312.13108v2/x6.png"
150
+ },
151
+ "7": {
152
+ "figure_path": "2312.13108v2_figure_7.png",
153
+ "caption": "Figure 7: Top: The Critic assesses the effectiveness of the previous action by analyzing the screenshots taken before and after its execution. Bottom: The Actor first updates the current subtask, then generates the subsequent action, considering the current observation, current subtask, historical actions, and Critic\u2019s feedback.",
154
+ "url": "http://arxiv.org/html/2312.13108v2/x7.png"
155
+ },
156
+ "8": {
157
+ "figure_path": "2312.13108v2_figure_8.png",
158
+ "caption": "Figure 8: Qualititave Results. Top: We show one successful prediction. Middle: We compare our GUI Parser results with Semantic-SAM which is the core component for supporting GUI-4V to ground in Web or Smartphone Platform (i.e., GPT-4V-SoM). Bottom: We display some common errors with explanation.",
159
+ "url": "http://arxiv.org/html/2312.13108v2/x8.png"
160
+ },
161
+ "9": {
162
+ "figure_path": "2312.13108v2_figure_9.png",
163
+ "caption": "Figure 9: Planning Results. The UI elements are organized panel by panel.",
164
+ "url": "http://arxiv.org/html/2312.13108v2/x9.png"
165
+ },
166
+ "10": {
167
+ "figure_path": "2312.13108v2_figure_10.png",
168
+ "caption": "Figure 10: Parsed GUI Results. The UI elements are organized panel by panel.",
169
+ "url": "http://arxiv.org/html/2312.13108v2/x10.png"
170
+ },
171
+ "11": {
172
+ "figure_path": "2312.13108v2_figure_11.png",
173
+ "caption": "Figure 11: Prediction Results of Actor and Critic Module. We show the prediction results of one specific subtask in solving a query.",
174
+ "url": "http://arxiv.org/html/2312.13108v2/x11.png"
175
+ }
176
+ },
177
+ "validation": true,
178
+ "references": [
179
+ {
180
+ "1": {
181
+ "title": "Uibert: Learning generic multimodal representations for ui understanding.",
182
+ "author": "Chongyang Bai, Xiaoxue Zang, Ying Xu, Srinivas Sunkara, Abhinav Rastogi, Jindong Chen, et al.",
183
+ "venue": "arXiv preprint arXiv:2107.13731, 2021.",
184
+ "url": null
185
+ }
186
+ },
187
+ {
188
+ "2": {
189
+ "title": "Qwen-vl: A frontier large vision-language model with versatile abilities.",
190
+ "author": "Jinze Bai, Shuai Bai, Shusheng Yang, Shijie Wang, Sinan Tan, Peng Wang, Junyang Lin, Chang Zhou, and Jingren Zhou.",
191
+ "venue": "arXiv preprint arXiv:2308.12966, 2023.",
192
+ "url": null
193
+ }
194
+ },
195
+ {
196
+ "3": {
197
+ "title": "Lexi: Self-supervised learning of the ui language.",
198
+ "author": "Pratyay Banerjee, Shweti Mahajan, Kushal Arora, Chitta Baral, and Oriana Riva.",
199
+ "venue": "arXiv preprint arXiv:2301.10165, 2023.",
200
+ "url": null
201
+ }
202
+ },
203
+ {
204
+ "4": {
205
+ "title": "Rt-2: Vision-language-action models transfer web knowledge to robotic control.",
206
+ "author": "Anthony Brohan, Noah Brown, Justice Carbajal, Yevgen Chebotar, Xi Chen, Krzysztof Choromanski, Tianli Ding, Danny Driess, Avinava Dubey, Chelsea Finn, et al.",
207
+ "venue": "arXiv preprint arXiv:2307.15818, 2023.",
208
+ "url": null
209
+ }
210
+ },
211
+ {
212
+ "5": {
213
+ "title": "A dataset for interactive vision-language navigation with unknown command feasibility.",
214
+ "author": "Andrea Burns, Deniz Arsan, Sanjna Agrawal, Ranjitha Kumar, Kate Saenko, and Bryan A Plummer.",
215
+ "venue": "In European Conference on Computer Vision, pages 312\u2013328. Springer, 2022.",
216
+ "url": null
217
+ }
218
+ },
219
+ {
220
+ "6": {
221
+ "title": "Mind2web: Towards a generalist agent for the web.",
222
+ "author": "Xiang Deng, Yu Gu, Boyuan Zheng, Shijie Chen, Samuel Stevens, Boshi Wang, Huan Sun, and Yu Su.",
223
+ "venue": "arXiv preprint arXiv:2306.06070, 2023.",
224
+ "url": null
225
+ }
226
+ },
227
+ {
228
+ "7": {
229
+ "title": "Assistgpt: A general multi-modal assistant that can plan, execute, inspect, and learn.",
230
+ "author": "Difei Gao, Lei Ji, Luowei Zhou, Kevin Qinghong Lin, Joya Chen, Zihan Fan, and Mike Zheng Shou.",
231
+ "venue": "arXiv preprint arXiv:2306.08640, 2023.",
232
+ "url": null
233
+ }
234
+ },
235
+ {
236
+ "8": {
237
+ "title": "Learning to navigate the web.",
238
+ "author": "Izzeddin Gur, Ulrich Rueckert, Aleksandra Faust, and Dilek Hakkani-Tur.",
239
+ "venue": "arXiv preprint arXiv:1812.09195, 2018.",
240
+ "url": null
241
+ }
242
+ },
243
+ {
244
+ "9": {
245
+ "title": "Actionbert: Leveraging user actions for semantic understanding of user interfaces.",
246
+ "author": "Zecheng He, Srinivas Sunkara, Xiaoxue Zang, Ying Xu, Lijuan Liu, Nevan Wichers, Gabriel Schubiner, Ruby Lee, and Jindong Chen.",
247
+ "venue": "In Proceedings of the AAAI Conference on Artificial Intelligence, pages 5931\u20135938, 2021.",
248
+ "url": null
249
+ }
250
+ },
251
+ {
252
+ "10": {
253
+ "title": "A data-driven approach for learning to control computers.",
254
+ "author": "Peter C Humphreys, David Raposo, Tobias Pohlen, Gregory Thornton, Rachita Chhaparia, Alistair Muldal, Josh Abramson, Petko Georgiev, Adam Santoro, and Timothy Lillicrap.",
255
+ "venue": "In International Conference on Machine Learning, pages 9466\u20139482. PMLR, 2022.",
256
+ "url": null
257
+ }
258
+ },
259
+ {
260
+ "11": {
261
+ "title": "Language models can solve computer tasks.",
262
+ "author": "Geunwoo Kim, Pierre Baldi, and Stephen McAleer.",
263
+ "venue": "arXiv preprint arXiv:2303.17491, 2023.",
264
+ "url": null
265
+ }
266
+ },
267
+ {
268
+ "12": {
269
+ "title": "Segment anything.",
270
+ "author": "Alexander Kirillov, Eric Mintun, Nikhila Ravi, Hanzi Mao, Chloe Rolland, Laura Gustafson, Tete Xiao, Spencer Whitehead, Alexander C Berg, Wan-Yen Lo, et al.",
271
+ "venue": "arXiv preprint arXiv:2304.02643, 2023.",
272
+ "url": null
273
+ }
274
+ },
275
+ {
276
+ "13": {
277
+ "title": "Actor-critic algorithms.",
278
+ "author": "Vijay Konda and John Tsitsiklis.",
279
+ "venue": "Advances in neural information processing systems, 12, 1999.",
280
+ "url": null
281
+ }
282
+ },
283
+ {
284
+ "14": {
285
+ "title": "Pix2struct: Screenshot parsing as pretraining for visual language understanding.",
286
+ "author": "Kenton Lee, Mandar Joshi, Iulia Raluca Turc, Hexiang Hu, Fangyu Liu, Julian Martin Eisenschlos, Urvashi Khandelwal, Peter Shaw, Ming-Wei Chang, and Kristina Toutanova.",
287
+ "venue": "In International Conference on Machine Learning, pages 18893\u201318912. PMLR, 2023.",
288
+ "url": null
289
+ }
290
+ },
291
+ {
292
+ "15": {
293
+ "title": "A zero-shot language agent for computer control with structured reflection.",
294
+ "author": "Tao Li, Gang Li, Zhiwei Deng, Bryan Wang, and Yang Li.",
295
+ "venue": "arXiv preprint arXiv:2310.08740, 2023.",
296
+ "url": null
297
+ }
298
+ },
299
+ {
300
+ "16": {
301
+ "title": "Mapping natural language instructions to mobile ui action sequences.",
302
+ "author": "Yang Li, Jiacong He, Xin Zhou, Yuan Zhang, and Jason Baldridge.",
303
+ "venue": "arXiv preprint arXiv:2005.03776, 2020.",
304
+ "url": null
305
+ }
306
+ },
307
+ {
308
+ "17": {
309
+ "title": "Nl2bash: A corpus and semantic parser for natural language interface to the linux operating system.",
310
+ "author": "Xi Victoria Lin, Chenglong Wang, Luke Zettlemoyer, and Michael D Ernst.",
311
+ "venue": "arXiv preprint arXiv:1802.08979, 2018.",
312
+ "url": null
313
+ }
314
+ },
315
+ {
316
+ "18": {
317
+ "title": "Grounding dino: Marrying dino with grounded pre-training for open-set object detection.",
318
+ "author": "Shilong Liu, Zhaoyang Zeng, Tianhe Ren, Feng Li, Hao Zhang, Jie Yang, Chunyuan Li, Jianwei Yang, Hang Su, Jun Zhu, et al.",
319
+ "venue": "arXiv preprint arXiv:2303.05499, 2023a.",
320
+ "url": null
321
+ }
322
+ },
323
+ {
324
+ "19": {
325
+ "title": "Agentbench: Evaluating llms as agents.",
326
+ "author": "Xiao Liu, Hao Yu, Hanchen Zhang, Yifan Xu, Xuanyu Lei, Hanyu Lai, Yu Gu, Hangliang Ding, Kaiwen Men, Kejuan Yang, et al.",
327
+ "venue": "arXiv preprint arXiv:2308.03688, 2023b.",
328
+ "url": null
329
+ }
330
+ },
331
+ {
332
+ "20": {
333
+ "title": "Chameleon: Plug-and-play compositional reasoning with large language models.",
334
+ "author": "Pan Lu, Baolin Peng, Hao Cheng, Michel Galley, Kai-Wei Chang, Ying Nian Wu, Song-Chun Zhu, and Jianfeng Gao.",
335
+ "venue": "arXiv preprint arXiv:2304.09842, 2023.",
336
+ "url": null
337
+ }
338
+ },
339
+ {
340
+ "21": {
341
+ "title": "Introducing chatgpt.",
342
+ "author": "OpenAI.",
343
+ "venue": "OpenAI Blog, 2021.",
344
+ "url": null
345
+ }
346
+ },
347
+ {
348
+ "22": {
349
+ "title": "Gpt-4 technical report, 2023a.",
350
+ "author": "OpenAI.",
351
+ "venue": null,
352
+ "url": null
353
+ }
354
+ },
355
+ {
356
+ "23": {
357
+ "title": "Gpt-4v(ision) system card., 2023b.",
358
+ "author": "OpenAI.",
359
+ "venue": null,
360
+ "url": null
361
+ }
362
+ },
363
+ {
364
+ "24": {
365
+ "title": "ART: automatic multi-step reasoning and tool-use for large language models.",
366
+ "author": "Bhargavi Paranjape, Scott M. Lundberg, Sameer Singh, Hannaneh Hajishirzi, Luke Zettlemoyer, and Marco T\u00falio Ribeiro.",
367
+ "venue": "arXiv preprint arXiv:2303.09014, 2023.",
368
+ "url": null
369
+ }
370
+ },
371
+ {
372
+ "25": {
373
+ "title": "Mapping natural language commands to web elements.",
374
+ "author": "Panupong Pasupat, Tian-Shun Jiang, Evan Zheran Liu, Kelvin Guu, and Percy Liang.",
375
+ "venue": "arXiv preprint arXiv:1808.09132, 2018.",
376
+ "url": null
377
+ }
378
+ },
379
+ {
380
+ "26": {
381
+ "title": "Android in the wild: A large-scale dataset for android device control.",
382
+ "author": "Christopher Rawles, Alice Li, Daniel Rodriguez, Oriana Riva, and Timothy Lillicrap.",
383
+ "venue": "arXiv preprint arXiv:2307.10088, 2023.",
384
+ "url": null
385
+ }
386
+ },
387
+ {
388
+ "27": {
389
+ "title": "You only look once: Unified, real-time object detection.",
390
+ "author": "Joseph Redmon, Santosh Divvala, Ross Girshick, and Ali Farhadi.",
391
+ "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 779\u2013788, 2016.",
392
+ "url": null
393
+ }
394
+ },
395
+ {
396
+ "28": {
397
+ "title": "Toolformer: Language models can teach themselves to use tools.",
398
+ "author": "Timo Schick, Jane Dwivedi-Yu, Roberto Dess\u00ec, Roberta Raileanu, Maria Lomeli, Luke Zettlemoyer, Nicola Cancedda, and Thomas Scialom.",
399
+ "venue": "arXiv preprint arXiv:2302.04761, 2023.",
400
+ "url": null
401
+ }
402
+ },
403
+ {
404
+ "29": {
405
+ "title": "mobile-env: An open platform for reinforcement learning in wireless mobile networks.",
406
+ "author": "Stefan Schneider, Stefan Werner, Ramin Khalili, Artur Hecker, and Holger Karl.",
407
+ "venue": "In NOMS 2022-2022 IEEE/IFIP Network Operations and Management Symposium, pages 1\u20133. IEEE, 2022.",
408
+ "url": null
409
+ }
410
+ },
411
+ {
412
+ "30": {
413
+ "title": "Deepdiff (version 6.7.1).",
414
+ "author": "Dehpour Sep.",
415
+ "venue": "https://github.com/seperman/deepdiff, 2023.",
416
+ "url": null
417
+ }
418
+ },
419
+ {
420
+ "31": {
421
+ "title": "From pixels to ui actions: Learning to follow instructions via graphical user interfaces.",
422
+ "author": "Peter Shaw, Mandar Joshi, James Cohan, Jonathan Berant, Panupong Pasupat, Hexiang Hu, Urvashi Khandelwal, Kenton Lee, and Kristina Toutanova.",
423
+ "venue": "arXiv preprint arXiv:2306.00245, 2023.",
424
+ "url": null
425
+ }
426
+ },
427
+ {
428
+ "32": {
429
+ "title": "Hugginggpt: Solving ai tasks with chatgpt and its friends in huggingface.",
430
+ "author": "Yongliang Shen, Kaitao Song, Xu Tan, Dongsheng Li, Weiming Lu, and Yueting Zhuang.",
431
+ "venue": "arXiv preprint arXiv:2303.17580, 2023.",
432
+ "url": null
433
+ }
434
+ },
435
+ {
436
+ "33": {
437
+ "title": "World of bits: An open-domain platform for web-based agents.",
438
+ "author": "Tianlin Shi, Andrej Karpathy, Linxi Fan, Jonathan Hernandez, and Percy Liang.",
439
+ "venue": "In International Conference on Machine Learning, pages 3135\u20133144. PMLR, 2017.",
440
+ "url": null
441
+ }
442
+ },
443
+ {
444
+ "34": {
445
+ "title": "Reflexion: Language agents with verbal reinforcement learning.",
446
+ "author": "Noah Shinn, Federico Cassano, Ashwin Gopinath, Karthik R Narasimhan, and Shunyu Yao.",
447
+ "venue": "In Thirty-seventh Conference on Neural Information Processing Systems, 2023.",
448
+ "url": null
449
+ }
450
+ },
451
+ {
452
+ "35": {
453
+ "title": "Meta-gui: Towards multi-modal conversational agents on mobile gui.",
454
+ "author": "Liangtai Sun, Xingyu Chen, Lu Chen, Tianle Dai, Zichen Zhu, and Kai Yu.",
455
+ "venue": "arXiv preprint arXiv:2205.11029, 2022.",
456
+ "url": null
457
+ }
458
+ },
459
+ {
460
+ "36": {
461
+ "title": "Vipergpt: Visual inference via python execution for reasoning.",
462
+ "author": "D\u00eddac Sur\u00eds, Sachit Menon, and Carl Vondrick.",
463
+ "venue": "arXiv preprint arXiv:2303.08128, 2023.",
464
+ "url": null
465
+ }
466
+ },
467
+ {
468
+ "37": {
469
+ "title": "Stanford alpaca: An instruction-following llama model.",
470
+ "author": "Rohan Taori, Ishaan Gulrajani, Tianyi Zhang, Yann Dubois, Xuechen Li, Carlos Guestrin, Percy Liang, and Tatsunori B. Hashimoto.",
471
+ "venue": "https://github.com/tatsu-lab/stanford_alpaca, 2023.",
472
+ "url": null
473
+ }
474
+ },
475
+ {
476
+ "38": {
477
+ "title": "Llama: Open and efficient foundation language models.",
478
+ "author": "Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier Martinet, Marie-Anne Lachaux, Timoth\u00e9e Lacroix, Baptiste Rozi\u00e8re, Naman Goyal, Eric Hambro, Faisal Azhar, et al.",
479
+ "venue": "arXiv preprint arXiv:2302.13971, 2023a.",
480
+ "url": null
481
+ }
482
+ },
483
+ {
484
+ "39": {
485
+ "title": "Llama 2: Open foundation and fine-tuned chat models.",
486
+ "author": "Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Nikolay Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti Bhosale, et al.",
487
+ "venue": "arXiv preprint arXiv:2307.09288, 2023b.",
488
+ "url": null
489
+ }
490
+ },
491
+ {
492
+ "40": {
493
+ "title": "Androidenv: A reinforcement learning platform for android.",
494
+ "author": "Daniel Toyama, Philippe Hamel, Anita Gergely, Gheorghe Comanici, Amelia Glaese, Zafarali Ahmed, Tyler Jackson, Shibl Mourad, and Doina Precup.",
495
+ "venue": "arXiv preprint arXiv:2105.13231, 2021.",
496
+ "url": null
497
+ }
498
+ },
499
+ {
500
+ "41": {
501
+ "title": "Chain of thought prompting elicits reasoning in large language models.",
502
+ "author": "Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Ed Chi, Quoc Le, and Denny Zhou.",
503
+ "venue": "arXiv preprint arXiv:2201.11903, 2022.",
504
+ "url": null
505
+ }
506
+ },
507
+ {
508
+ "42": {
509
+ "title": "Empowering llm to use smartphone for intelligent task automation.",
510
+ "author": "Hao Wen, Yuanchun Li, Guohong Liu, Shanhui Zhao, Tao Yu, Toby Jia-Jun Li, Shiqi Jiang, Yunhao Liu, Yaqin Zhang, and Yunxin Liu.",
511
+ "venue": "arXiv preprint arXiv:2308.15272, 2023a.",
512
+ "url": null
513
+ }
514
+ },
515
+ {
516
+ "43": {
517
+ "title": "Droidbot-gpt: Gpt-powered ui automation for android.",
518
+ "author": "Hao Wen, Hongming Wang, Jiaxuan Liu, and Yuanchun Li.",
519
+ "venue": "arXiv preprint arXiv:2304.07061, 2023b.",
520
+ "url": null
521
+ }
522
+ },
523
+ {
524
+ "44": {
525
+ "title": "Godiva: Generating open-domain videos from natural descriptions.",
526
+ "author": "Chenfei Wu, Lun Huang, Qianxi Zhang, Binyang Li, Lei Ji, Fan Yang, Guillermo Sapiro, and Nan Duan.",
527
+ "venue": "arXiv preprint arXiv:2104.14806, 2021.",
528
+ "url": null
529
+ }
530
+ },
531
+ {
532
+ "45": {
533
+ "title": "Visual chatgpt: Talking, drawing and editing with visual foundation models.",
534
+ "author": "Chenfei Wu, Shengming Yin, Weizhen Qi, Xiaodong Wang, Zecheng Tang, and Nan Duan.",
535
+ "venue": "arXiv preprint arXiv:2303.04671, 2023.",
536
+ "url": null
537
+ }
538
+ },
539
+ {
540
+ "46": {
541
+ "title": "Visual clues: Bridging vision and language foundations for image paragraph captioning.",
542
+ "author": "Yujia Xie, Luowei Zhou, Xiyang Dai, Lu Yuan, Nguyen Bach, Ce Liu, and Michael Zeng.",
543
+ "venue": "Advances in Neural Information Processing Systems, 35:17287\u201317300, 2022.",
544
+ "url": null
545
+ }
546
+ },
547
+ {
548
+ "47": {
549
+ "title": "Gpt-4v in wonderland: Large multimodal models for zero-shot smartphone gui navigation.",
550
+ "author": "An Yan, Zhengyuan Yang, Wanrong Zhu, Kevin Lin, Linjie Li, Jianfeng Wang, Jianwei Yang, Yiwu Zhong, Julian McAuley, Jianfeng Gao, et al.",
551
+ "venue": "arXiv preprint arXiv:2311.07562, 2023.",
552
+ "url": null
553
+ }
554
+ },
555
+ {
556
+ "48": {
557
+ "title": "Set-of-mark prompting unleashes extraordinary visual grounding in gpt-4v.",
558
+ "author": "Jianwei Yang, Hao Zhang, Feng Li, Xueyan Zou, Chunyuan Li, and Jianfeng Gao.",
559
+ "venue": "arXiv preprint arXiv:2310.11441, 2023a.",
560
+ "url": null
561
+ }
562
+ },
563
+ {
564
+ "49": {
565
+ "title": "Mm-react: Prompting chatgpt for multimodal reasoning and action.",
566
+ "author": "Zhengyuan Yang, Linjie Li, Jianfeng Wang, Kevin Lin, Ehsan Azarnasab, Faisal Ahmed, Zicheng Liu, Ce Liu, Michael Zeng, and Lijuan Wang.",
567
+ "venue": "arXiv preprint arXiv:2303.11381, 2023b.",
568
+ "url": null
569
+ }
570
+ },
571
+ {
572
+ "50": {
573
+ "title": "Webshop: Towards scalable real-world web interaction with grounded language agents.",
574
+ "author": "Shunyu Yao, Howard Chen, John Yang, and Karthik Narasimhan.",
575
+ "venue": "Advances in Neural Information Processing Systems, 35:20744\u201320757, 2022a.",
576
+ "url": null
577
+ }
578
+ },
579
+ {
580
+ "51": {
581
+ "title": "React: Synergizing reasoning and acting in language models.",
582
+ "author": "Shunyu Yao, Jeffrey Zhao, Dian Yu, Nan Du, Izhak Shafran, Karthik Narasimhan, and Yuan Cao.",
583
+ "venue": "arXiv preprint arXiv:2210.03629, 2022b.",
584
+ "url": null
585
+ }
586
+ },
587
+ {
588
+ "52": {
589
+ "title": "You only look at screens: Multimodal chain-of-action agents.",
590
+ "author": "Zhuosheng Zhan and Aston Zhang.",
591
+ "venue": "arXiv preprint arXiv:2309.11436, 2023.",
592
+ "url": null
593
+ }
594
+ },
595
+ {
596
+ "53": {
597
+ "title": "Reinforced ui instruction grounding: Towards a generic ui task automation api.",
598
+ "author": "Zhizheng Zhang, Wenxuan Xie, Xiaoyi Zhang, and Yan Lu.",
599
+ "venue": "arXiv preprint arXiv:2310.04716, 2023.",
600
+ "url": null
601
+ }
602
+ },
603
+ {
604
+ "54": {
605
+ "title": "Webarena: A realistic web environment for building autonomous agents.",
606
+ "author": "Shuyan Zhou, Frank F Xu, Hao Zhu, Xuhui Zhou, Robert Lo, Abishek Sridhar, Xianyi Cheng, Yonatan Bisk, Daniel Fried, Uri Alon, et al.",
607
+ "venue": "arXiv preprint arXiv:2307.13854, 2023.",
608
+ "url": null
609
+ }
610
+ }
611
+ ],
612
+ "url": "http://arxiv.org/html/2312.13108v2"
613
+ }
20240101/2312.14557v2.json ADDED
The diff for this file is too large to render. See raw diff
 
20240101/2312.16767v2.json ADDED
@@ -0,0 +1,552 @@
1
+ {
2
+ "title": "Adaptive Anytime Multi-Agent Path Finding Using Bandit-Based Large Neighborhood Search",
3
+ "abstract": "Anytime multi-agent path finding (MAPF) is a promising approach to scalable path optimization in large-scale multi-agent systems. State-of-the-art anytime MAPF is based on Large Neighborhood Search (LNS), where a fast initial solution is iteratively optimized by destroying and repairing a fixed number of parts, i.e., the neighborhood of the solution, using randomized destroy heuristics and prioritized planning.\nDespite their recent success in various MAPF instances, current LNS-based approaches lack exploration and flexibility due to greedy optimization with a fixed neighborhood size which can lead to low-quality solutions in general. So far, these limitations have been addressed with extensive prior effort in tuning or offline machine learning beyond actual planning.\nIn this paper, we focus on online learning in LNS and propose Bandit-based Adaptive LArge Neighborhood search Combined with Exploration (BALANCE). BALANCE uses a bi-level multi-armed bandit scheme to adapt the selection of destroy heuristics and neighborhood sizes on the fly during search. We evaluate BALANCE on multiple maps from the MAPF benchmark set and empirically demonstrate performance improvements of at least 50% compared to state-of-the-art anytime MAPF in large-scale scenarios. We find that Thompson Sampling performs particularly well compared to alternative multi-armed bandit algorithms.",
4
+ "sections": [
5
+ {
6
+ "section_id": "1",
7
+ "parent_section_id": null,
8
+ "section_name": "Introduction",
9
+ "text": "A wide range of real-world applications like goods transportation in warehouses, search and rescue missions, and traffic management can be formulated as Multi-Agent Path Finding (MAPF) problem, where the goal is to find collision-free paths for multiple agents with each having an assigned start and goal location. Finding optimal solutions, w.r.t. minimal flowtime or makespan is NP-hard, which limits scalability of most state-of-the-art MAPF solvers (Ratner and Warmuth 1986 ###reference_24###; Yu and LaValle 2013 ###reference_35###; Sharon et al. 2012 ###reference_29###).\nAnytime MAPF based on Large Neighborhood Search (LNS) is a popular approach to finding fast and near-optimal solutions to the MAPF problem within a fixed time budget (Li et al. 2021 ###reference_19###). Given an initial feasible solution and a set of destroy heuristics, LNS iteratively destroys and replans so-called neighborhoods of the solution, i.e., a fixed number of paths, until the time budget runs out. MAPF-LNS represents the current state-of-the-art in anytime MAPF and has been experimentally shown to scale up to large-scale scenarios with hundreds of agents (Li et al. 2021 ###reference_19###). Due to its increasing popularity, several extensions have been recently proposed like fast local repairing, integration of primal heuristics, or machine learning-guided neighborhood selection (Huang et al. 2022 ###reference_14###; Li et al. 2022 ###reference_20###; Lam et al. 2023 ###reference_18###).\nHowever, MAPF-LNS and its variants currently suffer from two limitations that can lead to low-quality solutions in general:\nThe neighborhood size is typically fixed, which limits the flexibility of the optimization process, thus possibly affecting the solution quality, especially for a large number of agents (Li et al. 2021 ###reference_19###). Therefore, prior tuning is required \u2013 in addition to the actual LNS procedure \u2013 to obtain good solutions.\nRoulette wheel selection is commonly used to execute and adapt the destroy heuristic selection to determine the neighborhood (Mara et al. 2022 ###reference_21###; Li et al. 2021 ###reference_19###). During optimization, roulette wheel selection could greedily converge to poor choices due to the lack of exploration. Offline machine learning can guide the selection with solution score prediction but requires sufficient data acquisition and feature engineering (Huang et al. 2022 ###reference_14###).\nIn this paper, we address these limitations by proposing Bandit-based Adaptive LArge Neighborhood search Combined with Exploration (BALANCE).\nBALANCE uses a bi-level multi-armed bandit scheme to adapt the selection of destroy heuristics and neighborhood sizes on the fly during search.\nOur contributions are as follows:\nWe formulate BALANCE as a simple but effective MAPF-LNS framework with an adaptive selection of destroy heuristics and neighborhood sizes during search.\nWe propose and discuss three concrete instantiations of BALANCE based on roulette wheel selection, UCB1, and Thompson Sampling, respectively.\nWe evaluate BALANCE on multiple maps from the MAPF benchmark set and empirically demonstrate cost improvements of at least 50% compared to state-of-the-art anytime MAPF in large-scale scenarios. We find that Thompson Sampling performs particularly well compared to alternative multi-armed bandit algorithms."
10
+ },
11
+ {
12
+ "section_id": "2",
13
+ "parent_section_id": null,
14
+ "section_name": "Background",
15
+ "text": ""
16
+ },
17
+ {
18
+ "section_id": "2.1",
19
+ "parent_section_id": "2",
20
+ "section_name": "Multi-Agent Path Finding (MAPF)",
21
+ "text": "We focus on maps as undirected unweighted graphs , where vertex set contains all possible locations and edge set contains all possible transitions or movements between adjacent locations. An instance consists of a map and a set of agents with each agent having a start location and a goal location .\nMAPF aims to find a collision-free plan for all agents. A plan consists of individual paths per agent , where , , , and is the length or travel distance of path . The delay of path is defined by the difference of path length and the length of the shortest path from to w.r.t. map .\nIn this paper, we consider vertex conflicts that occur when two agents and occupy the same location at time step and edge conflicts that occur when two agents and traverse the same edge in opposite directions at time step (Stern et al. 2019 ###reference_32###). A plan is a solution, i.e., feasible, when it does not have any vertex or edge conflicts. Our goal is to find a solution that minimizes the flowtime which is equivalent to minimizing the sum of delays . We use the sum of delays or (total) cost as the primary performance measure in our evaluations."
22
+ },
23
+ {
24
+ "section_id": "2.2",
25
+ "parent_section_id": "2",
26
+ "section_name": "Anytime MAPF with LNS",
27
+ "text": "Anytime MAPF searches for solutions within a given time budget. The solution quality monotonically improves with increasing time budget (Cohen et al. 2018 ###reference_9###; Li et al. 2021 ###reference_19###).\nMAPF-LNS based on Large Neighborhood Search (LNS) is the current state-of-the-art approach to anytime MAPF and is shown to scale up to large-scale scenarios with hundreds of agents (Huang et al. 2022 ###reference_14###; Li et al. 2021 ###reference_19###). Starting with an initial feasible plan , e.g., found via prioritized planning (PP) from (Silver 2005 ###reference_30###), MAPF-LNS iteratively modifies by destroying paths, i.e., the neighborhood . The destroyed neighborhood is then repaired or replanned using PP to quickly generate a new solution . If the new cost is lower than the previous cost , then is replaced by , and the search continues until the time budget runs out. The result of MAPF-LNS is the last accepted solution with the lowest cost so far.\nMAPF-LNS uses a set of three destroy heuristics , namely a random uniform selection of paths, an agent-based heuristic, and a map-based heuristic (Li et al. 2021 ###reference_19###). The agent-based heuristic generates the neighborhood, including the path of agent with the current maximum delay and other paths (determined via random walks) that prevent from achieving a lower delay. The map-based heuristic randomly chooses a vertex with a degree greater than 2 and generates a neighborhood of paths containing .\nMAPF-LNS uses a selection algorithm like roulette wheel selection to choose destroy heuristics by maintaining updatable weights or some statistics for all destroy heuristics (Ropke and Pisinger 2006 ###reference_25###; Li et al. 2021 ###reference_19###). All weights or statistics used by to select a destroy heuristic are denoted by , which could represent, e.g., the average cost improvement or the selection count per destroy heuristic . The statistics will be further explained in Section 4.2 ###reference_### as the concrete definition depends on ."
28
+ },
29
+ {
30
+ "section_id": "2.3",
31
+ "parent_section_id": "2",
32
+ "section_name": "Multi-Armed Bandits",
33
+ "text": "Multi-armed bandits (MABs) or simply bandits are fundamental decision-making problems, where an MAB or selection algorithm repeatedly chooses an arm among a given set of arms or options to maximize an expected reward of a stochastic reward function , where is a random variable with an unknown distribution . To solve an MAB, one has to determine an optimal arm , which maximizes the expected reward . The MAB algorithm has to balance between sufficiently exploring all arms to accurately estimate via statistics and to exploit its current estimates by greedily selecting the arm with the currently highest estimate of . This is known as the exploration-exploitation dilemma, where exploration can find arms with higher rewards but requires more time for trying them out, while exploitation can lead to fast convergence but possibly gets stuck in a poor local optimum. In this paper, we will cover roulette wheel selection, UCB1, and Thompson Sampling as concrete MAB algorithms and further explain them in Section 4.2 ###reference_###."
34
+ },
35
+ {
36
+ "section_id": "3",
37
+ "parent_section_id": null,
38
+ "section_name": "Related Work",
39
+ "text": ""
40
+ },
41
+ {
42
+ "section_id": "3.1",
43
+ "parent_section_id": "3",
44
+ "section_name": "Multi-Armed Bandits for LNS",
45
+ "text": "In recent years, MABs have been used as adaptive meta-controllers to tune learning and optimization algorithms on the fly (Schaul et al. 2019 ###reference_28###; Badia et al. 2020 ###reference_2###; Hendel 2022 ###reference_12###). Besides roulette wheel selection, UCB1 and -greedy are commonly used for destroy heuristic selection in LNS in the context of mixed integer programming, vehicle routing, and scheduling problems with fixed neighborhood sizes (Chen et al. 2016 ###reference_7###; Chen and Bai 2018 ###reference_6###; Chmiela et al. 2023 ###reference_8###). (Hendel 2022 ###reference_12###) adapts the neighborhood size for mixed integer programming using a mutation-based approach inspired by evolutionary algorithms (Rothberg 2007 ###reference_26###). Most works use rather complex rewards that are composed of multiple weighted terms with several tunable hyperparameters.\nWe focus on MAPF problems and propose a bi-level MAB scheme to adapt the selection of destroy heuristics and neighborhood sizes, which is simple to use without any additional mechanisms like mutation. Our approach uses the cost improvement as a reward, which simply represents the cost difference between two solutions w.r.t. the original objective of MAPF without depending on any additional weighted term that requires prior tuning. To the best of our knowledge, our work first effectively applies Thompson Sampling to anytime MAPF in addition to more common MAB algorithms like UCB1 and roulette wheel selection."
46
+ },
47
+ {
48
+ "section_id": "3.2",
49
+ "parent_section_id": "3",
50
+ "section_name": "Multi-Armed Bandits in Anytime Planning",
51
+ "text": "MABs are popular in anytime planning algorithms, especially in single-agent Monte Carlo planning (Kocsis and Szepesv\u00e1ri 2006 ###reference_17###; Silver and Veness 2010 ###reference_31###). Monte-Carlo Tree Search (MCTS) is the state-of-the-art framework of current Monte Carlo planning algorithms which uses MABs to traverse a search tree within a limited time budget (Kocsis and Szepesv\u00e1ri 2006 ###reference_17###; Silver and Veness 2010 ###reference_31###). UCB1 is most commonly used, but Thompson Sampling has also gained attention in the last few years due to its effectiveness in domains of high uncertainty (Bai, Wu, and Chen 2013 ###reference_3###; Bai et al. 2014 ###reference_4###; Phan et al. 2019a ###reference_22###, b ###reference_23###).\nAs MABs have been shown to converge to good decisions within short-time budgets, we use MABs in our adaptive multi-agent path finding setting. Inspired by the latest progress in Monte Carlo planning (\u015awiechowski et al. 2023 ###reference_33###), we intend to employ more sophisticated MAB algorithms like Thompson Sampling to anytime MAPF to improve exploration and performance."
52
+ },
53
+ {
54
+ "section_id": "3.3",
55
+ "parent_section_id": "3",
56
+ "section_name": "Machine Learning in Anytime MAPF",
57
+ "text": "Machine learning has been used in MAPF to directly learn collision-free path finding, to guide node selection in search trees, or to select appropriate MAPF algorithms for certain maps (Sartoretti et al. 2019 ###reference_27###; Kaduri, Boyarski, and Stern 2020 ###reference_15###; Huang, Dilkina, and Koenig 2021 ###reference_13###). MAPF-ML-LNS is an anytime MAPF approach that extends MAPF-LNS with a learned score predictor for neighborhood selection as well as a random uniform selection of the neighborhood size . The predictor is trained offline on pre-collected data from previous MAPF runs (Huang et al. 2022 ###reference_14###). The score predictor generalizes to some degree but is fixed after training; therefore, not being able to adapt during search, which limits flexibility. MAPF-ML-LNS depends on extensive prior effort like data acquisition, model training, and feature engineering for meaningful score learning.\nWe propose an online learning approach to adaptive MAPF-LNS using MABs. The MABs can be trained on the fly with data directly obtained from the LNS without any prior data acquisition. Since MABs only learn from scalar rewards, there is no need for expensive feature engineering, simplifying our approach and easing application to other domains."
58
+ },
59
+ {
60
+ "section_id": "4",
61
+ "parent_section_id": null,
62
+ "section_name": "Bandit-Based Adaptive MAPF-LNS",
63
+ "text": "We now introduce Bandit-based Adaptive LArge Neighborhood search Combined with Exploration (BALANCE) as a simple but effective LNS framework for adaptive MAPF."
64
+ },
65
+ {
66
+ "section_id": "4.1",
67
+ "parent_section_id": "4",
68
+ "section_name": "Formulation",
69
+ "text": "BALANCE uses a bi-level MAB scheme to adapt the selection of destroy heuristics and neighborhood sizes on the fly during search.\nThe first level consists of a single MAB, called -Bandit with arms, which selects a destroy heuristic . The second level consists of so-called -Bandits with arms. Each -Bandit conditions on a destroy heuristic choice and determines the corresponding neighborhood size 111The set of neighborhood size options can be defined arbitrarily. For simplicity, we focus on sets consisting of powers of two. based on an exponent selection . The bi-level MAB scheme is shown in Figure 1 ###reference_###.\n###figure_1### BALANCE first selects a destroy heuristic with the top-level -Bandit based on its current statistics . The selected destroy heuristic determines the corresponding bottom-level -Bandit, which is used to select an exponent based on its current conditional statistics . The neighborhood size is then determined by .\nAfter evaluating the total cost of the new solution , i.e. the sum of delays, the statistics of the top-level -Bandit and corresponding bottom-level -Bandit are updated incrementally. The MAB reward for the update is defined by the cost improvement of the new solution compared to the previous one (Li et al. 2021 ###reference_19###).\nThe full formulation of BALANCE is provided in Algorithm 1 ###reference_###, where represents the instance to be solved, represents the MAB algorithm for the bi-level scheme, and represents the number of neighborhood size options."
70
+ },
71
+ {
72
+ "section_id": "4.2",
73
+ "parent_section_id": "4",
74
+ "section_name": "Instantiations",
75
+ "text": "In the following, we describe three concrete MAB algorithms to implement the bi-level scheme in Figure 1 ###reference_###. The definition of the statistics depends on the MAB algorithm."
76
+ },
77
+ {
78
+ "section_id": "4.2.x",
79
+ "parent_section_id": "4.2",
80
+ "section_name": "Roulette Wheel Selection",
81
+ "text": "selects an arm with a probability of , where is the sum of rewards or weight and is the selection count of arm . Statistics consists of all weights , which can be updated incrementally after each iteration (Goldberg 1988 ###reference_11###)."
82
+ },
83
+ {
84
+ "section_id": "4.2.x",
85
+ "parent_section_id": "4.2",
86
+ "section_name": "UCB1",
87
+ "text": "selects arms by maximizing the upper confidence bound of rewards , where is the average reward of arm , is an exploration constant, is the total number of arm selections, and is the selection count of arm . The second term represents the exploration bonus, which becomes smaller with increasing (Auer, Cesa-Bianchi, and Fischer 2002 ###reference_1###). Statistics consists of all average rewards and selection counts ."
88
+ },
89
+ {
90
+ "section_id": "4.2.x",
91
+ "parent_section_id": "4.2",
92
+ "section_name": "Thompson Sampling",
93
+ "text": "uses a Bayesian approach to balance between exploration and exploitation of arms (Thompson 1933 ###reference_34###).\nWe focus on a generalized variant of Thompson Sampling, which works for arbitrary reward distributions by assuming that follows a Normal distribution with unknown mean and precision , where is the variance (Bai, Wu, and Chen 2013 ###reference_3###; Bai et al. 2014 ###reference_4###). follows a Normal Gamma distribution with , , and . The distribution over is a Gamma distribution and the conditional distribution over given is a Normal distribution . Given a prior distribution and observed rewards , the posterior distribution is defined by , where ,\n, , and . is the observed average reward in and is the variance. The posterior is inferred for each arm to sample an estimate of the expected reward . The arm with the highest estimate is selected.\nStatistics consists of all average rewards , average of squared rewards , and selection counts ."
94
+ },
95
+ {
96
+ "section_id": "4.3",
97
+ "parent_section_id": "4",
98
+ "section_name": "Conceptual Discussion",
99
+ "text": "As MAB algorithms balance between exploration and exploitation to quickly find optimal choices, we believe that they are naturally suited to enhance MAPF-LNS with self-adaptive capabilities. According to previous works on MAB-based tree search, BALANCE can provably converge to an optimal destroy heuristic and neighborhood size choice with sufficient exploration if there is a stationary optimum (Kocsis and Szepesv\u00e1ri 2006 ###reference_17###; Bai et al. 2014 ###reference_4###). Otherwise, non-stationary MAB techniques are required, which we defer to as future work (Garivier and Moulines 2008 ###reference_10###). Depending on the choice of , BALANCE maintains MABs in total. Since can be updated incrementally for any quantity like arm selection counts or average rewards , the bi-level MAB scheme can be updated in constant time thus introducing negligible overhead to the LNS (as replanning of neighborhoods requires significantly more compute).\nRoulette wheel selection is the simplest method to implement because it only uses the weights as the sum of rewards. However, it could lack exploration in the long run since arms with small weights are likely to be neglected or forgotten over time. UCB1 accommodates this issue by introducing an exploration bonus that explicitly considers the selection count of arms . Arms that are selected less over time will have a larger exploration bonus and are therefore more incentivized for selection, depending on the choice of exploration constant . Thompson Sampling is a randomized algorithm whose initial exploration depends on prior parameters, i.e., , , , and thus being more complex than the other MAB approaches. However, previous works report that using prior distributions that are close to a uniform distribution is sufficient in most cases without requiring extensive tuning (Bai, Wu, and Chen 2013 ###reference_3###; Bai et al. 2014 ###reference_4###).\nAdaptation in MAPF-LNS can be regarded as stochastic optimization problem, since all destroy heuristics defined by (Li et al. 2021 ###reference_19###) are randomized. Therefore, uncertainty-based methods like Thompson Sampling seem promising for this setting as reported in (Chapelle and Li 2011 ###reference_5###; Kaufmann, Korda, and Munos 2012 ###reference_16###; Bai et al. 2014 ###reference_4###).\nAlternatively to the proposed bi-level MAB scheme, a single MAB can be employed to directly search the joint arm space of . While this approach would basically solve the same problem, the joint arm space scales quadratically, which could lead to low-quality solutions, if the time budget is very restricted. The bi-level scheme mitigates the scalability issue by first selecting a destroy heuristic (Section 5.3 ###reference_### indicates that performance is more sensitive to ) before deciding on the neighborhood size (whose quality depends on the choice of )."
100
+ },
101
+ {
102
+ "section_id": "5",
103
+ "parent_section_id": null,
104
+ "section_name": "Experiments",
105
+ "text": ""
106
+ },
107
+ {
108
+ "section_id": "5.1",
109
+ "parent_section_id": "5",
110
+ "section_name": "Setup",
111
+ "text": ""
112
+ },
113
+ {
114
+ "section_id": "5.1.x",
115
+ "parent_section_id": "5.1",
116
+ "section_name": "Maps",
117
+ "text": "We evaluate BALANCE on five maps from the MAPF benchmark set of (Stern et al. 2019 ###reference_32###), namely (1) a random map (random-32-32-10), (2) a warehouse map (warehouse-10-20-10-2-1), (3) two game maps ost003d and (4) den520d as well as (5) a city map (Paris_1_256). All maps have different sizes and structures and are the same as used in (Huang et al. 2022 ###reference_14###) for comparability with state-of-the-art anytime MAPF as presented below. We conduct all experiments on the available 25 random scenarios per map."
118
+ },
119
+ {
120
+ "section_id": "5.1.x",
121
+ "parent_section_id": "5.1",
122
+ "section_name": "Anytime MAPF Algorithms",
123
+ "text": "We implemented222Code available at github.com/thomyphan/anytime-mapf ###reference_om/thomyphan/anytime-mapf###. different variants of BALANCE using Thompson Sampling (with , , , ), UCB1 (with ), and roulette wheel selection. Each BALANCE variant is denoted by BALANCE (X), where X is the concrete MAB algorithm (or just random uniform sampling) used for our bi-level scheme in Figure 1 ###reference_###. Unless stated otherwise, we always use the destroy heuristics from (Li et al. 2021 ###reference_19###) and set such that the neighborhood size is chosen from 333Even though previous works (Li et al. 2021 ###reference_19###, 2022 ###reference_20###) already indicate good values for fixed neighborhood sizes , we keep optimizing our MABs on a broader set of options to confirm convergence to adequate choices without assuming any prior knowledge.. Our BALANCE implementation is based on the public code of (Li et al. 2022 ###reference_20###) and uses its default configuration unless stated otherwise.\nWe determine the Empirically Best Choice, where we run a grid search over all destroy heuristics and neighborhood size options to compare with a pre-tuned LNS without any adaptation.\nTo directly compare BALANCE with MAPF-LNS and MAPF-ML-LNS, as state-of-the-art approaches, we take the performance values reported in (Huang et al. 2022 ###reference_14###), running our experiments on the same hardware specification. We also compare with a single-MAB approach that optimizes over the Joint Arm Space using Thompson Sampling."
124
+ },
125
+ {
126
+ "section_id": "5.1.x",
127
+ "parent_section_id": "5.1",
128
+ "section_name": "Compute Infrastructure",
129
+ "text": "All experiments were run on a x86_64 GNU/Linux (Ubuntu 18.04.5 LTS) machine with i7 @ 2.4 GHz CPU and 16 GB RAM, as in (Huang et al. 2022 ###reference_14###)."
130
+ },
131
+ {
132
+ "section_id": "5.2",
133
+ "parent_section_id": "5",
134
+ "section_name": "Experiment \u2013 BALANCE Convergence",
135
+ "text": ""
136
+ },
137
+ {
138
+ "section_id": "5.2.x",
139
+ "parent_section_id": "5.2",
140
+ "section_name": "Setting",
141
+ "text": "To assess convergence w.r.t. time budget, we run BALANCE (Thompson), BALANCE (UCB1), BALANCE (Roulette), and BALANCE (Random) on the random and city map with and agents respectively."
142
+ },
143
+ {
144
+ "section_id": "5.2.x",
145
+ "parent_section_id": "5.2",
146
+ "section_name": "Results",
147
+ "text": "The results are shown in Figure 2 ###reference_###. With increasing time budget, all BALANCE variants converge to an average sum of delays close to the empirically best choice. All MAB-enhanced variants converge faster than BALANCE (Random). BALANCE (Thompson) performs best in both maps, especially when the time budget is low.\n###figure_2###"
148
+ },
149
+ {
150
+ "section_id": "5.2.x",
151
+ "parent_section_id": "5.2",
152
+ "section_name": "Discussion",
153
+ "text": "The results show that any version of BALANCE is able to perform well with an increasing time budget. Given a sufficient time budget, all versions are able to keep up with the empirically best choice through online learning without running a prior grid search that requires roughly times the compute of any BALANCE variant in total. Thompson Sampling performs particularly well, presumably due to the inherent uncertainty exhibited by the randomized destroy heuristics."
154
+ },
155
+ {
156
+ "section_id": "5.3",
157
+ "parent_section_id": "5",
158
+ "section_name": "Experiment \u2013 BALANCE Exploration",
159
+ "text": ""
160
+ },
161
+ {
162
+ "section_id": "5.3.x",
163
+ "parent_section_id": "5.3",
164
+ "section_name": "Setting",
165
+ "text": "Next, we evaluate the explorative behavior of BALANCE (Thompson), BALANCE (UCB1), and BALANCE (Roulette) on the random, ost003d, and city map after 128 seconds of LNS runtime. We also evaluate the progress of MAB choice over time for BALANCE (Thompson) and BALANCE (Roulette) in the ost003d map."
166
+ },
167
+ {
168
+ "section_id": "5.3.x",
169
+ "parent_section_id": "5.3",
170
+ "section_name": "Results",
171
+ "text": "The final relative frequencies of MAB choices are displayed as heatmaps in Figure 3 ###reference_###. The empirically best destroy heuristics and neighborhood sizes are highlighted by magenta dashed boxes. BALANCE (UCB1) and BALANCE (Roulette) strongly prefer the random destroy heuristic, while the preferred neighborhood size depends on the actual map. BALANCE (Thompson) also prefers the random destroy heuristic to some degree but still explores other heuristics, mainly with neighborhood sizes . Compared to the other variants, BALANCE (Thompson) explores more regions where either the destroy heuristic or the neighborhood size is empirically best, at least.\n###figure_3### Figure 4 ###reference_### shows the average progress of the chosen destroy heuristic and neighborhood size during search for Thompson Sampling and Roulette in the ost003d map. While Roulette quickly converges to the random heuristic, Thompson Sampling\nadapts its preferences through continuous exploration. Thompson Sampling mostly prefers the largest neighborhood size over time, whereas Roulette almost uniformly chooses over time with a slight preference toward .\n###figure_4###"
172
+ },
173
+ {
174
+ "section_id": "5.3.x",
175
+ "parent_section_id": "5.3",
176
+ "section_name": "Discussion",
177
+ "text": "None of the BALANCE variants clearly converges to the empirically best choice, which could be due to a short time budget, marginal improvement over time, or potential non-stationarity of the actual optimal choice. Nevertheless, Figure 3 ###reference_### suggests that Thompson Sampling performs more focused exploration than any other MAB."
178
+ },
179
+ {
180
+ "section_id": "5.4",
181
+ "parent_section_id": "5",
182
+ "section_name": "Experiment \u2013 Neighborhood Size Options",
183
+ "text": ""
184
+ },
185
+ {
186
+ "section_id": "5.4.x",
187
+ "parent_section_id": "5.4",
188
+ "section_name": "Setting",
189
+ "text": "We run BALANCE (Thompson), BALANCE (UCB1), BALANCE (Roulette), and BALANCE (Random) with different neighborhood size options by varying , i.e., the number of exponents to determine the neighborhood size . The same maps and number of agents as above are used with a time budget of 128 seconds. We additionally evaluate with a doubled number of agents per map."
190
+ },
191
+ {
192
+ "section_id": "5.4.x",
193
+ "parent_section_id": "5.4",
194
+ "section_name": "Results",
195
+ "text": "The results are shown in Figure 5 ###reference_###. All approaches significantly improve when the number of options is increased to with marginal to no improvement afterward. BALANCE (Thompson) and BALANCE (Random) benefit the most from the increase of except in the city map with 700 agents, where BALANCE (UCB1) keeps up with BALANCE (Thompson).\n###figure_5###"
196
+ },
197
+ {
198
+ "section_id": "5.4.x",
199
+ "parent_section_id": "5.4",
200
+ "section_name": "Discussion",
201
+      "text": "Since Thompson Sampling and uniform random selection explore more than UCB1 and Roulette, they can better leverage the neighborhood size options. The results indicate that neighborhood size adaptation and the sufficient availability of options can significantly affect performance. However, the neighborhood size also affects the amount of compute for replanning, which explains why BALANCE (Random) performs worse in ost003d when ."
202
+ },
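As a complement, here is a minimal Python sketch of UCB1 arm selection over the exponent options discussed above, with the neighborhood size computed as N = 2**e; the exploration constant and the reward bookkeeping are illustrative assumptions, not the paper's exact implementation.

import math

def ucb1_select(counts, rewards, c=2.0):
    """UCB1 over neighborhood-size exponents e, where N = 2**e.

    counts[e]  -- how often exponent e has been tried
    rewards[e] -- running sum of (normalized) rewards for e
    Untried arms are returned first; otherwise the arm with the
    highest mean-plus-confidence score is picked.
    """
    for e, n in counts.items():
        if n == 0:
            return e
    total = sum(counts.values())
    return max(counts, key=lambda e: rewards[e] / counts[e]
               + math.sqrt(c * math.log(total) / counts[e]))

E = 5  # number of exponent options, so N ranges over {2, 4, 8, 16, 32}
counts = {e: 0 for e in range(1, E + 1)}
rewards = {e: 0.0 for e in range(1, E + 1)}
print("try neighborhood size N =", 2 ** ucb1_select(counts, rewards))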
203
+ {
204
+ "section_id": "5.5",
205
+ "parent_section_id": "5",
206
+ "section_name": "Experiment \u2013 State-of-the-Art Comparison",
207
+ "text": ""
208
+ },
209
+ {
210
+ "section_id": "5.5.x",
211
+ "parent_section_id": "5.5",
212
+ "section_name": "Setting",
213
+ "text": "We run BALANCE (Thompson), BALANCE (UCB1), BALANCE (Roulette), and Joint Arm Space (Thompson) on the random, warehouse, ost003d, den520d, and city map with different numbers of agents . For direct comparability with MAPF-LNS and MAPF-ML-LNS, we set the time budget to 60 seconds (Huang et al. 2022 ###reference_14###). Since no error or deviation bars are reported in (Huang et al. 2022 ###reference_14###), we only show the average performance of MAPF-LNS and MAPF-ML-LNS as dashed lines."
214
+ },
215
+ {
216
+ "section_id": "5.5.x",
217
+ "parent_section_id": "5.5",
218
+ "section_name": "Results",
219
+ "text": "The results are shown in Figure 6 ###reference_###. All BALANCE variants and Joint Arm Space (Thompson) significantly outperform MAPF-LNS and MAPF-ML-LNS by at least 50% when . In the random, ost003d, and city map, BALANCE (Thompson) slightly outperforms the other BALANCE variants. Joint Arm Space (Thompson) is consistently outperformed by the BALANCE variants.\n###figure_6###"
220
+ },
221
+ {
222
+ "section_id": "5.5.x",
223
+ "parent_section_id": "5.5",
224
+ "section_name": "Discussion",
225
+ "text": "The experiment demonstrates that BALANCE effectively mitigates the limitations of state-of-the-art anytime MAPF regarding fixed neighborhood sizes and the lack of exploration in roulette wheel selection, especially in instances with a large number of agents . While Thompson Sampling seemingly performs best in most cases, using BALANCE with any MAB algorithm is generally beneficial to improve performance. As discussed in Section 4.3 ###reference_###, our bi-level MAB scheme can outperform joint arm space alternatives when the time budget is very restricted due to meaningful decomposition, which is confirmed in all tested maps. However, since Joint Arm Space (Thompson) also outperforms the state-of-the-art, we suggest that bandit-based adaptation in MAPF-LNS is generally promising."
226
+ },
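The bi-level decomposition mentioned above (one top-level bandit over destroy heuristics and one bottom-level bandit per heuristic over neighborhood-size exponents, both trained on the same reward) can be sketched in a few lines of Python; this mirrors the scheme described in the Figure 1 caption, but the reward definition and the repair callback are illustrative assumptions.

import random

class BetaArm:
    """One Bernoulli arm with a Beta posterior (for Thompson Sampling)."""
    def __init__(self):
        self.a, self.b = 1.0, 1.0
    def sample(self):
        return random.betavariate(self.a, self.b)
    def update(self, success):
        if success:
            self.a += 1.0
        else:
            self.b += 1.0

def thompson(arms):
    # arms: dict mapping a choice to its BetaArm
    return max(arms, key=lambda k: arms[k].sample())

HEURISTICS, E = ["random", "agent-based", "map-based"], 5
h_bandit = {h: BetaArm() for h in HEURISTICS}             # top-level H-bandit
n_bandits = {h: {e: BetaArm() for e in range(1, E + 1)}   # one N-bandit per H
             for h in HEURISTICS}

def lns_step(destroy_and_repair):
    h = thompson(h_bandit)        # pick destroy heuristic H
    e = thompson(n_bandits[h])    # pick exponent e, so N = 2**e
    improved = destroy_and_repair(h, 2 ** e)
    h_bandit[h].update(improved)  # the same binary reward trains both levels
    n_bandits[h][e].update(improved)

for _ in range(100):
    lns_step(lambda h, n: random.random() < 0.5)  # stand-in repair outcome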
227
+ {
228
+ "section_id": "6",
229
+ "parent_section_id": null,
230
+ "section_name": "Conclusion",
231
+ "text": "We presented BALANCE, an LNS framework using a bi-level multi-armed bandit scheme to adapt the selection of destroy heuristics and neighborhood sizes during search.\nOur experiments show that BALANCE offers a simple but effective framework for adaptive anytime MAPF, which is able to significantly outperform state-of-the-art anytime MAPF without requiring extensive prior efforts like neighborhood size tuning, data acquisition, or feature engineering. Sufficient availability of neighborhood size options is important to provide enough room for adaptation at the potential cost of runtime due to increasing replanning effort. Thompson Sampling is a promising choice for most scenarios due to the inherent uncertainty of the randomized destroy heuristics and its ability to explore promising choices.\nFuture work includes the investigation of non-stationary MAB approaches and online learnable destroy heuristics."
232
+ }
233
+ ],
234
+ "appendix": [],
235
+ "tables": {},
236
+ "image_paths": {
237
+ "1": {
238
+ "figure_path": "2312.16767v2_figure_1.png",
239
+      "caption": "Figure 1: Bi-level multi-armed bandit scheme of BALANCE. The top-level $\\mathcal{H}$-Bandit selects a destroy heuristic $H \\in \\mathcal{H}$. Each bottom-level $\\mathcal{N}$-Bandit corresponds to a destroy heuristic choice and selects an exponent $e \\in \\mathcal{N} = \\{1, ..., E\\}$ to determine the neighborhood size $N = 2^{e}$.",
240
+ "url": "http://arxiv.org/html/2312.16767v2/x1.png"
241
+ },
242
+ "2": {
243
+ "figure_path": "2312.16767v2_figure_2.png",
244
+ "caption": "Figure 2: Sum of delays for different BALANCE variants with different time budgets compared with the respective empirically best choices. Shaded areas show the 95% confidence interval. The legend at the top applies across all plots.",
245
+ "url": "http://arxiv.org/html/2312.16767v2/x2.png"
246
+ },
247
+ "3": {
248
+ "figure_path": "2312.16767v2_figure_3.png",
249
+      "caption": "Figure 3: Relative frequencies of selected destroy heuristic and neighborhood size combinations $\\langle H, N \\rangle$ per BALANCE variant after 128 seconds of planning. Magenta dashed boxes indicate the empirically best destroy heuristic and neighborhood size.",
250
+ "url": "http://arxiv.org/html/2312.16767v2/x3.png"
251
+ },
252
+ "4": {
253
+ "figure_path": "2312.16767v2_figure_4.png",
254
+ "caption": "Figure 4: MAB choices over time for ost003d.",
255
+ "url": "http://arxiv.org/html/2312.16767v2/x4.png"
256
+ },
257
+ "5": {
258
+ "figure_path": "2312.16767v2_figure_5.png",
259
+      "caption": "Figure 5: Sum of delays for different BALANCE variants with different neighborhood size options $E$ and numbers of agents $m$. The time budget is 128 seconds. Shaded areas show the 95% confidence interval. The legend at the top applies across all plots.",
260
+ "url": "http://arxiv.org/html/2312.16767v2/x5.png"
261
+ },
262
+ "6": {
263
+ "figure_path": "2312.16767v2_figure_6.png",
264
+      "caption": "Figure 6: Sum of delays for different variants of BALANCE compared with state-of-the-art anytime MAPF-LNS and MAPF-ML-LNS for different numbers of agents $m$. The performance values of MAPF-LNS and MAPF-ML-LNS are taken from (Huang et al. 2022) without any error or deviation bars. Our experiments are run on the same hardware specification with a time budget of 60 seconds. Shaded areas show the 95% confidence interval. The legend at the top applies across all plots.",
265
+ "url": "http://arxiv.org/html/2312.16767v2/x6.png"
266
+ }
267
+ },
268
+ "validation": true,
269
+ "references": [
270
+ {
271
+ "1": {
272
+ "title": "Finite-Time Analysis of the Multiarmed Bandit Problem.",
273
+ "author": "Auer, P.; Cesa-Bianchi, N.; and Fischer, P. 2002.",
274
+ "venue": "Machine learning, 47(2-3): 235\u2013256.",
275
+ "url": null
276
+ }
277
+ },
278
+ {
279
+ "2": {
280
+ "title": "Agent57: Outperforming the Atari Human Benchmark.",
281
+ "author": "Badia, A. P.; Piot, B.; Kapturowski, S.; Sprechmann, P.; Vitvitskyi, A.; Guo,\nZ. D.; and Blundell, C. 2020.",
282
+ "venue": "In International conference on machine learning, 507\u2013517.\nPMLR.",
283
+ "url": null
284
+ }
285
+ },
286
+ {
287
+ "3": {
288
+ "title": "Bayesian Mixture Modelling and Inference based Thompson\nSampling in Monte-Carlo Tree Search.",
289
+ "author": "Bai, A.; Wu, F.; and Chen, X. 2013.",
290
+ "venue": "In Advances in Neural Information Processing Systems,\n1646\u20131654.",
291
+ "url": null
292
+ }
293
+ },
294
+ {
295
+ "4": {
296
+ "title": "Thompson Sampling based Monte-Carlo Planning in POMDPs.",
297
+ "author": "Bai, A.; Wu, F.; Zhang, Z.; and Chen, X. 2014.",
298
+        "venue": "In Proceedings of the Twenty-Fourth International Conference on Automated Planning and Scheduling, 29\u201337. AAAI Press.",
299
+ "url": null
300
+ }
301
+ },
302
+ {
303
+ "5": {
304
+ "title": "An Empirical Evaluation of Thompson Sampling.",
305
+ "author": "Chapelle, O.; and Li, L. 2011.",
306
+ "venue": "In Advances in neural information processing systems,\n2249\u20132257.",
307
+ "url": null
308
+ }
309
+ },
310
+ {
311
+ "6": {
312
+ "title": "A Reinforcement Learning Based Variable Neighborhood\nSearch Algorithm for Open Periodic Vehicle Routing Problem with\nTime Windows.",
313
+ "author": "Chen, B.; and Bai, R. 2018.",
314
+ "venue": null,
315
+ "url": null
316
+ }
317
+ },
318
+ {
319
+ "7": {
320
+ "title": "A Multi-Arm Bandit Neighbourhood Search for Routing and\nScheduling Problems.",
321
+ "author": "Chen, Y.; Cowling, P. I.; Polack, F. A. C.; and Mourdjis, P. 2016.",
322
+ "venue": null,
323
+ "url": null
324
+ }
325
+ },
326
+ {
327
+ "8": {
328
+ "title": "Online Learning for Scheduling MIP Heuristics.",
329
+ "author": "Chmiela, A.; Gleixner, A.; Lichocki, P.; and Pokutta, S. 2023.",
330
+ "venue": "In International Conference on Integration of Constraint\nProgramming, Artificial Intelligence, and Operations Research, 114\u2013123.\nSpringer.",
331
+ "url": null
332
+ }
333
+ },
334
+ {
335
+ "9": {
336
+ "title": "Anytime Focal Search with Applications.",
337
+ "author": "Cohen, L.; Greco, M.; Ma, H.; Hern\u00e1ndez, C.; Felner, A.; Kumar, T. S.; and\nKoenig, S. 2018.",
338
+ "venue": "In IJCAI, 1434\u20131441.",
339
+ "url": null
340
+ }
341
+ },
342
+ {
343
+ "10": {
344
+ "title": "On Upper-Confidence Bound Policies for Non-Stationary\nBandit Problems.",
345
+ "author": "Garivier, A.; and Moulines, E. 2008.",
346
+ "venue": "arXiv preprint arXiv:0805.3415.",
347
+ "url": null
348
+ }
349
+ },
350
+ {
351
+ "11": {
352
+ "title": "Genetic Algorithms in Search Optimization and Machine\nLearning.",
353
+ "author": "Goldberg, D. E. 1988.",
354
+ "venue": null,
355
+ "url": null
356
+ }
357
+ },
358
+ {
359
+ "12": {
360
+ "title": "Adaptive Large Neighborhood Search for Mixed Integer\nProgramming.",
361
+ "author": "Hendel, G. 2022.",
362
+ "venue": "Mathematical Programming Computation, 1\u201337.",
363
+ "url": null
364
+ }
365
+ },
366
+ {
367
+ "13": {
368
+ "title": "Learning Node-Selection Strategies in Bounded Suboptimal\nConflict-Based Search for Multi-Agent Path Finding.",
369
+ "author": "Huang, T.; Dilkina, B.; and Koenig, S. 2021.",
370
+ "venue": "In International Joint Conference on Autonomous Agents and\nMultiagent Systems (AAMAS).",
371
+ "url": null
372
+ }
373
+ },
374
+ {
375
+ "14": {
376
+ "title": "Anytime Multi-Agent Path Finding via Machine\nLearning-Guided Large Neighborhood Search.",
377
+ "author": "Huang, T.; Li, J.; Koenig, S.; and Dilkina, B. 2022.",
378
+ "venue": "In Proceedings of the 36th AAAI Conference on Artificial\nIntelligence (AAAI), 9368\u20139376.",
379
+ "url": null
380
+ }
381
+ },
382
+ {
383
+ "15": {
384
+ "title": "Algorithm Selection for Optimal Multi-Agent Pathfinding.",
385
+ "author": "Kaduri, O.; Boyarski, E.; and Stern, R. 2020.",
386
+ "venue": "In Proceedings of the International Conference on Automated\nPlanning and Scheduling, volume 30, 161\u2013165.",
387
+ "url": null
388
+ }
389
+ },
390
+ {
391
+ "16": {
392
+ "title": "Thompson Sampling: An Asymptotically Optimal\nFinite-Time Analysis.",
393
+ "author": "Kaufmann, E.; Korda, N.; and Munos, R. 2012.",
394
+ "venue": "In International Conference on Algorithmic Learning Theory,\n199\u2013213. Springer.",
395
+ "url": null
396
+ }
397
+ },
398
+ {
399
+ "17": {
400
+ "title": "Bandit based Monte-Carlo Planning.",
401
+ "author": "Kocsis, L.; and Szepesv\u00e1ri, C. 2006.",
402
+ "venue": "In ECML, volume 6, 282\u2013293. Springer.",
403
+ "url": null
404
+ }
405
+ },
406
+ {
407
+ "18": {
408
+ "title": "Exact Anytime Multi-Agent Path Finding Using\nBranch-and-Cut-and-Price and Large Neighborhood Search.",
409
+ "author": "Lam, E.; Harabor, D.; Stuckey, P. J.; and Li, J. 2023.",
410
+ "venue": "In Proceedings of the International Conference on Automated\nPlanning and Scheduling (ICAPS).",
411
+ "url": null
412
+ }
413
+ },
414
+ {
415
+ "19": {
416
+ "title": "Anytime Multi-Agent Path Finding via Large Neighborhood\nSearch.",
417
+ "author": "Li, J.; Chen, Z.; Harabor, D.; Stuckey, P. J.; and Koenig, S. 2021.",
418
+ "venue": "In Proceedings of the International Joint Conference on\nArtificial Intelligence (IJCAI), 4127\u20134135.",
419
+ "url": null
420
+ }
421
+ },
422
+ {
423
+ "20": {
424
+ "title": "MAPF-LNS2: Fast Repairing for Multi-Agent Path\nFinding via Large Neighborhood Search.",
425
+ "author": "Li, J.; Chen, Z.; Harabor, D.; Stuckey, P. J.; and Koenig, S. 2022.",
426
+ "venue": "Proceedings of the AAAI Conference on Artificial Intelligence,\n36(9): 10256\u201310265.",
427
+ "url": null
428
+ }
429
+ },
430
+ {
431
+ "21": {
432
+ "title": "A Survey of Adaptive Large Neighborhood Search\nAlgorithms and Applications.",
433
+ "author": "Mara, S. T. W.; Norcahyo, R.; Jodiawan, P.; Lusiantoro, L.; and Rifai, A. P.\n2022.",
434
+ "venue": "Computers & Operations Research, 146: 105903.",
435
+ "url": null
436
+ }
437
+ },
438
+ {
439
+ "22": {
440
+ "title": "Memory Bounded Open-Loop Planning in Large POMDPs using\nThompson Sampling.",
441
+ "author": "Phan, T.; Belzner, L.; Kiermeier, M.; Friedrich, M.; Schmid, K.; and\nLinnhoff-Popien, C. 2019a.",
442
+ "venue": "Proceedings of the AAAI Conference on Artificial Intelligence,\n33(01): 7941\u20137948.",
443
+ "url": null
444
+ }
445
+ },
446
+ {
447
+ "23": {
448
+ "title": "Adaptive Thompson Sampling Stacks for Memory Bounded\nOpen-Loop Planning.",
449
+ "author": "Phan, T.; Gabor, T.; M\u00fcller, R.; Roch, C.; and Linnhoff-Popien, C.\n2019b.",
450
+ "venue": "In Proceedings of the 28th International Joint Conference on\nArtificial Intelligence, IJCAI-19, 5607\u20135613. International Joint\nConferences on Artificial Intelligence Organization.",
451
+ "url": null
452
+ }
453
+ },
454
+ {
455
+ "24": {
456
+ "title": "Finding a Shortest Solution for the NxN Extension of the\n15-Puzzle is Intractable.",
457
+ "author": "Ratner, D.; and Warmuth, M. 1986.",
458
+ "venue": "In Proceedings of the Fifth AAAI National Conference on\nArtificial Intelligence, AAAI\u201986, 168\u2013172. AAAI Press.",
459
+ "url": null
460
+ }
461
+ },
462
+ {
463
+ "25": {
464
+ "title": "An Adaptive Large Neighborhood Search Heuristic for the\nPickup and Delivery Problem with Time Windows.",
465
+ "author": "Ropke, S.; and Pisinger, D. 2006.",
466
+ "venue": "Transportation science, 40(4): 455\u2013472.",
467
+ "url": null
468
+ }
469
+ },
470
+ {
471
+ "26": {
472
+ "title": "An Evolutionary Algorithm for Polishing Mixed Integer\nProgramming Solutions.",
473
+ "author": "Rothberg, E. 2007.",
474
+ "venue": "INFORMS Journal on Computing, 19(4): 534\u2013541.",
475
+ "url": null
476
+ }
477
+ },
478
+ {
479
+ "27": {
480
+ "title": "PRIMAL: Pathfinding via Reinforcement and Imitation\nMulti-Agent Learning.",
481
+ "author": "Sartoretti, G.; Kerr, J.; Shi, Y.; Wagner, G.; Kumar, T. S.; Koenig, S.; and\nChoset, H. 2019.",
482
+ "venue": "IEEE Robotics and Automation Letters, 4(3): 2378\u20132385.",
483
+ "url": null
484
+ }
485
+ },
486
+ {
487
+ "28": {
488
+ "title": "Adapting Behaviour for Learning Progress.",
489
+ "author": "Schaul, T.; Borsa, D.; Ding, D.; Szepesvari, D.; Ostrovski, G.; Dabney, W.; and\nOsindero, S. 2019.",
490
+ "venue": "arXiv preprint arXiv:1912.06910.",
491
+ "url": null
492
+ }
493
+ },
494
+ {
495
+ "29": {
496
+ "title": "Conflict-Based Search For Optimal Multi-Agent Path\nFinding.",
497
+ "author": "Sharon, G.; Stern, R.; Felner, A.; and Sturtevant, N. 2012.",
498
+ "venue": "Proceedings of the AAAI Conference on Artificial Intelligence,\n26(1): 563\u2013569.",
499
+ "url": null
500
+ }
501
+ },
502
+ {
503
+ "30": {
504
+ "title": "Cooperative Pathfinding.",
505
+ "author": "Silver, D. 2005.",
506
+ "venue": "Proceedings of the AAAI Conference on Artificial Intelligence\nand Interactive Digital Entertainment, 1(1): 117\u2013122.",
507
+ "url": null
508
+ }
509
+ },
510
+ {
511
+ "31": {
512
+ "title": "Monte-Carlo Planning in Large POMDPs.",
513
+ "author": "Silver, D.; and Veness, J. 2010.",
514
+ "venue": "In Advances in neural information processing systems,\n2164\u20132172.",
515
+ "url": null
516
+ }
517
+ },
518
+ {
519
+ "32": {
520
+ "title": "Multi-Agent Pathfinding: Definitions, Variants, and\nBenchmarks.",
521
+ "author": "Stern, R.; Sturtevant, N.; Felner, A.; Koenig, S.; Ma, H.; Walker, T.; Li, J.;\nAtzmon, D.; Cohen, L.; Kumar, T.; et al. 2019.",
522
+ "venue": "In Proceedings of the International Symposium on Combinatorial\nSearch, volume 10, 151\u2013158.",
523
+ "url": null
524
+ }
525
+ },
526
+ {
527
+ "33": {
528
+ "title": "Monte Carlo Tree Search: A Review of Recent\nModifications and Applications.",
529
+ "author": "\u015awiechowski, M.; Godlewski, K.; Sawicki, B.; and Ma\u0144dziuk, J. 2023.",
530
+ "venue": "Artificial Intelligence Review, 56(3): 2497\u20132562.",
531
+ "url": null
532
+ }
533
+ },
534
+ {
535
+ "34": {
536
+ "title": "On the Likelihood that One Unknown Probability exceeds\nAnother in View of the Evidence of Two Samples.",
537
+ "author": "Thompson, W. R. 1933.",
538
+ "venue": "Biometrika, 25(3/4): 285\u2013294.",
539
+ "url": null
540
+ }
541
+ },
542
+ {
543
+ "35": {
544
+ "title": "Structure and Intractability of Optimal Multi-Robot Path\nPlanning on Graphs.",
545
+ "author": "Yu, J.; and LaValle, S. 2013.",
546
+ "venue": "Proceedings of the AAAI Conference on Artificial Intelligence,\n27(1): 1443\u20131449.",
547
+ "url": null
548
+ }
549
+ }
550
+ ],
551
+ "url": "http://arxiv.org/html/2312.16767v2"
552
+ }
20240101/2312.17046v2.json ADDED
@@ -0,0 +1,829 @@
1
+ {
2
+ "title": "Representing and Modeling Inconsistent, Impossible and Incoherent Shapes and Scenes with 2D Non-Conservative Vector Fields mapped on 2-Complexes",
3
+  "abstract": "In this paper, we present a framework to represent mock-3D objects and scenes, which are not 3D but appear 3D. In our framework, each mock-3D object is represented using 2D non-conservative vector fields and thickness information that are mapped on 2-complexes. Mock-3D scenes are simply scenes consisting of more than one mock-3D object.\nWe demonstrate that using this representation, we can dynamically compute a 3D shape using rays emanating from any given point in 3D. These mock-3D objects are view-dependent since their computed shapes depend on the positions of ray centers. Using these dynamically computed shapes, we can compute shadows, reflections, and refractions in real time.\nThis representation is mainly useful for 2D artistic applications to model incoherent, inconsistent, and impossible objects. Using this representation, it is possible to obtain expressive depictions with shadows and global illumination effects. The representation can also be used to convert existing 2D artworks into a mock-3D form that can be interactively re-rendered.",
4
+ "sections": [
5
+ {
6
+ "section_id": "1",
7
+ "parent_section_id": null,
8
+ "section_name": "Introduction",
9
+      "text": "Despite the significant advances made in 3D computer graphics and shape modeling, according to recent market research, 3D Graphics is still only 8% of the whole graphics market, while 2D graphics markets such as vector, image, and video constitute the rest, i.e. more than 90%, of the graphics market Hart (2008 ###reference_1###). Moreover, the 3D modeling market does not grow as rapidly as the 2D painting/editing market.\nThere are several usual suspects to explain the reluctance to adopt 3D modeling: 3D modeling is less intuitive, more expensive, and requires more training than 2D. We think that there exists an additional and important reason: using 3D, it is hard to include all types of expressive depictions that are caused by impossible, inconsistent, and incoherent shapes.\nAlthough this can be viewed as a problem in the shape modeling community, which is mainly focused on 3D, we think that this shortcoming presents an opportunity for the community to explore new areas of shape modeling research. Namely, this reluctance suggests that there exists a critical need to develop hybrid systems that can provide 3D effects along with the convenience and expressive power of 2D.\n###figure_1### ###figure_2### ###figure_3### ###figure_4### ###figure_5### In this paper, we present a framework for developing such hybrid systems that can support expressive depictions of impossible, inconsistent, and incoherent shapes and scenes. Although there exists a significant amount of research on non-(photo)realistic rendering (NPR), there has not yet been a comprehensive expressive depiction framework that can provide an integrated non-realistic approach for both modeling and rendering. There is, therefore, a need for a representation that is powerful enough to handle all types of expressive depictions, from impossible renderings/shapes to incoherent or inconsistent renderings/shapes.\nWe envision a future in which static pictorial documents such as illustrations, paintings, and photographs are converted into dynamic re-renderable forms that can be accessible and continuously enriched by almost everybody. Our specific goal in this paper is to present an easy-to-use, easy-to-extend, and powerful framework that can provide a formal representation for such future applications. This new framework turns shape modeling into a 2D graphics application, and users can define shapes by painting images, creating illustrations, and photographing real objects.\nThe key part of this framework is a mock-3D scene representation that consists of texture-mapped 2-complexes, and the key part of this representation is the textures that define non-conservative 2D vector fields along with thickness fields, which we call shape maps. Using shape maps, for any given mock-3D scene and a given 3D position, we can uniquely compute every 3D shape in the scene using rays emanating from the given position. These mock-3D scenes are view-dependent since the shapes of all objects in the scene depend on the positions of ray centers. Using these dynamically computed shapes, we can compute any illumination effect that requires geometry, such as shadows, reflection, and refraction, in real time."
10
+ },
11
+ {
12
+ "section_id": "2",
13
+ "parent_section_id": null,
14
+ "section_name": "Mock-3D with Non-Conservative Vector Fields",
15
+      "text": "An important property of illustrations and paintings is that they rarely correspond directly to real 3D scenes; they are usually expressively depicted stylized representations and/or interpretations of real 3D scenes Gooch et al. (1998 ###reference_4###); Winkenbach and Salesin (1994 ###reference_5###); House and Singh (2007 ###reference_6###). Therefore, it is impossible to turn illustrations such as those shown in Figure 1 ###reference_### into real 3D scenes, since the shapes in such pictures rarely correspond to real 3D shapes, the illumination is usually inconsistent, and the rendering is almost always expressive and stylistic.\nIn paintings and illustrations, styles vary significantly from one artist to another. Like styles, shape, and illumination, inconsistencies are also introduced by artists \u2014usually on purpose\u2014 since inconsistencies can make images interesting. And most importantly, if the fake 3D effects that 2D painting/editing provides are good enough, people will still tend to view these images as if they are in the 3D world. For example, the imperfect perspective in Figure 1 ###reference_###(b) does not distract us from appreciating the picture. Richard Davison intentionally introduced contradictory vanishing points in this painting to demonstrate that humans do not consciously check for optical correctness Davison (2007 ###reference_7###).\nTo bridge the gap between 2D painting and 3D rendering, we need mock-3D shapes that are not 3D but appear 3D. In current practice, there has been some use of mock-3D representations in the form of normal and depth maps Wang et al. (2014 ###reference_8###); Gonen (2016 ###reference_9###); Youyou (2014 ###reference_10###); Akleman et al. (2017 ###reference_11###, 2022 ###reference_12###, 2023 ###reference_13###). However, these representations, such as normal maps, usually do not correspond to thoroughly impossible shapes. Depth maps are essentially Bas-Reliefs Weyrich et al. (2007 ###reference_14###). They cannot represent impossible shapes since their gradient can only produce conservative vector fields. Normal maps are usually produced as conservative vector fields, which are constructed as gradients of height functions Wikipedia (2014a ###reference_15###). In both cases, there is always a unique constructible geometry Fattal et al. (2002 ###reference_16###).\n###figure_6### ###figure_7### ###figure_8### In this paper, we propose using 2D vector fields to construct representations that truly mock 3D geometry. The advantage of arbitrary 2D vector fields is that they do not necessarily come from gradients of height fields. Therefore, they are not necessarily conservative. If the field is non-conservative, there is no corresponding height field, and, as a result, we have a mathematical representation that does not correspond to any real shape. Non-conservative vector fields are used to conceptualize impossibility in shapes such as the never-ending staircase in Escher\u2019s \"Ascending and Descending\" Wikipedia (2014b ###reference_17###). In other words, the words \"impossibility\", \"inconsistency\" or \"incoherency\" really refer to a global inconsistency that can be introduced by a non-conservative vector field.\nFortunately, this global inconsistency does not prevent us from locally reconstructing height fields. In fact, for any given line in 2D, we can always construct a slice of a height field from any given vector field using a simple line integral. 
If we choose a set of rays emanating from the same point, we can then construct the whole height field in 2D. The reconstructed height field, of course, depends on the point from which the rays emanated. These shapes are therefore view-dependent, which is, in fact, also a desired property in cartoon animation Rademacher (1999 ###reference_18###).\nThe problem is that 2D vector fields in the plane can provide only mock height fields. Even if we map them on 3D surfaces, we can only obtain mock displacement fields. Neither of them thoroughly provides boundaries for 3D solids. We, therefore, need to give them a volume to turn them into mock-3D solid objects. This can be done using two vector fields, one for positive displacement and another one for negative displacement. A simple solution is simply to add a thickness field to create the second displacement. In other words, both two-sided mock-3D displacements can be described by only three numbers, which can then be provided by a single three-color \u2013RGB\u2013 image. We use the term \u201cshape map\u201d to describe the images that provide this two-sided mock-3D displacement information.\n###figure_9### ###figure_10### ###figure_11### Figure 3 ###reference_### shows a mock-3D scene that consists of only two texture-mapped planar rectangles. The colors in Figure 3 ###reference_###(a) provide two-sided mock-3D displacement information. Using this information, we can obtain global shadows as if the planar shapes had solid volumes. However, the straight line created by the intersection of the two planes creates a visual distraction. To avoid this problem, we need more flexible structures than planar quadrilaterals."
16
+ },
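To make the local-reconstruction argument concrete, the following Python sketch (our illustration with an arbitrary example field, not code from the paper) numerically integrates a 2D vector field along a ray to recover one height-field slice; for a non-conservative field the result depends on the ray direction, which is exactly the view dependency described above.

import math

def height_along_ray(field, start, angle, length, steps=200):
    """Integrate h(s) = integral of field(p(s)) . dir ds along a 2D ray.

    field(x, y) -> (gx, gy) may be any 2D vector field; if it is not
    the gradient of a height function (i.e., non-conservative), the
    reconstructed heights depend on the ray direction.
    """
    dx, dy = math.cos(angle), math.sin(angle)
    x, y = start
    h, ds, heights = 0.0, length / steps, []
    for _ in range(steps):
        gx, gy = field(x, y)
        h += (gx * dx + gy * dy) * ds   # one step of the line integral
        x, y = x + dx * ds, y + dy * ds
        heights.append(h)
    return heights

shear = lambda x, y: (y, 0.0)  # curl = -1 everywhere: non-conservative
flat = height_along_ray(shear, (0.0, 0.0), 0.0, 1.0)            # all zeros
tilted = height_along_ray(shear, (0.0, 0.0), math.pi / 4, 1.0)  # grows like s^2/4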
17
+ {
18
+ "section_id": "3",
19
+ "parent_section_id": null,
20
+ "section_name": "Mock-3D Scenes with 2-Complexes",
21
+      "text": "To utilize this approach in a general setting, we propose to construct mock-3D scenes with 2-complexes, which can be represented using the recently introduced shape algebra Akleman et al. (2015 ###reference_19###). 2-complexes in 3D can include deformed planes, curves, and their connections (see Figure 4 ###reference_###). Z-depth deformations introduced by Gershon Elber to represent impossible objects are particularly useful in this case, since they can deform objects without changing the visible parts Elber (2011 ###reference_20###). Deformations in the Z depth can also effectively provide local layering McCann and Pollard (2009 ###reference_21###), which is important to include cases that cannot be handled by simple ordering, such as knots, links, and handshakes. We will refer to each 2-complex as a \u201clayer\u201d just to be consistent with the standard terminology used in image manipulation, as they will appear as layers to most casual users.\n###figure_12### ###figure_13### ###figure_14### Using 2-complexes to represent single 2D objects may seem to be overkill, and one may think that every 2D object can be represented simply by using a 2-manifold mesh with boundary. Figure 4 ###reference_###(a) shows how impractical it can be to use a single 2-manifold mesh with a boundary to represent a 2D object. As shown in the figure, if we use such a structure, we need to change the mesh structure of the 2-manifold with any change in any part of the object. The most common solution in Computer Graphics for such problems is to use groups. For example, if we want to represent a human, we can have layers for each part of the body, such as arms, legs, body, and head. For each part, we can also have subparts, which are also defined as sublayers. Then, the whole structure is organized into groups.\nOne problem with using groups is that we lose the connectivity information that is available when using a single 2-manifold mesh. If we want to keep connectivity information, we need 2-complexes. For instance, if more than two polygons share an edge, we cannot use 2-manifolds, by definition. On the other hand, 2-complexes can easily provide the information that the three faces F_0, F_2, and F_5 share the same edge. Until recently, it was hard to use 2-complexes since there was no strong data structure to represent non-simplicial 2-complexes. Fortunately, a recently introduced general framework for representing 3-manifold decompositions can also be used to represent 2-complexes with some minor modifications Akleman et al. (2015 ###reference_19###).\nOne significant advantage of 2-complexes is the ability to develop templates for common objects such as humans, animals, chairs, or planes. These 2-complex templates can be much simpler than 3D shapes and, therefore, easy to use for most people. These 2-complexes, i.e. layers, are dynamic objects that can help us re-render the image. Therefore, each layer should consist of several components, each of which can be considered as a texture (raster or vector) that is projected onto the layer. We call these components \u201cchannels\u201d, again consistent with the standard terminology used in image manipulation. On the other hand, the term channel in this case will refer to entities that are more general than simple color channels. For example, one channel should be a shape map that provides shape information. 
These shape maps, which contain 2D vector field and thickness information, as mentioned earlier, help us to turn these 2-complexes into mock-3D shapes by providing two-sided mock-3D displacements.\nUsing channels, we can also provide material properties to control how the final image should be rendered. For example, Figure 3 ###reference_### does not have any material information, and the images in Figure 3 ###reference_###(a) and (b) show only diffuse illumination. Figure 5 ###reference_### demonstrates how material information can be used to obtain a particular style.\n###figure_15### ###figure_16### ###figure_17### ###table_1### ###figure_18### ###figure_19### ###figure_20###"
22
+ },
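A minimal data-structure sketch in Python (an illustration, not the framework of Akleman et al.) shows the property argued above: unlike a 2-manifold mesh, a 2-complex lets any number of faces share one edge.

from collections import defaultdict

class TwoComplex:
    """Minimal non-manifold 2-complex: an edge may border any number of faces.

    Faces are named lists of vertex ids; edges are stored as sorted
    vertex pairs so that (v0, v1) and (v1, v0) are the same edge.
    """
    def __init__(self):
        self.edge_to_faces = defaultdict(list)

    def add_face(self, name, vertices):
        for i in range(len(vertices)):
            edge = tuple(sorted((vertices[i], vertices[(i + 1) % len(vertices)])))
            self.edge_to_faces[edge].append(name)

cx = TwoComplex()
cx.add_face("F0", [0, 1, 2, 3])
cx.add_face("F2", [1, 0, 4, 5])  # shares edge (0, 1) with F0
cx.add_face("F5", [0, 1, 6])     # a third face on that edge: not 2-manifold
print(cx.edge_to_faces[(0, 1)])  # ['F0', 'F2', 'F5']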
23
+ {
24
+ "section_id": "4",
25
+ "parent_section_id": null,
26
+ "section_name": "Related Work",
27
+      "text": "Wu et al. Wu et al. (2007 ###reference_32###) proposed a shape palette, where the user can draw a simple 2D primitive in the 2D view and then specify its 3D orientation by drawing a corresponding primitive. This method also performs diffusion using a thin-plate spline. Recently, Shao et al. Shao et al. (2012 ###reference_27###) developed CrossShade, another sketch-based interface to design complicated shapes such as normal maps. They used an explicit mathematical formulation of the relationships between cross-sectional curves and geometry. The specified cross-section is used as an extra control point to control the normals. Vergne et al. Vergne et al. (2012 ###reference_33###) introduce surface flow from smooth differential analysis, which can be used to measure smooth luminance variations. Therefore, the authors also propose drawing shadows and other shading effects.\nThe issue from our perspective is that normal maps are designed to correspond to conservative vector fields, and there is, therefore, always a unique bas-relief corresponding to a normal map. For example, Sykora et al. S\u00fdkora et al. 
(2014, to appear ###reference_34###) developed a user-assisted method to convert normal maps into Bas-Reliefs that can provide correct shadows in a commercial renderer, but this approach will fail if the normal maps do not correspond to Bas-Reliefs that can have explicitly meaningful 3D geometry."
28
+ },
29
+ {
30
+ "section_id": "5",
31
+ "parent_section_id": null,
32
+ "section_name": "Shape Maps: Images as Mock-3D Shapes",
33
+      "text": "Let and denote two functions of to with\n and . Then, a view-dependent and two-sided mock-3D shape can be defined using the following inequality:\nwhere .\nThe first difference from traditional bas-reliefs is that these mock-3D shapes are two-sided. In this definition, the two-sidedness of the shape directly comes from the usage of two functions and . The second and more important difference from bas-reliefs is that the shape is also a function of angle . In other words, this definition allows us to have a view-dependent geometry since the shape depends on the angle . View dependency is a desired feature in artistic applications Rademacher (1999 ###reference_18###). It also provides a fuzzy definition for shapes that can be particularly useful for representing impossible shapes. The only problem with this general structure is that it is hard for 2D artists to create these two essentially 3D functions. We therefore present shape maps, which are represented as images. We demonstrate that shape maps can be used to construct these two functions.\nShape map images encode two types of information: a 2D vector field and a thickness map. Let denote the 2D vector field of the domain to be defined, and let denote the two coordinates of the map. Let denote the 2D vector field and . Thickness maps are also defined over the same domain of the 2D vector field, where denotes a thickness map as .\nOne significant advantage of having only three variables for shape maps is that we can readily convert them into Low Dynamic Range (LDR) images and save them using any common image format, which can easily be passed to the GPU like a normal map. Let an LDR image on be denoted by where . The conversion from and to is given as . We use the opacity channel to describe the domain of the function . If , we choose . Of course, aliasing needs to be avoided in this case by appropriately sampling each pixel.\n###figure_23### ###figure_24### Using these maps, the functions and are computed as a summation of the two line integrals of 2D gradient fields that are obtained directly from 2D vector fields and displacement maps as\nwhere\nwith is the starting point of the integral, which is computed as the intersection of the ray starting from in the direction of with the boundary of the domain and , and is the floor function that returns the largest integer smaller than , is an integer quantization term, and are scale parameters.\nIf the 2D vector field is conservative, then is independent of . In other words, this integral provides all continuous-function bas-reliefs when the 2D vector field is conservative. If the 2D vector field is not conservative, then the integral is dependent on , but it is still uniquely defined. The resulting function is continuous in the direction of and may not necessarily be continuous in other directions. Therefore, it turns a 2D vector field defining an impossible shape into a fuzzy geometry, and a thickness map provides the back side of the shape. Shape maps, which are mapped onto layers of 2-complexes, can be considered \u201cextended billboards\u201d.\nThese texture-mapped layers can easily be used to create scenes that allow for local and global illumination effects. We first project all rays and shapes to a 2D plane, and we compute the geometry dynamically in the fragment shader. The dynamically computed geometry is used to compute global illumination effects, such as ambient occlusion, shadows, and refraction. 
On the other hand, for effects that only require surface normals, such as diffuse and specular reflection, dynamically computing surface normals from the local structure of the geometry can be overkill. As a simple alternative, it is possible to rebuild normal vectors using the unit vector property as , where is another user-defined control that is used to scale the values of and to and . This operation, in effect, changes the effective depth of the corresponding bas-relief if such a height field exists. In other words, this value indicates how flat we want to make the corresponding bas-relief if it exists. Moreover, choosing always guarantees ."
34
+ },
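Since the conversion equations above were lost in extraction, the following Python sketch shows one plausible decoding, assuming the common [-1, 1]-to-[0, 1] remapping used for normal maps; the exact mapping and the scale parameter in the paper may differ. It also rebuilds a unit normal from the unit-vector property, clamping so that the square root stays real.

import math

def decode_shape_map(r, g, b, a, s=1.0):
    """Decode one LDR shape-map pixel into (u, v, t) and a unit normal.

    Assumed encoding: u = 2r - 1 and v = 2g - 1 are the 2D vector-field
    components, b is the thickness t, and a = 0 marks pixels outside the
    shape's domain. The normal is rebuilt as
        nz = sqrt(max(0, 1 - (s*u)**2 - (s*v)**2)),
    where s is the user-defined flattening control described above.
    """
    if a == 0.0:
        return None                       # outside the layer's domain
    u, v, t = 2.0 * r - 1.0, 2.0 * g - 1.0, b
    nx, ny = s * u, s * v
    nz = math.sqrt(max(0.0, 1.0 - nx * nx - ny * ny))
    return (u, v, t), (nx, ny, nz)

print(decode_shape_map(0.95, 0.75, 0.4, 1.0, s=0.7))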
35
+ {
36
+ "section_id": "6",
37
+ "parent_section_id": null,
38
+ "section_name": "Creating Shape Maps",
39
+      "text": "Shape maps have some visual and conceptual similarities to normal maps. This similarity is useful since we can directly use normal maps as shape maps if necessary. Moreover, the blue component of normal maps can still provide an acceptable thickness map. Despite this similarity, shape map images are usually more colorful than normal maps since (1) we allow non-conservative fields and (2) we use the blue channel for thickness information. In practice, any image can be used as a shape map. The main advantage of allowing any image as a shape map is that artists can create these maps directly by painting or rendering an image, by taking a photograph, or through illustration."
40
+ },
41
+ {
42
+ "section_id": "7",
43
+ "parent_section_id": null,
44
+ "section_name": "Painting or Rendering Shape Maps",
45
+      "text": "To paint shape descriptors, artists can imagine an object that is illuminated with parallel red light from the left side and with parallel green light from the top. By ignoring shadows, they paint an image based on how much red and green light they want to see in every pixel. For instance, a pixel color red = 0.95 and green = 0.75 means that the artist wants 95% of the light from the left and 75% of the light from the top to illuminate that particular pixel. (Footnote: This information can be interpreted as local scattering information from the left and top lights. This could be one of the additional reasons why non-conservative fields can produce realistic-looking renderings, by implicitly providing subsurface scattering from the sides.) Moreover, unlike normal maps, it cannot be guaranteed that the sum of their squares will be smaller than as in this example. This is not a problem for estimating surface normals or overall shape, as previously discussed.\nThe thickness values are also easy to paint. values have to be nonzero for the object and zero everywhere else. Moreover, the values have to be small close to the boundaries of the object (if considering a perspective transformation) and in thin regions. This is sufficient to obtain visually correct-looking refractions. For the rest of the object, the values of can simply be any reasonable positive real number smaller than . Figure 9 ###reference_### shows some examples of shape maps painted by artists. The thickness information is useful for artistic control of the refractions, as shown in Figure 10 ###reference_###. The bottle image, in particular, shows how thickness values control the refraction. Since this bottle is half filled with a liquid, in the places where the bottle is not filled with the liquid, the value must be very small to indicate thin glass, although the component of the unit vector is not small.\n###table_3### ###figure_25### ###figure_26### ###figure_27### ###figure_28### ###table_4### ###figure_29### ###figure_30### ###figure_31### ###figure_32### Although we prefer artists to create shape maps, they can also convert virtual objects into shape maps. The procedure to obtain a 2D vector field is a straightforward rendering process. The and components of the 3D normal vector of the visible point are simply converted to the red and green colors of the image. It is even possible to directly use the component of the unit normal vector as a value of . However, an approximate thickness value can also be directly computed."
46
+ },
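The rendering route described above can be sketched as follows in Python: the visible normal's x and y components become red and green, and an approximate thickness becomes blue. The sphere, the resolution, and the thickness scaling are illustrative assumptions, not the paper's pipeline.

import math

def sphere_shape_map(size=64, thickness_scale=1.0):
    """Render a unit sphere into a shape-map image (rows of RGBA tuples).

    Red/green store the x/y components of the visible normal, remapped
    from [-1, 1] to [0, 1]; blue stores an approximate thickness that
    goes to zero near the silhouette, as the text recommends.
    """
    rows = []
    for j in range(size):
        row = []
        for i in range(size):
            x = 2.0 * (i + 0.5) / size - 1.0
            y = 2.0 * (j + 0.5) / size - 1.0
            d2 = x * x + y * y
            if d2 > 1.0:
                row.append((0.0, 0.0, 0.0, 0.0))    # background: alpha = 0
            else:
                nz = math.sqrt(1.0 - d2)            # normal is (x, y, nz)
                r, g = (x + 1.0) / 2.0, (y + 1.0) / 2.0
                b = min(1.0, thickness_scale * nz)  # thin near the boundary
                row.append((r, g, b, 1.0))
        rows.append(row)
    return rows

img = sphere_shape_map()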
47
+ {
48
+ "section_id": "8",
49
+ "parent_section_id": null,
50
+ "section_name": "Photographing Objects to Create Shape Maps",
51
+      "text": "Note that the red and green light vectors and are linearly independent of each other. Therefore, any 2D light can be given as a linear combination of the two as . Thus, to compute illumination coming from an arbitrary parallel light, all we need to do is compute the contribution from two linearly independent components. This property provides another method to obtain shape maps by photographing real objects using red and green lights. This can be used as a simple alternative to environment matting Zongker et al. (1999 ###reference_35###). Figure 11 ###reference_### shows such examples. As shown in the detailed images, we can even obtain minor details. One may think that this approach will not work for high-genus or transparent objects. Even in those cases, we observe that the results are unexpectedly satisfactory, as shown in Figures 13 ###reference_### and 13 ###reference_###. In these examples, we have made only minimal changes to the original images: (1) we removed and replaced backgrounds with yellow color, and (2) we added a nonzero blue value for object regions. Although a constant thickness is not correct, the resulting refractions, which are not shown in these examples, appear reasonably convincing. Artists, of course, can further manipulate these photographs to obtain the desired effects."
52
+ },
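Because illumination is linear in the light, the two photographs can stand in for the two basis lights; a hedged Python sketch of this relighting-by-linear-combination idea follows (the list-of-lists "photographs" are illustrative toy data).

def relight(red_photo, green_photo, a, b):
    """Approximate the object under a parallel light L = a*L_red + b*L_green.

    red_photo / green_photo are 2D float arrays: the object photographed
    under the red (left) and green (top) lights. Any light in the span of
    the two basis lights yields the same linear combination of the photos.
    """
    return [[max(0.0, a * r + b * g) for r, g in zip(rrow, grow)]
            for rrow, grow in zip(red_photo, green_photo)]

red   = [[0.9, 0.2], [0.5, 0.1]]      # toy 2x2 'photo' under the left light
green = [[0.1, 0.8], [0.4, 0.7]]      # toy 2x2 'photo' under the top light
print(relight(red, green, 0.5, 0.5))  # light halfway between left and top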
53
+ {
54
+ "section_id": "9",
55
+ "parent_section_id": null,
56
+ "section_name": "Illustrating/Sketching Shape Maps",
57
+      "text": "The most viable option to create shape maps is to model 2D vector fields directly with a sketch-based interface. As discussed earlier, there already exist many sketch-based interface approaches, such as those of Lumo Johnston (2002 ###reference_23###) or CrossShade Shao et al. (2012 ###reference_27###), that can be used directly to create normal maps as shape maps, as shown in Figure 7 ###reference_###. To construct non-conservative vector fields, however, there is a need to provide more control to users. We have developed a sketch-based integrated mock-3D scene modeling system that can allow users to obtain any vector field. For example, we have created the impossible objects shown in Figures 14 ###reference_### and 16 ###reference_### using our system.\n###table_6### ###figure_51### ###figure_52### ###figure_53### ###table_7### ###figure_54### ###figure_55### ###figure_56### In our system, we represent mock-3D shapes using 2-manifold quadrilateral meshes with boundaries. Each quadrilateral face of the manifold mesh is a tensor-product cubic Bezier patch Beatty and Barsky (1987 ###reference_36###). We chose a parametric formulation over subdivision, in particular to allow valence-2 vertices. With cubic patches, users can easily obtain continuity or introduce derivative discontinuities. We construct pseudo-2-complexes by stitching the boundaries of these quad-meshes. Internally, these meshes are still kept as 2-manifold surfaces. To obtain boundaries, we simply label some faces as \"invisible\". The advantage of this flexibility is that we can design arbitrarily complex manifold structures. To handle curved edges, we extended an existing manifold data structure to include cubic Bezier curves as edge shapes.\n###table_8### ###figure_57### ###figure_58### ###figure_59### ###table_9### ###figure_60### ###figure_61### ###figure_62### The system looks and feels exactly like a 2D vector graphics system, in which users can only change the 2D positions of control points. On the other hand, we also allow users to change the z positions of control points. Note that the endpoints of the Bezier control points correspond to the corner vertices of the manifold meshes. In other words, it is possible to have discontinuities in z positions along the boundaries of two neighboring patches while they visually look stitched together.\n###figure_63### ###figure_64### ###figure_65### ###figure_66### ###figure_67### ###figure_68### ###figure_69### ###figure_70### ###figure_71### ###figure_72### ###figure_73### ###figure_74### The quad-patch-based representation provides simple control of 2D vector fields. In our system, at each corner, a 2D vector is assigned. Initial \u2013default\u2013 assignments are created using boundary gradient information Johnston (2002 ###reference_23###) and local normals Shao et al. (2012 ###reference_27###). Users can change these default assignments simply by changing the 2D vector, as shown in Figures 18 ###reference_###(a), 19 ###reference_###(a), and 20 ###reference_###(a). We first compute the 2D vector field along the curved edges by rotating the vectors along the curves, as shown in Figures 18 ###reference_###(b), 19 ###reference_###(b), and 20 ###reference_###(b). These 1D vector fields defined at the edges serve as boundary functions that can later be interpolated using Coons patches to fill the inside of quad patches Beatty and Barsky (1987 ###reference_36###), as shown in Figure 18 ###reference_###(c), 19 ###reference_###(c), and 20 ###reference_###(c). 
Then, converting it into a shape map is a simple process, as in Figures 18 ###reference_###(d), 19 ###reference_###(d), and 20 ###reference_###(d). Figures 19 ###reference_### and 20 ###reference_### show how conservative and non-conservative fields can be obtained inside the same Bezier patch just by changing the direction of the control vector. We chose quadrilaterals as the main structure, since a straightforward Coons interpolation formula exists only for quadrilaterals. Therefore, our system only provides operations to create quadrilaterals. The user can also create triangles since they can be included by making one edge zero length.\n###table_10### ###figure_75### ###figure_76### ###figure_77### ###figure_78### Our system allows us to create a set of initial meshes that consists mainly of quadrilaterals. Most of the quad-mesh operators created are self-explanatory, as shown in Figure 21 ###reference_###(b), (c), and (d). The thin medial axis is the only operation conceptually different from the others. In this case, two types of quad mesh are created starting from the user-drawn medial axis, which can be a planar graph created as a network of connected curves, as shown in Figure 21 ###reference_###(d). Users further manipulate these meshes using quad-preserving operations. We have identified two operations that can introduce a new quadrilateral into the mesh or remove one, as shown in Figure 23 ###reference_###(a). We have also generalized existing local operations, such as the extrude-face and insert-eye operations, into the group operations extrude and wrinkle Akleman and Chen (2006 ###reference_37###) (see Figure 23 ###reference_###(b)). We have also introduced new local operations, such as insert handle, that can construct a 2-complex (not shown here). These operations are local in the sense that they can be applied in any local area and do not affect the rest. On the other hand, edge split and insert edge operations can only be applied from boundary to boundary. With these operations, the user can create complicated quad mesh structures.\n###figure_79### ###table_11### ###figure_80### ###figure_81### Figures 2 ###reference_###, 3 ###reference_###, 18 ###reference_###, 25 ###reference_###, 27 ###reference_###, and 28 ###reference_### show examples of shape maps created using our prototype system. The mock-3D shape in Figure 28 ###reference_### is a 2-complex. Figure 25 ###reference_### is an example of a reconstruction of an illustration. Figures 27 ###reference_### and 28 ###reference_### are examples of photograph reconstructions. In these two cases, material textures are automatically computed from the photograph using a method similar to single-view relighting Okabe et al. (2006 ###reference_24###)."
58
+ }
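The Coons interpolation step mentioned above can be made concrete with a short Python sketch of a bilinearly blended Coons patch applied per vector component; the boundary functions below are illustrative stand-ins for the 1D vector fields obtained by rotating the user's corner vectors along the edges.

def coons(u, v, b0, b1, b2, b3):
    """Bilinearly blended Coons patch for a quad's interior vector field.

    b0(u), b1(u) are the bottom/top boundary functions and b2(v), b3(v)
    the left/right ones; each returns a 2D vector as a tuple, and the
    four curves must agree at the shared corners. Per component:
        C = (1-v) b0(u) + v b1(u) + (1-u) b2(v) + u b3(v)
            - bilinear interpolation of the four corner values.
    """
    def lerp2(p, q, t):
        return (p[0] * (1 - t) + q[0] * t, p[1] * (1 - t) + q[1] * t)
    c00, c10, c01, c11 = b0(0), b0(1), b1(0), b1(1)
    ruled_v = lerp2(b0(u), b1(u), v)   # blend bottom and top curves
    ruled_u = lerp2(b2(v), b3(v), u)   # blend left and right curves
    corners = lerp2(lerp2(c00, c10, u), lerp2(c01, c11, u), v)
    return tuple(p + q - c for p, q, c in zip(ruled_u, ruled_v, corners))

# Boundary vector fields rotating from (1, 0) toward (-1, 0): a twist
# that yields a non-conservative interior field.
bottom = lambda u: (1.0 - u, u)
top    = lambda u: (-u, 1.0 - u)
left   = lambda v: (1.0 - v, v)
right  = lambda v: (-v, 1.0 - v)
print(coons(0.5, 0.5, bottom, top, left, right))  # (0.0, 0.5)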
59
+ ],
60
+ "appendix": [],
61
+ "tables": {},
62
+ "image_paths": {
63
+ "1(a)": {
64
+ "figure_path": "2312.17046v2_figure_1(a).png",
65
+ "caption": "(a) An example of incoherent scenes: A cubist self-portrait by Pablo Picasso from 1907.\nFigure 1: Examples of static 2D pictorial documents that include incoherent, inconsistent, and impossible expressive depictions. For example, cubist artists create images based on their successive and subjective experiences in space and time Gleizes and Metzinger (1947); Robbins (1985) that result in incoherent structures. Our approach makes it possible to turn any of such structures into dynamic ones with re-renderable elements.",
66
+ "url": "http://arxiv.org/html/2312.17046v2/extracted/5325526/Picasso/original.jpg"
67
+ },
68
+ "1(b)": {
69
+ "figure_path": "2312.17046v2_figure_1(b).png",
70
+ "caption": "(b) Examples of impossible objects: A hand-drawn composition of two impossible objects on photographs as composition class project.\nFigure 1: Examples of static 2D pictorial documents that include incoherent, inconsistent, and impossible expressive depictions. For example, cubist artists create images based on their successive and subjective experiences in space and time Gleizes and Metzinger (1947); Robbins (1985) that result in incoherent structures. Our approach makes it possible to turn any of such structures into dynamic ones with re-renderable elements.",
71
+ "url": "http://arxiv.org/html/2312.17046v2/extracted/5325526/impossible/4.jpg"
72
+ },
73
+ "1(c)": {
74
+ "figure_path": "2312.17046v2_figure_1(c).png",
75
+ "caption": "(c) An example of inconsistent scenes: A landscape painting by Richard Davison from 2001. Davison intentionally introduced contradictory vanishing points in this painting.\nFigure 1: Examples of static 2D pictorial documents that include incoherent, inconsistent, and impossible expressive depictions. For example, cubist artists create images based on their successive and subjective experiences in space and time Gleizes and Metzinger (1947); Robbins (1985) that result in incoherent structures. Our approach makes it possible to turn any of such structures into dynamic ones with re-renderable elements.",
76
+ "url": "http://arxiv.org/html/2312.17046v2/extracted/5325526/impossible/davison1.jpg"
77
+ },
78
+ "2(a)": {
79
+ "figure_path": "2312.17046v2_figure_2(a).png",
80
+ "caption": "Figure 2: Examples of (a) conservative and (b) non-conservative fields. (c) demonstrate that non-conservative vector fields can produce realistic-looking rendering by converting non-conservative fields to view-dependent geometry. More interestingly, such non-conservative fields can rotate the objects behind them through refraction.",
81
+ "url": "http://arxiv.org/html/2312.17046v2/extracted/5325526/vectorfield/a3.png"
82
+ },
83
+ "2(b)": {
84
+ "figure_path": "2312.17046v2_figure_2(b).png",
85
+ "caption": "Figure 2: Examples of (a) conservative and (b) non-conservative fields. (c) demonstrate that non-conservative vector fields can produce realistic-looking rendering by converting non-conservative fields to view-dependent geometry. More interestingly, such non-conservative fields can rotate the objects behind them through refraction.",
86
+ "url": "http://arxiv.org/html/2312.17046v2/x2.png"
87
+ },
88
+ "2(c)": {
89
+ "figure_path": "2312.17046v2_figure_2(c).png",
90
+ "caption": "Figure 2: Examples of (a) conservative and (b) non-conservative fields. (c) demonstrate that non-conservative vector fields can produce realistic-looking rendering by converting non-conservative fields to view-dependent geometry. More interestingly, such non-conservative fields can rotate the objects behind them through refraction.",
91
+ "url": "http://arxiv.org/html/2312.17046v2/extracted/5325526/vectorfield/c5.png"
92
+ },
93
+ "3(a)": {
94
+ "figure_path": "2312.17046v2_figure_3(a).png",
95
+ "caption": "(a)\nFigure 3: An example of local and global shadows cast by a shape map mapped on a planar billboard. The mock-3D scene consists of two texture-mapped planar rectangles. Note that global shadow is volumetric even when light is in the same plane as the billboard.",
96
+ "url": "http://arxiv.org/html/2312.17046v2/extracted/5325526/Homer/SM1.png"
97
+ },
98
+ "3(b)": {
99
+ "figure_path": "2312.17046v2_figure_3(b).png",
100
+ "caption": "(b)\nFigure 3: An example of local and global shadows cast by a shape map mapped on a planar billboard. The mock-3D scene consists of two texture-mapped planar rectangles. Note that global shadow is volumetric even when light is in the same plane as the billboard.",
101
+ "url": "http://arxiv.org/html/2312.17046v2/extracted/5325526/Homer/0.png"
102
+ },
103
+ "3(c)": {
104
+ "figure_path": "2312.17046v2_figure_3(c).png",
105
+ "caption": "(c)\nFigure 3: An example of local and global shadows cast by a shape map mapped on a planar billboard. The mock-3D scene consists of two texture-mapped planar rectangles. Note that global shadow is volumetric even when light is in the same plane as the billboard.",
106
+ "url": "http://arxiv.org/html/2312.17046v2/extracted/5325526/Homer/1.png"
107
+ },
108
+ "4(a)": {
109
+ "figure_path": "2312.17046v2_figure_4(a).png",
110
+ "caption": "(a)\nFigure 4: An example that demonstrates the advantage of 2-complexes in representing single objects. (a) shows a cartoon face represented by a single 2-manifold mesh with a boundary. This requires a complex mesh structure that should change with shape. For example, in (a) F4subscript\ud835\udc394F_{4}italic_F start_POSTSUBSCRIPT 4 end_POSTSUBSCRIPT and F8subscript\ud835\udc398F_{8}italic_F start_POSTSUBSCRIPT 8 end_POSTSUBSCRIPT are pentagonal patches; and F5subscript\ud835\udc395F_{5}italic_F start_POSTSUBSCRIPT 5 end_POSTSUBSCRIPT, F6subscript\ud835\udc396F_{6}italic_F start_POSTSUBSCRIPT 6 end_POSTSUBSCRIPT and F10subscript\ud835\udc3910F_{10}italic_F start_POSTSUBSCRIPT 10 end_POSTSUBSCRIPT are triangles and the rest of the patches are quads. If we make the nose slightly smaller, we need to reconstruct the mesh. Using layers, it is possible to obtain general structures that do not require restructuring and/or remeshing. For example, in (b), regardless of the size of the nose, we can keep every patch as a quad. In this case, the problem is that we lose the information that the faces F0subscript\ud835\udc390F_{0}italic_F start_POSTSUBSCRIPT 0 end_POSTSUBSCRIPT and F2subscript\ud835\udc392F_{2}italic_F start_POSTSUBSCRIPT 2 end_POSTSUBSCRIPT share an edge. On the other hand, 2-complexes provide the best of both worlds and can describe that F0subscript\ud835\udc390F_{0}italic_F start_POSTSUBSCRIPT 0 end_POSTSUBSCRIPT, F2subscript\ud835\udc392F_{2}italic_F start_POSTSUBSCRIPT 2 end_POSTSUBSCRIPT, and F5subscript\ud835\udc395F_{5}italic_F start_POSTSUBSCRIPT 5 end_POSTSUBSCRIPT share the same edge.",
111
+ "url": "http://arxiv.org/html/2312.17046v2/extracted/5325526/images/2complex0.png"
112
+ },
113
+ "4(b)": {
114
+ "figure_path": "2312.17046v2_figure_4(b).png",
115
+ "caption": "(b)\nFigure 4: An example that demonstrates the advantage of 2-complexes in representing single objects. (a) shows a cartoon face represented by a single 2-manifold mesh with a boundary. This requires a complex mesh structure that should change with shape. For example, in (a) F4subscript\ud835\udc394F_{4}italic_F start_POSTSUBSCRIPT 4 end_POSTSUBSCRIPT and F8subscript\ud835\udc398F_{8}italic_F start_POSTSUBSCRIPT 8 end_POSTSUBSCRIPT are pentagonal patches; and F5subscript\ud835\udc395F_{5}italic_F start_POSTSUBSCRIPT 5 end_POSTSUBSCRIPT, F6subscript\ud835\udc396F_{6}italic_F start_POSTSUBSCRIPT 6 end_POSTSUBSCRIPT and F10subscript\ud835\udc3910F_{10}italic_F start_POSTSUBSCRIPT 10 end_POSTSUBSCRIPT are triangles and the rest of the patches are quads. If we make the nose slightly smaller, we need to reconstruct the mesh. Using layers, it is possible to obtain general structures that do not require restructuring and/or remeshing. For example, in (b), regardless of the size of the nose, we can keep every patch as a quad. In this case, the problem is that we lose the information that the faces F0subscript\ud835\udc390F_{0}italic_F start_POSTSUBSCRIPT 0 end_POSTSUBSCRIPT and F2subscript\ud835\udc392F_{2}italic_F start_POSTSUBSCRIPT 2 end_POSTSUBSCRIPT share an edge. On the other hand, 2-complexes provide the best of both worlds and can describe that F0subscript\ud835\udc390F_{0}italic_F start_POSTSUBSCRIPT 0 end_POSTSUBSCRIPT, F2subscript\ud835\udc392F_{2}italic_F start_POSTSUBSCRIPT 2 end_POSTSUBSCRIPT, and F5subscript\ud835\udc395F_{5}italic_F start_POSTSUBSCRIPT 5 end_POSTSUBSCRIPT share the same edge.",
116
+ "url": "http://arxiv.org/html/2312.17046v2/extracted/5325526/images/2complex2.png"
117
+ },
118
+ "4(c)": {
119
+ "figure_path": "2312.17046v2_figure_4(c).png",
120
+ "caption": "(c)\nFigure 4: An example that demonstrates the advantage of 2-complexes in representing single objects. (a) shows a cartoon face represented by a single 2-manifold mesh with a boundary. This requires a complex mesh structure that should change with shape. For example, in (a) F4subscript\ud835\udc394F_{4}italic_F start_POSTSUBSCRIPT 4 end_POSTSUBSCRIPT and F8subscript\ud835\udc398F_{8}italic_F start_POSTSUBSCRIPT 8 end_POSTSUBSCRIPT are pentagonal patches; and F5subscript\ud835\udc395F_{5}italic_F start_POSTSUBSCRIPT 5 end_POSTSUBSCRIPT, F6subscript\ud835\udc396F_{6}italic_F start_POSTSUBSCRIPT 6 end_POSTSUBSCRIPT and F10subscript\ud835\udc3910F_{10}italic_F start_POSTSUBSCRIPT 10 end_POSTSUBSCRIPT are triangles and the rest of the patches are quads. If we make the nose slightly smaller, we need to reconstruct the mesh. Using layers, it is possible to obtain general structures that do not require restructuring and/or remeshing. For example, in (b), regardless of the size of the nose, we can keep every patch as a quad. In this case, the problem is that we lose the information that the faces F0subscript\ud835\udc390F_{0}italic_F start_POSTSUBSCRIPT 0 end_POSTSUBSCRIPT and F2subscript\ud835\udc392F_{2}italic_F start_POSTSUBSCRIPT 2 end_POSTSUBSCRIPT share an edge. On the other hand, 2-complexes provide the best of both worlds and can describe that F0subscript\ud835\udc390F_{0}italic_F start_POSTSUBSCRIPT 0 end_POSTSUBSCRIPT, F2subscript\ud835\udc392F_{2}italic_F start_POSTSUBSCRIPT 2 end_POSTSUBSCRIPT, and F5subscript\ud835\udc395F_{5}italic_F start_POSTSUBSCRIPT 5 end_POSTSUBSCRIPT share the same edge.",
121
+ "url": "http://arxiv.org/html/2312.17046v2/extracted/5325526/images/2complex3.png"
122
+ },
123
+ "5(a)": {
124
+ "figure_path": "2312.17046v2_figure_5(a).png",
125
+ "caption": "(a)\nFigure 5: Relighting Mona Lisa: an example of face relighting through channel extraction by removing shadows and shading. Note that (c) and (d) show an artificially added pearl earring as an homage to Vermeer\u2019s painting \u201cThe Girl with Pearl Earring\u201d.",
126
+ "url": "http://arxiv.org/html/2312.17046v2/extracted/5325526/Leonardo/ML0.jpg"
127
+ },
128
+ "5(b)": {
129
+ "figure_path": "2312.17046v2_figure_5(b).png",
130
+ "caption": "(b)\nFigure 5: Relighting Mona Lisa: an example of face relighting through channel extraction by removing shadows and shading. Note that (c) and (d) show an artificially added pearl earring as an homage to Vermeer\u2019s painting \u201cThe Girl with Pearl Earring\u201d.",
131
+ "url": "http://arxiv.org/html/2312.17046v2/extracted/5325526/Leonardo/ML1.jpg"
132
+ },
133
+ "5(c)": {
134
+ "figure_path": "2312.17046v2_figure_5(c).png",
135
+ "caption": "(c)\nFigure 5: Relighting Mona Lisa: an example of face relighting through channel extraction by removing shadows and shading. Note that (c) and (d) show an artificially added pearl earring as an homage to Vermeer\u2019s painting \u201cThe Girl with Pearl Earring\u201d.",
136
+ "url": "http://arxiv.org/html/2312.17046v2/extracted/5325526/Leonardo/ML3.jpg"
137
+ },
138
+ "6(a)": {
139
+ "figure_path": "2312.17046v2_figure_6(a).png",
140
+ "caption": "Figure 6: Relighting Picasso\u2019s self-portrait. Note that although this image is intentionally flattened and does not correspond to any real shape, it is still possible to illuminate it with our approach.",
141
+ "url": "http://arxiv.org/html/2312.17046v2/extracted/5325526/Picasso/result1.jpg"
142
+ },
143
+ "6(b)": {
144
+ "figure_path": "2312.17046v2_figure_6(b).png",
145
+ "caption": "Figure 6: Relighting Picasso\u2019s self-portrait. Note that although this image is intentionally flattened and does not correspond to any real shape, it is still possible to illuminate it with our approach.",
146
+ "url": "http://arxiv.org/html/2312.17046v2/extracted/5325526/Picasso/result2.jpg"
147
+ },
148
+ "6(c)": {
149
+ "figure_path": "2312.17046v2_figure_6(c).png",
150
+ "caption": "Figure 6: Relighting Picasso\u2019s self-portrait. Note that although this image is intentionally flattened and does not correspond to any real shape, it is still possible to illuminate it with our approach.",
151
+ "url": "http://arxiv.org/html/2312.17046v2/extracted/5325526/Picasso/result3.jpg"
152
+ },
153
+ "7(a)": {
154
+ "figure_path": "2312.17046v2_figure_7(a).png",
155
+ "caption": "Figure 7: Examples of normal maps generated using sketch-based modeling programs. Normal maps are designed to correspond to conservative vector fields. Therefore, there always exists a unique bas-relief corresponding to a normal map.",
156
+ "url": "http://arxiv.org/html/2312.17046v2/extracted/5325526/SM/7.png"
157
+ },
158
+ "7(b)": {
159
+ "figure_path": "2312.17046v2_figure_7(b).png",
160
+ "caption": "Figure 7: Examples of normal maps generated using sketch-based modeling programs. Normal maps are designed to correspond to conservative vector fields. Therefore, there always exists a unique bas-relief corresponding to a normal map.",
161
+ "url": "http://arxiv.org/html/2312.17046v2/extracted/5325526/SM/6.png"
162
+ },
163
+ "8(a)": {
164
+ "figure_path": "2312.17046v2_figure_8(a).png",
165
+ "caption": "Figure 8: Examples of shape maps of conservative and non-conservative fields from Figure 2. In these examples, the thickness is zero in the boundaries and the largest in the center. Note that they do not look too different visually.",
166
+ "url": "http://arxiv.org/html/2312.17046v2/extracted/5325526/vectorfield/a4.png"
167
+ },
168
+ "8(b)": {
169
+ "figure_path": "2312.17046v2_figure_8(b).png",
170
+ "caption": "Figure 8: Examples of shape maps of conservative and non-conservative fields from Figure 2. In these examples, the thickness is zero in the boundaries and the largest in the center. Note that they do not look too different visually.",
171
+ "url": "http://arxiv.org/html/2312.17046v2/x3.png"
172
+ },
173
+ "9(a)": {
174
+ "figure_path": "2312.17046v2_figure_9(a).png",
175
+ "caption": "Figure 9: These shape maps are all painted manually by an artist inspired by original paintings or photographs. In all four cases, the entire process of creating a 2D vector field does not take more than one hour using a digital painting program.",
176
+ "url": "http://arxiv.org/html/2312.17046v2/extracted/5325526/SM/4.png"
177
+ },
178
+ "9(b)": {
179
+ "figure_path": "2312.17046v2_figure_9(b).png",
180
+ "caption": "Figure 9: These shape maps are all painted manually by an artist inspired by original paintings or photographs. In all four cases, the entire process of creating a 2D vector field does not take more than one hour using a digital painting program.",
181
+ "url": "http://arxiv.org/html/2312.17046v2/extracted/5325526/SM/0.png"
182
+ },
183
+ "9(c)": {
184
+ "figure_path": "2312.17046v2_figure_9(c).png",
185
+ "caption": "Figure 9: These shape maps are all painted manually by an artist inspired by original paintings or photographs. In all four cases, the entire process of creating a 2D vector field does not take more than one hour using a digital painting program.",
186
+ "url": "http://arxiv.org/html/2312.17046v2/extracted/5325526/SM/1.png"
187
+ },
188
+ "9(d)": {
189
+ "figure_path": "2312.17046v2_figure_9(d).png",
190
+ "caption": "Figure 9: These shape maps are all painted manually by an artist inspired by original paintings or photographs. In all four cases, the entire process of creating a 2D vector field does not take more than one hour using a digital painting program.",
191
+ "url": "http://arxiv.org/html/2312.17046v2/extracted/5325526/SM/2.png"
192
+ },
193
+ "10(a)": {
194
+ "figure_path": "2312.17046v2_figure_10(a).png",
195
+ "caption": "Figure 10: Examples of non-photo-realistic compositing with reflection, glossy reflection, refraction, and translucence combined with Fresnel using hand-painted shape maps. In these examples, materials including transparency are described by a separate set of images.",
196
+ "url": "http://arxiv.org/html/2312.17046v2/extracted/5325526/Fishbowl/result1.png"
197
+ },
198
+ "10(b)": {
199
+ "figure_path": "2312.17046v2_figure_10(b).png",
200
+ "caption": "Figure 10: Examples of non-photo-realistic compositing with reflection, glossy reflection, refraction, and translucence combined with Fresnel using hand-painted shape maps. In these examples, materials including transparency are described by a separate set of images.",
201
+ "url": "http://arxiv.org/html/2312.17046v2/extracted/5325526/Bottle/a1.png"
202
+ },
203
+ "10(c)": {
204
+ "figure_path": "2312.17046v2_figure_10(c).png",
205
+ "caption": "Figure 10: Examples of non-photo-realistic compositing with reflection, glossy reflection, refraction, and translucence combined with Fresnel using hand-painted shape maps. In these examples, materials including transparency are described by a separate set of images.",
206
+ "url": "http://arxiv.org/html/2312.17046v2/extracted/5325526/Bottle/a2.png"
207
+ },
208
+ "10(d)": {
209
+ "figure_path": "2312.17046v2_figure_10(d).png",
210
+ "caption": "Figure 10: Examples of non-photo-realistic compositing with reflection, glossy reflection, refraction, and translucence combined with Fresnel using hand-painted shape maps. In these examples, materials including transparency are described by a separate set of images.",
211
+ "url": "http://arxiv.org/html/2312.17046v2/extracted/5325526/Bottle/fr1.png"
212
+ },
213
+ "11(a)": {
214
+ "figure_path": "2312.17046v2_figure_11(a).png",
215
+ "caption": "Figure 11: Examples of shape maps generated from photographs and diffuse rendering results. As demonstrated in detailed images, it is possible to obtain unexpected visuals such as tapes that are visible in the last-detail image.",
216
+ "url": "http://arxiv.org/html/2312.17046v2/extracted/5325526/Photos/0SM.png"
217
+ },
218
+ "11(b)": {
219
+ "figure_path": "2312.17046v2_figure_11(b).png",
220
+ "caption": "Figure 11: Examples of shape maps generated from photographs and diffuse rendering results. As demonstrated in detailed images, it is possible to obtain unexpected visuals such as tapes that are visible in the last-detail image.",
221
+ "url": "http://arxiv.org/html/2312.17046v2/extracted/5325526/Photos/0dif.png"
222
+ },
223
+ "11(c)": {
224
+ "figure_path": "2312.17046v2_figure_11(c).png",
225
+ "caption": "Figure 11: Examples of shape maps generated from photographs and diffuse rendering results. As demonstrated in detailed images, it is possible to obtain unexpected visuals such as tapes that are visible in the last-detail image.",
226
+ "url": "http://arxiv.org/html/2312.17046v2/extracted/5325526/Photos/0detail.png"
227
+ },
228
+ "11(d)": {
229
+ "figure_path": "2312.17046v2_figure_11(d).png",
230
+ "caption": "Figure 11: Examples of shape maps generated from photographs and diffuse rendering results. As demonstrated in detailed images, it is possible to obtain unexpected visuals such as tapes that are visible in the last-detail image.",
231
+ "url": "http://arxiv.org/html/2312.17046v2/extracted/5325526/Photos/1SM.png"
232
+ },
233
+ "11(e)": {
234
+ "figure_path": "2312.17046v2_figure_11(e).png",
235
+ "caption": "Figure 11: Examples of shape maps generated from photographs and diffuse rendering results. As demonstrated in detailed images, it is possible to obtain unexpected visuals such as tapes that are visible in the last-detail image.",
236
+ "url": "http://arxiv.org/html/2312.17046v2/extracted/5325526/Photos/1dif.png"
237
+ },
238
+ "11(f)": {
239
+ "figure_path": "2312.17046v2_figure_11(f).png",
240
+ "caption": "Figure 11: Examples of shape maps generated from photographs and diffuse rendering results. As demonstrated in detailed images, it is possible to obtain unexpected visuals such as tapes that are visible in the last-detail image.",
241
+ "url": "http://arxiv.org/html/2312.17046v2/extracted/5325526/Photos/1detail.png"
242
+ },
243
+ "11(g)": {
244
+ "figure_path": "2312.17046v2_figure_11(g).png",
245
+ "caption": "Figure 11: Examples of shape maps generated from photographs and diffuse rendering results. As demonstrated in detailed images, it is possible to obtain unexpected visuals such as tapes that are visible in the last-detail image.",
246
+ "url": "http://arxiv.org/html/2312.17046v2/extracted/5325526/Photos/2SM.jpg"
247
+ },
248
+ "11(h)": {
249
+ "figure_path": "2312.17046v2_figure_11(h).png",
250
+ "caption": "Figure 11: Examples of shape maps generated from photographs and diffuse rendering results. As demonstrated in detailed images, it is possible to obtain unexpected visuals such as tapes that are visible in the last-detail image.",
251
+ "url": "http://arxiv.org/html/2312.17046v2/extracted/5325526/Photos/2dif.png"
252
+ },
253
+ "11(i)": {
254
+ "figure_path": "2312.17046v2_figure_11(i).png",
255
+ "caption": "Figure 11: Examples of shape maps generated from photographs and diffuse rendering results. As demonstrated in detailed images, it is possible to obtain unexpected visuals such as tapes that are visible in the last-detail image.",
256
+ "url": "http://arxiv.org/html/2312.17046v2/extracted/5325526/Photos/2detail.png"
257
+ },
258
+ "12(a)": {
259
+ "figure_path": "2312.17046v2_figure_12(a).png",
260
+ "caption": "Figure 12: An example of a shape map generated from a photograph of translucent sculptures. The translucency of the original sculpture is visible in B&W rendering.",
261
+ "url": "http://arxiv.org/html/2312.17046v2/extracted/5325526/Photos/3SM.png"
262
+ },
263
+ "12(b)": {
264
+ "figure_path": "2312.17046v2_figure_12(b).png",
265
+ "caption": "Figure 12: An example of a shape map generated from a photograph of translucent sculptures. The translucency of the original sculpture is visible in B&W rendering.",
266
+ "url": "http://arxiv.org/html/2312.17046v2/extracted/5325526/Photos/3dif.png"
267
+ },
268
+ "12(c)": {
269
+ "figure_path": "2312.17046v2_figure_12(c).png",
270
+ "caption": "Figure 12: An example of a shape map generated from a photograph of translucent sculptures. The translucency of the original sculpture is visible in B&W rendering.",
271
+ "url": "http://arxiv.org/html/2312.17046v2/extracted/5325526/Photos/3detail.png"
272
+ },
273
+ "12(d)": {
274
+ "figure_path": "2312.17046v2_figure_12(d).png",
275
+ "caption": "Figure 12: An example of a shape map generated from a photograph of translucent sculptures. The translucency of the original sculpture is visible in B&W rendering.",
276
+ "url": "http://arxiv.org/html/2312.17046v2/extracted/5325526/Photos/4SM.jpg"
277
+ },
278
+ "12(e)": {
279
+ "figure_path": "2312.17046v2_figure_12(e).png",
280
+ "caption": "Figure 12: An example of a shape map generated from a photograph of translucent sculptures. The translucency of the original sculpture is visible in B&W rendering.",
281
+ "url": "http://arxiv.org/html/2312.17046v2/extracted/5325526/Photos/4dif.jpg"
282
+ },
283
+ "12(f)": {
284
+ "figure_path": "2312.17046v2_figure_12(f).png",
285
+ "caption": "Figure 12: An example of a shape map generated from a photograph of translucent sculptures. The translucency of the original sculpture is visible in B&W rendering.",
286
+ "url": "http://arxiv.org/html/2312.17046v2/extracted/5325526/Photos/4detail.png"
287
+ },
288
+ "12(g)": {
289
+ "figure_path": "2312.17046v2_figure_12(g).png",
290
+ "caption": "Figure 12: An example of a shape map generated from a photograph of translucent sculptures. The translucency of the original sculpture is visible in B&W rendering.",
291
+ "url": "http://arxiv.org/html/2312.17046v2/extracted/5325526/Photos/5SM.jpg"
292
+ },
293
+ "12(h)": {
294
+ "figure_path": "2312.17046v2_figure_12(h).png",
295
+ "caption": "Figure 12: An example of a shape map generated from a photograph of translucent sculptures. The translucency of the original sculpture is visible in B&W rendering.",
296
+ "url": "http://arxiv.org/html/2312.17046v2/extracted/5325526/Photos/5dif.jpg"
297
+ },
298
+ "12(i)": {
299
+ "figure_path": "2312.17046v2_figure_12(i).png",
300
+ "caption": "Figure 12: An example of a shape map generated from a photograph of translucent sculptures. The translucency of the original sculpture is visible in B&W rendering.",
301
+ "url": "http://arxiv.org/html/2312.17046v2/extracted/5325526/Photos/5detail.png"
302
+ },
303
+ "13(a)": {
304
+ "figure_path": "2312.17046v2_figure_13(a).png",
305
+ "caption": "Figure 14: An example of an impossible object. The shape map is created using our program. The other two images are parameters of a barycentric shader. These three images provide shape and material representations of the impossible object.",
306
+ "url": "http://arxiv.org/html/2312.17046v2/extracted/5325526/impossible/3SM.png"
307
+ },
308
+ "13(b)": {
309
+ "figure_path": "2312.17046v2_figure_13(b).png",
310
+ "caption": "Figure 14: An example of an impossible object. The shape map is created using our program. The other two images are parameters of a barycentric shader. These three images provide shape and material representations of the impossible object.",
311
+ "url": "http://arxiv.org/html/2312.17046v2/extracted/5325526/impossible/3DI0.png"
312
+ },
313
+ "13(c)": {
314
+ "figure_path": "2312.17046v2_figure_13(c).png",
315
+ "caption": "Figure 14: An example of an impossible object. The shape map is created using our program. The other two images are parameters of a barycentric shader. These three images provide shape and material representations of the impossible object.",
316
+ "url": "http://arxiv.org/html/2312.17046v2/extracted/5325526/impossible/3DI1.png"
317
+ },
318
+ "14(a)": {
319
+ "figure_path": "2312.17046v2_figure_14(a).png",
320
+ "caption": "Figure 15: Diffuse renderings of the impossible object in Figure 14 with ambient occlusion and local shadows. Note how much visual quality improves with subtle local shadows and beveled-edge look caused by ambient occlusion that is obtained from dynamically computed geometry information.",
321
+ "url": "http://arxiv.org/html/2312.17046v2/extracted/5325526/impossible/3diffuse0.png"
322
+ },
323
+ "14(b)": {
324
+ "figure_path": "2312.17046v2_figure_14(b).png",
325
+ "caption": "Figure 15: Diffuse renderings of the impossible object in Figure 14 with ambient occlusion and local shadows. Note how much visual quality improves with subtle local shadows and beveled-edge look caused by ambient occlusion that is obtained from dynamically computed geometry information.",
326
+ "url": "http://arxiv.org/html/2312.17046v2/extracted/5325526/impossible/3diffuse1.png"
327
+ },
328
+ "14(c)": {
329
+ "figure_path": "2312.17046v2_figure_14(c).png",
330
+ "caption": "Figure 15: Diffuse renderings of the impossible object in Figure 14 with ambient occlusion and local shadows. Note how much visual quality improves with subtle local shadows and beveled-edge look caused by ambient occlusion that is obtained from dynamically computed geometry information.",
331
+ "url": "http://arxiv.org/html/2312.17046v2/extracted/5325526/impossible/3diffuse2.png"
332
+ },
333
+ "15(a)": {
334
+ "figure_path": "2312.17046v2_figure_15(a).png",
335
+ "caption": "Figure 16: An example of an impossible object. The shape map is created using our program. The other two images are parameters of a barycentric shader. These three images provide shape and material representations of the impossible object.",
336
+ "url": "http://arxiv.org/html/2312.17046v2/extracted/5325526/impossible/1SM.png"
337
+ },
338
+ "15(b)": {
339
+ "figure_path": "2312.17046v2_figure_15(b).png",
340
+ "caption": "Figure 16: An example of an impossible object. The shape map is created using our program. The other two images are parameters of a barycentric shader. These three images provide shape and material representations of the impossible object.",
341
+ "url": "http://arxiv.org/html/2312.17046v2/extracted/5325526/impossible/1DI0.png"
342
+ },
343
+ "15(c)": {
344
+ "figure_path": "2312.17046v2_figure_15(c).png",
345
+ "caption": "Figure 16: An example of an impossible object. The shape map is created using our program. The other two images are parameters of a barycentric shader. These three images provide shape and material representations of the impossible object.",
346
+ "url": "http://arxiv.org/html/2312.17046v2/extracted/5325526/impossible/1DI1.png"
347
+ },
348
+ "16(a)": {
349
+ "figure_path": "2312.17046v2_figure_16(a).png",
350
+ "caption": "Figure 17: Diffuse and Transparent renderings of the impossible object in Figure 16. Note that since we have thickness information, we can also obtain realistic-looking transparency and translucency with impossible objects.",
351
+ "url": "http://arxiv.org/html/2312.17046v2/extracted/5325526/impossible/1diffuse.png"
352
+ },
353
+ "16(b)": {
354
+ "figure_path": "2312.17046v2_figure_16(b).png",
355
+ "caption": "Figure 17: Diffuse and Transparent renderings of the impossible object in Figure 16. Note that since we have thickness information, we can also obtain realistic-looking transparency and translucency with impossible objects.",
356
+ "url": "http://arxiv.org/html/2312.17046v2/extracted/5325526/impossible/1result1.jpg"
357
+ },
358
+ "16(c)": {
359
+ "figure_path": "2312.17046v2_figure_16(c).png",
360
+ "caption": "Figure 17: Diffuse and Transparent renderings of the impossible object in Figure 16. Note that since we have thickness information, we can also obtain realistic-looking transparency and translucency with impossible objects.",
361
+ "url": "http://arxiv.org/html/2312.17046v2/extracted/5325526/impossible/1result2.jpg"
362
+ },
363
+ "17(a)": {
364
+ "figure_path": "2312.17046v2_figure_17(a).png",
365
+ "caption": "Figure 18: An example of shape map creation on a Bezier patch by interpolating control vectors first along the edges, then inside the patch. The thickness is constant",
366
+ "url": "http://arxiv.org/html/2312.17046v2/x4.png"
367
+ },
368
+ "17(b)": {
369
+ "figure_path": "2312.17046v2_figure_17(b).png",
370
+ "caption": "Figure 18: An example of shape map creation on a Bezier patch by interpolating control vectors first along the edges, then inside the patch. The thickness is constant",
371
+ "url": "http://arxiv.org/html/2312.17046v2/x5.png"
372
+ },
373
+ "17(c)": {
374
+ "figure_path": "2312.17046v2_figure_17(c).png",
375
+ "caption": "Figure 18: An example of shape map creation on a Bezier patch by interpolating control vectors first along the edges, then inside the patch. The thickness is constant",
376
+ "url": "http://arxiv.org/html/2312.17046v2/x6.png"
377
+ },
378
+ "17(d)": {
379
+ "figure_path": "2312.17046v2_figure_17(d).png",
380
+ "caption": "Figure 18: An example of shape map creation on a Bezier patch by interpolating control vectors first along the edges, then inside the patch. The thickness is constant",
381
+ "url": "http://arxiv.org/html/2312.17046v2/x7.png"
382
+ },
383
+ "18(a)": {
384
+ "figure_path": "2312.17046v2_figure_18(a).png",
385
+ "caption": "Figure 19: An example of gradient field (conservative vector field) creation on a Bezier patch by interpolating control vectors first along the edges, then inside the patch. The thickness is constant",
386
+ "url": "http://arxiv.org/html/2312.17046v2/extracted/5325526/vectorfield/a1.png"
387
+ },
388
+ "18(b)": {
389
+ "figure_path": "2312.17046v2_figure_18(b).png",
390
+ "caption": "Figure 19: An example of gradient field (conservative vector field) creation on a Bezier patch by interpolating control vectors first along the edges, then inside the patch. The thickness is constant",
391
+ "url": "http://arxiv.org/html/2312.17046v2/extracted/5325526/vectorfield/a2.png"
392
+ },
393
+ "18(c)": {
394
+ "figure_path": "2312.17046v2_figure_18(c).png",
395
+ "caption": "Figure 19: An example of gradient field (conservative vector field) creation on a Bezier patch by interpolating control vectors first along the edges, then inside the patch. The thickness is constant",
396
+ "url": "http://arxiv.org/html/2312.17046v2/extracted/5325526/vectorfield/a3.png"
397
+ },
398
+ "18(d)": {
399
+ "figure_path": "2312.17046v2_figure_18(d).png",
400
+ "caption": "Figure 19: An example of gradient field (conservative vector field) creation on a Bezier patch by interpolating control vectors first along the edges, then inside the patch. The thickness is constant",
401
+ "url": "http://arxiv.org/html/2312.17046v2/extracted/5325526/vectorfield/a4.png"
402
+ },
403
+ "19(a)": {
404
+ "figure_path": "2312.17046v2_figure_19(a).png",
405
+ "caption": "Figure 20: An example of non-conservative vector field creation on a Bezier patch by interpolating control vectors first along the edges, then inside the patch. The thickness is constant",
406
+ "url": "http://arxiv.org/html/2312.17046v2/extracted/5325526/vectorfield/c1.png"
407
+ },
408
+ "19(b)": {
409
+ "figure_path": "2312.17046v2_figure_19(b).png",
410
+ "caption": "Figure 20: An example of non-conservative vector field creation on a Bezier patch by interpolating control vectors first along the edges, then inside the patch. The thickness is constant",
411
+ "url": "http://arxiv.org/html/2312.17046v2/x8.png"
412
+ },
413
+ "19(c)": {
414
+ "figure_path": "2312.17046v2_figure_19(c).png",
415
+ "caption": "Figure 20: An example of non-conservative vector field creation on a Bezier patch by interpolating control vectors first along the edges, then inside the patch. The thickness is constant",
416
+ "url": "http://arxiv.org/html/2312.17046v2/x9.png"
417
+ },
418
+ "19(d)": {
419
+ "figure_path": "2312.17046v2_figure_19(d).png",
420
+ "caption": "Figure 20: An example of non-conservative vector field creation on a Bezier patch by interpolating control vectors first along the edges, then inside the patch. The thickness is constant",
421
+ "url": "http://arxiv.org/html/2312.17046v2/x10.png"
422
+ },
423
+ "20(a)": {
424
+ "figure_path": "2312.17046v2_figure_20(a).png",
425
+ "caption": "Figure 21: Examples of the operations that create quad-heavy meshes.",
426
+ "url": "http://arxiv.org/html/2312.17046v2/extracted/5325526/quadmeshes/cr1.png"
427
+ },
428
+ "20(b)": {
429
+ "figure_path": "2312.17046v2_figure_20(b).png",
430
+ "caption": "Figure 21: Examples of the operations that create quad-heavy meshes.",
431
+ "url": "http://arxiv.org/html/2312.17046v2/extracted/5325526/quadmeshes/cr2.png"
432
+ },
433
+ "20(c)": {
434
+ "figure_path": "2312.17046v2_figure_20(c).png",
435
+ "caption": "Figure 21: Examples of the operations that create quad-heavy meshes.",
436
+ "url": "http://arxiv.org/html/2312.17046v2/extracted/5325526/quadmeshes/cr3.png"
437
+ },
438
+ "21": {
439
+ "figure_path": "2312.17046v2_figure_21.png",
440
+ "caption": "Figure 22: Two examples of thickening a medial axis to a quad mesh.",
441
+ "url": "http://arxiv.org/html/2312.17046v2/extracted/5325526/quadmeshes/cr4.png"
442
+ },
443
+ "22": {
444
+ "figure_path": "2312.17046v2_figure_22.png",
445
+ "caption": "Figure 23: New quad creations with split-edge two-edge and split-face operators.",
446
+ "url": "http://arxiv.org/html/2312.17046v2/extracted/5325526/quadmeshes/op1.png"
447
+ },
448
+ "23(a)": {
449
+ "figure_path": "2312.17046v2_figure_23(a).png",
450
+ "caption": "Figure 24: Two quadrilateral property preserving group operations.",
451
+ "url": "http://arxiv.org/html/2312.17046v2/extracted/5325526/quadmeshes/op2.png"
452
+ },
453
+ "23(b)": {
454
+ "figure_path": "2312.17046v2_figure_23(b).png",
455
+ "caption": "Figure 24: Two quadrilateral property preserving group operations.",
456
+ "url": "http://arxiv.org/html/2312.17046v2/extracted/5325526/quadmeshes/op3.png"
457
+ },
458
+ "24(a)": {
459
+ "figure_path": "2312.17046v2_figure_24(a).png",
460
+ "caption": "Figure 25: An example of an object created using our software that turns a static illustration into a dynamic one. (a) a shape map created from the original drawing using our system; (b,c) are the shader parameters.",
461
+ "url": "http://arxiv.org/html/2312.17046v2/extracted/5325526/Bukalemun/SM.png"
462
+ },
463
+ "24(b)": {
464
+ "figure_path": "2312.17046v2_figure_24(b).png",
465
+ "caption": "Figure 25: An example of an object created using our software that turns a static illustration into a dynamic one. (a) a shape map created from the original drawing using our system; (b,c) are the shader parameters.",
466
+ "url": "http://arxiv.org/html/2312.17046v2/extracted/5325526/Bukalemun/DI0.png"
467
+ },
468
+ "24(c)": {
469
+ "figure_path": "2312.17046v2_figure_24(c).png",
470
+ "caption": "Figure 25: An example of an object created using our software that turns a static illustration into a dynamic one. (a) a shape map created from the original drawing using our system; (b,c) are the shader parameters.",
471
+ "url": "http://arxiv.org/html/2312.17046v2/extracted/5325526/Bukalemun/DI1.png"
472
+ },
473
+ "25(a)": {
474
+ "figure_path": "2312.17046v2_figure_25(a).png",
475
+ "caption": "Figure 26: (a) An artist\u2019s original illustration; (b) a shape map created from the original drawing using our system; (c) a B&W image that provides the combined effect of our shading, shadow, and ambient occlusion computations; (d) a color-rendering image created by using interpolating texture images. The final image is more volumetric in look than the original drawing due to subtle effects provided by shadow and ambient occlusion even though there is no true 3D shape.",
476
+ "url": "http://arxiv.org/html/2312.17046v2/extracted/5325526/Bukalemun/original.png"
477
+ },
478
+ "25(b)": {
479
+ "figure_path": "2312.17046v2_figure_25(b).png",
480
+ "caption": "Figure 26: (a) An artist\u2019s original illustration; (b) a shape map created from the original drawing using our system; (c) a B&W image that provides the combined effect of our shading, shadow, and ambient occlusion computations; (d) a color-rendering image created by using interpolating texture images. The final image is more volumetric in look than the original drawing due to subtle effects provided by shadow and ambient occlusion even though there is no true 3D shape.",
481
+ "url": "http://arxiv.org/html/2312.17046v2/extracted/5325526/Bukalemun/result1.png"
482
+ },
483
+ "25(c)": {
484
+ "figure_path": "2312.17046v2_figure_25(c).png",
485
+ "caption": "Figure 26: (a) An artist\u2019s original illustration; (b) a shape map created from the original drawing using our system; (c) a B&W image that provides the combined effect of our shading, shadow, and ambient occlusion computations; (d) a color-rendering image created by using interpolating texture images. The final image is more volumetric in look than the original drawing due to subtle effects provided by shadow and ambient occlusion even though there is no true 3D shape.",
486
+ "url": "http://arxiv.org/html/2312.17046v2/extracted/5325526/Bukalemun/result2.png"
487
+ },
488
+ "26(a)": {
489
+ "figure_path": "2312.17046v2_figure_26(a).png",
490
+ "caption": "Figure 27: Reconstruction apples removing one.",
491
+ "url": "http://arxiv.org/html/2312.17046v2/extracted/5325526/reconstruction/a.png"
492
+ },
493
+ "26(b)": {
494
+ "figure_path": "2312.17046v2_figure_26(b).png",
495
+ "caption": "Figure 27: Reconstruction apples removing one.",
496
+ "url": "http://arxiv.org/html/2312.17046v2/extracted/5325526/reconstruction/aSM.png"
497
+ },
498
+ "26(c)": {
499
+ "figure_path": "2312.17046v2_figure_26(c).png",
500
+ "caption": "Figure 27: Reconstruction apples removing one.",
501
+ "url": "http://arxiv.org/html/2312.17046v2/extracted/5325526/reconstruction/a1.png"
502
+ },
503
+ "26(d)": {
504
+ "figure_path": "2312.17046v2_figure_26(d).png",
505
+ "caption": "Figure 27: Reconstruction apples removing one.",
506
+ "url": "http://arxiv.org/html/2312.17046v2/extracted/5325526/reconstruction/a4.png"
507
+ },
508
+ "27(a)": {
509
+ "figure_path": "2312.17046v2_figure_27(a).png",
510
+ "caption": "Figure 28: Reconstruction of a horse.",
511
+ "url": "http://arxiv.org/html/2312.17046v2/extracted/5325526/reconstruction/h.jpg"
512
+ },
513
+ "27(b)": {
514
+ "figure_path": "2312.17046v2_figure_27(b).png",
515
+ "caption": "Figure 28: Reconstruction of a horse.",
516
+ "url": "http://arxiv.org/html/2312.17046v2/extracted/5325526/reconstruction/hSM.png"
517
+ },
518
+ "27(c)": {
519
+ "figure_path": "2312.17046v2_figure_27(c).png",
520
+ "caption": "Figure 28: Reconstruction of a horse.",
521
+ "url": "http://arxiv.org/html/2312.17046v2/extracted/5325526/reconstruction/h1.png"
522
+ },
523
+ "27(d)": {
524
+ "figure_path": "2312.17046v2_figure_27(d).png",
525
+ "caption": "Figure 28: Reconstruction of a horse.",
526
+ "url": "http://arxiv.org/html/2312.17046v2/extracted/5325526/reconstruction/h3.png"
527
+ }
528
+ },
529
+ "validation": true,
530
+ "references": [
531
+ {
532
+ "1": {
533
+ "title": "Private conversation.",
534
+ "author": "John Hart.",
535
+ "venue": "According to a market research firm 3D Graphics is only 8% of the whole graphics market. 2D graphics such as vector, image and video is 90% of the graphics market., 2008.",
536
+ "url": null
537
+ }
538
+ },
539
+ {
540
+ "2": {
541
+ "title": "Du\" cubisme\".",
542
+ "author": "Albert Gleizes and Jean Metzinger.",
543
+ "venue": "RG Fischer, 1947.",
544
+ "url": null
545
+ }
546
+ },
547
+ {
548
+ "3": {
549
+ "title": "Jean metzinger: At the center of cubism.",
550
+ "author": "Daniel Robbins.",
551
+ "venue": "Jean Metzinger in Retrospect, 1985.",
552
+ "url": null
553
+ }
554
+ },
555
+ {
556
+ "4": {
557
+ "title": "A non-photorealistic lighting model for automatic technical illustration.",
558
+ "author": "Amy Gooch, Bruce Gooch, Peter Shirley, and Elaine Cohen.",
559
+ "venue": "In Proceedings of the 25th annual conference on Computer graphics and interactive techniques, SIGGRAPH \u201998, pages 447\u2013452, 1998.",
560
+ "url": null
561
+ }
562
+ },
563
+ {
564
+ "5": {
565
+ "title": "Computer-generated pen-and-ink illustration.",
566
+ "author": "Georges Winkenbach and David H Salesin.",
567
+ "venue": "In Proceedings of the 21st annual conference on Computer graphics and interactive techniques, pages 91\u2013100. ACM, 1994.",
568
+ "url": null
569
+ }
570
+ },
571
+ {
572
+ "6": {
573
+ "title": "Line drawing as a dynamic process.",
574
+ "author": "Donald H House and Mayank Singh.",
575
+ "venue": "In Pacific Conference on Computer Graphics and Applications, pages 351\u2013360, 2007.",
576
+ "url": null
577
+ }
578
+ },
579
+ {
580
+ "7": {
581
+ "title": "Real Drawing: Concepts, Constructions, Capriccios.",
582
+ "author": "Richard Davison.",
583
+ "venue": "Blurb Publisihing, 2007.",
584
+ "url": null
585
+ }
586
+ },
587
+ {
588
+ "8": {
589
+ "title": "Global illumination for 2d artworks with vector field rendering.",
590
+ "author": "Youyou Wang, Ozgur Gonen, and Ergun Akleman.",
591
+ "venue": "In ACM SIGGRAPH 2014 Posters, SIGGRAPH \u201914, pages 95:1\u201395:1, New York, NY, USA, 2014. ACM.",
592
+ "url": null
593
+ }
594
+ },
595
+ {
596
+ "9": {
597
+ "title": "Quad Dominant 2-Manifold Mesh Modeling.",
598
+ "author": "Ozgur Gonen.",
599
+ "venue": "PhD thesis, Texas A&M University, College Station, TX, 2016.",
600
+ "url": null
601
+ }
602
+ },
603
+ {
604
+ "10": {
605
+ "title": "Qualitative Global Illumination of Mock-3D Scenes.",
606
+ "author": "Wang Youyou.",
607
+ "venue": "PhD thesis, Texas A&M University, College Station, TX, 2014.",
608
+ "url": null
609
+ }
610
+ },
611
+ {
612
+ "11": {
613
+ "title": "Cos shadows: an integrated model for direct illumination, subsurface scattering and shadow computation.",
614
+ "author": "Ergun Akleman, Fermi Perumal, and Youyou Wang.",
615
+ "venue": "In ACM SIGGRAPH 2017 Posters, page 38, New York City, NW, 2017. ACM SIGGRAPH.",
616
+ "url": null
617
+ }
618
+ },
619
+ {
620
+ "12": {
621
+ "title": "Dynamic paintings: Real-time interactive artworks in web.",
622
+ "author": "Ergun Akleman, Anusha Shanker, Yinan Xiong, Ani Barseghyan, and Motahareh Fard.",
623
+ "venue": "In Proceedings of International Society of Electronic Arts 2022 (ISEA\u20192022), pages 1\u201312, June 2022.",
624
+ "url": null
625
+ }
626
+ },
627
+ {
628
+ "13": {
629
+ "title": "Web-based dynamic paintings: Real-time interactive artworks in web using a 2.5d pipeline, 2023.",
630
+ "author": "Ergun Akleman, Youyou wang, Yinan Xiong, Anusha Shanker, Fermi Perumal, Ozgur Gonen, and Motahareh Fard.",
631
+ "venue": null,
632
+ "url": null
633
+ }
634
+ },
635
+ {
636
+ "14": {
637
+ "title": "Digital bas-relief from 3d scenes.",
638
+ "author": "Tim Weyrich, Jia Deng, Connelly Barnes, Szymon Rusinkiewicz, and Adam Finkelstein.",
639
+ "venue": "In ACM SIGGRAPH 2007 papers, SIGGRAPH \u201907, 2007.",
640
+ "url": null
641
+ }
642
+ },
643
+ {
644
+ "15": {
645
+ "title": "http://en.wikipedia.org/wiki/Conservative_vector_field, 2014a.",
646
+ "author": "Wikipedia.",
647
+ "venue": null,
648
+ "url": null
649
+ }
650
+ },
651
+ {
652
+ "16": {
653
+ "title": "Gradient domain high dynamic range compression.",
654
+ "author": "Raanan Fattal, Dani Lischinski, and Michael Werman.",
655
+ "venue": "In Proceedings of the 29th annual conference on Computer graphics and interactive techniques, SIGGRAPH \u201902, pages 249\u2013256, 2002.",
656
+ "url": null
657
+ }
658
+ },
659
+ {
660
+ "17": {
661
+ "title": "http://en.wikipedia.org/wiki/Ascending_and_Descending, 2014b.",
662
+ "author": "Wikipedia.",
663
+ "venue": null,
664
+ "url": null
665
+ }
666
+ },
667
+ {
668
+ "18": {
669
+ "title": "View-dependent geometry.",
670
+ "author": "Paul Rademacher.",
671
+ "venue": "In Proceedings of the 26th annual conference on Computer graphics and interactive techniques, SIGGRAPH \u201999, pages 439\u2013446, 1999.",
672
+ "url": null
673
+ }
674
+ },
675
+ {
676
+ "19": {
677
+ "title": "Block meshes: Topologically robust shape modeling with graphs embedded on 3-manifolds.",
678
+ "author": "Ergun Akleman, Jianer Chen, and Jonathan L Gross.",
679
+ "venue": "Computers & Graphics, 46:306\u2013326, 2015.",
680
+ "url": null
681
+ }
682
+ },
683
+ {
684
+ "20": {
685
+ "title": "Smi 2011: Full paper: Modeling (seemingly) impossible models.",
686
+ "author": "Gershon Elber.",
687
+ "venue": "Comput. Graph., 35(3):632\u2013638, June 2011.",
688
+ "url": null
689
+ }
690
+ },
691
+ {
692
+ "21": {
693
+ "title": "Local layering.",
694
+ "author": "James McCann and Nancy Pollard.",
695
+ "venue": "ACM Trans. Graph., 28(3):84:1\u201384:7, 2009.",
696
+ "url": null
697
+ }
698
+ },
699
+ {
700
+ "22": {
701
+ "title": "Appearance-preserving simplification.",
702
+ "author": "Jonathan Cohen, Marc Olano, and Dinesh Manocha.",
703
+ "venue": "In Proceedings of the 25th annual conference on Computer graphics and interactive techniques, SIGGRAPH \u201998, pages 115\u2013122, 1998.",
704
+ "url": null
705
+ }
706
+ },
707
+ {
708
+ "23": {
709
+ "title": "Lumo: illumination for cel animation.",
710
+ "author": "Scott F. Johnston.",
711
+ "venue": "In Proceedings of the 2nd international symposium on Non-photorealistic animation and rendering, NPAR \u201902, pages 45\u201352, 2002.",
712
+ "url": null
713
+ }
714
+ },
715
+ {
716
+ "24": {
717
+ "title": "Single-view relighting with normal map painting.",
718
+ "author": "Makoto Okabe, Gang Zeng, Yasuyuki Matsushita, Takeo Igarashi, Long Quan, and Heung-Yeung Shum.",
719
+ "venue": "Proceedings of Pacific Graphics, pages 27\u201334, 2006.",
720
+ "url": null
721
+ }
722
+ },
723
+ {
724
+ "25": {
725
+ "title": "An image-based shading pipeline for 2d animation.",
726
+ "author": "H. Bezerra, B. Feijo, and L. Velho.",
727
+ "venue": "In Computer Graphics and Image Processing, 2005. SIBGRAPI 2005. 18th Brazilian Symposium on, pages 307\u2013314, 2005.",
728
+ "url": null
729
+ }
730
+ },
731
+ {
732
+ "26": {
733
+ "title": "Texture design and draping in 2d images.",
734
+ "author": "H. Winnemoeller, A. Orzan, L. Boissieux, and J. Thollot.",
735
+ "venue": "Computer Graphics Forum, 28(4):1091\u20131099, 2009.",
736
+ "url": null
737
+ }
738
+ },
739
+ {
740
+ "27": {
741
+ "title": "Crossshade: shading concept sketches using cross-section curves.",
742
+ "author": "Cloud Shao, Adrien Bousseau, Alla Sheffer, and Karan Singh.",
743
+ "venue": "ACM Trans. Graph., 31(4):45:1\u201345:11, 2012.",
744
+ "url": null
745
+ }
746
+ },
747
+ {
748
+ "28": {
749
+ "title": "Image vectorization using optimized gradient meshes.",
750
+ "author": "J. Sun, L. Liang, F. Wen, and H. Shum.",
751
+ "venue": "ACM Transactions on Graphics (TOG), 26(11):11:1\u201311:7, 2007.",
752
+ "url": null
753
+ }
754
+ },
755
+ {
756
+ "29": {
757
+ "title": "Diffusion curves: A vector representation for smooth-shaded images.",
758
+ "author": "Alexandrina Orzan, Adrien Bousseau, Holger Winnemoller, Pascal Barla, Joelle Thollot, and David Salesin.",
759
+ "venue": "ACM Transactions on Graphics (TOG), 27(3):92:1\u201392:8, 2008.",
760
+ "url": null
761
+ }
762
+ },
763
+ {
764
+ "30": {
765
+ "title": "Lazy- brush: Flexible painting tool for hand-drawn cartoons.",
766
+ "author": "Daniel S\u00fdkora, John Dingliana, and Steven Collins.",
767
+ "venue": "Computer Graphics Forum, 28(2):599\u2013608, 2009.",
768
+ "url": null
769
+ }
770
+ },
771
+ {
772
+ "31": {
773
+ "title": "Freeform vector graphics with controlled thin-plate splines.",
774
+ "author": "M. Finch, J. Snyder, and H. Hoppe.",
775
+ "venue": "ACM Transactions on Graphics (TOG), 30:166:1\u2013166:10, 2011.",
776
+ "url": null
777
+ }
778
+ },
779
+ {
780
+ "32": {
781
+ "title": "Shapepalettes: Interactive normal transfer via sketching.",
782
+ "author": "T. Wu, C. Tang, M. Brown, and H. Shum.",
783
+ "venue": "ACM Transactions on Graphics (TOG), 26(3):44:1\u201344:5, 2007.",
784
+ "url": null
785
+ }
786
+ },
787
+ {
788
+ "33": {
789
+ "title": "Surface flows for image-based shading design.",
790
+ "author": "Romain Vergne, Pascal Barla, Roland W. Fleming, and Xavier Granier.",
791
+ "venue": "ACM Transactions on Graphics (TOG), 31(94):94:1\u201394:9, 2012.",
792
+ "url": null
793
+ }
794
+ },
795
+ {
796
+ "34": {
797
+ "title": "Ink-and-ray: Bas-relief meshes for adding global illumination effects to hand-drawn characters.",
798
+ "author": "Daniel S\u00fdkora, Ladislav Kavan, Martin \u010cad\u00edk, Ond\u0159ej Jamri\u0161ka, Alec Jacobson, Brian Whited, Maryann Simmons, and Olga Sorkine-Hornung.",
799
+ "venue": "ACM Transactions on Graphics (TOG), 33, 2014(to appear).",
800
+ "url": null
801
+ }
802
+ },
803
+ {
804
+ "35": {
805
+ "title": "Environment matting and compositing.",
806
+ "author": "Douglas E Zongker, Dawn M Werner, Brian Curless, and David H Salesin.",
807
+ "venue": "In Proceedings of the 26th annual conference on Computer graphics and interactive techniques, pages 205\u2013214, 1999.",
808
+ "url": null
809
+ }
810
+ },
811
+ {
812
+ "36": {
813
+ "title": "An introduction to splines for use in computer graphics and geometric modeling.",
814
+ "author": "John C Beatty and Brian A Barsky.",
815
+ "venue": "Morgan Kaufmann, 1987.",
816
+ "url": null
817
+ }
818
+ },
819
+ {
820
+ "37": {
821
+ "title": "Insight for practical polygonal mesh modeling with discrete gauss-bonnet theorem.",
822
+ "author": "E. Akleman and J. Chen.",
823
+ "venue": "In Proceedings of Geometry Modeling and Processing (GMP\u201906), pages 287\u2013298. Springer Berlin Heidelberg, 2006.",
824
+ "url": null
825
+ }
826
+ }
827
+ ],
828
+ "url": "http://arxiv.org/html/2312.17046v2"
829
+ }
20240101/2312.17660v2.json ADDED
The diff for this file is too large to render. See raw diff
 
20240101/2401.00617v1.json ADDED
The diff for this file is too large to render. See raw diff
 
20240101/2401.00632v1.json ADDED
@@ -0,0 +1,349 @@
1
+ {
2
+ "title": "TbDd: A New Trust-based, DRL-driven Framework for Blockchain Sharding in IoT",
3
+ "abstract": "Integrating sharded blockchain with IoT presents a solution for trust issues and optimized data flow. Sharding boosts blockchain scalability by dividing its nodes into parallel shards, yet it\u2019s vulnerable to the attacks where dishonest nodes target a shard to corrupt the entire blockchain. Balancing security with scalability is pivotal for such systems. Deep Reinforcement Learning (DRL) adeptly handles dynamic, complex systems and multi-dimensional optimization.\nThis paper introduces a Trust-based and DRL-driven (TbDd) framework, crafted to counter shard collusion risks and dynamically adjust node allocation, enhancing throughput while maintaining network security. With a comprehensive trust evaluation mechanism, TbDd discerns node types and performs targeted resharding against potential threats. The model maximizes tolerance for dishonest nodes, optimizes node movement frequency, ensures even node distribution in shards, and balances sharding risks. Rigorous evaluations prove TbDd\u2019s superiority over conventional random-, community-, and trust-based sharding methods in shard risk equilibrium and reducing cross-shard transactions.",
4
+ "sections": [
5
+ {
6
+ "section_id": "1",
7
+ "parent_section_id": null,
8
+ "section_name": "Introduction",
9
+ "text": "By interconnecting devices, vehicles, and appliances,\nthe Internet of Things (IoT) has transformed sectors ranging from smart cities [1 ###reference_1###], which optimize traffic and energy use, to autonomous vehicles [2 ###reference_2###] that aim for safer roads, industrial networks [3 ###reference_3###] that redefine production processes, e-health solutions [4 ###reference_4###] that prioritize patient care, and smart homes [5 ###reference_5###] that enrich daily living. The massive and sensitive nature of the data generated demands strong security and integrity, leading to the integration of IoT and blockchain [6 ###reference_6###, 7 ###reference_7###, 8 ###reference_8###, 9 ###reference_9###, 10 ###reference_10###]. This convergence, with its decentralized, immutable, and traceable features, offers a foundational structure for secure data exchanges [11 ###reference_11###]. Sharding, a key strategy in blockchain, addresses vast data needs and ensures scalability [12 ###reference_12###]. By dividing the network into smaller segments, sharding enables simultaneous transaction processing, lightening the load on nodes and increasing transaction throughput.\nAs the IoT landscape expands, managing and securing vast networks becomes more intricate. However, while various sharding techniques like random-based [13 ###reference_13###, 14 ###reference_14###, 15 ###reference_15###, 16 ###reference_16###], community-based [17 ###reference_17###], and trust-based [18 ###reference_18###] methods enhance the scalability and distribution of dishonest nodes, they remain vulnerable to advanced threats such as adaptive collusion attacks. In such attacks, dishonest nodes manipulate sharding systems by modifying their behavior to focus on specific shards, potentially initiating the attack [19 ###reference_19###], which significantly compromises the blockchain\u2019s security, trustworthiness, and immutability.\nDeep Reinforcement Learning (DRL) has become instrumental in the confluence of IoT and blockchain sharding due to its proficiency in handling multiple sharding variables [20 ###reference_20###, 21 ###reference_21###, 22 ###reference_22###, 23 ###reference_23###, 24 ###reference_24###, 25 ###reference_25###]. Amidst the surging blockchain transaction data, DRL provides real-time adaptability, fostering dynamic shard adjustments that enhance resource use and overall system efficiency. However, many DRL-focused studies underestimate the challenges of strategic collusion attacks, where dishonest nodes might disguise their identity or increase cross-shard transactions to hide their intent.\nThis study investigates the expansion and fortification of blockchain sharding by introducing an innovative framework that utilizes trust tables and DRL. The introduced trust table advances the sharding process by incorporating a multi-dimensional approach to gather feedback within the voting mechanism. This multi-faceted feedback includes direct feedback, indirect feedback, and historical behaviors during the voting process, all contributing to a robust defense mechanism against attack risks. Concurrently, the DRL-driven sharding framework is designed to dynamically allocate shards in real-time, which streamlines node synchronization and minimizes cross-shard transactions (CSTs). This approach also ensures a balanced distribution of dishonest nodes, aiming to decrease the necessity for frequent resharding. 
The study suggests security thresholds for this DRL approach to enhance training and fortify the sharded blockchain\u2019s security.\nThe primary contributions of this study can be summarized as follows:\nWe propose a secure, scalable, sharded blockchain system architecture for IoT scenarios. Our proposed system effectively mitigates the risks associated with attacks and reduces CSTs, striking a balance between security and scalability.\nWe design a novel blockchain sharding system, TbDd, incorporating trust tables to distinguish between dishonest and honest nodes. We consider multiple perspectives for generating a trust table to identify node properties, such as distributed voting, consensus, and node historical behavior analysis.\nWe leverage the DRL framework to monitor the blockchain sharding process continually. By dynamically balancing scalability and security, we achieve high throughput while maintaining the number of dishonest nodes below the security threshold in each shard, ensuring a robust and secure decentralized system.\nComparative experiments were conducted simulating strategic collusion attacks across different sharding methods. The outcomes highlight that the TbDd system excels in minimizing CSTs and ensuring a balanced shard trust distribution. Additionally, the TbDd framework, in the absence of collusion attacks and within tolerable dishonest-node levels, offers a throughput advantage over random-based sharding and an enhancement over the trust-based approach.\nThe organization of the remaining paper is as follows: Section II reviews relevant literature. Section III introduces the proposed system model and problem definition. Section IV comprehensively presents the proposed TbDd system. Experiments and assessments are conducted in Section V. The paper is concluded in Section VI."
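The trust table described in this introduction combines direct feedback, indirect feedback, and historical voting behavior into a per-node score. This excerpt does not give the exact aggregation rule, so the Python sketch below is only an illustration: the weights, the threshold, and the function names are hypothetical assumptions, not the paper's formulas.

```python
def trust_score(direct, indirect, history,
                w_direct=0.5, w_indirect=0.3, w_history=0.2):
    """Combine three feedback channels into a single trust value in [0, 1].

    direct:   mean of first-hand votes this node received in recent rounds
    indirect: mean of trust opinions relayed by other nodes
    history:  fraction of past consensus rounds where the node voted with
              the finally committed outcome
    All weights are illustrative assumptions, not values from the paper.
    """
    return w_direct * direct + w_indirect * indirect + w_history * history

def classify_nodes(feedback, threshold=0.6):
    """Split nodes into honest / dishonest sets by a hypothetical threshold."""
    honest, dishonest = set(), set()
    for node_id, (d, i, h) in feedback.items():
        (honest if trust_score(d, i, h) >= threshold else dishonest).add(node_id)
    return honest, dishonest

# Example: one well-behaved node and one that keeps voting against consensus.
feedback = {"n1": (0.9, 0.8, 0.95), "n2": (0.4, 0.3, 0.2)}
print(classify_nodes(feedback))
```

The point of mixing three channels is robustness: a colluding node can inflate its direct votes within its clique, but its historical disagreement with committed outcomes still pulls the combined score down.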
+ },
+ {
+ "section_id": "2",
+ "parent_section_id": null,
+ "section_name": "II Related Work",
+ "text": "In this section, we present the existing related work in the field of blockchain-enabled IoT. Then, we review research on blockchain sharding techniques, including random-based sharding, community-based sharding and trust-based sharding. Additionally, we compare the use of DRL technology in sharded blockchains."
+ },
+ {
+ "section_id": "2.1",
+ "parent_section_id": "2",
+ "section_name": "II-A Blockchain and IoT",
+ "text": "With the rapid development of blockchain technology, blockchain-enabled IoT has become more secure and reliable. Existing blockchain-enabled IoT systems primarily focus on framework design and consensus protocol design. For instance, Kang et al. [2 ###reference_2###] proposed a solution to address security and privacy challenges in Vehicular Edge Computing and Networks (VECONs) based on consortium blockchain and smart contracts. In the medical scenario, Gadekallu et al. [4 ###reference_4###] presented a blockchain-based solution to enhance the security of datasets generated from IoT devices in e-health applications. The proposed solution utilizes a blockchain platform and a private cloud to secure and verify the datasets, reducing tampering risks and ensuring reliable results.\nXu et al. proposed a blockchain-enabled system for data provenance with 5G and LoRa network [26 ###reference_26###, 27 ###reference_27###]."
+ },
+ {
+ "section_id": "2.2",
+ "parent_section_id": "2",
+ "section_name": "II-B Blockchain Sharding",
+ "text": ""
+ },
+ {
+ "section_id": "2.2.1",
+ "parent_section_id": "2.2",
+ "section_name": "II-B1 Random-based sharding",
+ "text": "In the realm of blockchain sharding research, a series of techniques based on random sharding have been proposed to enhance system scalability and fault tolerance. Elastico [13 ###reference_13###] is a pioneering protocol that introduced sharding within a permissionless setting, dynamically scaling the network in tandem with the increasing number of participating nodes. OmniLedger [14 ###reference_14###] took this further, ensuring linear scalability through an effective sharding mechanism that seamlessly manages cross-shard transactions. RapidChain [15 ###reference_15###] built upon these concepts with a full sharding solution that streamlines the partitioning of blockchain states and the processing of transactions. Monoxide [16 ###reference_16###] introduced asynchronous consensus areas to boost transaction efficiency, though its static sharding strategy may fall short under dynamically changing network conditions. These innovations have significantly contributed to system robustness by improving randomness and fault tolerance, yet challenges such as frequent node reassignments remain, leading to non-trivial system overheads."
+ },
+ {
+ "section_id": "2.2.2",
+ "parent_section_id": "2.2",
+ "section_name": "II-B2 Community-based sharding",
+ "text": "Community-based sharding is a partitioning method in blockchain networks that divides the system into smaller subsets or shards based on the interactions between nodes. Nodes and transactions are depicted as a graph and are partitioned into smaller subgraphs or shards, each comprising a subset of nodes and transactions. Fynn et al. [28 ###reference_28###] investigate the Ethereum blockchain\u2019s scalability through sharding implementation. They model the Ethereum blockchain as a graph and analyze different algorithms for graph partitioning, such as the Kernighan-Lin algorithm [29 ###reference_29###] and METIS [30 ###reference_30###]. Zhang et al. [17 ###reference_17###] proposed community detection-based sharding to enhance sharded blockchain scalability. They used the community detection algorithm to group nodes with frequent trading in the same shard to reduce CSTs. However, this solution demands significant computational resources and high communication costs."
+ },
+ {
+ "section_id": "2.2.3",
+ "parent_section_id": "2.2",
+ "section_name": "II-B3 Trust-based sharding",
+ "text": "Trust-based blockchain sharding divides the network into smaller shards based on node trust. Nodes are evaluated by reputation and behavior, grouping trusted nodes to enhance security and performance and evenly distributing dishonest nodes. Yun et al. [18 ###reference_18###] presented a Trust Based Shard Distribution (TBSD) scheme to enhance system security and prevent collusion, specifically aiming to address the attack. However, their approach does not consider shard load balance and trust difference, potentially leading to system delays. Huang et al. [31 ###reference_31###] proposed RepChain, a reputation-based blockchain system with sharding for high throughput, security, and node cooperation. It utilizes a double-chain architecture: transaction chain and reputation chain. However, the double-chain architecture chain increases system complexity and resource needs, resulting in additional overhead. Zhang et al. [32 ###reference_32###] proposed a new blockchain sharding model to enhance security and efficiency. Their approach considers shard trust, latency, and node count differences, reducing the risk of blockchain failure. However, the effectiveness of their proposed model heavily depends on obtaining accurate information, which can be challenging to obtain in dynamic blockchain sharding scenarios."
+ },
+ {
+ "section_id": "2.3",
+ "parent_section_id": "2",
+ "section_name": "II-C Deep Reinforcement Learning-based Sharding",
+ "text": "DRL-based sharding solutions have been making strides in the realm of IoT. Liu et al. [22 ###reference_22###] were pioneers, leveraging DRL in a blockchain framework tailored for Industrial IoT (IIoT). While they dynamically adjusted critical parameters, they missed addressing dishonest attacks. Similarly, another work by Liu et al. [23 ###reference_23###] harnessed Ethereum and DRL for IIoT data security but sidestepped throughput scalability concerns. On the other hand, Qiu et al. presented service-oriented blockchain solutions, using DRL for refined block production and bandwidth allocation [24 ###reference_24###, 25 ###reference_25###]. However, these lacked centralized security measures, making them susceptible to dishonest threats.\nFurther innovations came from Yun et al. [20 ###reference_20###], who introduced a DQN-optimized framework for sharded blockchains. Despite its advancements, the approach overlooked intricate attack strategies in DRL-based sharding and was confined by its central Q-learning reliance. Meanwhile, Yang et al. [21 ###reference_21###] incorporated K-means clustering in a sharded blockchain, utilizing DRL for optimization. Yet, their methodology was marred by the computational intensity and intricacy of DRL.\nMost existing articles [22 ###reference_22###, 23 ###reference_23###, 24 ###reference_24###, 25 ###reference_25###] lack effective analysis of dishonest attacks in IoT using DRL-based blockchain, neglecting the complexities and depth of practical security threats. In contrast, our proposed model analyses the attack problem in the blockchain sharding model and considers strategic collusion attacks involved in existing related studies. By understanding attackers\u2019 motivation, we develop proactive defense mechanisms and risk mitigation strategies. In this paper, we design a trustworthy DRL-based blockchain sharding system under IoT scenarios."
+ },
+ {
+ "section_id": "3",
+ "parent_section_id": null,
+ "section_name": "III System Model: Mining in Permissioned Sharded Blockchain Networks",
+ "text": "In this section, we present the architecture of the proposed sharded blockchain designed to support an IoT network, followed by roles involved in the system, providing a workflow overview and elucidating the system assumptions."
+ },
+ {
+ "section_id": "3.1",
+ "parent_section_id": "3",
+ "section_name": "III-A System Overview",
+ "text": "###figure_1### ###figure_2### System Model. Fig. 1 ###reference_### illustrates the proposed sharding framework used in IoT deployments. This framework aims to enhance the scalability and efficiency of blockchain applications within the vast and interconnected realm of IoT devices. Central to the architecture is the sharding mechanism that partitions the broader network into smaller, more manageable shards. Each shard handles a subset of the overall transactions, allowing for simultaneous processing and increasing throughput. Additionally, the design ensures a balanced distribution of nodes across shards to mitigate adaptive collusion attacks.\nArchitecture. Fig. 2 ###reference_### illustrates TbDd\u2019s architecture. Within each shard managed by an edge server, any node can propose a block as a leader. Each node maintains a local trust table with trust scores assigned to other network nodes. We\u2019ve introduced the TbDd Committee (TC), composed of members democratically selected by the network\u2019s users. Drawing inspiration from Elastico[13 ###reference_13###], the TC ensures decentralized and reliable oversight. This design prevents single points of failure and minimizes risks from a centralized authority.\nThe TC serves as a decentralized, trusted coordinator, overseeing the user list for the entire network and designating nodes to shards. It aggregates nodes\u2019 local trust tables to derive a global trust metric per node, safeguarding trust and node distribution. Essential to TbDd is the resharding mechanism, regularly redistributing nodes among shards. This adaptability ensures the system remains scalable and secure, accommodating a higher transaction volume without compromising on security.\nRoles. In TbDd, the users can take two roles: validator and leader. A user is a node in the network, which represents a server. All nodes within this network can play the role of validators, participating in the verification process of proposed blocks. There are participated nodes in the considered blockchains sharding network . A blockchain sharding network is divided into shards to verify blocks during the episode of . The block proposer is regarded as the leader. The leader broadcasts its proposed block to other peers in the shard for validation. It is important to note that in this system, the role of the leader is trivial because the consensus algorithm is not considered."
+ },
+ {
+ "section_id": "3.2",
+ "parent_section_id": "3",
+ "section_name": "III-B Workflow Overview",
+ "text": "The proposed TbDd framework follows the workflow in Fig. 3 ###reference_###.\n###figure_3### Step-1.\nTrust table updated.\nThe first step in the workflow is updating the trust table. The trust table shows the global trust scores for individual users. Then, the system checks if conditions for triggering Algo. 1 ###reference_### are met.\nStep-2.\nTrain the DRL algorithm. This step trains the DRL algorithm using the collected network data. This involves running simulations to optimize shard allocation policies based on real-time network conditions. After initiating the DRL training process, the system enters a waiting state and dedicates its computational resources to the training procedure. Note that the TC conducts \u201cvirtual resharding\u201d trials to evaluate outcomes and rewards for different actions; however, these trials occur solely within the TC, and the final sharding action is not implemented until it has fully converged.\nStep-3.\nUpdate shard allocation. TC updates shard allocation policies in real time once the DRL model completes training and allocates nodes to shards based on the result.\nStep-4.\nMonitor network performance. As shard allocation policies are updated, monitoring network performance is important. This involves tracking network metrics such as transaction distribution and volume, node location, and colluding risk.\nAfter Step-4, the TC moves on to the retraining phase of the DRL model, incorporating the latest information from the trust table obtained in Step-1. Subsequently, the TC updates the shard allocation strategy in Step-3 based on the new insights gained from the retrained DRL algorithm. This cyclical process follows a similar set of steps, beginning from Step-1 and progressing through Step-4."
+ },
+ {
+ "section_id": "3.3",
+ "parent_section_id": "3",
+ "section_name": "III-C System Assumptions",
+ "text": "We have several assumptions. These assumptions are considered from multiple perspectives, such as attack and system environment."
+ },
+ {
+ "section_id": "3.3.1",
+ "parent_section_id": "3.3",
+ "section_name": "III-C1 Attack assumption",
+ "text": "The collusion behavior of nodes refers to the situation where multiple nodes work together to manipulate the network and gain some unfair advantage. This dishonest behavior can overload the network, leading to delays and service disruptions. We design an attack model focusing on collusion attacks in blockchain sharding, and we consider two types of participants: dishonest nodes and honest nodes.\nDishonest nodes.\nDishonest nodes aim to be allocated in the same shard and then intentionally propose invalid results to honest nodes\u2019 blocks. This is achieved by sending many invalid transactions across dishonest nodes, regardless of whether they are cross-shard or intra-shard transactions. Suppose a dishonest node finds no other teammates in the same shard. In that case, it may hide by pretending to be honest and following honest nodes\u2019 behavior, avoiding suspicion. Dishonest nodes can also utilize the global trust table\u2019s information to enhance their trust by imitating the conduct of honest nodes, selectively endorsing or opposing blocks according to proposers\u2019 trust scores.\nIn (1 ###reference_###), we normalize the trust score of the -th node to a range of . Let and be the minimum and maximum possible raw trust scores. We calculate the current episode\u2019s normalized value by the previous episode\u2019s global trust score.\nWe utilize probabilities related to voting behavior, in which the probability of a node voting for a block is denoted as , while the probability of voting against a block is represented by . We further distinguish probabilities of voting behavior between dishonest and honest nodes. The probability of dishonest nodes engaging in honest voting and dishonest voting is respectively denoted as and . Let be the probability of a node\u2019s vote failing to reach others due to network issues, with . Assume is the proportion of dishonest nodes in the shard, with . We can define a strategy threshold , with , determining when a dishonest node would try to hide by pretending to be honest. If , the dishonest nodes pretend to be honest and follow honest nodes\u2019 behavior; otherwise, they follow the collusion strategy and favor their conspirators. A block verification result, denoted by , takes a if it matches the local version and otherwise. We can define a weighting factor based on the normalized trust score and a weighting factor based on the block verification result in . We can define the probability distribution for dishonest nodes (2 ###reference_###) and (3 ###reference_###) as follows:\nwhere signifies a strategy in which attackers consistently vote in favor of their teammates while voting against honest leaders with a probability denoted by . The collusion strategy (4 ###reference_###) is to give valid results to all dishonest partners, attack honest nodes according to a particular proportion, and give them non-valid.\nHonest nodes.\nHonest nodes rely on the global trust table, providing an overview of high-risk or low-risk users without explicitly identifying dishonest nodes. During the intra-consensus phase, honest nodes rely on block verification, voting for a block only if it matches their local version. We propose a composite probability distribution considering the trust-based voting distribution and the block verification distribution. This is achieved by weighting the probability of voting for a block based on the proposer\u2019s trust score and adjusting the weight according to the block verification result. 
This composite distribution allows honest nodes to make more informed decisions when voting for or against proposed blocks, considering both the proposer\u2019s trust score and the consistency of the proposed block with their local version. The combined probability distribution for honest nodes (5 ###reference_###) and (6 ###reference_###) can be calculated as follows:\nThe block verification table can be obtained as desired by simulating honest and dishonest nodes\u2019 behavior using these attack models. We can analyze the effects of collusion attacks on the overall system\u2019s security and performance in blockchain sharding systems.\nIn our proposed system, along with honest and dishonest nodes, we introduce the concept of high-risk and low-risk nodes. It is important to clarify that high-risk and low-risk nodes differ from honest and dishonest nodes. Our node evaluation principle classifies nodes as high-risk when their trust scores fall below a specific threshold, while nodes with trust scores above this threshold are categorized as low-risk nodes. These classifications do not necessarily mean high-risk or low-risk nodes are inherently honest or dishonest. Instead, they indicate the level of trustworthiness based on our evaluation criteria. By considering the presence of high-risk and low-risk nodes in the network, our system can better identify and respond to potential collusion attacks, improving the security and integrity of the blockchain sharding framework."
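To make the voting model above concrete, the following is a minimal simulation sketch. Because the paper's symbols and exact coefficients were lost in extraction, the parameter names (`rho`, `tau`, `p_fail`, `p_attack`, `w_trust`, `w_verify`) and the specific blending rule are illustrative assumptions, not the authors' implementation.

```python
import random

def normalize_trust(raw, t_min, t_max):
    """Map a raw trust score into [0, 1], in the spirit of Eq. (1)."""
    if t_max == t_min:
        return 0.0
    return (raw - t_min) / (t_max - t_min)

def dishonest_vote(leader_is_dishonest, block_matches_local,
                   rho, tau, p_fail, p_attack):
    """Vote cast by a dishonest node (Eqs. (2)-(4), simplified).

    rho: fraction of dishonest nodes in the shard.
    tau: strategy threshold; below it, dishonest nodes mimic honest ones.
    p_fail: probability the vote never reaches the other peers.
    p_attack: probability of voting against an honest leader when colluding.
    """
    if random.random() < p_fail:
        return None                      # vote lost to network issues
    if rho < tau:                        # too few teammates: mimic honesty,
        return block_matches_local       # voting on the block's actual validity
    if leader_is_dishonest:              # collusion: always back teammates
        return True
    return random.random() >= p_attack   # attack honest leaders w.p. p_attack

def honest_vote(block_matches_local, proposer_trust,
                w_trust=0.3, w_verify=0.7, p_fail=0.05):
    """Vote cast by an honest node (Eqs. (5)-(6), simplified): a weighted
    blend of block verification and the proposer's normalized trust."""
    if random.random() < p_fail:
        return None
    p_for = (w_verify * (1.0 if block_matches_local else 0.0)
             + w_trust * proposer_trust)
    return random.random() < p_for
```

Running many such votes per shard yields the synthetic block verification tables used in the analysis that follows.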
+ },
+ {
+ "section_id": "3.3.2",
+ "parent_section_id": "3.3",
+ "section_name": "III-C2 Environment assumption",
+ "text": "The system operates within permissioned blockchain systems, restricting network access to specific nodes. Despite this limitation, the system remains decentralized as it is maintained by the TC, a committee fairly elected by all peers within the network. For the training process to be considered trustworthy, it is important that the TC is reliable and has a transparent record when training the DRL model. Furthermore, Assumptions involve at least a certain number of members per shard, and the claim that the count of dishonest nodes in each shard does not surpass a certain number of the overall node count is based on the Byzantine Fault Tolerance (BFT) principle [33 ###reference_33###]. In case of node failure or a dishonest node attack, the system continues functioning with the remaining nodes."
+ },
+ {
+ "section_id": "4",
+ "parent_section_id": null,
+ "section_name": "IV TbDd: A Trust-Driven and DRL-based Approach to Optimize Throughput and Security",
+ "text": "Based on the results and analysis in Section III ###reference_###, we propose TbDd, composed of the Block Verification Table (BVT), Local Trust Table (LTT), and Global Trust Table (GTT), Shard Risk evaluator and Shard reconfiguration.\n###figure_4### Total number of nodes in the network\nNumber of nodes in the -th shard in episode\nThe set of all nodes in the network\nThe -th node in the network\nThe times of the -th node\nbeing elected as the leader in episode\nThe number of shards\nThe trust table update in episode\nThe -th epoch during the DRL iteration\nThe indirected feedback of -th leader from the -th node in episode\nThe directed feedback from the -th node to the -th leader in episode\nThe local trust from the -th node to the -th node in episode\nThe local trust table for the -th node in episode\nThe concatenated local trust tables across all nodes in episode\nThe global trust for the -th node in episode\nThe global trust table in episode\nThe normalized trust score of the -th node\nThe average global trust of -th shard in episode\nThe average value of all\nThe list of high-risk nodes in the -th shard\nThe entire network\u2019s fault tolerance threshold for dishonest nodes\nThe shard\u2019s fault tolerance threshold for dishonest nodes\nThe number of intra-shard transactions (ISTs)\nThe number of cross-shard transactions (CSTs)"
+ },
+ {
+ "section_id": "4.1",
+ "parent_section_id": "4",
+ "section_name": "IV-A Trust Scheme",
+ "text": ""
+ },
+ {
+ "section_id": "4.1.1",
+ "parent_section_id": "4.1",
+ "section_name": "IV-A1 Block Verification Table (BVT)",
+ "text": "The BVT aims to record the validation results for each shard. The verification table for the -th shard in the -th episode is denoted as . The size of is determined by the number of nodes in the shard, with dimensions of , where represents the number of nodes in the -th shard. The verification result table can be visualized as a two-dimensional matrix where each cell corresponds to the set of verification results generated by the node that proposed a block. The size of each cell in the matrix corresponds to the number of times the leader produced blocks. During the trust table update episode, assume that there are sufficient votes to guarantee that each node within the shard will be elected as a leader at least once, resulting in block production. In this assumption, the leader can expect to obtain a minimum of one ballot from himself. Furthermore, we assume that the consensus process complies with the weak synchrony assumption [34 ###reference_34###], indicating a finite upper limit on message delays."
+ },
+ {
+ "section_id": "4.1.2",
+ "parent_section_id": "4.1",
+ "section_name": "IV-A2 Trust Table",
+ "text": "In TbDd, two distinct trust tables are utilized: LTT and GTT. These tables are dynamic and regularly refreshed to capture the latest data on each node\u2019s performance and behaviors within the network. By constantly monitoring and evaluating the LTT and GTT, the TbDd system can make strategic and informed shard allocation decisions, ensuring a reliable and secure network operation.\nThe local trust table is denoted as . Each row of this table is specifically assigned to a node within the shard, and these nodes are represented as entries within the table, which we term as the local trust table of the -th node at the epoch, denoted as . When these individual local trust tables are concatenated, we represent it as , forming a comprehensive table where . Within this table, each element represents the trust score sent from the -th node to the -th node, denoted as . The computation of each element in the LTT of -th node is shown in (7 ###reference_###):\nwhen calculating the trust score in the same shard, three feedbacks are included: Indirected feedback , Directed feedback and Global trust score from the last episode . Each component has a distinct proportion represented by . These proportions sum up to (i.e., ). Only the global trust score from the previous episode is considered for calculating the trust scores of nodes in different shards.\nIndirected feedback of each leader.\nThe proportion of verification that the -th node passes when he is the leader at episode is shown in (8 ###reference_###):\nwhere represents the times of the -th node being elected as the leader in the -th episode, and signifies the total number of valid votes cast by the -th node for the -th leader. Then, the indirected feedback is calculated as follows (9 ###reference_###):\nwhere the node, denoted as , provides indirect feedback and participates in the voting process for the current leader . The node does not include block proposer and the voter itself. refers to the ratio of non-empty votes (both valid and non-valid votes) cast from the -th node for the -th leader. is a discount rate similar to that used in reinforcement learning.\nDirected feedback of each leader. The direct feedback of trust score is calculated as shown in (10 ###reference_###):\nwhere denotes the count of valid votes cast by the -th node for the -th node when is leader. indicates that the number of times of the -th node being elected as the leader during the -th episode.\nGlobal trust of each leader from the history. The historical trust of each leader\n is inherited from the last episode.\nThe global trust table is denoted as . Inspired by federated learning principles, the coordinator enhances credibility by updating the LTT according to the GTT\u2019s outcome after every iteration.\nCosine similarity calculation. Comparisons of cosine similarity among LTT rows reflect deviations in node-scoring behavior (11 ###reference_###):\nThe global trust score for each node is determined by computing the mean of the cosine similarity between the node\u2019s scoring behavior in the LTT and the entire LTT (12 ###reference_###):\nis a vector capturing the global trust, where element is the global trust of the -th node in the current shard."
+ },
+ {
+ "section_id": "4.2",
+ "parent_section_id": "4",
+ "section_name": "IV-B Resharding Trigger: The Shard Risk Evaluator",
+ "text": "The resharding process is triggered when certain conditions are met, leading to the trigger phase (the green block in Fig. 3 ###reference_###), as shown in Algo. 1 ###reference_###. We define a global trust threshold to differentiate between low-risk and high-risk nodes. Nodes are high-risk and possibly dishonest if their are lower than . Nodes are more likely to be honest if their global trust exceeds .\nWe define the fault tolerance threshold within each shard.\nIf the number of dishonest nodes exceeds the threshold in the shard, the shard is labeled as corrupted and triggers resharding. Furthermore, resharding is also triggered when the ratio of CST surpasses the setting threshold . The ratio of the CST is calculated as follows:\nwhere represents the CST count, as given by:\nwhere represents the transaction count between the -th and -th nodes in the network. represents the transaction count between the -th node and the -th node in the shard, which equals the sum of the Intra-Shard Transaction (IST). The is obtained by subtracting the IST count from the total transactions count among network nodes."
+ },
+ {
+ "section_id": "4.3",
+ "parent_section_id": "4",
+ "section_name": "IV-C DRL Framework",
+ "text": ""
+ },
+ {
+ "section_id": "4.3.1",
+ "parent_section_id": "4.3",
+ "section_name": "IV-C1 Optimizations, Rewards, and DRL",
+ "text": "The DRL-based model enhances blockchain sharding systems by dynamically allocating nodes depending on network conditions and automating complex decision-making procedures. The reward function optimizes node allocation while maintaining security constraints through DRL adaptation. In TbDd, the agent aims to maximize earnings by calculating the objective function that is composed of reward components, where , and are all constant.\nShard load balance ().\nAs shown in (15 ###reference_###), we aim to distribute the number of nodes across each shard evenly. If there is a significant disparity in the node count per shard, resharding is augmented to counteract potential attacks [35 ###reference_35###].\nCorrupted shards portion ().\nAs shown in (16 ###reference_###), the agent receives a reward if no shard is occupied. Otherwise, the agent receives a penalty.\nCST ratio ().\nAs shown in (17 ###reference_###), if the CST ratio is less than the specified threshold , the agent gets rewarded; otherwise, it receives a penalty.\nNodes shifting ratio ().\nAs shown in (18 ###reference_###), if a node switches the shard, its action is noted as 1; otherwise, it is 0. The overall count of shifted nodes corresponds with the penalty score. The more nodes relocate, the more computational resources are consumed by shard synchronization, resulting in a higher level of punishment.\nIntra-shard\u2019s trust variance ().\nAs shown in (19 ###reference_###), trust variance represents the trust distribution within each shard. A larger trust variance of a shard indicates a better distinction between trustworthy and untrustworthy participants. Thus, a larger intra-shard trust variance collected by the agent serves as a reward to indicate a clearer differentiation between honest and dishonest nodes.\nwhere is the global trust value of nodes in the -th shard and is the average of global trust in the -th shard.\nCross-shard\u2019s trust variance ().\nAs shown in (20 ###reference_###), it represents the deviation of trust value among different shards. A minor cross-shard trust variance indicates a more uniform distribution of dishonest nodes.\nwhere is the average value of all .\nThus, the objective function can be defined as:\nwhere is the balance reward for shard load; is the reward for the number of corrupted shards; signifies the reward for low-level CST; indicates the reward for the number of shifted nodes; stands for the reward of intra-shard trust variance; and represents the reward of cross-shard trust variance. We define as the set of all rewards. Restrictions of the objective function ensure the dishonest node count stays below the shard\u2019s fault tolerance. denotes the minimum node requirement for each shard. Each shard mandates a minimum of four nodes in our setting, i.e., ."
+ },
+ {
+ "section_id": "4.3.2",
+ "parent_section_id": "4.3",
+ "section_name": "IV-C2 DRL-based Sharding Optimization Model",
+ "text": "We use the DRL model to assist in the blockchain reconfiguration process. The agent of the DRL model is acted by the TC, a committee elected by all peers in the network. The agent obtains the state from the node allocation. Then, the agent gains the reward by virtually resharding, deciding the optimal allocation strategy and executing the action for the new node allocation.\nAgent:\nThe agent is perceived as the TC, which consists of several nodes. These nodes implement the PBFT protocol to achieve consensus, representing the collective action of the TC. The agent executes a learning process and decides shard allocation based on real-time network conditions.\nEnvironment:\nAs illustrated in Fig. 2 ###reference_###, the environment is viewed as a black box that executes the Action of the agents and obtains the State. In this paper, the sharding reconfiguration process in the blockchain sharding network is the environment.\nWe consider that the agent obtains a state based on the node allocation at the current episode . denotes the distribution of all nodes across various shards in the current episode, while precisely identifies the specific node present within a particular shard.\nwhere is the -shard to which the -th node belongs.\nActions ().\nThe agent executes an action based on its decision produced by its local DRL-based learning during the current episode . The valid action space for the agent is formatted identically to , as given by\n with .\nPolicy ().\nA policy determines what action the agent should take next based on a set of rules. In our case, the process of nodes assigned to different shards is based on the policy that is continually updated and trained. i.e., .\nReward function ().\nThe reward function is inherited from the objective function, .\nDRL is a versatile method that allows agents to make decisions in dynamic environments effectively. Its ability to handle complex data and its proactive defense against malicious attacks make it an ideal solution for managing node and transaction interactions in sharding systems. The sharding optimization problem, which involves assigning each node to a specific shard, is a known non-convex, NP-hard problem due to the binary nature of decision variables [36 ###reference_36###]. This makes DRL a more suitable approach to address this challenge than traditional methods such as convex optimization, underlining its significance in managing the complexities of sharded blockchain systems."
+ },
+ {
+ "section_id": "5",
+ "parent_section_id": null,
+ "section_name": "Experiment and Evaluation",
+ "text": "We experimentally assess the DRL-based sharding approach regarding convergence performance and stability under various environment settings, including the number of shards, the number of nodes, the resource distribution, and other practical settings. Note that the parameters in the following experiments align with real-world IoT scenarios, such as Mobile Edge Computing (MEC), where edge servers on vehicles or drones collect data from end devices such as sensors. These servers initiate re-sharding to optimize data processing by redistributing workload when moving across regions."
+ },
+ {
+ "section_id": "5.1",
+ "parent_section_id": "5",
+ "section_name": "Experiment Framework",
+ "text": "An experimental framework is implemented in a local environment with 2.00 Ghz, 2 40 cores, 2 Intel(R) Xeon(R) Gold 6138 CPU, NVIDIA PCIe A100, 8 40GB GPU, 250GB memory and 1000Mb/s to evaluate a proposed blockchain sharding scheme. We create a virtual machine using Ubuntu 18.04.6 LTS in Python 3.7.13 and Pytorch 1.13.1. In this setting, we use the discrete DRL algorithms, DQN and PPO, and run over epochs. We set the range of total nodes from in our environment. The transaction distribution model between nodes follows the normal distribution. The trustworthy coordinator is implemented by deploying the smart contract.\nIn our experimental setup, we assess the blockchain sharding system TBDD against other sharding techniques such as random-based sharding [13 ###reference_13###], community-based sharding [17 ###reference_17###], and trust-based sharding [18 ###reference_18###]. We account for dishonest nodes in the range of , which employ collusion strategies during block verification. These nodes may produce deceptive block verifications and send CSTs across different shards. Initially, these dishonest nodes are scattered randomly across shards. To model interactions between dishonest nodes in distinct shards, positive noises are introduced to their transaction counts. The resulting transaction distribution table is influenced by normally distributed transactions. The experiments\u2019 hyperparameters are detailed in Table II ###reference_###.\nAs the intra-shard fault tolerance threshold is set to , we also evaluate the relationship between the total number of nodes in the network, the number of shards , and the total fault tolerance , as given by\nby which one can realize whether the entire network has failed."
+ },
+ {
+ "section_id": "5.2",
+ "parent_section_id": "5",
+ "section_name": "Experiment Results",
+ "text": "In the experimental evaluations, we compared the performance of the TBDD system with random-based, community-based, and trust-based sharding approaches using metrics such as CST ratio, shard risk variance, corrupted shard number, and convergence speed. Through these evaluations, we validate the effectiveness and robustness of our proposed approach. In the following figures, we label each figure in terms of the number of nodes , shards , and dishonest nodes , respectively.\n###figure_5### ###figure_6### ###figure_7### ###figure_8### ###figure_9### ###figure_10### ###figure_11### ###figure_12### ###figure_13### ###figure_14### ###figure_15### ###figure_16### ###figure_17### ###figure_18### ###figure_19### Fig. 5(a) ###reference_f1### illustrates the average rewards achieved in a single epoch with different sharding schemes. The community-based sharding technique recorded the lowest reward. While this scheme can significantly minimize cross-shard transactions, it is vulnerable to adaptive collusion attacks. This is because it tends to cluster a high number of dishonest nodes within the same shard, compromising the shard\u2019s security. The random-based scheme fares slightly better, with its rewards hovering around zero. The trust-based sharding technique is more proficient at evenly distributing dishonest nodes across different shards, mitigating the risk of a single shard being dominated. However, it leads to a higher number of cross-shard transactions. Consequently, its rewards are marginally less than those of DQN and PPO. Upon reaching the epoch, both TBDD-PPO and TBDD-DQN consistently outperform the other sharding schemes, exhibiting consistently high rewards. This indicates that the agent received the maximum reward associated with the action.\nFigs. 5(b) ###reference_f2###\u20135(c) ###reference_f3### depict the convergence of rewards under various node numbers for TBDD-DQN and TBDD-PPO, respectively. The reward becomes more stable as the number of nodes increases. Across Figs. 5(a) ###reference_f1###\u20135(c) ###reference_f3###, a consistent trend emerges caused by the constrained approach in the policy update process. PPO achieves more stable rewards. The instability of DQN is because it combines the direct value function estimation method and greedy exploration strategy.\n###figure_20### ###figure_21### ###figure_22### ###figure_23### ###figure_24### ###figure_25### ###figure_26### ###figure_27### ###figure_28### ###figure_29### ###figure_30### ###figure_31### ###figure_32### ###figure_33### Figs. 6(a) ###reference_f1###\u20136(e) ###reference_f5### depict the relationship between the CST and shard risk variance. The shard risk variance closer to indicates a more balanced distribution of malicious nodes, enhancing shard security. The CST closer to suggests better shard scalability. The results from random-based sharding demonstrate scattered and unpredictable in the scatter plot. Community-based sharding focuses on scalability, overlooking security, leading to a relatively higher shard risk ratio. Trust-based sharding emphasizes security at the expense of scalability, resulting in a relatively larger cross-shard transaction ratio. In contrast, the proposed TbDd-DQN and TbDd-PPO consider both security and scalability. Compared to community-based and trust-based methods, TbDd-DQN and TbDd-PPO are closer to the bottom-left corner, which means that these two sharding methods improve system security while ensuring that the system scalability. 
This proves the effectiveness of this method, thereby approving the effectiveness of the TbDd framework. Furthermore, consistent with the findings from Fig. 5 ###reference_###, the TbDd-PPO method demonstrates higher stability than the TbDd-DQN method.\nFigs. 6(f) ###reference_f6###\u20136(j) ###reference_f10### illustrate the relationship between CST and node movement ratio. A node movement ratio closer to indicates fewer node movements, thus reducing system overhead. The outcomes from random-based sharding are unpredictable and uncontrollable. Both Community-based sharding and trust-based sharding methods have a higher number of node movements compared to TbDd-DQN and TbDd-PPO, leading to higher system overhead. Among these, the PPO method results in the fewest node movements. Through Fig. 6 ###reference_###, we can deduce the following: the TbDd-DQN and TbDd-PPO sharding proposed in this paper achieve an optimal balance between security, scalability, and system overhead, thereby affirming their effectiveness.\nIn Fig. 7 ###reference_###, it is shown that the average trust of dishonest nodes surpasses that of honest nodes as the number of dishonest nodes increases. The uppermost horizontal line in each column represents the maximum trust value among all nodes, while the bottom horizontal line represents the minimum trust value. The horizontal line at the midpoint signifies the median trust value of all nodes. When the total number of nodes is , and the total shard number is , the overall fault tolerance threshold is based on (13 ###reference_###). Consequently, a shard becomes vulnerable and is corrupted when the number of dishonest nodes .\nAs shown in Figs. 8(a) ###reference_f1###\u20138(l) ###reference_f12###, the blue curves represent the average trust of honest nodes, and the red curves are dishonest nodes. Consistent results from Fig. 7 ###reference_### reveal that the system can effectively perform sharding if the proportion of dishonest nodes remains below of the total node count. However, if the number of dishonest nodes exceeds the threshold, any sharding approaches, including the proposed schemes TbDd-DQN and TbDd-PPO, cannot safely execute sharding and are vulnerable to the attack, which is beyond the scope of our investigation. On the other hand, having a lower count of dishonest nodes still enables the implementation of more shards within the system, leading to improved overall performance and scalability.\nTo evaluate throughput across different sharding methods influenced by dishonest nodes, we use the normalized throughput metric. This metric compares the throughput with and without dishonest nodes. A higher ratio suggests dishonest nodes have minimal impact on transaction throughput, while a lower one indicates a significant negative effect. Additionally, we highlight the benefits of our system by comparing the shard corruption ratio over the last rounds among various sharding approaches. As depicted in Figs. 9(a) ###reference_f1###\u20139(b) ###reference_f2###, random-based sharding exhibits the lowest throughput and compromised security. As dishonest node counts rise, there\u2019s a pronounced decline in scalability, coupled with an increased number of corrupted shards. The Community-based sharding method has the best scalability and maintains a high throughput. Yet, its oversight on the security front results in a higher rate of shard corruption. 
In comparison, the trust-based sharding method reduces the chances of shard corruption but lags in throughput, signaling its scalability constraints. Without collusion attacks and within the tolerable limit of dishonest nodes, the proposed TbDd-DQN and TbDd-PPO in this article achieves approximately a throughput improvement over random-based sharding method and a increase compared to the trust-based method."
+ },
+ {
+ "section_id": "5.3",
+ "parent_section_id": "5",
+ "section_name": "Discussion",
+ "text": "Latency.\nIn TbDd, managing latency is crucial across block verification, sharding strategy optimization, and resharding phases. Tolerating delay during block verification is a necessary trade-off to prevent double-spending issues [37 ###reference_37###]. However, a prolonged offline DRL learning phase for the sharding strategy is undesirable.\nThis could be mitigated by synchronizing online sharding strategy learning with resharding execution. Latency concerns during resharding, due to extensive node synchronization, can be alleviated through state channels [38 ###reference_38###] that expedite off-chain transactions, reducing blockchain load and hastening resharding.\nIntegrating state channels into TbDd enables certain IoT transactions to be executed off-chain during proposed block verification, which lessens the node verification load and boosts overall system responsiveness.\nEdge-Driven Protocol.\nThe architecture of TbDd uniquely operates on edge servers, not directly on IoT sensor end nodes. This strategic placement ensures that the system can manage extensive computations associated with blockchain operations without overwhelming individual IoT devices. As a result, the scaling exhibited in our experiments is consistent and realistic, representing a practical implementation in real-world IoT networks. This edge-driven approach aligns with the broader move towards edge computing in IoT, capitalizing on its benefits to improve scalability and responsiveness.\nDecentralized Coordination.\nOur design leverages a decentralized TC, acting as a trustworthy third-party coordinator, eliminating the vulnerabilities associated with a single centralized coordinator. This approach significantly mitigates the risks of a single point of failure in the system. The intra-consensus security within the decentralized TC ensures robust and reliable decision-making processes. Such a structure has been mirrored in existing studies [13 ###reference_13###, 14 ###reference_14###], affirming its practicality and security. By embedding this design, TbDd further ensures system robustness, sustaining its promises of trustworthiness and integrity in diverse IoT settings."
+ },
+ {
+ "section_id": "6",
+ "parent_section_id": null,
+ "section_name": "VI Conclusion",
+ "text": "We introduced TbDd, a novel trust and DRL-based sharding framework for IoT contexts. TbDd surpasses random, community, and trust-based methods by optimally balancing security and scalability, and robustly combating strategic collusion attacks. We devised a trust table mechanism that evaluates collusion, drawing from distributed voting, consensus, and node behavior analysis. Our framework sets dynamic security thresholds, with the decentralized committee acting as an agent that optimizes node allocation by maximizing rewards. Our comprehensive experiments substantiate TbDd\u2019s potential as a robust solution for enhancing security and scalability in real-world IoT environments."
+ }
+ ],
+ "appendix": [],
+ "tables": {
+ "1": {
+ "table_html": "<figure class=\"ltx_table\" id=\"S4.T1\">\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\">TABLE I: </span>Notation and Definition Used in The Trust Table</figcaption>\n<table class=\"ltx_tabular ltx_centering ltx_guessed_headers ltx_align_middle\" id=\"S4.T1.50\">\n<thead class=\"ltx_thead\">\n<tr class=\"ltx_tr\" id=\"S4.T1.50.51.1\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_th_row ltx_border_r ltx_border_t\" id=\"S4.T1.50.51.1.1\" style=\"padding-top:1pt;padding-bottom:1pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T1.50.51.1.1.1\">Notation</span></th>\n<th class=\"ltx_td ltx_align_justify ltx_th ltx_th_column ltx_border_t\" id=\"S4.T1.50.51.1.2\" style=\"width:170.7pt;padding-top:1pt;padding-bottom:1pt;\"><span class=\"ltx_text ltx_font_bold ltx_align_top\" id=\"S4.T1.50.51.1.2.1\">Description</span></th>\n</tr>\n</thead>\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S4.T1.1.1\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_r ltx_border_t\" id=\"S4.T1.1.1.1\" style=\"padding-top:1pt;padding-bottom:1pt;\"></th>\n<td class=\"ltx_td ltx_align_justify ltx_border_t\" id=\"S4.T1.1.1.2\" style=\"width:170.7pt;padding-top:1pt;padding-bottom:1pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S4.T1.1.1.2.1\">Total number of nodes in the network</p>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.4.4\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_r\" id=\"S4.T1.2.2.1\" style=\"padding-top:1pt;padding-bottom:1pt;\"></th>\n<td class=\"ltx_td ltx_align_justify\" id=\"S4.T1.4.4.3\" style=\"width:170.7pt;padding-top:1pt;padding-bottom:1pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S4.T1.4.4.3.2.2\">Number of nodes in the -th shard in episode </p>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.5.5\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_r\" id=\"S4.T1.5.5.1\" style=\"padding-top:1pt;padding-bottom:1pt;\"></th>\n<td class=\"ltx_td ltx_align_justify\" id=\"S4.T1.5.5.2\" style=\"width:170.7pt;padding-top:1pt;padding-bottom:1pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S4.T1.5.5.2.1\">The set of all nodes in the network</p>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.7.7\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_r\" id=\"S4.T1.6.6.1\" style=\"padding-top:1pt;padding-bottom:1pt;\"></th>\n<td class=\"ltx_td ltx_align_justify\" id=\"S4.T1.7.7.2\" style=\"width:170.7pt;padding-top:1pt;padding-bottom:1pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S4.T1.7.7.2.1.1\">The -th node in the network</p>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.10.10\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_r\" id=\"S4.T1.8.8.1\" style=\"padding-top:1pt;padding-bottom:1pt;\"></th>\n<td class=\"ltx_td ltx_align_justify\" id=\"S4.T1.10.10.3\" style=\"width:170.7pt;padding-top:1pt;padding-bottom:1pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S4.T1.10.10.3.2.2\">The times of the -th node\nbeing elected as the leader in episode </p>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.11.11\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_r\" id=\"S4.T1.11.11.1\" style=\"padding-top:1pt;padding-bottom:1pt;\"></th>\n<td class=\"ltx_td ltx_align_justify\" id=\"S4.T1.11.11.2\" style=\"width:170.7pt;padding-top:1pt;padding-bottom:1pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S4.T1.11.11.2.1\">The number of shards</p>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.13.13\">\n<th class=\"ltx_td ltx_align_center 
ltx_th ltx_th_row ltx_border_r\" id=\"S4.T1.12.12.1\" style=\"padding-top:1pt;padding-bottom:1pt;\"></th>\n<td class=\"ltx_td ltx_align_justify\" id=\"S4.T1.13.13.2\" style=\"width:170.7pt;padding-top:1pt;padding-bottom:1pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S4.T1.13.13.2.1.1\">The trust table update in episode </p>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.15.15\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_r\" id=\"S4.T1.14.14.1\" style=\"padding-top:1pt;padding-bottom:1pt;\"></th>\n<td class=\"ltx_td ltx_align_justify\" id=\"S4.T1.15.15.2\" style=\"width:170.7pt;padding-top:1pt;padding-bottom:1pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S4.T1.15.15.2.1.1\">The -th epoch during the DRL iteration</p>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.19.19\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_r\" id=\"S4.T1.16.16.1\" style=\"padding-top:1pt;padding-bottom:1pt;\"></th>\n<td class=\"ltx_td ltx_align_justify\" id=\"S4.T1.19.19.4\" style=\"width:170.7pt;padding-top:1pt;padding-bottom:1pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S4.T1.19.19.4.3.3\">The indirected feedback of -th leader from the -th node in episode </p>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.23.23\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_r\" id=\"S4.T1.20.20.1\" style=\"padding-top:1pt;padding-bottom:1pt;\"></th>\n<td class=\"ltx_td ltx_align_justify\" id=\"S4.T1.23.23.4\" style=\"width:170.7pt;padding-top:1pt;padding-bottom:1pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S4.T1.23.23.4.3.3\">The directed feedback from the -th node to the -th leader in episode </p>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.27.27\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_r\" id=\"S4.T1.24.24.1\" style=\"padding-top:1pt;padding-bottom:1pt;\"></th>\n<td class=\"ltx_td ltx_align_justify\" id=\"S4.T1.27.27.4\" style=\"width:170.7pt;padding-top:1pt;padding-bottom:1pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S4.T1.27.27.4.3.3\">The local trust from the -th node to the -th node in episode </p>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.30.30\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_r\" id=\"S4.T1.28.28.1\" style=\"padding-top:1pt;padding-bottom:1pt;\"></th>\n<td class=\"ltx_td ltx_align_justify\" id=\"S4.T1.30.30.3\" style=\"width:170.7pt;padding-top:1pt;padding-bottom:1pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S4.T1.30.30.3.2.2\">The local trust table for the -th node in episode </p>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.32.32\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_r\" id=\"S4.T1.31.31.1\" style=\"padding-top:1pt;padding-bottom:1pt;\"></th>\n<td class=\"ltx_td ltx_align_justify\" id=\"S4.T1.32.32.2\" style=\"width:170.7pt;padding-top:1pt;padding-bottom:1pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S4.T1.32.32.2.1.1\">The concatenated local trust tables across all nodes in episode </p>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.35.35\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_r\" id=\"S4.T1.33.33.1\" style=\"padding-top:1pt;padding-bottom:1pt;\"></th>\n<td class=\"ltx_td ltx_align_justify\" id=\"S4.T1.35.35.3\" style=\"width:170.7pt;padding-top:1pt;padding-bottom:1pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S4.T1.35.35.3.2.2\">The global trust for the -th node in episode </p>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.37.37\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_r\" 
id=\"S4.T1.36.36.1\" style=\"padding-top:1pt;padding-bottom:1pt;\"></th>\n<td class=\"ltx_td ltx_align_justify\" id=\"S4.T1.37.37.2\" style=\"width:170.7pt;padding-top:1pt;padding-bottom:1pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S4.T1.37.37.2.1.1\">The global trust table in episode </p>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.39.39\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_r\" id=\"S4.T1.38.38.1\" style=\"padding-top:1pt;padding-bottom:1pt;\"></th>\n<td class=\"ltx_td ltx_align_justify\" id=\"S4.T1.39.39.2\" style=\"width:170.7pt;padding-top:1pt;padding-bottom:1pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S4.T1.39.39.2.1.1\">The normalized trust score of the -th node</p>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.42.42\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_r\" id=\"S4.T1.40.40.1\" style=\"padding-top:1pt;padding-bottom:1pt;\"></th>\n<td class=\"ltx_td ltx_align_justify\" id=\"S4.T1.42.42.3\" style=\"width:170.7pt;padding-top:1pt;padding-bottom:1pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S4.T1.42.42.3.2.2\">The average global trust of -th shard in episode </p>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.44.44\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_r\" id=\"S4.T1.43.43.1\" style=\"padding-top:1pt;padding-bottom:1pt;\"></th>\n<td class=\"ltx_td ltx_align_justify\" id=\"S4.T1.44.44.2\" style=\"width:170.7pt;padding-top:1pt;padding-bottom:1pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S4.T1.44.44.2.1.1\">The average value of all </p>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.46.46\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_r\" id=\"S4.T1.45.45.1\" style=\"padding-top:1pt;padding-bottom:1pt;\"></th>\n<td class=\"ltx_td ltx_align_justify\" id=\"S4.T1.46.46.2\" style=\"width:170.7pt;padding-top:1pt;padding-bottom:1pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S4.T1.46.46.2.1.1\">The list of high-risk nodes in the -th shard</p>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.47.47\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_r\" id=\"S4.T1.47.47.1\" style=\"padding-top:1pt;padding-bottom:1pt;\"></th>\n<td class=\"ltx_td ltx_align_justify\" id=\"S4.T1.47.47.2\" style=\"width:170.7pt;padding-top:1pt;padding-bottom:1pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S4.T1.47.47.2.1\">The entire network\u2019s fault tolerance threshold for dishonest nodes</p>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.48.48\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_r\" id=\"S4.T1.48.48.1\" style=\"padding-top:1pt;padding-bottom:1pt;\"></th>\n<td class=\"ltx_td ltx_align_justify\" id=\"S4.T1.48.48.2\" style=\"width:170.7pt;padding-top:1pt;padding-bottom:1pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S4.T1.48.48.2.1\">The shard\u2019s fault tolerance threshold for dishonest nodes</p>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.49.49\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_r\" id=\"S4.T1.49.49.1\" style=\"padding-top:1pt;padding-bottom:1pt;\"></th>\n<td class=\"ltx_td ltx_align_justify\" id=\"S4.T1.49.49.2\" style=\"width:170.7pt;padding-top:1pt;padding-bottom:1pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S4.T1.49.49.2.1\">The number of intra-shard transactions (ISTs)</p>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.50.50\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_b ltx_border_r\" id=\"S4.T1.50.50.1\" style=\"padding-top:1pt;padding-bottom:1pt;\"></th>\n<td class=\"ltx_td 
ltx_align_justify ltx_border_b\" id=\"S4.T1.50.50.2\" style=\"width:170.7pt;padding-top:1pt;padding-bottom:1pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S4.T1.50.50.2.1\">The number of cross-shard transactions (CSTs)</p>\n</td>\n</tr>\n</tbody>\n</table>\n</figure>",
+ "capture": "TABLE I: Notation and Definition Used in The Trust Table"
+ },
+ "2": {
+ "table_html": "<figure class=\"ltx_table\" id=\"S5.T2\">\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\">TABLE II: </span>Hyperparameters</figcaption>\n<div class=\"ltx_inline-block ltx_align_center ltx_transformed_outer\" id=\"S5.T2.15\" style=\"width:433.6pt;height:218pt;vertical-align:-0.0pt;\"><span class=\"ltx_transformed_inner\" style=\"transform:translate(37.8pt,-19.0pt) scale(1.21134783830593,1.21134783830593) ;\">\n<table class=\"ltx_tabular ltx_guessed_headers ltx_align_middle\" id=\"S5.T2.15.15\">\n<thead class=\"ltx_thead\">\n<tr class=\"ltx_tr\" id=\"S5.T2.15.15.16.1\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r ltx_border_t\" id=\"S5.T2.15.15.16.1.1\" style=\"padding-top:2.5pt;padding-bottom:2.5pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T2.15.15.16.1.1.1\">Notation</span></th>\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_border_r ltx_border_t\" id=\"S5.T2.15.15.16.1.2\" style=\"padding-top:2.5pt;padding-bottom:2.5pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T2.15.15.16.1.2.1\">Description</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"S5.T2.15.15.16.1.3\" style=\"padding-top:2.5pt;padding-bottom:2.5pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T2.15.15.16.1.3.1\">Value</span></th>\n</tr>\n</thead>\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S5.T2.2.2.2\">\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S5.T2.1.1.1.1\" style=\"padding-top:2.5pt;padding-bottom:2.5pt;\"></td>\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" id=\"S5.T2.2.2.2.3\" style=\"padding-top:2.5pt;padding-bottom:2.5pt;\">The number of epoches in DRL</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T2.2.2.2.2\" style=\"padding-top:2.5pt;padding-bottom:2.5pt;\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T2.4.4.4\">\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S5.T2.3.3.3.1\" style=\"padding-top:2.5pt;padding-bottom:2.5pt;\"></td>\n<td class=\"ltx_td ltx_align_left ltx_border_r\" id=\"S5.T2.4.4.4.3\" style=\"padding-top:2.5pt;padding-bottom:2.5pt;\">The probability of node failing to vote</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.4.4.4.2\" style=\"padding-top:2.5pt;padding-bottom:2.5pt;\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T2.6.6.6\">\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S5.T2.5.5.5.1\" style=\"padding-top:2.5pt;padding-bottom:2.5pt;\"></td>\n<td class=\"ltx_td ltx_align_left ltx_border_r\" id=\"S5.T2.6.6.6.3\" style=\"padding-top:2.5pt;padding-bottom:2.5pt;\">The fraction of dishonest nodes within one shard</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.6.6.6.2\" style=\"padding-top:2.5pt;padding-bottom:2.5pt;\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T2.8.8.8\">\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S5.T2.7.7.7.1\" style=\"padding-top:2.5pt;padding-bottom:2.5pt;\"></td>\n<td class=\"ltx_td ltx_align_left ltx_border_r\" id=\"S5.T2.8.8.8.3\" style=\"padding-top:2.5pt;padding-bottom:2.5pt;\">The threshold for triggering colluding strategy</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.8.8.8.2\" style=\"padding-top:2.5pt;padding-bottom:2.5pt;\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T2.11.11.11\">\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S5.T2.9.9.9.1\" style=\"padding-top:2.5pt;padding-bottom:2.5pt;\"></td>\n<td class=\"ltx_td ltx_align_left ltx_border_r\" id=\"S5.T2.10.10.10.2\" 
style=\"padding-top:2.5pt;padding-bottom:2.5pt;\">The discount factor for calculating of each leader</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.11.11.11.3\" style=\"padding-top:2.5pt;padding-bottom:2.5pt;\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T2.12.12.12\">\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S5.T2.12.12.12.1\" style=\"padding-top:2.5pt;padding-bottom:2.5pt;\"></td>\n<td class=\"ltx_td ltx_align_left ltx_border_r\" id=\"S5.T2.12.12.12.2\" style=\"padding-top:2.5pt;padding-bottom:2.5pt;\">The threshold of differentiating low-risky nodes and</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.12.12.12.3\" style=\"padding-top:2.5pt;padding-bottom:2.5pt;\">0.67</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T2.15.15.17.1\">\n<td class=\"ltx_td ltx_border_r\" id=\"S5.T2.15.15.17.1.1\" style=\"padding-top:2.5pt;padding-bottom:2.5pt;\"></td>\n<td class=\"ltx_td ltx_align_left ltx_border_r\" id=\"S5.T2.15.15.17.1.2\" style=\"padding-top:2.5pt;padding-bottom:2.5pt;\">high-risky nodes in terms of trust</td>\n<td class=\"ltx_td\" id=\"S5.T2.15.15.17.1.3\" style=\"padding-top:2.5pt;padding-bottom:2.5pt;\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T2.14.14.14\">\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S5.T2.13.13.13.1\" style=\"padding-top:2.5pt;padding-bottom:2.5pt;\"></td>\n<td class=\"ltx_td ltx_align_left ltx_border_r\" id=\"S5.T2.14.14.14.3\" style=\"padding-top:2.5pt;padding-bottom:2.5pt;\">The threshold of faulty tolerance within one shard</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.14.14.14.2\" style=\"padding-top:2.5pt;padding-bottom:2.5pt;\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T2.15.15.15\">\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r\" id=\"S5.T2.15.15.15.1\" style=\"padding-top:2.5pt;padding-bottom:2.5pt;\"></td>\n<td class=\"ltx_td ltx_align_left ltx_border_b ltx_border_r\" id=\"S5.T2.15.15.15.2\" style=\"padding-top:2.5pt;padding-bottom:2.5pt;\">The threshold of triggering resharding in terms of CST ratio</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b\" id=\"S5.T2.15.15.15.3\" style=\"padding-top:2.5pt;padding-bottom:2.5pt;\">0.4</td>\n</tr>\n</tbody>\n</table>\n</span></div>\n</figure>",
+ "capture": "TABLE II: Hyperparameters"
+ }
+ },
+ "image_paths": {
+ "1": {
+ "figure_path": "2401.00632v1_figure_1.png",
+ "caption": "Figure 1: System model.",
+ "url": "http://arxiv.org/html/2401.00632v1/extracted/5324825/fig/system_modelr.png"
+ },
+ "2": {
+ "figure_path": "2401.00632v1_figure_2.png",
+ "caption": "Figure 2: The proposed blockchain sharding system architecture - TbDd.",
+ "url": "http://arxiv.org/html/2401.00632v1/x1.png"
+ },
+ "3": {
+ "figure_path": "2401.00632v1_figure_3.png",
+ "caption": "Figure 3: The proposed blockchain sharding system flowchart - TbDd, including four steps. 1\u20dd Trust table update: update the Block Verification Table (BVT) and Global Trust Table (GTT), checking whether Algo. 1 is triggered. 2\u20dd Train the DRL algorithm: use the DRL-based model to virtually reshard through several epochs and output a new node allocation result. 3\u20dd Update shard allocation: allocate nodes to the shards according to the DRL-based training result. 4\u20dd Monitor network performance: monitor and step into retraining.",
+ "url": "http://arxiv.org/html/2401.00632v1/x2.png"
+ },
+ "4": {
+ "figure_path": "2401.00632v1_figure_4.png",
+ "caption": "Figure 4: The flow diagram of the proposed trust score computing system, comprising BVT, LTT, and GTT.",
+ "url": "http://arxiv.org/html/2401.00632v1/x3.png"
+ },
+ "5(a)": {
+ "figure_path": "2401.00632v1_figure_5(a).png",
+ "caption": "(a) N=16, D=2, h=4.\nFigure 5: Fig. 5(a) represents the comparison among Random-based, Community-based, Trust-based, TBDD-DQN and TBDD-PPO across 100 epochs. Figs. 5(b) and 5(c) represent the reward performance between TBDD-DQN and TBDD-PPO across varying node numbers in the environment settings across 30 epochs, respectively.",
+ "url": "http://arxiv.org/html/2401.00632v1/extracted/5324825/fig/exp1.png"
+ },
+ "5(b)": {
+ "figure_path": "2401.00632v1_figure_5(b).png",
+ "caption": "(b) N={10,12,14,16}, D=2, h=2.\nFigure 5: Fig. 5(a) represents the comparison among Random-based, Community-based, Trust-based, TBDD-DQN and TBDD-PPO across 100 epochs. Figs. 5(b) and 5(c) represent the reward performance between TBDD-DQN and TBDD-PPO across varying node numbers in the environment settings across 30 epochs, respectively.",
+ "url": "http://arxiv.org/html/2401.00632v1/extracted/5324825/fig/exp1_DQN.png"
+ },
+ "5(c)": {
+ "figure_path": "2401.00632v1_figure_5(c).png",
+ "caption": "(c) N={10,12,14,16}, D=2, h=2.\nFigure 5: Fig. 5(a) represents the comparison among Random-based, Community-based, Trust-based, TBDD-DQN and TBDD-PPO across 100 epochs. Figs. 5(b) and 5(c) represent the reward performance between TBDD-DQN and TBDD-PPO across varying node numbers in the environment settings across 30 epochs, respectively.",
+ "url": "http://arxiv.org/html/2401.00632v1/extracted/5324825/fig/exp1_PPO.png"
+ },
+ "6(a)": {
+ "figure_path": "2401.00632v1_figure_6(a).png",
+ "caption": "(a) Random-based\nFigure 6: The comparison of different sharding schemes with same setting N=16, D=2, h=4 across 100 epochs.",
+ "url": "http://arxiv.org/html/2401.00632v1/extracted/5324825/fig/exp2sr_random.png"
+ },
+ "6(b)": {
+ "figure_path": "2401.00632v1_figure_6(b).png",
+ "caption": "(b) Community-based\nFigure 6: The comparison of different sharding schemes with same setting N=16, D=2, h=4 across 100 epochs.",
+ "url": "http://arxiv.org/html/2401.00632v1/extracted/5324825/fig/exp2sr_graph.png"
+ },
+ "6(c)": {
+ "figure_path": "2401.00632v1_figure_6(c).png",
+ "caption": "(c) Trust-based\nFigure 6: The comparison of different sharding schemes with same setting N=16, D=2, h=4 across 100 epochs.",
+ "url": "http://arxiv.org/html/2401.00632v1/extracted/5324825/fig/exp2sr_trust.png"
+ },
+ "6(d)": {
+ "figure_path": "2401.00632v1_figure_6(d).png",
+ "caption": "(d) TBDD-DQN\nFigure 6: The comparison of different sharding schemes with same setting N=16, D=2, h=4 across 100 epochs.",
+ "url": "http://arxiv.org/html/2401.00632v1/extracted/5324825/fig/exp2sr_dqn.png"
+ },
+ "6(e)": {
+ "figure_path": "2401.00632v1_figure_6(e).png",
+ "caption": "(e) TBDD-PPO\nFigure 6: The comparison of different sharding schemes with same setting N=16, D=2, h=4 across 100 epochs.",
+ "url": "http://arxiv.org/html/2401.00632v1/extracted/5324825/fig/exp2sr_ppo.png"
+ },
+ "6(f)": {
+ "figure_path": "2401.00632v1_figure_6(f).png",
+ "caption": "(f) Random-based\nFigure 6: The comparison of different sharding schemes with same setting N=16, D=2, h=4 across 100 epochs.",
+ "url": "http://arxiv.org/html/2401.00632v1/extracted/5324825/fig/exp2nmr_random.png"
+ },
+ "6(g)": {
+ "figure_path": "2401.00632v1_figure_6(g).png",
+ "caption": "(g) Community-based\nFigure 6: The comparison of different sharding schemes with same setting N=16, D=2, h=4 across 100 epochs.",
+ "url": "http://arxiv.org/html/2401.00632v1/extracted/5324825/fig/exp2nmr_graph.png"
+ },
+ "6(h)": {
+ "figure_path": "2401.00632v1_figure_6(h).png",
+ "caption": "(h) Trust-based\nFigure 6: The comparison of different sharding schemes with same setting N=16, D=2, h=4 across 100 epochs.",
+ "url": "http://arxiv.org/html/2401.00632v1/extracted/5324825/fig/exp2nmr_trust.png"
+ },
+ "6(i)": {
+ "figure_path": "2401.00632v1_figure_6(i).png",
+ "caption": "(i) TBDD-DQN\nFigure 6: The comparison of different sharding schemes with same setting N=16, D=2, h=4 across 100 epochs.",
+ "url": "http://arxiv.org/html/2401.00632v1/extracted/5324825/fig/exp2nmr_dqn.png"
+ },
+ "6(j)": {
+ "figure_path": "2401.00632v1_figure_6(j).png",
+ "caption": "(j) TBDD-PPO\nFigure 6: The comparison of different sharding schemes with same setting N=16, D=2, h=4 across 100 epochs.",
+ "url": "http://arxiv.org/html/2401.00632v1/extracted/5324825/fig/exp2nmr_ppo.png"
+ },
+ "7(a)": {
+ "figure_path": "2401.00632v1_figure_7(a).png",
+ "caption": "(a) N=16, D=2, h={0,1,2,3,4,5}, TBDD-DQN\nFigure 7: Tracing of the impact of the number of dishonest nodes for node trust with DRL approach, N=16, D=2, h={0,1,2,3,4,5}. The left subfigure uses the DQN algorithm. The right subfigure uses the PPO algorithm.",
+ "url": "http://arxiv.org/html/2401.00632v1/extracted/5324825/fig/exp3_DQN.png"
+ },
+ "7(b)": {
+ "figure_path": "2401.00632v1_figure_7(b).png",
+ "caption": "(b) N=16, D=2, h={0,1,2,3,4,5}, TBDD-PPO\nFigure 7: Tracing of the impact of the number of dishonest nodes for node trust with DRL approach, N=16, D=2, h={0,1,2,3,4,5}. The left subfigure uses the DQN algorithm. The right subfigure uses the PPO algorithm.",
+ "url": "http://arxiv.org/html/2401.00632v1/extracted/5324825/fig/exp3_PPO.png"
+ },
+ "8(a)": {
+ "figure_path": "2401.00632v1_figure_8(a).png",
+ "caption": "(a) h=0\nFigure 8: Tracking global trusts \\mathcal{G} between honest and dishonest nodes as the number of dishonest nodes escalates within the DRL scheme. Figs. 8(a)\u20138(f) represent the trust variation in the DQN algorithm, while Figs. 8(g)\u20138(l) represent the trust variation in the PPO algorithm. N=16, D=2, h={0,1,2,3,4,5}.",
+ "url": "http://arxiv.org/html/2401.00632v1/extracted/5324825/fig/exp3a_DQN.png"
+ },
+ "8(b)": {
+ "figure_path": "2401.00632v1_figure_8(b).png",
+ "caption": "(b) h=1\nFigure 8: Tracking global trusts \\mathcal{G} between honest and dishonest nodes as the number of dishonest nodes escalates within the DRL scheme. Figs. 8(a)\u20138(f) represent the trust variation in the DQN algorithm, while Figs. 8(g)\u20138(l) represent the trust variation in the PPO algorithm. N=16, D=2, h={0,1,2,3,4,5}.",
+ "url": "http://arxiv.org/html/2401.00632v1/extracted/5324825/fig/exp3b_DQN.png"
+ },
+ "8(c)": {
+ "figure_path": "2401.00632v1_figure_8(c).png",
+ "caption": "(c) h=2\nFigure 8: Tracking global trusts \\mathcal{G} between honest and dishonest nodes as the number of dishonest nodes escalates within the DRL scheme. Figs. 8(a)\u20138(f) represent the trust variation in the DQN algorithm, while Figs. 8(g)\u20138(l) represent the trust variation in the PPO algorithm. N=16, D=2, h={0,1,2,3,4,5}.",
+ "url": "http://arxiv.org/html/2401.00632v1/extracted/5324825/fig/exp3c_DQN.png"
+ },
+ "8(d)": {
+ "figure_path": "2401.00632v1_figure_8(d).png",
+ "caption": "(d) h=3\nFigure 8: Tracking global trusts \\mathcal{G} between honest and dishonest nodes as the number of dishonest nodes escalates within the DRL scheme. Figs. 8(a)\u20138(f) represent the trust variation in the DQN algorithm, while Figs. 8(g)\u20138(l) represent the trust variation in the PPO algorithm. N=16, D=2, h={0,1,2,3,4,5}.",
+ "url": "http://arxiv.org/html/2401.00632v1/extracted/5324825/fig/exp3d_DQN.png"
+ },
+ "8(e)": {
+ "figure_path": "2401.00632v1_figure_8(e).png",
+ "caption": "(e) h=4\nFigure 8: Tracking global trusts \\mathcal{G} between honest and dishonest nodes as the number of dishonest nodes escalates within the DRL scheme. Figs. 8(a)\u20138(f) represent the trust variation in the DQN algorithm, while Figs. 8(g)\u20138(l) represent the trust variation in the PPO algorithm. N=16, D=2, h={0,1,2,3,4,5}.",
+ "url": "http://arxiv.org/html/2401.00632v1/extracted/5324825/fig/exp3e_DQN.png"
+ },
+ "8(f)": {
+ "figure_path": "2401.00632v1_figure_8(f).png",
+ "caption": "(f) h=5\nFigure 8: Tracking global trusts \\mathcal{G} between honest and dishonest nodes as the number of dishonest nodes escalates within the DRL scheme. Figs. 8(a)\u20138(f) represent the trust variation in the DQN algorithm, while Figs. 8(g)\u20138(l) represent the trust variation in the PPO algorithm. N=16, D=2, h={0,1,2,3,4,5}.",
+ "url": "http://arxiv.org/html/2401.00632v1/extracted/5324825/fig/exp3f_DQN.png"
+ },
+ "8(g)": {
+ "figure_path": "2401.00632v1_figure_8(g).png",
+ "caption": "(g) h=0\nFigure 8: Tracking global trusts \\mathcal{G} between honest and dishonest nodes as the number of dishonest nodes escalates within the DRL scheme. Figs. 8(a)\u20138(f) represent the trust variation in the DQN algorithm, while Figs. 8(g)\u20138(l) represent the trust variation in the PPO algorithm. N=16, D=2, h={0,1,2,3,4,5}.",
+ "url": "http://arxiv.org/html/2401.00632v1/extracted/5324825/fig/exp3a_PPO.png"
+ },
+ "8(h)": {
+ "figure_path": "2401.00632v1_figure_8(h).png",
+ "caption": "(h) h=1\nFigure 8: Tracking global trusts \\mathcal{G} between honest and dishonest nodes as the number of dishonest nodes escalates within the DRL scheme. Figs. 8(a)\u20138(f) represent the trust variation in the DQN algorithm, while Figs. 8(g)\u20138(l) represent the trust variation in the PPO algorithm. N=16, D=2, h={0,1,2,3,4,5}.",
+ "url": "http://arxiv.org/html/2401.00632v1/extracted/5324825/fig/exp3b_PPO.png"
+ },
+ "8(i)": {
+ "figure_path": "2401.00632v1_figure_8(i).png",
+ "caption": "(i) h=2\nFigure 8: Tracking global trusts \\mathcal{G} between honest and dishonest nodes as the number of dishonest nodes escalates within the DRL scheme. Figs. 8(a)\u20138(f) represent the trust variation in the DQN algorithm, while Figs. 8(g)\u20138(l) represent the trust variation in the PPO algorithm. N=16, D=2, h={0,1,2,3,4,5}.",
+ "url": "http://arxiv.org/html/2401.00632v1/extracted/5324825/fig/exp3c_PPO.png"
+ },
+ "8(j)": {
+ "figure_path": "2401.00632v1_figure_8(j).png",
+ "caption": "(j) h=3\nFigure 8: Tracking global trusts \\mathcal{G} between honest and dishonest nodes as the number of dishonest nodes escalates within the DRL scheme. Figs. 8(a)\u20138(f) represent the trust variation in the DQN algorithm, while Figs. 8(g)\u20138(l) represent the trust variation in the PPO algorithm. N=16, D=2, h={0,1,2,3,4,5}.",
+ "url": "http://arxiv.org/html/2401.00632v1/extracted/5324825/fig/exp3d_PPO.png"
+ },
+ "8(k)": {
+ "figure_path": "2401.00632v1_figure_8(k).png",
+ "caption": "(k) h=4\nFigure 8: Tracking global trusts \\mathcal{G} between honest and dishonest nodes as the number of dishonest nodes escalates within the DRL scheme. Figs. 8(a)\u20138(f) represent the trust variation in the DQN algorithm, while Figs. 8(g)\u20138(l) represent the trust variation in the PPO algorithm. N=16, D=2, h={0,1,2,3,4,5}.",
+ "url": "http://arxiv.org/html/2401.00632v1/extracted/5324825/fig/exp3e_PPO.png"
+ },
+ "8(l)": {
+ "figure_path": "2401.00632v1_figure_8(l).png",
+ "caption": "(l) h=5\nFigure 8: Tracking global trusts \\mathcal{G} between honest and dishonest nodes as the number of dishonest nodes escalates within the DRL scheme. Figs. 8(a)\u20138(f) represent the trust variation in the DQN algorithm, while Figs. 8(g)\u20138(l) represent the trust variation in the PPO algorithm. N=16, D=2, h={0,1,2,3,4,5}.",
+ "url": "http://arxiv.org/html/2401.00632v1/extracted/5324825/fig/exp3f_PPO.png"
+ },
+ "9(a)": {
+ "figure_path": "2401.00632v1_figure_9(a).png",
+ "caption": "(a) N=16, D=2, h={0,1,2,3,4,5}\nFigure 9: Fig. 9(a) shows how the number of dishonest nodes, system throughput, and corrupted shards are interconnected. Fig. 9(b) compares the corrupted shard ratio of different sharding techniques across 100 epochs.",
+ "url": "http://arxiv.org/html/2401.00632v1/extracted/5324825/fig/exp4.png"
+ },
+ "9(b)": {
+ "figure_path": "2401.00632v1_figure_9(b).png",
+ "caption": "(b) N=16, D=2, h={0,1,2,3,4,5}\nFigure 9: Fig. 9(a) shows how the number of dishonest nodes, system throughput, and corrupted shards are interconnected. Fig. 9(b) compares the corrupted shard ratio of different sharding techniques across 100 epochs.",
+ "url": "http://arxiv.org/html/2401.00632v1/extracted/5324825/fig/exp5.png"
+ }
+ },
+ "validation": true,
+ "references": [],
+ "url": "http://arxiv.org/html/2401.00632v1"
+ }
20240101/2401.00633v1.json ADDED
@@ -0,0 +1,544 @@
+ {
+ "title": "On Discrepancies between Perturbation Evaluations of Graph Neural Network Attributions",
+ "abstract": "Neural networks are increasingly finding their way into the realm of graphs and modeling relationships between features. Concurrently graph neural network explanation approaches are being invented to uncover relationships between the nodes of the graphs. However, there is a disparity between the existing attribution methods, and it is unclear which attribution to trust. Therefore research has introduced evaluation experiments that assess them from different perspectives. In this work, we assess attribution methods from a perspective not previously explored in the graph domain: retraining. The core idea is to retrain the network on important (or not important) relationships as identified by the attributions and evaluate how networks can generalize based on these relationships. We reformulate the retraining framework to sidestep issues lurking in the previous formulation and propose guidelines for correct analysis. We run our analysis on four state-of-the-art GNN attribution methods and five synthetic and real-world graph classification datasets. The analysis reveals that attributions perform variably depending on the dataset and the network. Most importantly, we observe that the famous GNNExplainer performs similarly to an arbitrary designation of edge importance. The study concludes that the retraining evaluation cannot be used as a generalized benchmark and recommends it as a toolset to evaluate attributions on a specifically addressed network, dataset, and sparsity. Our code is publically available.111\nhttps://github.com/alirezadizaji/GraphROAR",
+ "sections": [
+ {
+ "section_id": "1",
+ "parent_section_id": null,
+ "section_name": "Introduction",
+ "text": "###figure_1### Attribution methods for Graph Neural Networks (GNNs) are coming into the spotlight to alleviate the issues arising from the black-box behavior of GNNs (Yuan et al. 2022 ###reference_30###; Ying et al. 2019 ###reference_28###; Yuan et al. 2020 ###reference_29###; Luo et al. 2020 ###reference_17###; Yuan et al. 2021 ###reference_31###; Sanchez-Lengeling et al. 2020 ###reference_23###; Agarwal, Zitnik, and Lakkaraju 2022 ###reference_2###).\nThe attribution helps establish trust, debug failure modes, and extract insights regarding predictive node relationships and features within the data. Feature attribution research has been ongoing before finding its way to graph domain(Ribeiro, Singh, and Guestrin 2016 ###reference_21###; Dabkowski and Gal 2017 ###reference_4###; Bach et al. 2015 ###reference_3###; Fong and Vedaldi 2017 ###reference_6###; Montavon et al. 2017 ###reference_19###; Zhang et al. 2021 ###reference_33###; Selvaraju et al. 2017 ###reference_25###; Khakzar et al. 2021 ###reference_9###). Some of these methods such as GradCAM (Pope et al. 2019 ###reference_20###) are directly transferred to the graph domain. Some are directly proposed for graphs considering the properties of graphs: For instance, optimization-based methods such as GNNExplainer (Ying et al. 2019 ###reference_28###), PGExplainer (Luo et al. 2020 ###reference_17###), and search algorithms (SubGraphX (Yuan et al. 2021 ###reference_31###)) to identify the important subset of nodes and edges for the classification GNN model.\nDespite the progress in explainability research, the problem remains unsolved, and there is a disagreement between different attribution method outputs (Krishna et al. 2022 ###reference_12###; Khakzar et al. 2022 ###reference_10###). It is not clear which attribution to trust. The same phenomenon exists in graph attribution (See Figure 1 ###reference_###). Some evaluation methodologies exist to check the sanity of explanations (Samek et al. 2016 ###reference_22###; Hooker et al. 2019 ###reference_7###; Adebayo et al. 2018 ###reference_1###; Sanchez-Lengeling et al. 2020 ###reference_23###; Agarwal, Zitnik, and Lakkaraju 2022 ###reference_2###; Zhang et al. 2023 ###reference_34###). Some approaches solely rely on human judgment to evaluate interpretability (Maddison, Mnih, and Teh 2016 ###reference_18###; Jang, Gu, and Poole 2016 ###reference_8###; Ying et al. 2019 ###reference_28###; Yuan et al. 2020 ###reference_29###). These approaches compare the attribution with ground truth features. For instance, in the image domain, several metrics (pointing game (Zhang et al. 2018 ###reference_32###), EHR (Zhang et al. 2021 ###reference_33###), IoU (Selvaraju et al. 2017 ###reference_25###)) compare attribution (saliency maps) against bounding box annotations. There are counterpart approaches in the graph domain that compare attributions with human-selected subgraphs (Maddison, Mnih, and Teh 2016 ###reference_18###; Jang, Gu, and Poole 2016 ###reference_8###; Ying et al. 2019 ###reference_28###; Yuan et al. 2020 ###reference_29###). However, these ground truths are provided by humans, and there is no guarantee that the model would use the same features as humans use (Samek et al. 2016 ###reference_22###; Hooker et al. 2019 ###reference_7###; Adebayo et al. 2018 ###reference_1###). Such metric can lead to false explanations as the model may choose a different combination of features than a human expert to achieve the same accuracy. 
In this work, we investigate the significance of the edges in the graph.\nBut, how do we know if an edge in the graph is important for the GNN? Remove it and observe how the output changes. This notion is the motivation behind the perturbation-based analysis of attributions. Originally (Samek et al. 2016 ###reference_22###) proposed a framework based on perturbation, and this notion was later incarnated in the graph domain as the fidelity vs. sparsity score (Yuan et al. 2021 ###reference_31###; Li et al. 2022 ###reference_14###). In simple terms, observing the GNN\u2019s output (through fidelity) for various levels of sparsity as we keep removing edges in the order of their importance. Despite being insightful, there is always the uncertainty that the resulting output change from removing edges is due to the new perturbed input being out of distribution since the model has never seen such graph subsets during training.\nThe idea of retraining on attributions is proposed in (Hooker et al. 2019 ###reference_7###) emerged to overcome this issue. However, it never found its way to the graph domain, and the evaluations are still based on variants of fidelity vs. sparsity. The retraining evaluation unveiled novel insights regarding the behavior of several attributions. For instance, it is shown that the features identified as important network\u2019s gradient are not generalizable features, although removing these features can affect the output significantly.\nIn this work, we first adapt the evaluation by retraining strategy to the graph domain. Moreover, we discuss issues lurking in the original formulation of the retraining framework and provide a different approach to side-step these issues. We outline how to reliably interpret the retraining evaluation results. We demonstrate the evaluation pipeline on four popular and renowned explanation methods: GradCAM (Pope et al. 2019 ###reference_20###), GNNExplainer (Ying et al. 2019 ###reference_28###), PGExplainer (Luo et al. 2020 ###reference_17###), and SubgraphX (Yuan et al. 2021 ###reference_31###). We also add a random explainer (random edge importance assignment) to evaluate how the explanations performance deviates from a random assignment, and surprisingly, GNNExplainer (the most renowned method) never does. We leverage five datasets: two synthetic datasets (BA2Motifs (Ying et al. 2019 ###reference_28###), BA3Motifs created by us), two biology datasets (MUTAG (Debnath et al. 1991 ###reference_5###), ENZYME (Schomburg et al. 2004 ###reference_24###)), and a social-networking dataset (REDDIT-BINARY (V\u00f6lske et al. 2017 ###reference_26###)). We show how explainers behave differently in different datasets and networks (GIN (Xu et al. 2018 ###reference_27###) and GCN (Kipf and Welling 2016 ###reference_11###)). By demonstrating the variability in performance, we show that retraining evaluation is required whenever an attribution is used. Our contributions can be summarized as follows:\nWe innovate by adapting the concept of retraining, previously applied in the vision domain, and reformulating it into the realm of graph data and GNNs.\nThrough extensive experiments across five datasets, we evaluate the performance of four renowned graph attribution methods and also a random setting.\nWe propose a guidebook outlining a procedural approach for interpreting attribution results and comparing different methods.\nOur results challenge previous claims, demonstrating that no single model exhibits consistent superiority across all experiments. 
Instead, we show that calling a model the best-performing one depends on specific conditions, which we elaborate on later."
+ },
+ {
+ "section_id": "2",
+ "parent_section_id": null,
+ "section_name": "Background and Setup",
+ "text": "In this section, we introduce the notation concerning graph data, the classification GNN, the employed attribution methods, and our retraining strategy. Subsequently, we provide a concise overview of the background regarding how the four attribution methods used here function. For a more detailed elaboration on these methods, please refer to the supplementary materials."
+ },
+ {
+ "section_id": "2.1",
+ "parent_section_id": "2",
+ "section_name": "Attributions under Evaluation",
+ "text": "GNNExplainer(Ying et al. 2019 ###reference_28###).\nIn this paper, the model learns a feature mask proportional to the importance of the features. The importance of the input features is evaluated based on model\u2019s sensitivity w.r.t its presence and absence. Therefore GNNexplainer gives more weight to the features which affects models behavior. For backpropagating the gradients in mutual information maximization equation, reparameterization trick was leveraged.\nPGExplainer (Luo et al. 2020 ###reference_17###). In this method, the objective is again to maximize the mutual information between the GNN\u2019s prediction when and are the input to the model. In this equation stands for entropy. This can be seen as a minimization problem since the term in the equation 1 ###reference_### is fixed, and as a result, the objective turns into minimizing the conditional entropy . Two relaxation assumptions have been used for this model: i) is a gilbert graph with independently selected edges from , and ii) probabilities of an edge from the original graph existing in the explanatory graph is approximated with Bernoulli distribution .\nAlso, reparameterization trick (Jang, Gu, and Poole 2016 ###reference_8###) is leveraged to optimize the objective function with gradient-based methods. Finally, binary concrete distribution was utilized to approximate the sampling process from the Bernoulli distribution and forming . Considering several subsets and classes, Monte Carlo is used to aproximately oprimize the objective function with as the total number of sampled graph, and as the number of labels, and as the k-th sampled graph:\nGradCAM (Pope et al. 2019 ###reference_20###).\nLet the output of layer be and the gradient be where, is the number of neurons in the layer. GradCAM proposes to compute feature weights signifying the importance of the feature as:\n where,\n and denotes the neurons. Inspired from GradCAM and Dive into graphs (DiG)(Liu et al. 2021 ###reference_15###), we rank the edges in the input graph by applying a normalization of the gradients throughout the network and adding the activation function to that. Here and denote and node. The rank of is denoted by and can be mathematically formulated as: .\nSubgraphX (Yuan et al. 2021 ###reference_31###).\nThis method uses the concept in game theory to gain different graph structures as players. To explore the important subgraph, Monte Carlo Search Tree has been used. In this search tree, root is the input graph, and each of other nodes corresponds to a connected subgraph. To reduce the search space MCTS is leveraged by selecting a path from a root to a leaf node with highest reward. is a scoring function for evaluating the importance of a subgraph with the upper bound for its size given the trained GNNs and the original input graph . After several iterations of search, the subgraph with the highest score from the leaves is identified as the explanation. For both MCTS and selection of explanation, Shapley value (Kuhn and Tucker 1953 ###reference_13###; Lundberg and Lee 2017 ###reference_16###) from the cooperative game theory was use as the score function. Set of the players is defined as where the subgraph is one player with nodes and is the number of its L-hop neighboring nodes. To have an even more efficient computing, Monte Carlo sampling is applied to calculate Shapley values with considering a as a coalition set of nodes. 
Finally, the approximation of Shapley values with total number of sampling steps and as the trained GNN network, can be written as:"
+ },
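To make the GradCAM-style edge ranking above concrete, here is a minimal sketch in PyTorch. It assumes the activations F and the class-score gradients of the last convolution layer have already been captured (e.g., via hooks); the function name and the averaging of the two endpoint scores are illustrative assumptions, not necessarily the paper's exact formula.

    import torch

    def edge_gradcam_scores(node_feats, node_grads, edge_index):
        # node_feats: [num_nodes, k] activations F of the last conv layer
        # node_grads: [num_nodes, k] gradients of the class score w.r.t. F
        alpha = node_grads.mean(dim=0)                             # feature weights
        node_score = torch.relu((node_feats * alpha).sum(dim=1))   # CAM per node
        src, dst = edge_index                                      # shape [2, num_edges]
        # Edge rank r_ij: combine the CAM scores of the two endpoint nodes.
        return 0.5 * (node_score[src] + node_score[dst])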
+ {
+ "section_id": "3",
+ "parent_section_id": null,
+ "section_name": "Attribution Evaluation",
+ "text": "The core idea behind evaluation by retraining is simple:\nfirst Retrain the network on the subgraph (a subset of nodes and edges of the original graph) determined by the attribution method, and then check the network\u2019s generalization ability. We can do this in two variations which give complementary insights:\nRoMie: Retrain On the Most Important Edges.\nFor each input graph in the dataset, an attributed graph subset (subgraph) is extracted. The subgraphs form a new training set consisting of edges identified as important by the employed attribution method. Then the network is trained again from a random initialization state on this new training set. Finally, the network generalization ability on the test set is evaluated. If the network exhibits satisfactory generalization performance by exclusively relying on the attributed edges during training, it can be inferred that selected edges possess highly predictive value for that particular network under investigation. Additionally, RoMie must be applied to varying degrees of sparsity in graphs in order to have a fair valuation. For a given sparsity level of , the most important edges, or the top determined by the attribution method, are chosen to form a subgraph. By altering the percentage N, an extensive and fair analysis of the attribution method\u2019s performance across different sparsity levels can be conducted. Of particular note is that the original test set will be used for testing purposes. We will provide further explanation for this decision in a separate subsection later.\nRoLie: Retrain On the Least Important Edges.\nFor each input graph in the dataset, a non-attributed graph subset (subgraph) is extracted. In this setting, the training set is formed of subgraphs that include edges with the Least Importance; meaning, for a given sparsity level of , the least important edges, or the lowest determined by the attribution method, are chosen to form a subgraph. The network is then trained on these subgraphs. In a similar fashion, the test set will be the original unperturbed set. Further elaboration on this, will be provided later on. Finally, the performance of the attribution method is again studied across different levels of sparsity for a fair comparison.\n###figure_2### How to correctly interpret RoMie and RoLie?\nThe two variations give complementary views: In a nutshell, RoMie tells us if an attribution method succeeds in identifying sufficiently predictive edges, while RoLie tells us if an attribution method misses to mark all predictive edges. In both scenarios, the attribution checks whether the remaining edges are generalizable by the network. Note that we cannot claim anything for certain in case the network does not generalize using the remaining edges (as opposed to what is claimed in [7]) because the inability to generalize might stem from factors such as optimization, OOD effects, network properties, and other unforeseen elements. However, if the network generalizes, we can confidently state the remaining edges were predictive. Therefore, we need to exercise caution in interpreting RoMie and RoLie, and we propose the following rulebook for each approach:\nInterpreting RoMie:\nfor a sparsity under consideration, when Retraining on the Most important edges selected by the attribution results in the original accuracy, that means those edges have a high contribution to network predictivity. 
If the accuracy is high specifically for smaller subgraphs, when the sparsity level is low, we can say an attribution method is a top performer in terms of precisely identifying these predictive edges at such low sparsity. Clearly, the accuracy of a desirable attribution method rises to the original more quickly as we add more edges.\nInterpreting RoLie:\nfor a sparsity under consideration, if we get the original accuracy even though the Retraining is done on the Least important edges instead of the most important ones, that is a sign of the subgraph\u2019s undesirable predictive power. It means that the attribution method is missing to select all predictive edges and some still remain in the subgraph of the least importance. In the specific case of low sparsity, if the network still generalizes, we can say the attribution method is not desirable. Therefore in the RoLie plots, we expect to observe a start from a low point followed by a smooth rise in accuracy for a top-performing method.\n###figure_3### ###figure_4###"
+ },
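As a concrete companion to the RoMie/RoLie definitions above, a minimal sketch of the subgraph-construction step, assuming per-edge importance scores aligned with the columns of edge_index; the function name and keep_ratio knob are illustrative, not from the paper's code.

    import torch

    def build_retraining_graph(edge_index, edge_scores, keep_ratio, most_important=True):
        # RoMie keeps the top keep_ratio fraction of edges (most_important=True);
        # RoLie keeps the bottom fraction instead (most_important=False).
        num_keep = int(keep_ratio * edge_scores.numel())
        order = torch.argsort(edge_scores, descending=most_important)
        return edge_index[:, order[:num_keep]]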
+ {
+ "section_id": "3.1",
+ "parent_section_id": "3",
+ "section_name": "Perturbed vs. Unperturbed Test Set",
+ "text": "Earlier, we mentioned using the unperturbed test set for evaluation of RoMie and RoLie. The motivation behind using the original, unperturbed test set is two-fold. Firstly, to ensure a fair comparison of network accuracy across various levels of sparsity, it\u2019s essential to maintain consistency in the test set used across all sparsity conditions. Comparison of accuracies loses its reliability when the test sets differ in each evaluation due to varying perturbations. As a result, the impact of adding or removing important edges into the subgraph in accuracy plots will not be readily trackable (Figure 2 ###reference_###).\nSecondly, based on our observations in Figure 19 ###reference_### (b) and (c), some specific structural patterns (cycle or house for BA2Motif) might form during perturbation in graph datasets. These patterns could potentially introduce a classification bias. The intention behind attribution methods is to support the network to make classification decisions based on the existing features of nodes and edges within a subgraph. The patterns, however, might serve as favorable but dishonest cues for the network to correctly classify without genuinely considering those features. To avoid these patterns in the test set, we once again recommend sticking to the original test set.\nHow about Out-of-distribution (OOD) effects?\nThe distribution mismatch refers to the situation when train and test sets come from different distributions. Usually, training data used to develop the model does not entirely represent the real-world data (test set) it will eventually encounter. An example would be urban training areas for autonomous driving while the vehicle is going to be tested in off-the-road regions with challenges outside of the distribution of urban areas. In the context of interpretability, however, the methods are expected to accurately identify important features within the input data that cover the challenges to contribute to the network\u2019s classification. while there might be an apparent mismatch between the perturbed training set and the unperturbed test set, it can be deduced that ODD will not occur as long as the network shows generalization. Thus, in this study, we only focus on analyzing attribution methods when they show generalizability. By doing so, we circumvent the above-mentioned problems related to a perturbed test set and also have taken into account the OOD effect.\n###figure_5### ###figure_6###"
+ },
+ {
+ "section_id": "3.2",
+ "parent_section_id": "3",
+ "section_name": "Elimination of Isolated Nodes",
+ "text": "After edge removal by the attribution method, certain nodes may persist in the subgraph in isolation, i.e., without any connectivity to other nodes. While these isolated nodes lack neighboring nodes and thus will not have an influence on their aggregation and updating steps, they may still impact the network. In graph datasets where nodes possess features, the impact of individual nodes on the network\u2019s performance regardless of the neighbors is expected. As a result, even though the isolated nodes do not play a role in updating other nodes, their inherent features still affect the overall network dynamics.\nThe impact of node elimination when nodes do not possess features, and therefore have no predictive information, can be observed in Figure 3 ###reference_###. We show the difference between RoMie applied on REDDIT-BINARY and BA3Motifs datasets with the absence or presence of isolated nodes. No significant disparity was observed. Hence, in datasets with non-existent node features, the elimination of isolated nodes has negligible influence on the retraining process. It is not the case for datasets with node features, however.\nAs demonstrated in Figure 4 ###reference_### and Figure 5 ###reference_###,\nwhen applying RoMie to MUTAG and ENZYME, the isolated nodes exhibit discriminative capabilities. Including these isolated nodes during retraining leads to a relatively high network accuracy at low sparsity, , and , respectively. This suggests that despite keeping only a few of the most important edges, the presence of isolated nodes reinforces the network. Conversely, when these isolated nodes are eliminated, the accuracy drops to for MUTAG and for ENZYME, showing artificially high network accuracy.\nBased on this observation, we propose to eliminate any isolated nodes that arise after the attribution-finding process: Firstly, in the context of chemical and biological real-world datasets, a subgraph is meaningful upon the interconnectivity of all its consisting nodes. An isolated, non-bonding atom in a molecule or enzyme is not sensible. Secondly, the primary objective of our analysis centers on discovering the importance of edges. A node turns isolated because all its edges are already moved, therefore the edge removal process implies the removal of related isolated nodes emerging during this process as well.\n###figure_7### ###figure_8###"
+ },
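One way to realize this elimination step with PyTorch Geometric (the library the authors list in their software stack); a sketch, with drop_isolated_nodes as an illustrative wrapper name.

    import torch
    from torch_geometric.utils import remove_isolated_nodes

    def drop_isolated_nodes(x, edge_index, edge_attr=None):
        # Discard nodes that lost all their edges after attribution-based
        # edge removal, so their standalone features cannot act as shortcuts.
        edge_index, edge_attr, node_mask = remove_isolated_nodes(
            edge_index, edge_attr, num_nodes=x.size(0))
        return x[node_mask], edge_index, edge_attr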
+ {
+ "section_id": "3.3",
+ "parent_section_id": "3",
+ "section_name": "Implementation Details",
+ "text": "All attributions provide probability edge weighting. Therefore, the attribution is the subset of the graph selected based on the edge weights. However, SubgraphX does not provide probability weights for edges. Therefore we generated the binary edge mask for each percentage in our KAR and ROAR experiments. E.g., for the x%, we search for a x% size subgraph. We perform retraining by applying edge removing/keeping using probability importance weightings provided by the attributions, albeit as mentioned before, for SubgraphX, we directly compute the subgraph. In general, we have implemented our experiments based on seven percentages of 0, 10, 30, 50, 70, 90, and 100 defined by the number of removing/keeping edges. During random experiments, each one is performed three times with a separate set of random probability weights with different seeds."
+ },
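A sketch of the sparsity sweep and the random-explainer baseline described above; the seeds and the toy edge count are placeholders, not the paper's exact values.

    import torch

    SPARSITIES = [0.0, 0.1, 0.3, 0.5, 0.7, 0.9, 1.0]  # the seven percentages used

    def random_edge_scores(num_edges, seed):
        # Random explainer baseline: arbitrary edge "importance" weights.
        gen = torch.Generator().manual_seed(seed)
        return torch.rand(num_edges, generator=gen)

    for seed in (0, 1, 2):                  # three repetitions, separate seeds
        scores = random_edge_scores(num_edges=20, seed=seed)
        for p in SPARSITIES:
            k = int(p * scores.numel())     # number of edges kept at this level
            kept = torch.argsort(scores, descending=True)[:k]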
+ {
+ "section_id": "4",
+ "parent_section_id": null,
+ "section_name": "Discussions",
+ "text": "###figure_9### ###figure_10### ###figure_11###"
+ },
+ {
+ "section_id": "4.1",
+ "parent_section_id": "4",
+ "section_name": "RoMie and RoLie are Complementary",
+ "text": "Earlier we mentioned it is insufficient to base the attribution evaluation solely on one retraining strategy and parallel evaluation of Most and Least important edges is crucial. Four scenarios come into play when looking at methods\u2019 performance across different sparsities:\ni) When there is a sharp rise in low levels of sparsity for RoMie, the behavior of RoLie is a pivotal factor:\n a) if RoLie also displays a sharp rise at the beginning, the overall method seems to behave unpredictably, lacking a discernible pattern. We could see this in GradCAM in Figure 19 ###reference_###. Accuracy in the RoMie curve quickly rises to the original accuracy. However, as the method ignores some discriminative edges, during RoLie experiments, some discriminative edges keep the accuracy high, and thus GradCAM is undesirable from this perspective. From the visual explanations in Figure 19 ###reference_### (b) and (c), we can relate it to the fact that GradCam ignores either cycle or house structure in RoMie, and therefore will be present in the least important edges with discrimination power for the network. On the other hand, b) if RoLie exhibits a smooth rise, it suggests the method\u2019s ability for generalization. PGExplainer and SubgraphX both show this behavior in the two retraining settings. With a look at the visual examples, we can see that PGExplainer captures the whole part of both cycle and house structures, giving us a piece of strong evidence that GCN potentially focuses on these features during its retraining. SubgraphX returns the most but not the entire part of the house pattern, which has a high overlapping part with the cycle one. Hence, removing them could disrupt network training. Therefore, it does not show a bad performance in the ROAR experiment.\nii) In cases where the early sparsity ascent for RoMie lacks sharpness, likewise, RoLie\u2019s characteristics further determine the method\u2019s behavior: a) if RoLie remains sharp in such a context, it signals an inability to generalize. In Figure 19 ###reference_### we see that GNNExplainer nearly performs on par with the random setting in both retraining evaluations, and its visualized outputs demonstrate a lack of confidence behind both cycle and house patterns. However, b) in case of a smooth RoLie accompanied by a method that appears random implies a lack of consistency. The method does not pick the most important edges at low sparsities, yet it does not leave them behind in the least important list. This is a case where the model is not very smart in identifying attributions. For BA2Motif dataset, non of the methods exhibited this behavior. For such examples please look at supplementary.\nThus, the takeaway here is that the merit of an attribution method is not merely determined by the success of one retraining strategy; RoMie and RoLie must be analyzed jointly for a well-rounded and sound judgment."
+ },
+ {
+ "section_id": "4.2",
+ "parent_section_id": "4",
+ "section_name": "Explainers Depend on Datasets and Networks",
+ "text": "A central point of concern in the evaluation of attributions could revolve around whether the behavior of methods remains consistent when applied to different datasets and network architectures, or if their effectiveness in terms of generalizability varies from one setting to another. In Figure 19 ###reference_###, we employed the GCN and evaluated the outcomes of retraining. Shifting our focus to Figure 8 ###reference_###, we aim to trace the results of the four methods on the same dataset (BA2Motif), but this time employing the GIN architecture. This as a comparison point allows us to monitor the influence of a different network on the outcomes.\nPreviously for GCN, we observed PGExplainer can precisely identify circle-house motif pair in low sparsity. This implied the edges related to the motif would not appear among the least important edges. That is why a smooth RoLie was observed. For the GIN\n(shown in Supplementary),\nPGExplainer overlooks either Cycle or house motif. This will let some edges related to the motifs remain in the subgraph when retraining on the least importants. Therefore, it will lead to undesirable high accuracy at low sparsity for RoLie.\nUnlike PGExplainer, GradCAM captures the whole Cycle-house pair in the new GIN setting. In Figure 19 ###reference_### (b) and (c) we noted GradCAM was unable to capture house motifs in contrast to Figure 8 ###reference_### (b) and (c) where we see the pair is discernable for the method. Overall, results for GradCam look fairly consistent in both architectures. This is also the case for GNNExplainer and SubgraphX. GNNExplainer continues to exhibit similar behavior close to random as before. We only observe an increase in accuracy for it once the sparsity level reaches 50%. What sets this method different in the context of GIN is that it shows a wide range of variability, making it less robust in the GIN setting. For subgraphX in RoLie plot, we observe a peculiar behavior. There is a sudden jump in the curve. This behavior can be justified by the fact that SubgraphX results do not necessarily overlap if we increase the search space (e.g., from 30% to 50%). All in all, results in Figure 7 ###reference_### prove the inconsistency of methods when employed across different networks.\nFrom Binary to Trinary Classification:\nBinary datasets like BA2Motifs have a notable limitation. Achieving high accuracy is feasible by focusing on a single discriminative motif and disregarding the features of the other motif. To address this, we introduce BA3Motifs, an extended multi-class variant of BA2Motifs, where the triangle motif replaces the house motif to create a third class. Figure 8 ###reference_### shows experiments to evaluate attribution method\u2019s data dependency using this dataset. In contrast to BA2Motifs, retaining PGExplainer features has no positive impact on network retraining, showing a performance similar to random settings. Retraining the least important features also does not reduce the performance. This is because PGExplainer fails to capture either cycle, house, or triangle motifs, as can be seen in the sample visualization. This evidence indicates that methods such as PGExplainer might exhibit inconsistent behavior in BA2Motifs and BA3Motifs, implying that the method\u2019s performance is contingent on the dataset."
+ },
+ {
+ "section_id": "4.3",
+ "parent_section_id": "4",
+ "section_name": "Which Method to Recommend?",
+ "text": "So far, our observations have highlighted the dependence of attribution methods on both the dataset and the network. The next question that arises is whether, in a specific setting, we can make a general recommendation for a method. Our attempts in this study were not directed at creating a benchmark; rather, our aim was to introduce a framework for fair evaluation. However, as we employed the RoMie and RoLie approaches across various settings, evident trends could be seen:\nPGExplainer shows to be a method that has the most variable performance among different settings, while GradCAM is relatively more stable. GNNExplainer was the method that performed similarly to the random setting in most experiences. SubgraphX shows the disadvantage of non-overlapping subgraphs as sparsity was increasing. For instance, in sample visualization as in Figure 1 ###reference_### for the BA2Motif, the selected nodes in sparsity level of were not necessarily appearing also in . This can be associated with the fact that subgraphX has a different search span each time it looked for attributes. This lesd to huge jumps both in RoMie and RoLie plots. Therefore we recommend using subgraphX for datasets of small graph size where the search space can be limited.\nIn addition to the trend, a key observation can be drawn from both the RoMie and RoLie plots: at a specific sparsity, a method can demonstrate superior performance compared to another one, while underperforming the same method at a different sparsity level. Keeping this in mind, it becomes essential to initially determine the desired sparsity level and then proceed to method comparison. What holds particularly important here is the alignment of the sparsity level of in RoMie with the sparsity level of in RoLie, as these two subsets Combined represent the complete original graph sample . Thus, for one method to be considered more reliable than another, it must exhibit higher accuracy in RoMie and lower accuracy in the complementary sparsity within RoLie. When coupled with the visualization of attributions, these insights provide us with a robust basis for comparison.\n###figure_12### ###figure_13### ###figure_14###"
+ },
+ {
+ "section_id": "5",
+ "parent_section_id": null,
+ "section_name": "Conclusion",
+ "text": "In this work, we introduce guidelines on how to objectively evaluate graph attributions through the perspective of retraining. We reformulate the previous retraining paradigm with a focus on issues regarding the train-test distribution mismatch. The evaluations reveal that a mainstream method, GNNExplainer, is consistently failing to highlight predictive attributions. The idea of benchmarking attribution methods is not reliable, as they usually show different behaviors on different datasets, networks, and also at different levels of sparsity. Therefore we recommend first specifying a clear problem condition and then using the proposed evaluation strategies for a fair comparison of attribution methods."
+ },
+ {
+ "section_id": "6",
+ "parent_section_id": null,
+ "section_name": "Experiments\u2019 Details",
+ "text": ""
+ },
+ {
+ "section_id": "6.1",
+ "parent_section_id": "6",
+ "section_name": "Networks and Training",
+ "text": "Initially, for each dataset a three-layer GIN and GCN were trained; all three layers have an identical number of neurons though their number might vary from 20 to 300 depending on the dataset and network. BA-2Motifs was trained for 100 epochs with batch size 64 for GCN and 32 for GIN, BA-3Motifs for 100 epochs with batch size 64, REDDIT-BINARY, IMDB-BINARY, ENZYME, and MTAG for 500 epochs and early stopping of 100 with batch size 64, though for MUTAG batch size was 32. Adam optimizer with a learning rate of 0.001 was employed. We applied the same configuration during retraining strategies i.e. RoLie and RoLie."
88
+ },
89
+ {
90
+ "section_id": "6.2",
91
+ "parent_section_id": "6",
92
+ "section_name": "Hardware",
93
+ "text": "We used two Nvidia GPUs \u201dQuadro P6000\u201d and \u201dGeForce GTX 1080 Ti\u201d on an \u201dAMD Ryzen 7 1080X Eight-Core Processor\u201d CPU, Python version 3.10.4, Pytorch 1.12, Torch-geometric 1.7.2 and Dive-Into-Graphs (DIG) 0.1.2 packages."
94
+ },
95
+ {
96
+ "section_id": "6.3",
97
+ "parent_section_id": "6",
98
+ "section_name": "Datasets",
99
+ "text": "###figure_15### ###figure_16###"
100
+ },
101
+ {
102
+ "section_id": "6.4",
103
+ "parent_section_id": "6",
104
+ "section_name": "Parameters of Attribution Methods",
105
+ "text": "Alongside random setting, four attributions were used to provide probability edge weighting. For both GNNExplainer and PGExplainer, we applied Adam optimizer with different learning rates of 0.01 and 0.003, and a range number of epochs from 300 to 500 and 10 to 100 were considered to depend on the dataset respectively. For PGExplainer, we have selected a two-layer fully connected network to provide edge weights. For SubgraphX, the configuration is 10 iterations with 14 expansions to extend the child nodes of the tree search. Despite other attributions, SubgraphX does not provide probability weights for edges, therefore we introduced a trick instead in which we have separately generated the binary edge mask for each percentage in our RoMie and RoLie experiments.\nBA-2Motifs: A synthetic binary graph classification dataset that each graph possesses either a five-node cycle or house motif [(ref2)] and we expect the attribution to find these patterns. BA-3Motifs: For further examination of our experiments on multi-class datasets, we prepared an extended version of BA-2Motifs where we propose a set of newly generated 500 graphs. We produced the third class by replacing the triangle motif with the house one in the second class. REDDIT-BINARY: A real social networking dataset, each graph belongs to either answer-question or discussion-based threads [(ref1)]. We expect that explainer methods should mostly provide focal-like and sub-group patterns for these two classes respectively.\nIMDB-BINARY: It is a real social networking dataset representing movie collaboration between actors/actresses [(ref3)]. Each actor/actress is represented by a node and there is an edge between them if they appeared in the same movie. Each graph is labeled as either action or romance genre, though action has a higher priority.\n###figure_17### ###figure_18### ###figure_19### ###figure_20### ###figure_21### ###figure_22### ###figure_23### MUTAG: A real biological binary classification dataset determining whether a given chemical compound has a mutagenic effect on a bacterium or not [(ref4)]. Nodes and edges are representing atoms and bonds between them respectively.\nENZYME: It contains enzyme graphs from the BRENDA\nenzyme database which is a dataset of protein tertiary structures [(ref5)]. The main goal is to classify each enzyme into\none of the 6 EC top-level classes.\n###figure_24### ###figure_25### ###figure_26### ###figure_27### ###figure_28### ###figure_29### ###figure_30###"
106
+ },
107
+ {
108
+ "section_id": "7",
109
+ "parent_section_id": null,
110
+ "section_name": "Attribution Depend of Datasets and Networks",
111
+ "text": "Figure 3 represents an evaluation of the GIN network on the BA-3Motifs dataset, where keeping the edges provided by GradCAM and SubgraphX can generally give better discrimination power for the network. By considering their outputs in 70%, both GradCAM and SubgraphX capture the whole cycle house-triangle trio though SubgraphX could not exceed GradCAM in performance during RoMie. Hence, this means GIN focuses on other features which are not the same as this trio.\nFigure 4 represents RoMie-RoLie evaluation on the REDDIT-BINARY dataset using the GIN network, where keeping edges returned by GradCAM can highly assist the network during retraining, and also removing them and keeping the other parts could significantly degrade the re-training performance leading to poor accuracy. Based on the outputs of explanation methods, for question-based samples, GradCAM provides a denser focal motif than the others, which is identical to having a user ask a question and the other ones respond to it; In addition, for the discussion-based samples, GradCAM vividly provides subgroup motifs showing there is a topic and users discuss it in smaller groups with each other. Therefore, we can conclude that the GIN network mostly focuses on the same edges of the input graph proposed by the GradCAM. Due to the numerous nodes per graph on average, SubgraphX analysis takes 40-50X more than other methods, making it impossible to take its output."
112
+ }
113
+ ],
114
+ "appendix": [],
115
+ "tables": {},
116
+ "image_paths": {
117
+ "1": {
118
+ "figure_path": "2401.00633v1_figure_1.png",
119
+ "caption": "Figure 1: Attribution Disagreement: Important edges (top 10, 30, 50, 70, 90 %percent\\%%) as identified by different methods (each row) are different (BA2Motif dataset)",
120
+ "url": "http://arxiv.org/html/2401.00633v1/extracted/5324846/figs/teaser.png"
121
+ },
122
+ "2": {
123
+ "figure_path": "2401.00633v1_figure_2.png",
124
+ "caption": "Figure 2: \nunperturbed (left) vs perturbed (right) test sets: There is a sudden rise in accuracy in the unperturbed version as the network performs well even in very small percentages of sparsity, on BA2Motif, for GCN.",
125
+ "url": "http://arxiv.org/html/2401.00633v1/extracted/5324846/figs/fig2/spa_RoMie_perUnper_BA2Motif.jpg"
126
+ },
127
+ "3(a)": {
128
+ "figure_path": "2401.00633v1_figure_3(a).png",
129
+ "caption": "(a) BA3Motif\nFigure 3: Elimination of Isolated Nodes:\nwith (left), and without (right) isolated nodes.\nNo significant change in performance of attribution methods is observed while eliminating isolated nodes in synthetic datasets.\na on BA3Motif, and b on Reddit, both for GCN",
130
+ "url": "http://arxiv.org/html/2401.00633v1/extracted/5324846/figs/fig3/spa_BA2Motif_nodeEli.jpg"
131
+ },
132
+ "3(b)": {
133
+ "figure_path": "2401.00633v1_figure_3(b).png",
134
+ "caption": "(b) Reddit\nFigure 3: Elimination of Isolated Nodes:\nwith (left), and without (right) isolated nodes.\nNo significant change in performance of attribution methods is observed while eliminating isolated nodes in synthetic datasets.\na on BA3Motif, and b on Reddit, both for GCN",
135
+ "url": "http://arxiv.org/html/2401.00633v1/extracted/5324846/figs/fig3/spa_Reddit_nodeEli.jpg"
136
+ },
137
+ "4(a)": {
138
+ "figure_path": "2401.00633v1_figure_4(a).png",
139
+ "caption": "(a) Mutag\nFigure 4: Elimination of Isolated Nodes:\nwith (left), and without (right) isolated nodes.\nIsolated atoms do not preserve any properties for a molecule, isolated nodes artificially cause high accuracy at low sparsity.\na on Mutag, and b on Enzyme, both for GCN",
140
+ "url": "http://arxiv.org/html/2401.00633v1/extracted/5324846/figs/fig4/spa_Mutag.jpg"
141
+ },
142
+ "4(b)": {
143
+ "figure_path": "2401.00633v1_figure_4(b).png",
144
+ "caption": "(b) Enzyme\nFigure 4: Elimination of Isolated Nodes:\nwith (left), and without (right) isolated nodes.\nIsolated atoms do not preserve any properties for a molecule, isolated nodes artificially cause high accuracy at low sparsity.\na on Mutag, and b on Enzyme, both for GCN",
145
+ "url": "http://arxiv.org/html/2401.00633v1/extracted/5324846/figs/fig4/spa_Enzyme.jpg"
146
+ },
147
+ "5(a)": {
148
+ "figure_path": "2401.00633v1_figure_5(a).png",
149
+ "caption": "(a) Keeping Isolated Nodes\nFigure 5: Sample Visialization for Isolated Node Elimination:.\nWhen isolated nodes are not eliminated, although there is no edge at low sparsity, still high accuracy is obtained due to lasting node features. on Mutag, for GCN.",
150
+ "url": "http://arxiv.org/html/2401.00633v1/extracted/5324846/figs/fig4/mutagVis1.png"
151
+ },
152
+ "5(b)": {
153
+ "figure_path": "2401.00633v1_figure_5(b).png",
154
+ "caption": "(b) Elimination Isolated Nodes\nFigure 5: Sample Visialization for Isolated Node Elimination:.\nWhen isolated nodes are not eliminated, although there is no edge at low sparsity, still high accuracy is obtained due to lasting node features. on Mutag, for GCN.",
155
+ "url": "http://arxiv.org/html/2401.00633v1/extracted/5324846/figs/fig4/mutagVis2.png"
156
+ },
157
+ "6(a)": {
158
+ "figure_path": "2401.00633v1_figure_6(a).png",
159
+ "caption": "(a) RoMie and RoLie Bahavior\nFigure 6: Complementary RoMie and RoLie:\nImportant and Least important edges should be studied together. A visualization of attribution can help understand the inner workings of a method. on BA2Motif, for GCN.",
160
+ "url": "http://arxiv.org/html/2401.00633v1/extracted/5324846/figs/fig5/spa_RoMie_RoLie_BA2Motif.jpg"
161
+ },
162
+ "6(b)": {
163
+ "figure_path": "2401.00633v1_figure_6(b).png",
164
+ "caption": "(b) Label zero (cycle)\nFigure 6: Complementary RoMie and RoLie:\nImportant and Least important edges should be studied together. A visualization of attribution can help understand the inner workings of a method. on BA2Motif, for GCN.",
165
+ "url": "http://arxiv.org/html/2401.00633v1/extracted/5324846/figs/fig5/s_vis_kar_BA2Motif_cycle.png"
166
+ },
167
+ "6(c)": {
168
+ "figure_path": "2401.00633v1_figure_6(c).png",
169
+ "caption": "(c) Label one (house)\nFigure 6: Complementary RoMie and RoLie:\nImportant and Least important edges should be studied together. A visualization of attribution can help understand the inner workings of a method. on BA2Motif, for GCN.",
170
+ "url": "http://arxiv.org/html/2401.00633v1/extracted/5324846/figs/fig5/vis_kar_BA2Motif_house.png"
171
+ },
172
+ "7(a)": {
173
+ "figure_path": "2401.00633v1_figure_7(a).png",
174
+ "caption": "(a) GIN: RoMie and RoLie\nFigure 7: Attribitions Dependency on Network (up) and Dataset (down). When compared to 19 the performance of attributions changes. Up: on BA2Motif, for GIN. Down: on BA3Motif, for GCN",
175
+ "url": "http://arxiv.org/html/2401.00633v1/extracted/5324846/figs/fig6/spa_RoMieRoLie_BA2motif.jpg"
176
+ },
177
+ "7(b)": {
178
+ "figure_path": "2401.00633v1_figure_7(b).png",
179
+ "caption": "(b) BA3Motif: RoMie and RoLie\nFigure 7: Attribitions Dependency on Network (up) and Dataset (down). When compared to 19 the performance of attributions changes. Up: on BA2Motif, for GIN. Down: on BA3Motif, for GCN",
180
+ "url": "http://arxiv.org/html/2401.00633v1/extracted/5324846/figs/fig7/spa_RoMieRoLie_BA3.jpg"
181
+ },
182
+ "8(a)": {
183
+ "figure_path": "2401.00633v1_figure_8(a).png",
184
+ "caption": "(a) Label zero ()\nFigure 8: Visualization of a Sample with Triangle.\nWhen the dataset is different from 19 the performance of attribution methods also changes. on BA3Motif, for GCN",
185
+ "url": "http://arxiv.org/html/2401.00633v1/extracted/5324846/figs/fig7/vis_BA3_tri.png"
186
+ },
187
+ "9(a)": {
188
+ "figure_path": "2401.00633v1_figure_9(a).png",
189
+ "caption": "(a) RoMie and RoLie Bahavior\nFigure 9: \nRoMie performance on unperturbed (up) vs. perturbed (down) evaluation set. Unperturbed version unjustifiably performs better even in the lowest percentages, while informative patterns in samples like discussion-based ones appear in big motifs. on REDDIT-BINARY dataset, using GIN network",
190
+ "url": "http://arxiv.org/html/2401.00633v1/extracted/5324846/figs/supp/1.png"
191
+ },
192
+ "10(a)": {
193
+ "figure_path": "2401.00633v1_figure_10(a).png",
194
+ "caption": "(a) RoMie and RoLie Bahavior\nFigure 10: \nRoMie evaluation on both GIN and GCN networks and BA-2Motifs dataset using the GradCAM explanation outputs on the GIN network. There is a significant gap in accuracy, particularly in 30%, meaning GCN uses the features while GIN itself does not. on BA-2Motifs, using GIN an GCN",
195
+ "url": "http://arxiv.org/html/2401.00633v1/extracted/5324846/figs/supp/2.png"
196
+ },
197
+ "11(a)": {
198
+ "figure_path": "2401.00633v1_figure_11(a).png",
199
+ "caption": "(a) RoMie and RoLie Bahavior\nFigure 11: Complementary RoMie and RoLie: RoMie and RoLie evaluation (a) and outputs of attribition methods (b, c, d). Keeping edges returned by GradCAM and SubgraphX can highly aid the network during retraining while removing them can highly defect the network. Based on the explanation outputs, both methods capture the cycle-house-triangle trio at 50%, while SubgraphX cannot surpass GradCAM in performance during RoMie. This shows that, unlike GCN, the GIN network replaces other parts of the graph with this trio for its efficient training. on BA3Motifs using GIN network",
200
+ "url": "http://arxiv.org/html/2401.00633v1/extracted/5324846/figs/supp/comp_3a.png"
201
+ },
202
+ "11(b)": {
203
+ "figure_path": "2401.00633v1_figure_11(b).png",
204
+ "caption": "(b) Label zero (cycle)\nFigure 11: Complementary RoMie and RoLie: RoMie and RoLie evaluation (a) and outputs of attribition methods (b, c, d). Keeping edges returned by GradCAM and SubgraphX can highly aid the network during retraining while removing them can highly defect the network. Based on the explanation outputs, both methods capture the cycle-house-triangle trio at 50%, while SubgraphX cannot surpass GradCAM in performance during RoMie. This shows that, unlike GCN, the GIN network replaces other parts of the graph with this trio for its efficient training. on BA3Motifs using GIN network",
205
+ "url": "http://arxiv.org/html/2401.00633v1/extracted/5324846/figs/supp/3b.png"
206
+ },
207
+ "11(c)": {
208
+ "figure_path": "2401.00633v1_figure_11(c).png",
209
+ "caption": "(c) Label one (house)\nFigure 11: Complementary RoMie and RoLie: RoMie and RoLie evaluation (a) and outputs of attribition methods (b, c, d). Keeping edges returned by GradCAM and SubgraphX can highly aid the network during retraining while removing them can highly defect the network. Based on the explanation outputs, both methods capture the cycle-house-triangle trio at 50%, while SubgraphX cannot surpass GradCAM in performance during RoMie. This shows that, unlike GCN, the GIN network replaces other parts of the graph with this trio for its efficient training. on BA3Motifs using GIN network",
210
+ "url": "http://arxiv.org/html/2401.00633v1/extracted/5324846/figs/supp/3c.png"
211
+ },
212
+ "11(d)": {
213
+ "figure_path": "2401.00633v1_figure_11(d).png",
214
+ "caption": "(d) Label one (house)\nFigure 11: Complementary RoMie and RoLie: RoMie and RoLie evaluation (a) and outputs of attribition methods (b, c, d). Keeping edges returned by GradCAM and SubgraphX can highly aid the network during retraining while removing them can highly defect the network. Based on the explanation outputs, both methods capture the cycle-house-triangle trio at 50%, while SubgraphX cannot surpass GradCAM in performance during RoMie. This shows that, unlike GCN, the GIN network replaces other parts of the graph with this trio for its efficient training. on BA3Motifs using GIN network",
215
+ "url": "http://arxiv.org/html/2401.00633v1/extracted/5324846/figs/supp/3d.png"
216
+ },
217
+ "12(a)": {
218
+ "figure_path": "2401.00633v1_figure_12(a).png",
219
+ "caption": "(a) RoMie and RoLie Bahavior\nFigure 12: \nRoMie-RoLie evaluation (a) and outputs of explanation methods (b, c) on REDDIT-BINARY dataset, using a GIN network. Keeping the edges provided by GradCAM can help the network during the retraining than the other explainers, while removing them highly disorders the network performance, particularly between 0-50%. Based on their explanation outputs, GradCAM returns the focal motifs for question-based samples and multiple sub-group patterns for the discussion-based samples more concretely. Therefore, we can conclude that GIN network focuses on these edges for this dataset.",
220
+ "url": "http://arxiv.org/html/2401.00633v1/extracted/5324846/figs/supp/comp_4a.png"
221
+ },
222
+ "12(b)": {
223
+ "figure_path": "2401.00633v1_figure_12(b).png",
224
+ "caption": "(b) Label zero (cycle)\nFigure 12: \nRoMie-RoLie evaluation (a) and outputs of explanation methods (b, c) on REDDIT-BINARY dataset, using a GIN network. Keeping the edges provided by GradCAM can help the network during the retraining than the other explainers, while removing them highly disorders the network performance, particularly between 0-50%. Based on their explanation outputs, GradCAM returns the focal motifs for question-based samples and multiple sub-group patterns for the discussion-based samples more concretely. Therefore, we can conclude that GIN network focuses on these edges for this dataset.",
225
+ "url": "http://arxiv.org/html/2401.00633v1/extracted/5324846/figs/supp/4b.png"
226
+ },
227
+ "12(c)": {
228
+ "figure_path": "2401.00633v1_figure_12(c).png",
229
+ "caption": "(c) Label one (house)\nFigure 12: \nRoMie-RoLie evaluation (a) and outputs of explanation methods (b, c) on REDDIT-BINARY dataset, using a GIN network. Keeping the edges provided by GradCAM can help the network during the retraining than the other explainers, while removing them highly disorders the network performance, particularly between 0-50%. Based on their explanation outputs, GradCAM returns the focal motifs for question-based samples and multiple sub-group patterns for the discussion-based samples more concretely. Therefore, we can conclude that GIN network focuses on these edges for this dataset.",
230
+ "url": "http://arxiv.org/html/2401.00633v1/extracted/5324846/figs/supp/4c.png"
231
+ },
232
+ "13(a)": {
233
+ "figure_path": "2401.00633v1_figure_13(a).png",
234
+ "caption": "(a) RoMie and RoLie Bahavior\nFigure 13: RoMie-RoLie evaluation on REDDIT-BINARY dataset using GCN network",
235
+ "url": "http://arxiv.org/html/2401.00633v1/extracted/5324846/figs/supp/comp_5.png"
236
+ },
237
+ "14(a)": {
238
+ "figure_path": "2401.00633v1_figure_14(a).png",
239
+ "caption": "(a) RoMie and RoLie Bahavior\nFigure 14: Complementary RoMie and RoLie:\nRoMie-RoLiw evaluation on IMDB-BINARY dataset using GIN network.",
240
+ "url": "http://arxiv.org/html/2401.00633v1/extracted/5324846/figs/supp/comp_6.png"
241
+ },
242
+ "15(a)": {
243
+ "figure_path": "2401.00633v1_figure_15(a).png",
244
+ "caption": "(a) RoMie and RoLie Bahavior\nFigure 15: \nRoMie-RoLievaluation on IMDB-BINARY dataset using GCN network.",
245
+ "url": "http://arxiv.org/html/2401.00633v1/extracted/5324846/figs/supp/comp_7.png"
246
+ },
247
+ "16(a)": {
248
+ "figure_path": "2401.00633v1_figure_16(a).png",
249
+ "caption": "(a) RoMie and RoLie Bahavior\nFigure 16: \nRoMie-RoLieevaluation on ENZYME dataset using GCN network.",
250
+ "url": "http://arxiv.org/html/2401.00633v1/extracted/5324846/figs/supp/comp_8.png"
251
+ },
252
+ "17(a)": {
253
+ "figure_path": "2401.00633v1_figure_17(a).png",
254
+ "caption": "(a) RoMie and RoLie Bahavior\nFigure 17: \nRoMie-RoLie evaluation on ENZYME dataset using GIN network.",
255
+ "url": "http://arxiv.org/html/2401.00633v1/extracted/5324846/figs/supp/comp_9.png"
256
+ },
257
+ "18(a)": {
258
+ "figure_path": "2401.00633v1_figure_18(a).png",
259
+ "caption": "(a) RoMie and RoLie Bahavior\nFigure 18: \nRoMie-RoLie evaluation on MUTAG dataset using GIN network.",
260
+ "url": "http://arxiv.org/html/2401.00633v1/extracted/5324846/figs/supp/comp_10.png"
261
+ },
262
+ "19(a)": {
263
+ "figure_path": "2401.00633v1_figure_19(a).png",
264
+ "caption": "(a) RoMie and RoLie Bahavior\nFigure 19: \nRoMie-RoLie evaluation on MUTAG dataset using GCN network.",
265
+ "url": "http://arxiv.org/html/2401.00633v1/extracted/5324846/figs/supp/comp_11.png"
266
+ }
267
+ },
268
+ "validation": true,
269
+ "references": [
270
+ {
271
+ "1": {
272
+ "title": "Sanity checks for saliency maps.",
273
+ "author": "Adebayo, J.; Gilmer, J.; Muelly, M.; Goodfellow, I.; Hardt, M.; and Kim, B.\n2018.",
274
+ "venue": "Advances in neural information processing systems, 31.",
275
+ "url": null
276
+ }
277
+ },
278
+ {
279
+ "2": {
280
+ "title": "Probing gnn explainers: A rigorous theoretical and empirical analysis\nof gnn explanation methods.",
281
+ "author": "Agarwal, C.; Zitnik, M.; and Lakkaraju, H. 2022.",
282
+ "venue": "In International Conference on Artificial Intelligence and\nStatistics, 8969\u20138996. PMLR.",
283
+ "url": null
284
+ }
285
+ },
286
+ {
287
+ "3": {
288
+ "title": "On pixel-wise explanations for non-linear classifier decisions by\nlayer-wise relevance propagation.",
289
+ "author": "Bach, S.; Binder, A.; Montavon, G.; Klauschen, F.; M\u00fcller, K.-R.; and\nSamek, W. 2015.",
290
+ "venue": "PloS one, 10(7): e0130140.",
291
+ "url": null
292
+ }
293
+ },
294
+ {
295
+ "4": {
296
+ "title": "Real time image saliency for black box classifiers.",
297
+ "author": "Dabkowski, P.; and Gal, Y. 2017.",
298
+ "venue": "Advances in neural information processing systems, 30.",
299
+ "url": null
300
+ }
301
+ },
302
+ {
303
+ "5": {
304
+ "title": "Structure-activity relationship of mutagenic aromatic and\nheteroaromatic nitro compounds. correlation with molecular orbital energies\nand hydrophobicity.",
305
+ "author": "Debnath, A. K.; Lopez de Compadre, R. L.; Debnath, G.; Shusterman, A. J.; and\nHansch, C. 1991.",
306
+ "venue": "Journal of medicinal chemistry, 34(2): 786\u2013797.",
307
+ "url": null
308
+ }
309
+ },
310
+ {
311
+ "6": {
312
+ "title": "Interpretable explanations of black boxes by meaningful perturbation.",
313
+ "author": "Fong, R. C.; and Vedaldi, A. 2017.",
314
+ "venue": "In Proceedings of the IEEE international conference on computer\nvision, 3429\u20133437.",
315
+ "url": null
316
+ }
317
+ },
318
+ {
319
+ "7": {
320
+ "title": "A benchmark for interpretability methods in deep neural networks.",
321
+ "author": "Hooker, S.; Erhan, D.; Kindermans, P.-J.; and Kim, B. 2019.",
322
+ "venue": "Advances in neural information processing systems, 32.",
323
+ "url": null
324
+ }
325
+ },
326
+ {
327
+ "8": {
328
+ "title": "Categorical reparameterization with gumbel-softmax.",
329
+ "author": "Jang, E.; Gu, S.; and Poole, B. 2016.",
330
+ "venue": "arXiv preprint arXiv:1611.01144.",
331
+ "url": null
332
+ }
333
+ },
334
+ {
335
+ "9": {
336
+ "title": "Neural Response Interpretation Through the Lens of Critical Pathways.",
337
+ "author": "Khakzar, A.; Baselizadeh, S.; Khanduja, S.; Rupprecht, C.; Kim, S. T.; and\nNavab, N. 2021.",
338
+ "venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision\nand Pattern Recognition (CVPR), 13528\u201313538.",
339
+ "url": null
340
+ }
341
+ },
342
+ {
343
+ "10": {
344
+ "title": "Do explanations explain? model knows best.",
345
+ "author": "Khakzar, A.; Khorsandi, P.; Nobahari, R.; and Navab, N. 2022.",
346
+ "venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision\nand Pattern Recognition, 10244\u201310253.",
347
+ "url": null
348
+ }
349
+ },
350
+ {
351
+ "11": {
352
+ "title": "Semi-supervised classification with graph convolutional networks.",
353
+ "author": "Kipf, T. N.; and Welling, M. 2016.",
354
+ "venue": "arXiv preprint arXiv:1609.02907.",
355
+ "url": null
356
+ }
357
+ },
358
+ {
359
+ "12": {
360
+ "title": "The Disagreement Problem in Explainable Machine Learning: A\nPractitioner\u2019s Perspective.",
361
+ "author": "Krishna, S.; Han, T.; Gu, A.; Pombra, J.; Jabbari, S.; Wu, S.; and Lakkaraju,\nH. 2022.",
362
+ "venue": "arXiv preprint arXiv:2202.01602.",
363
+ "url": null
364
+ }
365
+ },
366
+ {
367
+ "13": {
368
+ "title": "Contributions to the Theory of Games.",
369
+ "author": "Kuhn, H. W.; and Tucker, A. W. 1953.",
370
+ "venue": "28. Princeton University Press.",
371
+ "url": null
372
+ }
373
+ },
374
+ {
375
+ "14": {
376
+ "title": "Explainability in Graph Neural Networks: An Experimental Survey.",
377
+ "author": "Li, P.; Yang, Y.; Pagnucco, M.; and Song, Y. 2022.",
378
+ "venue": "arXiv preprint arXiv:2203.09258.",
379
+ "url": null
380
+ }
381
+ },
382
+ {
383
+ "15": {
384
+ "title": "Dig: A turnkey library for diving into graph deep learning research.",
385
+ "author": "Liu, M.; Luo, Y.; Wang, L.; Xie, Y.; Yuan, H.; Gui, S.; Yu, H.; Xu, Z.; Zhang,\nJ.; Liu, Y.; et al. 2021.",
386
+ "venue": "arXiv preprint arXiv:2103.12608.",
387
+ "url": null
388
+ }
389
+ },
390
+ {
391
+ "16": {
392
+ "title": "A unified approach to interpreting model predictions.",
393
+ "author": "Lundberg, S. M.; and Lee, S.-I. 2017.",
394
+ "venue": "Advances in neural information processing systems, 30.",
395
+ "url": null
396
+ }
397
+ },
398
+ {
399
+ "17": {
400
+ "title": "Parameterized explainer for graph neural network.",
401
+ "author": "Luo, D.; Cheng, W.; Xu, D.; Yu, W.; Zong, B.; Chen, H.; and Zhang, X. 2020.",
402
+ "venue": "Advances in neural information processing systems, 33:\n19620\u201319631.",
403
+ "url": null
404
+ }
405
+ },
406
+ {
407
+ "18": {
408
+ "title": "The concrete distribution: A continuous relaxation of discrete random\nvariables.",
409
+ "author": "Maddison, C. J.; Mnih, A.; and Teh, Y. W. 2016.",
410
+ "venue": "arXiv preprint arXiv:1611.00712.",
411
+ "url": null
412
+ }
413
+ },
414
+ {
415
+ "19": {
416
+ "title": "Explaining nonlinear classification decisions with deep taylor\ndecomposition.",
417
+ "author": "Montavon, G.; Lapuschkin, S.; Binder, A.; Samek, W.; and M\u00fcller, K.-R.\n2017.",
418
+ "venue": "Pattern recognition, 65: 211\u2013222.",
419
+ "url": null
420
+ }
421
+ },
422
+ {
423
+ "20": {
424
+ "title": "Explainability methods for graph convolutional neural networks.",
425
+ "author": "Pope, P. E.; Kolouri, S.; Rostami, M.; Martin, C. E.; and Hoffmann, H. 2019.",
426
+ "venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision\nand Pattern Recognition, 10772\u201310781.",
427
+ "url": null
428
+ }
429
+ },
430
+ {
431
+ "21": {
432
+ "title": "\u201dWhy should i trust you?\u201d Explaining the predictions of any\nclassifier.",
433
+ "author": "Ribeiro, M. T.; Singh, S.; and Guestrin, C. 2016.",
434
+ "venue": "In Proceedings of the 22nd ACM SIGKDD international conference\non knowledge discovery and data mining, 1135\u20131144.",
435
+ "url": null
436
+ }
437
+ },
438
+ {
439
+ "22": {
440
+ "title": "Evaluating the visualization of what a deep neural network has\nlearned.",
441
+ "author": "Samek, W.; Binder, A.; Montavon, G.; Lapuschkin, S.; and M\u00fcller, K.-R.\n2016.",
442
+ "venue": "IEEE transactions on neural networks and learning systems,\n28(11): 2660\u20132673.",
443
+ "url": null
444
+ }
445
+ },
446
+ {
447
+ "23": {
448
+ "title": "Evaluating attribution for graph neural networks.",
449
+ "author": "Sanchez-Lengeling, B.; Wei, J.; Lee, B.; Reif, E.; Wang, P.; Qian, W.;\nMcCloskey, K.; Colwell, L.; and Wiltschko, A. 2020.",
450
+ "venue": "Advances in neural information processing systems, 33:\n5898\u20135910.",
451
+ "url": null
452
+ }
453
+ },
454
+ {
455
+ "24": {
456
+ "title": "BRENDA, the enzyme database: updates and major new developments.",
457
+ "author": "Schomburg, I.; Chang, A.; Ebeling, C.; Gremse, M.; Heldt, C.; Huhn, G.; and\nSchomburg, D. 2004.",
458
+ "venue": "Nucleic acids research, 32(suppl_1): D431\u2013D433.",
459
+ "url": null
460
+ }
461
+ },
462
+ {
463
+ "25": {
464
+ "title": "Grad-cam: Visual explanations from deep networks via gradient-based\nlocalization.",
465
+ "author": "Selvaraju, R. R.; Cogswell, M.; Das, A.; Vedantam, R.; Parikh, D.; and Batra,\nD. 2017.",
466
+ "venue": "In Proceedings of the IEEE international conference on computer\nvision, 618\u2013626.",
467
+ "url": null
468
+ }
469
+ },
470
+ {
471
+ "26": {
472
+ "title": "TL;DR: Mining Reddit to Learn Automatic Summarization.",
473
+ "author": "V\u00f6lske, M.; Potthast, M.; Syed, S.; and Stein, B. 2017.",
474
+ "venue": "In Proceedings of the Workshop on New Frontiers in\nSummarization, 59\u201363. Copenhagen, Denmark: Association for Computational\nLinguistics.",
475
+ "url": null
476
+ }
477
+ },
478
+ {
479
+ "27": {
480
+ "title": "How powerful are graph neural networks?",
481
+ "author": "Xu, K.; Hu, W.; Leskovec, J.; and Jegelka, S. 2018.",
482
+ "venue": "arXiv preprint arXiv:1810.00826.",
483
+ "url": null
484
+ }
485
+ },
486
+ {
487
+ "28": {
488
+ "title": "Gnnexplainer: Generating explanations for graph neural networks.",
489
+ "author": "Ying, Z.; Bourgeois, D.; You, J.; Zitnik, M.; and Leskovec, J. 2019.",
490
+ "venue": "Advances in neural information processing systems, 32.",
491
+ "url": null
492
+ }
493
+ },
494
+ {
495
+ "29": {
496
+ "title": "Xgnn: Towards model-level explanations of graph neural networks.",
497
+ "author": "Yuan, H.; Tang, J.; Hu, X.; and Ji, S. 2020.",
498
+ "venue": "In Proceedings of the 26th ACM SIGKDD International Conference\non Knowledge Discovery & Data Mining, 430\u2013438.",
499
+ "url": null
500
+ }
501
+ },
502
+ {
503
+ "30": {
504
+ "title": "Explainability in graph neural networks: A taxonomic survey.",
505
+ "author": "Yuan, H.; Yu, H.; Gui, S.; and Ji, S. 2022.",
506
+ "venue": "IEEE Transactions on Pattern Analysis and Machine\nIntelligence.",
507
+ "url": null
508
+ }
509
+ },
510
+ {
511
+ "31": {
512
+ "title": "On explainability of graph neural networks via subgraph explorations.",
513
+ "author": "Yuan, H.; Yu, H.; Wang, J.; Li, K.; and Ji, S. 2021.",
514
+ "venue": "In International Conference on Machine Learning, 12241\u201312252.\nPMLR.",
515
+ "url": null
516
+ }
517
+ },
518
+ {
519
+ "32": {
520
+ "title": "Top-down neural attention by excitation backprop.",
521
+ "author": "Zhang, J.; Bargal, S. A.; Lin, Z.; Brandt, J.; Shen, X.; and Sclaroff, S. 2018.",
522
+ "venue": "International Journal of Computer Vision, 126(10): 1084\u20131102.",
523
+ "url": null
524
+ }
525
+ },
526
+ {
527
+ "33": {
528
+ "title": "Fine-grained neural network explanation by identifying input features\nwith predictive information.",
529
+ "author": "Zhang, Y.; Khakzar, A.; Li, Y.; Farshad, A.; Kim, S. T.; and Navab, N. 2021.",
530
+ "venue": "Advances in Neural Information Processing Systems, 34:\n20040\u201320051.",
531
+ "url": null
532
+ }
533
+ },
534
+ {
535
+ "34": {
536
+ "title": "AttributionLab: Faithfulness of Feature Attribution Under\nControllable Environments.",
537
+ "author": "Zhang, Y.; Li, Y.; Brown, H.; Rezaei, M.; Bischl, B.; Torr, P.; Khakzar, A.;\nand Kawaguchi, K. 2023.",
538
+ "venue": "arXiv:2310.06514.",
539
+ "url": null
540
+ }
541
+ }
542
+ ],
543
+ "url": "http://arxiv.org/html/2401.00633v1"
544
+ }
20240101/2401.00642v1.json ADDED
@@ -0,0 +1,289 @@
1
+ {
2
+ "title": "Predicting Anti-microbial Resistance using Large Language Models",
3
+ "abstract": "During times of increasing antibiotic resistance and the spread of infectious diseases like COVID-19, it is important to classify genes related to antibiotic resistance. As natural language processing has advanced with transformer-based language models, many language models that learn characteristics of nucleotide sequences have also emerged. These models show good performance in classifying various features of nucleotide sequences. When classifying nucleotide sequences, not only the sequence itself, but also various background knowledge is utilized. In this study, we use not only a nucleotide sequence-based language model but also a text language model based on PubMed articles to reflect more biological background knowledge in the model. We propose a method to fine-tune the nucleotide sequence language model and the text language model based on various databases of antibiotic resistance genes. We also propose an LLM-based augmentation technique to supplement the data and an ensemble method to effectively combine the two models. We also propose a benchmark for evaluating the model. Our method achieved better performance than the nucleotide sequence language model in the drug resistance class prediction.",
4
+ "sections": [
5
+ {
6
+ "section_id": "1",
7
+ "parent_section_id": null,
8
+ "section_name": "Introduction",
9
+ "text": "The genes for antibiotic resistance have increased rapidly over the past 10 years and have become a threat to human health Zhang et al. (2022 ###reference_19###). Moreover, dangerous infectious diseases like COVID-19 can also spread. In such times, it is important to classify the DNA sequences of antibiotic resistance genes. In bioinformatics, the main method for classifying DNA sequences has been to find similar sequences by aligning two DNA sequences using text alignment Bonin et al. (2023 ###reference_2###). Recently, there have been methods that use language models created from the nucleotide or protein sequences of various species and fine-tune them to create classifiersBrandes et al. (2022 ###reference_3###); Ji et al. (2021 ###reference_9###); Zhou et al. (2023 ###reference_21###). These methods have the advantage of being able to identify which parts of the nucleotide sequence are important. To fine-tune, databases containing information on antibiotic resistance genes must be used. The main databases are CARD Jia et al. (2017 ###reference_10###) and MEGARes Doster et al. (2020 ###reference_6###). Existing methods use the labels associated with antibiotic resistance genes, such as the class to which the resistance gene belongs, for example, the label of the antibiotic to which resistance is present. It is a prediction of a single label from a single gene sequence Kang et al. (2022 ###reference_11###). However, if we look at the CARD or MEGARes databases, there are several attributes that describe a particular gene. There are Gene Family and Resistance Mechanism. If we use this information when predicting the antibiotic to which resistance is present, it could be helpful for prediction. Here, we get an idea and propose a model that uses human-readable information to predict antibiotic resistance genes. We also provide a method to merge the different classification systems of CARD and MEGARes. We will also explain the LLM-based data augmentation technique for rare classes with few samples."
10
+ },
11
+ {
12
+ "section_id": "2",
13
+ "parent_section_id": null,
14
+ "section_name": "Approaches",
15
+ "text": "Our approaches include fine-tuning a pre-trained language model with various species\u2019 gene nucleotide sequence data to predict antibiotic resistance genes and their classes. We also fine-tune a pre-trained language model trained on a corpus containing diverse papers from the fields of biology and medicine to predict the names of antibiotic resistance gene properties. We provide an effective ensemble model Kumari et al. (2021 ###reference_12###) using the above two models in a weighted soft voting method. To integrate the classes, we combine the DNA sequences and the concepts that describe them from CARD and MEGARes into one. We use the EBI ARO ontology Cook et al. (2016 ###reference_4###) to combine CARD tagging and MEGARes tagging into one class system. For rare classes with few samples, we use BioGPT Luo et al. (2022 ###reference_15###) prompting to perform data augmentation."
16
+ },
17
+ {
18
+ "section_id": "2.1",
19
+ "parent_section_id": "2",
20
+ "section_name": "Nucleotide Sequence Based Antibiotic Resistance Drug Class Classification",
21
+ "text": "Following the structure of Dalla-Torre et al. (2023 ###reference_5###), we uses a large pre-training language model based on nucleotide sequences and fine-tune a classifier based on Drug Class data. The nucleotide sequence input is limited to a length of 1000, the input size of the pre-training model. The tokenizer uses a 6-mer tokenizer. A 6-mer tokenizer is a type of k-mer tokenizer. A k-mer tokenizer is a technique used in genome analysis and bioinformatics research that splits a biological sequence into substrings of length k Mej\u00eda-Guerra and Buckler (2019 ###reference_17###). The pre-training model uses NT, which is pre-trained on multi-species including bacteria, fungi, inverterbate, protozoa, verterbate gene sequences. Unlike other nucleotide sequence-based pre-training models that mostly use human genes, this model is trained on multi-species genes, providing a better representation. Fine-tuning is done using LoRA tuning. LoRA tuning is a method that fixes the weights of a pre-trained large-scale language model and inserts a low-rank decomposed matrix into each transformer layer, dramatically reducing the number of trainable parameters for the downstream task Hu et al. (2021 ###reference_7###). This allows for more effective fine-tuning."
22
+ },
23
+ {
24
+ "section_id": "2.2",
25
+ "parent_section_id": "2",
26
+ "section_name": "Text Information Based Antibiotic Resistance Drug Class Classification",
27
+ "text": "Text information based antibiotic resistance drug class classification uses a BioBERT language model pre-trained on a large medical and biological text corpus as the pre-training model. BioBERT is a pre-trained biomedical language representation model that uses a large-scale biomedical text corpus including PubMed abstracts, PMC full-text articles, and the Genia corpus. Lee et al. (2020 ###reference_14###) We fine-tune this model to extract antibiotic resistance drug classes, such as Drug Class or Gene Family, from text that describes antibiotic resistance genes. We aim to improve the performance of the classifier by utilizing a pre-trained biomedical text-based model. Instead of using multiple classification layers, we create a single classification layer and fine-tune it. The training data is structured as [Resistance Mechanism] followed by a description of the attribute, such as Antibiotic inactivation. To further improve performance, we create a format that encloses special characters Zhou and Chen (2021 ###reference_20###), such as *[Gene Family]: Beta-lactamases*, #[Resistance Mechanism]: Antibiotic inactivation#."
28
+ },
29
+ {
30
+ "section_id": "2.3",
31
+ "parent_section_id": "2",
32
+ "section_name": "Weighted Soft-voting Ensemble",
33
+ "text": "To combine the pre-trained nucleotide sequence-based language model and the pre-trained text-based language model mentioned earlier, we use a soft-voting ensemble model. Additionally, we find the optimal weights through validation data and apply them to create a weighted soft voting ensemble model. A more detailed explanation of the validation data will be provided in the Experiment section. This data is a third dataset separate from the training and test data. This allows us to use both nucleotide sequence information and the text information that describes it. This model requires both types of input. It receives the nucleotide sequence and information about Gene Family and Resistance Mechanism in the format [Resistance Mechanism]: Antibiotic Effuls, #[Gene Family]: Bata-Lactamases#."
34
+ },
35
+ {
36
+ "section_id": "2.4",
37
+ "parent_section_id": "2",
38
+ "section_name": "Integrating Classes Based on Antibiotic Resistance Ontology",
39
+ "text": "The databases provided in the literature (CARD, MEGARes) have different classification systems and hierarchical relationships. EBI ARO provides hierarchical information on antibiotic resistance genes. EBI stands for European Bioinformatics Institute. These diverse antibiotic resistance classification systems, gene groupings, and resistance mechanisms can be combined through the EBI ontology, and the model can store integrated concept representations. Each database\u2019s header is read and the EBI API is searched. The mapped items are used as new Gene Family. Rather than using very small and specific hierarchical classes, more general hierarchical classes are employed. The third level from the top in the EBI ARO hierarchy is used as the basis.\n###figure_1###"
40
+ },
41
+ {
42
+ "section_id": "2.5",
43
+ "parent_section_id": "2",
44
+ "section_name": "Data Augmentation Using a Large Language Model",
45
+ "text": "The categories were integrated based on the EBI ARO Ontology\u2019s gene group and CARD Resistance Mechanism. However, there are still cases where the number of samples corresponding to a class is small. Data augmentation was conducted for these cases. BioGPT was used for data augmentation. Similar data were created through prompting. Through this, it was possible to see that performance improved as follows: In particular, the accuracy in classes with a small number of samples increased."
46
+ },
47
+ {
48
+ "section_id": "3",
49
+ "parent_section_id": null,
50
+ "section_name": "Experiments",
51
+ "text": ""
52
+ },
53
+ {
54
+ "section_id": "3.1",
55
+ "parent_section_id": "3",
56
+ "section_name": "Datasets",
57
+ "text": "The CARD and MEGARes v3 datasets are used for training and evaluation.\nClasses with fewer than 15 samples are removed because obtaining meaningful results from the data split is difficult.\nThe remaining data is split into 75% for training data, 20% for test data, and 5% for validation data.\nEBI ARO ontology search is used to integrate the data, which is then split similarly to the above.\nClasses with difficult-to-obtain meaningful results are also removed.\nThe MEGARes dataset consists of 9733 Reference Sequences, 1088 SNPs, 4 antibiotic types, 59 resistance classes, and 233 mechanisms.\nThe CARD dataset consists of 5194 Reference Sequences and 2005 SNPs, 142 Drug Classes, 331 Gene Families, and 10 Resistance Mechanisms.\nThe EBI ARO ontology provides hierarchical group information for genes.\nUsing the EBI ARO Ontology, Gene Family class information can be integrated into a higher-level hierarchy.\nThe number of Gene Family text information classes in the case of MEGARes is 589, while for CARD, it is 331.\nThere are 300 and 166 datasets with only one sample in their respective classes for Gene Family in the case of MEGARes and CARD, respectively.\nResistance Mechanism is integrated based on the 6 categories of CARD.\nThe original 8 categories were reduced to 6, excluding cases of various class combinations and those with very few samples.\nDrug Class is integrated using 9 common Drug Classes found in competing models.\nIntegration is done based on names and theories and has been verified.\nMacro f1 score, accuracy, balanced accuracy, and precision are used as performance metrics, and the results are listed in the table 3."
58
+ },
59
+ {
60
+ "section_id": "3.2",
61
+ "parent_section_id": "3",
62
+ "section_name": "Implementation Details",
63
+ "text": "Basic structure of the model and fine-tuning follow the methods proposed by BioBERT and Nucleotide Transformer.\nThe layers and information of the model are in the Appendix."
64
+ },
65
+ {
66
+ "section_id": "3.3",
67
+ "parent_section_id": "3",
68
+ "section_name": "Main Results",
69
+ "text": "Tables 3 show metrics using our method with the latest techniques (SOTA) in the text-based information model for the CARD and MEGARes experiments, showing that our method surpasses previous SOTA.\nAdditionally, the method using integrated data shows superiority over previous SOTA.\nOur method also demonstrates competitive results compared to other competing models and SOTA."
70
+ },
71
+ {
72
+ "section_id": "4",
73
+ "parent_section_id": null,
74
+ "section_name": "Discussion",
75
+ "text": ""
76
+ },
77
+ {
78
+ "section_id": "5",
79
+ "parent_section_id": null,
80
+ "section_name": "Related Work",
81
+ "text": "AMR-meta is a method for classifying antibiotic resistance in high-speed metagenomic data.\nThis method uses a sequence alignment-free approach based on k-mers and meta-features, and it utilizes both resistant and non-resistant genes as training data.\nAs a result, AMR-meta can more accurately identify antibiotic resistance genes and reduce false-positive rates for non-resistant genes.\nHowever, it uses a complex matrix decomposition method to generate meta-features, which can be computationally intensive.\nAdditionally, the prediction performance of AMR-meta may vary depending on the type of antibiotic used or the diversity of the resistance genes.\nThese characteristics make AMR-meta useful for analyzing high-speed metagenomic data, but at the same time, they suggest that it may be limited in certain situations.\nAMR++ is a customized bioinformatics pipeline that uses high-throughput sequencing data to predict the diversity and abundance of antibiotic resistance genes (ARGs).\nThis pipeline is integrated with the MEGARes database, allowing for efficient analysis of ARGs in large-scale metagenomic sequencing data.\nThe main advantage of AMR++ is its high throughput and efficiency, enabling users to quickly and accurately analyze complex datasets.\nIn addition, this software can distinguish between types of ARGs, including cases where resistance genes require specific mutations.\nHowever, this pipeline requires high-quality assembled and/or translated data, which may cause difficulties or limitations in generating metagenomic datasets.\nFurthermore, AMR++ may require advanced bioinformatics skills and resources, potentially limiting accessibility for some researchers.\nMeta-MARC is a machine learning classifier developed to enhance the detection and classification of antibiotic resistance genes.\nThis system is based on the MEGARes database and uses DNA-based hierarchical Hidden Markov Models (HMMs) to classify antibiotic resistance genes in high-throughput sequencing data.\nMeta-MARC is robust against various gene mutations, which is particularly useful for non-standard databases and sequences.\nThis tool provides high sensitivity and specificity, playing a crucial role in accurate antibiotic resistance detection.\nHowever, Meta-MARC is computationally demanding, particularly when dealing with large datasets, which can result in increased processing time and memory usage.\nAdditionally, high sensitivity settings may potentially increase false positives, so users must carefully interpret the results.\nDeepARG is a deep learning-based system used for predicting antibiotic resistance genes (ARGs) in metagenomic data.\nIt utilizes two models, DeepARG-SS and DeepARG-LS, for classifying short and full-length gene sequences.\nCompared to the traditional \u2019best hit\u2019 approach, it has the advantage of identifying a wider range of ARG diversity with lower false negative rates.\nHowever, the performance of this system heavily depends on the quality of the training database, and it has limitations when it comes to predicting new categories of ARGs.\nDespite these limitations, DeepARG is a useful tool for evaluating the presence and diversity of ARGs in environmental samples."
82
+ },
83
+ {
84
+ "section_id": "6",
85
+ "parent_section_id": null,
86
+ "section_name": "Conclusion",
87
+ "text": "As far as we know, our work is the first to combine natural language models and biological sequence models to predict antibiotic resistance genes.\nWe proposed a model that combines two different attribute language models into an ensemble.\nBy using both nucleotide sequence information and its description, including Gene family and resistance mechanism information, it enables more accurate drug class predictions.\nWe also integrated various databases using the EBI ontology and used a large language model (LLM) for data augmentation in classes with insufficient data.\nAs a result, we achieved performance close to the state-of-the-art.\nWe believe this fusion has significant meaning.\nMoreover, we tested the structure we trained using only nucleotide sequences and obtained acceptable results.\nThis seems promising for future research."
88
+ }
89
+ ],
90
+ "appendix": [],
91
+ "tables": {
92
+ "1": {
93
+ "table_html": "<figure class=\"ltx_table\" id=\"S0.T1\">\n<table class=\"ltx_tabular ltx_centering ltx_guessed_headers ltx_align_middle\" id=\"S0.T1.1\">\n<thead class=\"ltx_thead\">\n<tr class=\"ltx_tr\" id=\"S0.T1.1.1.1\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_border_t\" id=\"S0.T1.1.1.1.1\"><span class=\"ltx_text ltx_font_bold\" id=\"S0.T1.1.1.1.1.1\">Output</span></th>\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_border_t\" id=\"S0.T1.1.1.1.2\"><span class=\"ltx_text ltx_font_bold\" id=\"S0.T1.1.1.1.2.1\">Input Example</span></th>\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_border_t\" id=\"S0.T1.1.1.1.3\"><span class=\"ltx_text ltx_font_bold\" id=\"S0.T1.1.1.1.3.1\">BioBERT</span></th>\n</tr>\n</thead>\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S0.T1.1.2.1\">\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S0.T1.1.2.1.1\">Base</td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S0.T1.1.2.1.2\">Gene Family: Beta-lactamases, Resistance Mechanism: Antibiotic incativation</td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S0.T1.1.2.1.3\">78.20</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S0.T1.1.3.2\">\n<td class=\"ltx_td ltx_align_left\" id=\"S0.T1.1.3.2.1\">Entity marker (punct)</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S0.T1.1.3.2.2\">[Gene Family]: Beta-lactamases, [Resistance Mechanism]: Antibiotic incativation</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S0.T1.1.3.2.3\">77.41</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S0.T1.1.4.3\">\n<td class=\"ltx_td ltx_align_left\" id=\"S0.T1.1.4.3.1\">Typed entity marker</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S0.T1.1.4.3.2\">*Beta-lactamases*, #Resistance Mechanism#</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S0.T1.1.4.3.3\">77.70</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S0.T1.1.5.4\">\n<td class=\"ltx_td ltx_align_left ltx_border_b\" id=\"S0.T1.1.5.4.1\">Typed entity marker (punct)</td>\n<td class=\"ltx_td ltx_align_left ltx_border_b\" id=\"S0.T1.1.5.4.2\">*[Gene Family]: Beta-lactamases*, #[Resistance Mechanism]: Antibiotic incativation#</td>\n<td class=\"ltx_td ltx_align_left ltx_border_b\" id=\"S0.T1.1.5.4.3\">78.46</td>\n</tr>\n</tbody>\n</table>\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\">Table 1: </span>\nTest macro F1 score of different entity representation techniques in Antibiotic Resistance Drug Class Prediction with BioBERT.\n</figcaption>\n</figure>",
94
+ "capture": "Table 1: \nTest macro F1 score of different entity representation techniques in Antibiotic Resistance Drug Class Prediction with BioBERT.\n"
95
+ },
96
+ "2": {
97
+ "table_html": "<figure class=\"ltx_table\" id=\"S2.T2\">\n<table class=\"ltx_tabular ltx_centering ltx_guessed_headers ltx_align_middle\" id=\"S2.T2.1\">\n<thead class=\"ltx_thead\">\n<tr class=\"ltx_tr\" id=\"S2.T2.1.1.1\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_border_t\" id=\"S2.T2.1.1.1.1\"><span class=\"ltx_text ltx_font_bold\" id=\"S2.T2.1.1.1.1.1\">Method</span></th>\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_border_t\" id=\"S2.T2.1.1.1.2\"><span class=\"ltx_text ltx_font_bold\" id=\"S2.T2.1.1.1.2.1\">Accuracy</span></th>\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_border_t\" id=\"S2.T2.1.1.1.3\"><span class=\"ltx_text ltx_font_bold\" id=\"S2.T2.1.1.1.3.1\">Macro F1</span></th>\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_border_t\" id=\"S2.T2.1.1.1.4\"><span class=\"ltx_text ltx_font_bold\" id=\"S2.T2.1.1.1.4.1\">Precision</span></th>\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_border_t\" id=\"S2.T2.1.1.1.5\"><span class=\"ltx_text ltx_font_bold\" id=\"S2.T2.1.1.1.5.1\">Recall</span></th>\n</tr>\n</thead>\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S2.T2.1.2.1\">\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S2.T2.1.2.1.1\">NT</td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S2.T2.1.2.1.2\">84.15</td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S2.T2.1.2.1.3\">64.04</td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S2.T2.1.2.1.4\">72.78</td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S2.T2.1.2.1.5\">59.28</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S2.T2.1.3.2\">\n<td class=\"ltx_td ltx_align_left\" id=\"S2.T2.1.3.2.1\">NT with data augmentation</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S2.T2.1.3.2.2\">83.42</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S2.T2.1.3.2.3\">64.85</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S2.T2.1.3.2.4\">80.15</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S2.T2.1.3.2.5\">58.65</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S2.T2.1.4.3\">\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S2.T2.1.4.3.1\">NT with reads</td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S2.T2.1.4.3.2\">82.85</td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S2.T2.1.4.3.3\">61.02</td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S2.T2.1.4.3.4\">68.32</td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S2.T2.1.4.3.5\">57.06</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S2.T2.1.5.4\">\n<td class=\"ltx_td ltx_align_left ltx_border_b\" id=\"S2.T2.1.5.4.1\">NT with reads and data augmentation</td>\n<td class=\"ltx_td ltx_align_left ltx_border_b\" id=\"S2.T2.1.5.4.2\">83.11</td>\n<td class=\"ltx_td ltx_align_left ltx_border_b\" id=\"S2.T2.1.5.4.3\">62.82</td>\n<td class=\"ltx_td ltx_align_left ltx_border_b\" id=\"S2.T2.1.5.4.4\">74.81</td>\n<td class=\"ltx_td ltx_align_left ltx_border_b\" id=\"S2.T2.1.5.4.5\">57.32</td>\n</tr>\n</tbody>\n</table>\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\">Table 2: </span>\nResult of data augmentation for the class which has small samples. Data augmentation increases the F1 score.\n</figcaption>\n</figure>",
98
+ "capture": "Table 2: \nResult of data augmentation for the class which has small samples. Data augmentation increases the F1 score.\n"
99
+ },
100
+ "3": {
101
+ "table_html": "<figure class=\"ltx_table\" id=\"S3.T3\">\n<table class=\"ltx_tabular ltx_centering ltx_guessed_headers ltx_align_middle\" id=\"S3.T3.1\">\n<thead class=\"ltx_thead\">\n<tr class=\"ltx_tr\" id=\"S3.T3.1.1.1\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_border_t\" id=\"S3.T3.1.1.1.1\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T3.1.1.1.1.1\">Dataset</span></th>\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_border_t\" id=\"S3.T3.1.1.1.2\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T3.1.1.1.2.1\">Method</span></th>\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_border_t\" id=\"S3.T3.1.1.1.3\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T3.1.1.1.3.1\">Accuracy</span></th>\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_border_t\" id=\"S3.T3.1.1.1.4\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T3.1.1.1.4.1\">Macro F1</span></th>\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_border_t\" id=\"S3.T3.1.1.1.5\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T3.1.1.1.5.1\">Precision</span></th>\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_border_t\" id=\"S3.T3.1.1.1.6\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T3.1.1.1.6.1\">Recall</span></th>\n</tr>\n</thead>\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S3.T3.1.2.1\">\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S3.T3.1.2.1.1\">CARD</td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S3.T3.1.2.1.2\">NT</td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S3.T3.1.2.1.3\">87.92</td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S3.T3.1.2.1.4\">63.08</td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S3.T3.1.2.1.5\">66.46</td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S3.T3.1.2.1.6\">61.51</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T3.1.3.2\">\n<td class=\"ltx_td ltx_align_left\" id=\"S3.T3.1.3.2.1\">CARD</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S3.T3.1.3.2.2\">BB</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S3.T3.1.3.2.3\">97.22</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S3.T3.1.3.2.4\">89.68</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S3.T3.1.3.2.5\">92.09</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S3.T3.1.3.2.6\">90.54</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T3.1.4.3\">\n<td class=\"ltx_td ltx_align_left\" id=\"S3.T3.1.4.3.1\">CARD</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S3.T3.1.4.3.2\">Ensemble</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S3.T3.1.4.3.3\">97.55</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S3.T3.1.4.3.4\">93.44</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S3.T3.1.4.3.5\">95.72</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S3.T3.1.4.3.6\">92.86</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T3.1.5.4\">\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S3.T3.1.5.4.1\">MEGARes</td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S3.T3.1.5.4.2\">NT</td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S3.T3.1.5.4.3\">89.61</td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S3.T3.1.5.4.4\">46.42</td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S3.T3.1.5.4.5\">54.92</td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S3.T3.1.5.4.6\">43.94</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T3.1.6.5\">\n<td class=\"ltx_td ltx_align_left\" id=\"S3.T3.1.6.5.1\">MEGARes</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S3.T3.1.6.5.2\">BB</td>\n<td class=\"ltx_td ltx_align_left\" 
id=\"S3.T3.1.6.5.3\">99.64</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S3.T3.1.6.5.4\">99.47</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S3.T3.1.6.5.5\">99.96</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S3.T3.1.6.5.6\">99.03</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T3.1.7.6\">\n<td class=\"ltx_td ltx_align_left\" id=\"S3.T3.1.7.6.1\">MEGARes</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S3.T3.1.7.6.2\">Ensemble</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S3.T3.1.7.6.3\">99.99</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S3.T3.1.7.6.4\">99.99</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S3.T3.1.7.6.5\">99.99</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S3.T3.1.7.6.6\">99.99</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T3.1.8.7\">\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S3.T3.1.8.7.1\">Integrated</td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S3.T3.1.8.7.2\">NT</td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S3.T3.1.8.7.3\">82.89</td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S3.T3.1.8.7.4\">65.79</td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S3.T3.1.8.7.5\">81.84</td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S3.T3.1.8.7.6\">58.67</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T3.1.9.8\">\n<td class=\"ltx_td ltx_align_left\" id=\"S3.T3.1.9.8.1\">Integrated</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S3.T3.1.9.8.2\">BB</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S3.T3.1.9.8.3\">90.26</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S3.T3.1.9.8.4\">79.34</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S3.T3.1.9.8.5\">84.05</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S3.T3.1.9.8.6\">77.14</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T3.1.10.9\">\n<td class=\"ltx_td ltx_align_left\" id=\"S3.T3.1.10.9.1\">Integrated</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S3.T3.1.10.9.2\">Ensemble</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S3.T3.1.10.9.3\">92.11</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S3.T3.1.10.9.4\">80.95</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S3.T3.1.10.9.5\">83.52</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S3.T3.1.10.9.6\">78.94</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T3.1.11.10\">\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S3.T3.1.11.10.1\">Integrated with reads</td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S3.T3.1.11.10.2\">NT</td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S3.T3.1.11.10.3\">83.11</td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S3.T3.1.11.10.4\">62.82</td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S3.T3.1.11.10.5\">74.81</td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S3.T3.1.11.10.6\">57.32</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T3.1.12.11\">\n<td class=\"ltx_td ltx_align_left\" id=\"S3.T3.1.12.11.1\">Integrated with reads</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S3.T3.1.12.11.2\">BB</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S3.T3.1.12.11.3\">90.24</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S3.T3.1.12.11.4\">79.34</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S3.T3.1.12.11.5\">84.05</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S3.T3.1.12.11.6\">77.14</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T3.1.13.12\">\n<td class=\"ltx_td ltx_align_left ltx_border_b\" id=\"S3.T3.1.13.12.1\">Integrated with reads</td>\n<td class=\"ltx_td ltx_align_left ltx_border_b\" id=\"S3.T3.1.13.12.2\">Ensemble</td>\n<td class=\"ltx_td ltx_align_left ltx_border_b\" 
id=\"S3.T3.1.13.12.3\">93.40</td>\n<td class=\"ltx_td ltx_align_left ltx_border_b\" id=\"S3.T3.1.13.12.4\">81.85</td>\n<td class=\"ltx_td ltx_align_left ltx_border_b\" id=\"S3.T3.1.13.12.5\">84.34</td>\n<td class=\"ltx_td ltx_align_left ltx_border_b\" id=\"S3.T3.1.13.12.6\">80.25</td>\n</tr>\n</tbody>\n</table>\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\">Table 3: </span>\nResult of using the CARD, MEGARes, and Integrated databases for antibiotic resistance drug class prediction using Nucleotide Transformer(NT), BioBERT(BB), and a weighted ensemble of both. The weighted ensemble with Nucleotide Transformer(NT) and BioBERT(BB) shows better performance in every datasets.\n</figcaption>\n</figure>",
102
+ "capture": "Table 3: \nResult of using the CARD, MEGARes, and Integrated databases for antibiotic resistance drug class prediction using Nucleotide Transformer(NT), BioBERT(BB), and a weighted ensemble of both. The weighted ensemble with Nucleotide Transformer(NT) and BioBERT(BB) shows better performance in every datasets.\n"
103
+ }
104
+ },
105
+ "image_paths": {
106
+ "1": {
107
+ "figure_path": "2401.00642v1_figure_1.png",
108
+ "caption": "Figure 1: Overview of our approach",
109
+ "url": "http://arxiv.org/html/2401.00642v1/extracted/5324493/latex/figure1.png"
110
+ },
111
+ "2": {
112
+ "figure_path": "2401.00642v1_figure_2.png",
113
+ "caption": "Figure 2: EBI ARO Gene Family mapping: search to find mapping information with header and ontology by using API.",
114
+ "url": "http://arxiv.org/html/2401.00642v1/extracted/5324493/figure2.png"
115
+ }
116
+ },
117
+ "validation": true,
118
+ "references": [
119
+ {
120
+ "1": {
121
+ "title": "Deeparg: A deep learning approach for predicting antibiotic resistance genes from metagenomic data.",
122
+ "author": "Gustavo Arango-Argoty, Emily Garner, Amy Pruden, Lenwood S. Heath, Peter Vikesland, and Liqing Zhang. 2018.",
123
+ "venue": "Microbiome, 6(1):23.",
124
+ "url": "https://doi.org/10.1186/s40168-018-0401-z"
125
+ }
126
+ },
127
+ {
128
+ "2": {
129
+ "title": "Megares and amr++, v3.0: an updated comprehensive database of antimicrobial resistance determinants and an improved software pipeline for classification using high-throughput sequencing.",
130
+ "author": "Nathalie Bonin, Enrique Doster, Hannah Worley, Lee J Pinnell, Jonathan E Bravo, Peter Ferm, Simone Marini, Mattia Prosperi, Noelle Noyes, Paul S Morley, and Christina Boucher. 2023.",
131
+ "venue": "Nucleic Acids Research, 51(D1):D744\u2013D752.",
132
+ "url": "https://doi.org/10.1093/nar/gkac1047"
133
+ }
134
+ },
135
+ {
136
+ "3": {
137
+ "title": "Proteinbert: a universal deep-learning model of protein sequence and function.",
138
+ "author": "Nadav Brandes, Dan Ofer, Yam Peleg, Nadav Rappoport, and Michal Linial. 2022.",
139
+ "venue": "Bioinformatics, 38(8):2102\u20132110.",
140
+ "url": "https://doi.org/10.1093/bioinformatics/btac020"
141
+ }
142
+ },
143
+ {
144
+ "4": {
145
+ "title": "The european bioinformatics institute in 2016: Data growth and integration.",
146
+ "author": "Charles E. Cook, Mary Todd Bergman, Robert D. Finn, Guy Cochrane, Ewan Birney, and Rolf Apweiler. 2016.",
147
+ "venue": "Nucleic Acids Research, 44(D1):D20\u2013D26.",
148
+ "url": "https://doi.org/10.1093/nar/gkv1352"
149
+ }
150
+ },
151
+ {
152
+ "5": {
153
+ "title": "The nucleotide transformer: Building and evaluating robust foundation models for human genomics.",
154
+ "author": "Hugo Dalla-Torre, Liam Gonzalez, Javier Mendoza-Revilla, Nicolas Lopez Carranza, Adam Henryk Grzywaczewski, Francesco Oteri, Christian Dallago, et al. 2023.",
155
+ "venue": "Genomics.",
156
+ "url": "https://doi.org/10.1101/2023.01.11.523679"
157
+ }
158
+ },
159
+ {
160
+ "6": {
161
+ "title": "Megares 2.0: a database for classification of antimicrobial drug, biocide and metal resistance determinants in metagenomic sequence data.",
162
+ "author": "Enrique Doster, Steven M Lakin, Christopher J Dean, Cory Wolfe, Jared G Young, Christina Boucher, Keith E Belk, Noelle R Noyes, and Paul S Morley. 2020.",
163
+ "venue": "Nucleic Acids Research, 48(D1):D561\u2013D569.",
164
+ "url": "https://doi.org/10.1093/nar/gkz1010"
165
+ }
166
+ },
167
+ {
168
+ "7": {
169
+ "title": "Lora: Low-rank adaptation of large language models.",
170
+ "author": "Edward J. Hu, Yelong Shen, Phillip Wallis, Zeyuan Allen-Zhu, Yuanzhi Li, Shean Wang, Lu Wang, and Weizhu Chen. 2021.",
171
+ "venue": "arXiv.",
172
+ "url": "https://arxiv.org/abs/2106.09685v2"
173
+ }
174
+ },
175
+ {
176
+ "8": {
177
+ "title": "Art: A next-generation sequencing read simulator.",
178
+ "author": "Weichun Huang, Leping Li, Jason R. Myers, and Gabor T. Marth. 2012.",
179
+ "venue": "Bioinformatics, 28(4):593\u2013594.",
180
+ "url": "https://doi.org/10.1093/bioinformatics/btr708"
181
+ }
182
+ },
183
+ {
184
+ "9": {
185
+ "title": "Dnabert: pre-trained bidirectional encoder representations from transformers model for dna-language in genome.",
186
+ "author": "Yanrong Ji, Zhihan Zhou, Han Liu, and Ramana V Davuluri. 2021.",
187
+ "venue": "Bioinformatics, 37(15):2112\u20132120.",
188
+ "url": "https://doi.org/10.1093/bioinformatics/btab083"
189
+ }
190
+ },
191
+ {
192
+ "10": {
193
+ "title": "Card 2017: expansion and model-centric curation of the comprehensive antibiotic resistance database.",
194
+ "author": "Baofeng Jia, Amogelang R. Raphenya, Brian Alcock, Nicholas Waglechner, Peiyao Guo, Kara K. Tsang, Briony A. Lago, Biren M. Dave, Sheldon Pereira, Arjun N. Sharma, Sachin Doshi, M\u00e9lanie Courtot, Raymond Lo, Laura E. Williams, Jonathan G. Frye, Tariq Elsayegh, Daim Sardar, Erin L. Westman, Andrew C. Pawlowski, Timothy A. Johnson, Fiona S.L. Brinkman, Gerard D. Wright, and Andrew G. McArthur. 2017.",
195
+ "venue": "Nucleic Acids Research, 45(D1):D566\u2013D573.",
196
+ "url": "https://doi.org/10.1093/nar/gkw1004"
197
+ }
198
+ },
199
+ {
200
+ "11": {
201
+ "title": "Fine-tuning of bert model to accurately predict drug-target interactions.",
202
+ "author": "Hyeunseok Kang, Sungwoo Goo, Hyunjung Lee, Jung-Woo Chae, Hwi-Yeol Yun, and Sangkeun Jung. 2022.",
203
+ "venue": "Pharmaceutics, 14(8):1710.",
204
+ "url": "https://doi.org/10.3390/pharmaceutics14081710"
205
+ }
206
+ },
207
+ {
208
+ "12": {
209
+ "title": "An ensemble approach for classification and prediction of diabetes mellitus using soft voting classifier.",
210
+ "author": "Saloni Kumari, Deepika Kumar, and Mamta Mittal. 2021.",
211
+ "venue": "International Journal of Cognitive Computing in Engineering, 2:40\u201346.",
212
+ "url": null
213
+ }
214
+ },
215
+ {
216
+ "13": {
217
+ "title": "Hierarchical hidden markov models enable accurate and diverse detection of antimicrobial resistance sequences.",
218
+ "author": "Steven M. Lakin, Alan Kuhnle, Bahar Alipanahi, Noelle R. Noyes, Chris Dean, Martin Muggli, Rob Raymond, et al. 2019.",
219
+ "venue": "Communications Biology, 2(1):294.",
220
+ "url": "https://doi.org/10.1038/s42003-019-0545-9"
221
+ }
222
+ },
223
+ {
224
+ "14": {
225
+ "title": "Biobert: A pre-trained biomedical language representation model for biomedical text mining.",
226
+ "author": "Jinhyuk Lee, Wonjin Yoon, Sungdong Kim, Donghyeon Kim, Sunkyu Kim, Chan Ho So, and Jaewoo Kang. 2020.",
227
+ "venue": "Bioinformatics, 36(4):1234\u20131240.",
228
+ "url": "https://doi.org/10.1093/bioinformatics/btz682"
229
+ }
230
+ },
231
+ {
232
+ "15": {
233
+ "title": "Biogpt: Generative pre-trained transformer for biomedical text generation and mining.",
234
+ "author": "Renqian Luo, Liai Sun, Yingce Xia, Tao Qin, Sheng Zhang, Hoifung Poon, and Tie-Yan Liu. 2022.",
235
+ "venue": "Briefings in Bioinformatics, 23(6):bbac409.",
236
+ "url": "https://doi.org/10.1093/bib/bbac409"
237
+ }
238
+ },
239
+ {
240
+ "16": {
241
+ "title": "Amr-meta: A k -mer and metafeature approach to classify antimicrobial resistance from high-throughput short-read metagenomics data.",
242
+ "author": "Simone Marini, Marco Oliva, Ilya B Slizovskiy, Rishabh A Das, Noelle Robertson Noyes, Tamer Kahveci, Christina Boucher, and Mattia Prosperi. 2022.",
243
+ "venue": "GigaScience, 11.",
244
+ "url": "https://doi.org/10.1093/gigascience/giac029"
245
+ }
246
+ },
247
+ {
248
+ "17": {
249
+ "title": "A k-mer grammar analysis to uncover maize regulatory architecture.",
250
+ "author": "Mar\u00eda Katherine Mej\u00eda-Guerra and Edward S. Buckler. 2019.",
251
+ "venue": "BMC Plant Biology, 19(1):103.",
252
+ "url": "https://doi.org/10.1186/s12870-019-1693-2"
253
+ }
254
+ },
255
+ {
256
+ "18": {
257
+ "title": "Performance evaluation of six popular short-read simulators.",
258
+ "author": "Mark Milhaven and Susanne P. Pfeifer. 2023.",
259
+ "venue": "Heredity, 130(2):55\u201363.",
260
+ "url": "https://doi.org/10.1038/s41437-022-00577-3"
261
+ }
262
+ },
263
+ {
264
+ "19": {
265
+ "title": "Assessment of global health risk of antibiotic resistance genes.",
266
+ "author": "Zhenyan Zhang, Qi Zhang, Tingzhang Wang, Nuohan Xu, Tao Lu, Wenjie Hong, Josep Penuelas, Michael Gillings, Meixia Wang, Wenwen Gao, and Haifeng Qian. 2022.",
267
+ "venue": "Nature Communications, 13.",
268
+ "url": null
269
+ }
270
+ },
271
+ {
272
+ "20": {
273
+ "title": "An improved baseline for sentence-level relation extraction.",
274
+ "author": "Wenxuan Zhou and Muhao Chen. 2021.",
275
+ "venue": "arXiv.",
276
+ "url": "https://arxiv.org/abs/2102.01373v4"
277
+ }
278
+ },
279
+ {
280
+ "21": {
281
+ "title": "Dnabert-2: Efficient foundation model and benchmark for multi-species genome.",
282
+ "author": "Zhihan Zhou, Yanrong Ji, Weijian Li, Pratik Dutta, Ramana Davuluri, and Han Liu. 2023.",
283
+ "venue": "arXiv.",
284
+ "url": "https://arxiv.org/abs/2306.15006v1"
285
+ }
286
+ }
287
+ ],
288
+ "url": "http://arxiv.org/html/2401.00642v1"
289
+ }
20240101/2401.00644v1.json ADDED
The diff for this file is too large to render. See raw diff
 
20240101/2401.00650v1.json ADDED
The diff for this file is too large to render. See raw diff
 
20240101/2401.00652v1.json ADDED
@@ -0,0 +1,808 @@
1
+ {
2
+ "title": "From Covert Hiding to Visual Editing: Robust Generative Video Steganography",
3
+ "abstract": "Traditional video steganography methods are based on modifying the covert space for embedding, whereas we propose an innovative approach that embeds secret message within semantic feature for steganography during the video editing process.\nAlthough existing traditional video steganography methods display a certain level of security and embedding capacity, they lack adequate robustness against common distortions in online social networks (OSNs).\nIn this paper, we introduce an end-to-end robust generative video steganography network (RoGVS), which achieves visual editing by modifying semantic feature of videos to embed secret message. We employ face-swapping scenario to showcase the visual editing effects. We first design a secret message embedding module to adaptively hide secret message into the semantic feature of videos.\nExtensive experiments display that the proposed RoGVS method applied to facial video datasets demonstrate its superiority over existing video and image steganography techniques in terms of both robustness and capacity.",
4
+ "sections": [
5
+ {
6
+ "section_id": "1",
7
+ "parent_section_id": null,
8
+ "section_name": "Introduction",
9
+ "text": "Steganography is the science and technology of embedding secret message into natural digital carriers, such as image, video, text, etc.\nGenerally, the natural digital carriers are called \u201ccover\u201d and the digital media with secret message are called \u201cstego\u201d.\nConventional image steganography methods [49 ###reference_49###, 12 ###reference_12###, 31 ###reference_31###] primarily modify high-frequency components to embed secret message.\nThey commonly utilize methodologies such as pixel value manipulation or integrating secret message into the cover image before inputting it into an encoder for steganographic purposes.\n###figure_1### In the past few years, as the rise of short video software applications like TikTok, YouTube, Snapchat, etc., video has become a suitable carrier for steganography.\nTraditional video steganographic methods, utilizing direct pixel value manipulation [32 ###reference_32###], coding mapping [34 ###reference_34###], or adaptive distortion function [36 ###reference_36###], exploit video data redundancy for information hiding. Nevertheless, while successful in security and embedding capacity, these methods on modifying covert space can be erased by common post-processing operations easily. So they are vulnerable to mitigate diverse distortions that may occur in lossy channel transmission.\n###figure_2### Visual editing on videos can be seen as the process of modifying the semantic information of objects within them.\nInstead of hiding secret message in covert space, we embed secret message within semantic feature of videos for visual edition. The advanced semantic feature is less susceptible to distortions, making this method inherently robust.\nIn order to improve the robustness of video steganography, we propose an end-to-end robust generative video steganography network (RoGVS), which consists of four modules, containing information encoding module, secret message embedding model, attacking layer, and secret message extraction module.\nFor evaluation, we use face-swapping technology as an example to show the effectiveness of our method, while it can be easily extended to other applications.\nComprehensive experiments have showcased that our method surpasses state-of-the-art techniques, attaining commendable robustness and generalization capabilities.\nThe main contributions of our work are as follows:\n1) We are the first to explore a novel generative video steganography method, which modifies semantic feature to embed secret message during visual editing instead of modify the covert space. This framework exhibits strong extensibility, serving as a new topic for the future development of the steganography field.\n2) The proposed method is robust against common distortions in social network platform and the secret message can be extracted with high accuracy.\n3) Our method achieves better security for anti-steganalysis than other state-of-the-art methods, which can effectively evade the detection of steganalysis system."
10
+ },
11
+ {
12
+ "section_id": "2",
13
+ "parent_section_id": null,
14
+ "section_name": "Related Work",
15
+ "text": "Image Steganography.\nConventional image steganography methods primarily modify high-frequency components to embed secret message. The LSB substitution method [80 ###reference_80###] operates under the assumption that human eyes cannot perceive changes in the least significant bit of pixel values.\nHiDDeN [12 ###reference_12###] introduces an end-to-end trainable framework through an encoder-decoder architecture.\nSteganoGAN [31 ###reference_31###] employs dense encoders to enhance payload capacity. Wei et al [16 ###reference_16###] propose an advanced generative steganography network that can generate realistic stego images without using cover images.\nHowever, alterations in high-frequency components can be obliterated by common post-processing operations, such as JPEG compression or Gaussian Blur.\nVideo Steganography. Early video steganography usually modifies RGB or YUV color spaces for embedding secret message.\nDong et al [33 ###reference_33###] observed that altering intra-frame modes in HEVC significantly affected video coding efficiency, while modifications to multilevel recursive coding units had minimal distortion impact.\nPWRN [35 ###reference_35###] employs a super-resolution CNN, the Wide Residual-Net filter (PWRN), to replace HEVC\u2019s loop filter.\nRecently, He et al [36 ###reference_36###] devised an adaptive distortion function using enhanced Rate Distortion Optimization (RDO) and Syndrome-Trellis Code (STC) to minimize embedding distortion.\nHowever, these methods are struggle to handle various distortions that may arise in lossy channel transmission.\nVisual Editing.\nVisual editing can encompass color correction on a single image, deletion, addition, or alteration of objects within the image, or even merging two photos to create an entirely new scene. In videos, visual editing might involve adding effects to specific frames, removing elements from the video to alter the scene, replacing one person\u2019s face with another [26 ###reference_26###], also called face-swapping."
16
+ },
17
+ {
18
+ "section_id": "3",
19
+ "parent_section_id": null,
20
+ "section_name": "Proposed Approach",
21
+ "text": "Our method aims to embed secret message using semantic feature extracted from reference image into cover video , generating stego video . As illustrated in Fig. 2 ###reference_###, our approach comprises four modules: Information Encoding Module, Secret Message Embedding Module, Attacking Layer, Secret Message Extraction Module."
22
+ },
23
+ {
24
+ "section_id": "3.1",
25
+ "parent_section_id": "3",
26
+ "section_name": "Information Encoding Module",
27
+ "text": "The information encoding module consists of three parts:\nThe first is identity extractor which utilizes a facial recognition network to extract identity feature tailored for the reference image .\nThe second is video feature extractor . It acquires the latent representation of cover video with frames, employing an encoder [26 ###reference_26###] for video feature extraction.\nThe third is secret message encoder which is a one-layer dense Multi-layer Perceptron (MLP).\nThe above three parts are formulated as follows:\nwhere is the -th frame of the cover video.\n represents the latent feature representation of -th frame.\n is the identity feature of the reference image.\n is the secret message.\n and represents the learnable weights and biases."
28
+ },
29
+ {
30
+ "section_id": "3.2",
31
+ "parent_section_id": "3",
32
+ "section_name": "Secret Message Embedding and Extraction Module",
33
+ "text": "This module aims to embed the secret message during face swapping.\nThe key problem is how to implement face swapping under the guidance of secret message.\nTo our understanding, the latent features of the cover video encompass both identity and attribute feature.\nFace swapping essentially involves replacing the cover video\u2019s identity with that of the reference image.\nConsequently, we embed the secret message into the identity feature of the reference image, formulated as follows:\nwhere is a hyper-parameter adjusting the influence of secret message on identity feature.\nDue to strong coupling between identity and attribute features, direct extraction of attribute feature from the latent representation by is unfeasible. To ensure better attribute preservation, we design a Secret-ID block, consisting of the modified version of the residual block and AdaIN to inject into .\nThe Secret-ID block is formulated as follows:\nwhere and represent the channel-wise mean and standard deviation of the input feature , respectively. Meanwhile, and correspond to two variables derived from the secret-identity feature .\nAfter N Secret-ID blocks, the identity feature in is replaced by and then we get . Subsequently, we use an video Decoder to recover the -th frame of the stego video from . The Decoder contains four upsample blocks, a ReflectionPad layer and a convolutional layer. Each upsample block consists of a upsample layer, a convolutional layer and a BatchNorm layer. The process to get can be expressed as .\nWe design an extraction module to retrieve secret message from the stego videos, featuring seven convolutional layers using ReLU activation. Ultimately, a sigmoid activation function and binarization are applied to extract the embedded secret message. This module\u2019s formulation is as ."
34
+ },
35
+ {
36
+ "section_id": "3.3",
37
+ "parent_section_id": "3",
38
+ "section_name": "Attacking Layer",
39
+ "text": "To bolster the robustness of our method for face-swapping videos in real-world scenarios, we design a attacking layer. This module simulates prevalent distortions encountered across social network platforms.\nJPEG Compression. JPEG compression involves a non-differentiable quantization step due to rounding. To mitigate this, we apply Shin et al.\u2019s method [53 ###reference_53###] to approximate the near-zero quantization step using function Eq. (6 ###reference_###):\nwhere denotes pixels of the input image.\nWe uniformly sample the JPEG quality from within the range of [50, 100].\nColor Distortions. We consider two general color distortions: brightness and contrast.\nWe perform a linear transformation on the pixels of each channel as the formula Eq. (7 ###reference_###):\nwhere and refers to the distorted and the original image. The parameters and regulate contrast and brightness, respectively.\nColor Saturation. We perform random linear interpolation between RGB and gray images equivalent to simulate the distortion.\nAdditive Noise.\nWe use Gaussian noise to simulate any other distortions that are not considered in the attacking layer. We employ a Gaussian noise model (sampling the standard deviation to simulate imaging noise."
40
+ },
41
+ {
42
+ "section_id": "3.4",
43
+ "parent_section_id": "3",
44
+ "section_name": "Loss Function",
45
+ "text": "The proposed method ensures both high stego video quality and precise extraction of secret message. We achieve this by training the modules using the following losses.\nIdentity Loss.\nThe identity loss minimizes the variance between the identity features () of the reference image and the -th frame () in the stego video, reducing alterations caused by secret message. Cosine similarity is used to measure this loss by the formula Eq (8 ###reference_###).\nAttribute Loss.\nWe use the weak feature matching loss [26 ###reference_26###] to constrain attribute difference before and after embedding secret message.\nThe loss function is defined as follows:\nwhere refers to the feature extractor of Discriminator D for the j-th layer, is the number of elements in the j-th layer, and is the total number of layers. Additionally, represents the starting layer for computing the weak feature matching loss.\nAdversarial Loss. To enhance performance, we use multi-scale Discriminator with gradient penalty. We adopt the Hinge version of adversarial loss defined as follows:\nwhere denotes the Discriminator, and in our method is respectively and\nSecret Loss. To address this, we use the Binary Cross-Entropy loss (BCE) as defined in Eq. (11 ###reference_###).\nTotal loss. The total loss is defined as follows:"
46
+ },
47
+ {
48
+ "section_id": "4",
49
+ "parent_section_id": null,
50
+ "section_name": "Experiments",
51
+ "text": ""
52
+ },
53
+ {
54
+ "section_id": "4.1",
55
+ "parent_section_id": "4",
56
+ "section_name": "Experimental Setups",
57
+ "text": "Datasets. We use Vggface2 [61 ###reference_61###] for training and FFHQ [15 ###reference_15###] for validation. We crop and resize facial areas to a fixed 224 224 resolution for input images.\nTo analyze quality and performance, we randomly select 100 videos from DeepFake MNIST+ [65 ###reference_65###] to evaluate the performance.\nImplementation Details.\nWe train the model to encode a binary message of length = 9 or 18 bits in a frame.\nDuring training, we employ Adam optimizer with a learning rate of and a batch size of 4.\nWe set , , , and .\nThe networks train for 1 million steps, integrating the Attacking Layer after the initial 800k steps for stability. We use an NVIDIA GeForce RTX 3090 GPU for our experiments.\n###figure_3### ###figure_4### Evaluation Metrics.\nWe employ Bits Per Frame (BPF), quantifying the bits number of secret message per frame in the stego video.\nTo assess robustness, we evaluate secret message extraction accuracy under various scenarios.\nFor security assessment, we use three steganalysis methods [62 ###reference_62###, 63 ###reference_63###, 64 ###reference_64###] to demonstrate our method\u2019s anti-detection capability.\nBaselines.\nTo ensure fair comparison in our experiments, we align HiDDeN and LSB to this capacity. Detailed methods of HiDDeN and LSB are available in the supplementary materials. Additionally, due to its PU-based design, PWRN has a limited capacity of 15 BPF when resizing input images to ."
58
+ },
59
+ {
60
+ "section_id": "4.2",
61
+ "parent_section_id": "4",
62
+ "section_name": "Performance Analysis",
63
+ "text": "We compare the performance of our RoGVS with image-level steganography including HiDDeN [12 ###reference_12###] and LSB [80 ###reference_80###] and video-level steganography including PWRN [35 ###reference_35###].\nVideo Quality Assessment. Fig 4 ###reference_### shows qualitative results on the integrity of generated video frames. We perform tests within and across datasets, each containing 16 test samples. The generated faces effectively change individual identities while retaining attributes like expressions and poses. More findings are available in the supplementary materials. Fig 3 ###reference_### illustrates the visual effects of certain intermittent frames within the stego videos.\nComparisons on Extraction Accuracy & Robustness.\nWe conduct extensive experiments with multiple types of distortions.\nDetailed distortion implementations are provided in the supplement.\nThe quantitative comparison results in terms of accuracy are reported in Table 1 ###reference_###.\nThe results show that our method can successfully extract secret message with high accuracy even after severe distortions.\nLSB [80 ###reference_80###] struggles even with PNG (quantization) and HiDDeN [12 ###reference_12###], though trained with a distortion module, can not generalize well to video-level distortions.\nPWRN [35 ###reference_35###] demonstrates robustness across numerous distortions, yet its performance remains constrained under operations such as motion blur or contrast adjustment.\nThe proposed RoGVS method shows superior robustness to these distortions while maintaining high extraction accuracy.\nSecurity Analysis. We use three video steganalysis tools to evaluate the security of our method.\nThe detection performance of these three steganalysis schemes is presented in Table 4 ###reference_###. Table 4 ###reference_### demonstrates that our method exhibits slightly superior security compared to the three counterparts."
64
+ },
65
+ {
66
+ "section_id": "4.3",
67
+ "parent_section_id": "4",
68
+ "section_name": "Ablation Study",
69
+ "text": "Embedding Position of Secret Message.\nIn our generation network with 9 Secret-ID blocks, we explore different positions for embedding the secret message. We divide the secret message into two 9-bit segments and allocate their positions.\nIn detail, Setting (a): 1st-4th blocks and 5th-9th blocks.\nSetting (b): 1st-2nd blocks and 3rd-4th blocks.\nSetting (c): 5th-6th blocks and 7th-8th blocks.\nThey are in comparison of the standard setting of RoGVS: 1st-3rd blocks and 4th-6th blocks.\nTable 2 ###reference_### displays the performance for these four setups. Both Settings b and c show a considerable decrease compared to Settings a and d, suggesting that adding more Secret-ID blocks improves performance. Notably, Setting c outperforms Setting b, indicating the higher influence of subsequent blocks on the generated image.\nAblation on Attacking Layer, & Discriminator.\nFig 5 ###reference_### shows even without the module, our method demonstrates considerable robustness, surpassing the three comparative methods. The addition of attacking layer improves accuracy by an average of 6%.\n###figure_5### Table 3 ###reference_### presents the impact of on the extraction accuracy. More ablation results on and the discriminator are displayed in the supplement."
70
+ },
71
+ {
72
+ "section_id": "5",
73
+ "parent_section_id": null,
74
+ "section_name": "Conclusions",
75
+ "text": "We propose a robust generative video steganography method based on visual editing, which modifies semantic feature to embed secret message.\nWe use face-swapping scenario as an example to show the effectiveness of our RoGVS. The results showcase that our method can generate high-quality visually edited stego videos. What\u2019s more, RoGVS outperforms existing video and image steganography methods in robustness and capacity."
76
+ }
77
+ ],
78
+ "appendix": [],
79
+ "tables": {
80
+ "1": {
81
+ "table_html": "<figure class=\"ltx_table\" id=\"S3.T1\">\n<figcaption class=\"ltx_caption ltx_centering\" style=\"font-size:90%;\"><span class=\"ltx_tag ltx_tag_table\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T1.8.1.1\">Table 1</span>: </span><span class=\"ltx_text ltx_font_bold\" id=\"S3.T1.9.2\">Comparison Results on Extraction Accuracy.</span> \u201c-\u201d means \u201cWithout Distortion\u201d. () represents Bits Per Frame (BPF). Under different distortion scenarios, our method demonstrates superior performance in comparison.</figcaption>\n<div class=\"ltx_inline-block ltx_align_center ltx_transformed_outer\" id=\"S3.T1.10\" style=\"width:506.5pt;height:96.8pt;vertical-align:-0.0pt;\"><span class=\"ltx_transformed_inner\" style=\"transform:translate(-29.3pt,5.6pt) scale(0.896311440768934,0.896311440768934) ;\">\n<table class=\"ltx_tabular ltx_guessed_headers ltx_align_middle\" id=\"S3.T1.10.1\">\n<thead class=\"ltx_thead\">\n<tr class=\"ltx_tr\" id=\"S3.T1.10.1.1.1\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"S3.T1.10.1.1.1.1\" style=\"padding-top:0.9pt;padding-bottom:0.9pt;\"><span class=\"ltx_text\" id=\"S3.T1.10.1.1.1.1.1\" style=\"font-size:90%;\">Method</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"S3.T1.10.1.1.1.2\" style=\"padding-top:0.9pt;padding-bottom:0.9pt;\"><span class=\"ltx_text\" id=\"S3.T1.10.1.1.1.2.1\" style=\"font-size:90%;\">-</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"S3.T1.10.1.1.1.3\" style=\"padding-top:0.9pt;padding-bottom:0.9pt;\"><span class=\"ltx_text\" id=\"S3.T1.10.1.1.1.3.1\" style=\"font-size:90%;\">PNG</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"S3.T1.10.1.1.1.4\" style=\"padding-top:0.9pt;padding-bottom:0.9pt;\"><span class=\"ltx_text\" id=\"S3.T1.10.1.1.1.4.1\" style=\"font-size:90%;\">Resize\u00a0(0.5)</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"S3.T1.10.1.1.1.5\" style=\"padding-top:0.9pt;padding-bottom:0.9pt;\"><span class=\"ltx_text\" id=\"S3.T1.10.1.1.1.5.1\" style=\"font-size:90%;\">Bit Error</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"S3.T1.10.1.1.1.6\" style=\"padding-top:0.9pt;padding-bottom:0.9pt;\"><span class=\"ltx_text\" id=\"S3.T1.10.1.1.1.6.1\" style=\"font-size:90%;\">Brightness</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"S3.T1.10.1.1.1.7\" style=\"padding-top:0.9pt;padding-bottom:0.9pt;\"><span class=\"ltx_text\" id=\"S3.T1.10.1.1.1.7.1\" style=\"font-size:90%;\">Contrast</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"S3.T1.10.1.1.1.8\" style=\"padding-top:0.9pt;padding-bottom:0.9pt;\"><span class=\"ltx_text\" id=\"S3.T1.10.1.1.1.8.1\" style=\"font-size:90%;\">H.264 ABR</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"S3.T1.10.1.1.1.9\" style=\"padding-top:0.9pt;padding-bottom:0.9pt;\"><span class=\"ltx_text\" id=\"S3.T1.10.1.1.1.9.1\" style=\"font-size:90%;\">H.264 CRF</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"S3.T1.10.1.1.1.10\" style=\"padding-top:0.9pt;padding-bottom:0.9pt;\"><span class=\"ltx_text\" id=\"S3.T1.10.1.1.1.10.1\" style=\"font-size:90%;\">Motion Blur</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"S3.T1.10.1.1.1.11\" 
style=\"padding-top:0.9pt;padding-bottom:0.9pt;\"><span class=\"ltx_text\" id=\"S3.T1.10.1.1.1.11.1\" style=\"font-size:90%;\">Rain</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"S3.T1.10.1.1.1.12\" style=\"padding-top:0.9pt;padding-bottom:0.9pt;\"><span class=\"ltx_text\" id=\"S3.T1.10.1.1.1.12.1\" style=\"font-size:90%;\">Saturate</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"S3.T1.10.1.1.1.13\" style=\"padding-top:0.9pt;padding-bottom:0.9pt;\"><span class=\"ltx_text\" id=\"S3.T1.10.1.1.1.13.1\" style=\"font-size:90%;\">Shot Noise</span></th>\n</tr>\n</thead>\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S3.T1.10.1.2.1\">\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T1.10.1.2.1.1\" style=\"padding-top:0.9pt;padding-bottom:0.9pt;\">\n<span class=\"ltx_text\" id=\"S3.T1.10.1.2.1.1.1\" style=\"font-size:90%;\">HiDDeN </span><cite class=\"ltx_cite ltx_citemacro_cite\"><span class=\"ltx_text\" id=\"S3.T1.10.1.2.1.1.2.1\" style=\"font-size:90%;\">[</span><a class=\"ltx_ref\" href=\"#bib.bib12\" title=\"\">12</a><span class=\"ltx_text\" id=\"S3.T1.10.1.2.1.1.3.2\" style=\"font-size:90%;\">]</span></cite>\n</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T1.10.1.2.1.2\" style=\"padding-top:0.9pt;padding-bottom:0.9pt;\"><span class=\"ltx_text\" id=\"S3.T1.10.1.2.1.2.1\" style=\"font-size:90%;\">0.9633</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T1.10.1.2.1.3\" style=\"padding-top:0.9pt;padding-bottom:0.9pt;\"><span class=\"ltx_text\" id=\"S3.T1.10.1.2.1.3.1\" style=\"font-size:90%;\">0.8342</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T1.10.1.2.1.4\" style=\"padding-top:0.9pt;padding-bottom:0.9pt;\"><span class=\"ltx_text\" id=\"S3.T1.10.1.2.1.4.1\" style=\"font-size:90%;\">0.6516</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T1.10.1.2.1.5\" style=\"padding-top:0.9pt;padding-bottom:0.9pt;\"><span class=\"ltx_text\" id=\"S3.T1.10.1.2.1.5.1\" style=\"font-size:90%;\">0.7543</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T1.10.1.2.1.6\" style=\"padding-top:0.9pt;padding-bottom:0.9pt;\"><span class=\"ltx_text\" id=\"S3.T1.10.1.2.1.6.1\" style=\"font-size:90%;\">0.7939</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T1.10.1.2.1.7\" style=\"padding-top:0.9pt;padding-bottom:0.9pt;\"><span class=\"ltx_text\" id=\"S3.T1.10.1.2.1.7.1\" style=\"font-size:90%;\">0.7813</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T1.10.1.2.1.8\" style=\"padding-top:0.9pt;padding-bottom:0.9pt;\"><span class=\"ltx_text\" id=\"S3.T1.10.1.2.1.8.1\" style=\"font-size:90%;\">0.7901</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T1.10.1.2.1.9\" style=\"padding-top:0.9pt;padding-bottom:0.9pt;\"><span class=\"ltx_text\" id=\"S3.T1.10.1.2.1.9.1\" style=\"font-size:90%;\">0.7813</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T1.10.1.2.1.10\" style=\"padding-top:0.9pt;padding-bottom:0.9pt;\"><span class=\"ltx_text\" id=\"S3.T1.10.1.2.1.10.1\" style=\"font-size:90%;\">0.7635</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T1.10.1.2.1.11\" style=\"padding-top:0.9pt;padding-bottom:0.9pt;\"><span class=\"ltx_text\" id=\"S3.T1.10.1.2.1.11.1\" style=\"font-size:90%;\">0.7624</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T1.10.1.2.1.12\" 
style=\"padding-top:0.9pt;padding-bottom:0.9pt;\"><span class=\"ltx_text\" id=\"S3.T1.10.1.2.1.12.1\" style=\"font-size:90%;\">0.7927</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T1.10.1.2.1.13\" style=\"padding-top:0.9pt;padding-bottom:0.9pt;\"><span class=\"ltx_text\" id=\"S3.T1.10.1.2.1.13.1\" style=\"font-size:90%;\">0.6310</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T1.10.1.3.2\">\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.10.1.3.2.1\" style=\"padding-top:0.9pt;padding-bottom:0.9pt;\">\n<span class=\"ltx_text\" id=\"S3.T1.10.1.3.2.1.1\" style=\"font-size:90%;\">LSB </span><cite class=\"ltx_cite ltx_citemacro_cite\"><span class=\"ltx_text\" id=\"S3.T1.10.1.3.2.1.2.1\" style=\"font-size:90%;\">[</span><a class=\"ltx_ref\" href=\"#bib.bib80\" title=\"\">80</a><span class=\"ltx_text\" id=\"S3.T1.10.1.3.2.1.3.2\" style=\"font-size:90%;\">]</span></cite>\n</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.10.1.3.2.2\" style=\"padding-top:0.9pt;padding-bottom:0.9pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T1.10.1.3.2.2.1\" style=\"font-size:90%;\">1.0000</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.10.1.3.2.3\" style=\"padding-top:0.9pt;padding-bottom:0.9pt;\"><span class=\"ltx_text\" id=\"S3.T1.10.1.3.2.3.1\" style=\"font-size:90%;\">0.4988</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.10.1.3.2.4\" style=\"padding-top:0.9pt;padding-bottom:0.9pt;\"><span class=\"ltx_text\" id=\"S3.T1.10.1.3.2.4.1\" style=\"font-size:90%;\">0.4932</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.10.1.3.2.5\" style=\"padding-top:0.9pt;padding-bottom:0.9pt;\"><span class=\"ltx_text\" id=\"S3.T1.10.1.3.2.5.1\" style=\"font-size:90%;\">0.4533</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.10.1.3.2.6\" style=\"padding-top:0.9pt;padding-bottom:0.9pt;\"><span class=\"ltx_text\" id=\"S3.T1.10.1.3.2.6.1\" style=\"font-size:90%;\">0.4685</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.10.1.3.2.7\" style=\"padding-top:0.9pt;padding-bottom:0.9pt;\"><span class=\"ltx_text\" id=\"S3.T1.10.1.3.2.7.1\" style=\"font-size:90%;\">0.4985</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.10.1.3.2.8\" style=\"padding-top:0.9pt;padding-bottom:0.9pt;\"><span class=\"ltx_text\" id=\"S3.T1.10.1.3.2.8.1\" style=\"font-size:90%;\">0.4921</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.10.1.3.2.9\" style=\"padding-top:0.9pt;padding-bottom:0.9pt;\"><span class=\"ltx_text\" id=\"S3.T1.10.1.3.2.9.1\" style=\"font-size:90%;\">0.4932</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.10.1.3.2.10\" style=\"padding-top:0.9pt;padding-bottom:0.9pt;\"><span class=\"ltx_text\" id=\"S3.T1.10.1.3.2.10.1\" style=\"font-size:90%;\">0.4935</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.10.1.3.2.11\" style=\"padding-top:0.9pt;padding-bottom:0.9pt;\"><span class=\"ltx_text\" id=\"S3.T1.10.1.3.2.11.1\" style=\"font-size:90%;\">0.5085</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.10.1.3.2.12\" style=\"padding-top:0.9pt;padding-bottom:0.9pt;\"><span class=\"ltx_text\" id=\"S3.T1.10.1.3.2.12.1\" style=\"font-size:90%;\">0.4885</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.10.1.3.2.13\" style=\"padding-top:0.9pt;padding-bottom:0.9pt;\"><span class=\"ltx_text\" id=\"S3.T1.10.1.3.2.13.1\" style=\"font-size:90%;\">0.5012</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T1.10.1.4.3\">\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.10.1.4.3.1\" 
style=\"padding-top:0.9pt;padding-bottom:0.9pt;\">\n<span class=\"ltx_text\" id=\"S3.T1.10.1.4.3.1.1\" style=\"font-size:90%;\">PWRN </span><cite class=\"ltx_cite ltx_citemacro_cite\"><span class=\"ltx_text\" id=\"S3.T1.10.1.4.3.1.2.1\" style=\"font-size:90%;\">[</span><a class=\"ltx_ref\" href=\"#bib.bib35\" title=\"\">35</a><span class=\"ltx_text\" id=\"S3.T1.10.1.4.3.1.3.2\" style=\"font-size:90%;\">]</span></cite>\n</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.10.1.4.3.2\" style=\"padding-top:0.9pt;padding-bottom:0.9pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T1.10.1.4.3.2.1\" style=\"font-size:90%;\">1.0000</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.10.1.4.3.3\" style=\"padding-top:0.9pt;padding-bottom:0.9pt;\"><span class=\"ltx_text\" id=\"S3.T1.10.1.4.3.3.1\" style=\"font-size:90%;\">0.8473</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.10.1.4.3.4\" style=\"padding-top:0.9pt;padding-bottom:0.9pt;\"><span class=\"ltx_text\" id=\"S3.T1.10.1.4.3.4.1\" style=\"font-size:90%;\">0.6392</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.10.1.4.3.5\" style=\"padding-top:0.9pt;padding-bottom:0.9pt;\"><span class=\"ltx_text\" id=\"S3.T1.10.1.4.3.5.1\" style=\"font-size:90%;\">0.8082</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.10.1.4.3.6\" style=\"padding-top:0.9pt;padding-bottom:0.9pt;\"><span class=\"ltx_text\" id=\"S3.T1.10.1.4.3.6.1\" style=\"font-size:90%;\">0.7959</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.10.1.4.3.7\" style=\"padding-top:0.9pt;padding-bottom:0.9pt;\"><span class=\"ltx_text\" id=\"S3.T1.10.1.4.3.7.1\" style=\"font-size:90%;\">0.4470</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.10.1.4.3.8\" style=\"padding-top:0.9pt;padding-bottom:0.9pt;\"><span class=\"ltx_text\" id=\"S3.T1.10.1.4.3.8.1\" style=\"font-size:90%;\">0.7430</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.10.1.4.3.9\" style=\"padding-top:0.9pt;padding-bottom:0.9pt;\"><span class=\"ltx_text\" id=\"S3.T1.10.1.4.3.9.1\" style=\"font-size:90%;\">0.7907</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.10.1.4.3.10\" style=\"padding-top:0.9pt;padding-bottom:0.9pt;\"><span class=\"ltx_text\" id=\"S3.T1.10.1.4.3.10.1\" style=\"font-size:90%;\">0.6004</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.10.1.4.3.11\" style=\"padding-top:0.9pt;padding-bottom:0.9pt;\"><span class=\"ltx_text\" id=\"S3.T1.10.1.4.3.11.1\" style=\"font-size:90%;\">0.7255</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.10.1.4.3.12\" style=\"padding-top:0.9pt;padding-bottom:0.9pt;\"><span class=\"ltx_text\" id=\"S3.T1.10.1.4.3.12.1\" style=\"font-size:90%;\">0.7743</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.10.1.4.3.13\" style=\"padding-top:0.9pt;padding-bottom:0.9pt;\"><span class=\"ltx_text\" id=\"S3.T1.10.1.4.3.13.1\" style=\"font-size:90%;\">0.8291</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T1.10.1.5.4\">\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.10.1.5.4.1\" style=\"padding-top:0.9pt;padding-bottom:0.9pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T1.10.1.5.4.1.1\" style=\"font-size:90%;\">Ours\u00a0(9)</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.10.1.5.4.2\" style=\"padding-top:0.9pt;padding-bottom:0.9pt;\"><span class=\"ltx_text\" id=\"S3.T1.10.1.5.4.2.1\" style=\"font-size:90%;\">0.9737</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.10.1.5.4.3\" style=\"padding-top:0.9pt;padding-bottom:0.9pt;\"><span 
class=\"ltx_text\" id=\"S3.T1.10.1.5.4.3.1\" style=\"font-size:90%;\">0.9650</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.10.1.5.4.4\" style=\"padding-top:0.9pt;padding-bottom:0.9pt;\"><span class=\"ltx_text\" id=\"S3.T1.10.1.5.4.4.1\" style=\"font-size:90%;\">0.8510</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.10.1.5.4.5\" style=\"padding-top:0.9pt;padding-bottom:0.9pt;\"><span class=\"ltx_text\" id=\"S3.T1.10.1.5.4.5.1\" style=\"font-size:90%;\">0.9393</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.10.1.5.4.6\" style=\"padding-top:0.9pt;padding-bottom:0.9pt;\"><span class=\"ltx_text\" id=\"S3.T1.10.1.5.4.6.1\" style=\"font-size:90%;\">0.9409</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.10.1.5.4.7\" style=\"padding-top:0.9pt;padding-bottom:0.9pt;\"><span class=\"ltx_text\" id=\"S3.T1.10.1.5.4.7.1\" style=\"font-size:90%;\">0.8959</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.10.1.5.4.8\" style=\"padding-top:0.9pt;padding-bottom:0.9pt;\"><span class=\"ltx_text\" id=\"S3.T1.10.1.5.4.8.1\" style=\"font-size:90%;\">0.8792</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.10.1.5.4.9\" style=\"padding-top:0.9pt;padding-bottom:0.9pt;\"><span class=\"ltx_text\" id=\"S3.T1.10.1.5.4.9.1\" style=\"font-size:90%;\">0.9566</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.10.1.5.4.10\" style=\"padding-top:0.9pt;padding-bottom:0.9pt;\"><span class=\"ltx_text\" id=\"S3.T1.10.1.5.4.10.1\" style=\"font-size:90%;\">0.9414</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.10.1.5.4.11\" style=\"padding-top:0.9pt;padding-bottom:0.9pt;\"><span class=\"ltx_text\" id=\"S3.T1.10.1.5.4.11.1\" style=\"font-size:90%;\">0.9374</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.10.1.5.4.12\" style=\"padding-top:0.9pt;padding-bottom:0.9pt;\"><span class=\"ltx_text\" id=\"S3.T1.10.1.5.4.12.1\" style=\"font-size:90%;\">0.9521</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.10.1.5.4.13\" style=\"padding-top:0.9pt;padding-bottom:0.9pt;\"><span class=\"ltx_text\" id=\"S3.T1.10.1.5.4.13.1\" style=\"font-size:90%;\">0.9059</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T1.10.1.6.5\">\n<td class=\"ltx_td ltx_align_center ltx_border_b\" id=\"S3.T1.10.1.6.5.1\" style=\"padding-top:0.9pt;padding-bottom:0.9pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T1.10.1.6.5.1.1\" style=\"font-size:90%;\">Ours\u00a0(18)</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_b\" id=\"S3.T1.10.1.6.5.2\" style=\"padding-top:0.9pt;padding-bottom:0.9pt;\"><span class=\"ltx_text\" id=\"S3.T1.10.1.6.5.2.1\" style=\"font-size:90%;\">0.9942</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_b\" id=\"S3.T1.10.1.6.5.3\" style=\"padding-top:0.9pt;padding-bottom:0.9pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T1.10.1.6.5.3.1\" style=\"font-size:90%;\">0.9665</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_b\" id=\"S3.T1.10.1.6.5.4\" style=\"padding-top:0.9pt;padding-bottom:0.9pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T1.10.1.6.5.4.1\" style=\"font-size:90%;\">0.9486</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_b\" id=\"S3.T1.10.1.6.5.5\" style=\"padding-top:0.9pt;padding-bottom:0.9pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T1.10.1.6.5.5.1\" style=\"font-size:90%;\">0.9565</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_b\" id=\"S3.T1.10.1.6.5.6\" style=\"padding-top:0.9pt;padding-bottom:0.9pt;\"><span class=\"ltx_text 
ltx_font_bold\" id=\"S3.T1.10.1.6.5.6.1\" style=\"font-size:90%;\">0.9605</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_b\" id=\"S3.T1.10.1.6.5.7\" style=\"padding-top:0.9pt;padding-bottom:0.9pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T1.10.1.6.5.7.1\" style=\"font-size:90%;\">0.9544</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_b\" id=\"S3.T1.10.1.6.5.8\" style=\"padding-top:0.9pt;padding-bottom:0.9pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T1.10.1.6.5.8.1\" style=\"font-size:90%;\">0.9634</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_b\" id=\"S3.T1.10.1.6.5.9\" style=\"padding-top:0.9pt;padding-bottom:0.9pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T1.10.1.6.5.9.1\" style=\"font-size:90%;\">0.9642</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_b\" id=\"S3.T1.10.1.6.5.10\" style=\"padding-top:0.9pt;padding-bottom:0.9pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T1.10.1.6.5.10.1\" style=\"font-size:90%;\">0.9587</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_b\" id=\"S3.T1.10.1.6.5.11\" style=\"padding-top:0.9pt;padding-bottom:0.9pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T1.10.1.6.5.11.1\" style=\"font-size:90%;\">0.9623</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_b\" id=\"S3.T1.10.1.6.5.12\" style=\"padding-top:0.9pt;padding-bottom:0.9pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T1.10.1.6.5.12.1\" style=\"font-size:90%;\">0.9612</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_b\" id=\"S3.T1.10.1.6.5.13\" style=\"padding-top:0.9pt;padding-bottom:0.9pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T1.10.1.6.5.13.1\" style=\"font-size:90%;\">0.9588</span></td>\n</tr>\n</tbody>\n</table>\n</span></div>\n</figure>",
82
+ "capture": "Table 1: Comparison Results on Extraction Accuracy. \u201c-\u201d means \u201cWithout Distortion\u201d. () represents Bits Per Frame (BPF). Under different distortion scenarios, our method demonstrates superior performance in comparison."
83
+ },
84
+ "2": {
85
+ "table_html": "<figure class=\"ltx_table\" id=\"S4.T2\">\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T2.3.1.1\">Table 2</span>: </span><span class=\"ltx_text ltx_font_bold\" id=\"S4.T2.4.2\">Ablation Study on Different Embedding Positions of Secret Message.</span> Evaluation metric: Accuracy.</figcaption>\n<div class=\"ltx_inline-block ltx_align_center ltx_transformed_outer\" id=\"S4.T2.5\" style=\"width:253.2pt;height:87pt;vertical-align:-0.0pt;\"><span class=\"ltx_transformed_inner\" style=\"transform:translate(-4.4pt,1.5pt) scale(0.966228113126762,0.966228113126762) ;\">\n<table class=\"ltx_tabular ltx_guessed_headers ltx_align_middle\" id=\"S4.T2.5.1\">\n<thead class=\"ltx_thead\">\n<tr class=\"ltx_tr\" id=\"S4.T2.5.1.1.1\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"S4.T2.5.1.1.1.1\" style=\"padding-top:1pt;padding-bottom:1pt;\">Method</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"S4.T2.5.1.1.1.2\" style=\"padding-top:1pt;padding-bottom:1pt;\">PNG</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"S4.T2.5.1.1.1.3\" style=\"padding-top:1pt;padding-bottom:1pt;\">Resize\u00a0(0.5)</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"S4.T2.5.1.1.1.4\" style=\"padding-top:1pt;padding-bottom:1pt;\">H.264 CRF</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"S4.T2.5.1.1.1.5\" style=\"padding-top:1pt;padding-bottom:1pt;\">Motion Blur</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"S4.T2.5.1.1.1.6\" style=\"padding-top:1pt;padding-bottom:1pt;\">Shot Noise</th>\n</tr>\n</thead>\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S4.T2.5.1.2.1\">\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T2.5.1.2.1.1\" style=\"padding-top:1pt;padding-bottom:1pt;\">(a)</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T2.5.1.2.1.2\" style=\"padding-top:1pt;padding-bottom:1pt;\">0.965</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T2.5.1.2.1.3\" style=\"padding-top:1pt;padding-bottom:1pt;\">0.918</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T2.5.1.2.1.4\" style=\"padding-top:1pt;padding-bottom:1pt;\">0.951</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T2.5.1.2.1.5\" style=\"padding-top:1pt;padding-bottom:1pt;\">0.941</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T2.5.1.2.1.6\" style=\"padding-top:1pt;padding-bottom:1pt;\">0.924</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T2.5.1.3.2\">\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.5.1.3.2.1\" style=\"padding-top:1pt;padding-bottom:1pt;\">(b)</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.5.1.3.2.2\" style=\"padding-top:1pt;padding-bottom:1pt;\">0.875</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.5.1.3.2.3\" style=\"padding-top:1pt;padding-bottom:1pt;\">0.722</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.5.1.3.2.4\" style=\"padding-top:1pt;padding-bottom:1pt;\">0.858</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.5.1.3.2.5\" style=\"padding-top:1pt;padding-bottom:1pt;\">0.848</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.5.1.3.2.6\" style=\"padding-top:1pt;padding-bottom:1pt;\">0.820</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T2.5.1.4.3\">\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.5.1.4.3.1\" 
style=\"padding-top:1pt;padding-bottom:1pt;\">(c)</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.5.1.4.3.2\" style=\"padding-top:1pt;padding-bottom:1pt;\">0.939</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.5.1.4.3.3\" style=\"padding-top:1pt;padding-bottom:1pt;\">0.856</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.5.1.4.3.4\" style=\"padding-top:1pt;padding-bottom:1pt;\">0.894</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.5.1.4.3.5\" style=\"padding-top:1pt;padding-bottom:1pt;\">0.861</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.5.1.4.3.6\" style=\"padding-top:1pt;padding-bottom:1pt;\">0.893</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T2.5.1.5.4\">\n<td class=\"ltx_td ltx_align_center ltx_border_b\" id=\"S4.T2.5.1.5.4.1\" style=\"padding-top:1pt;padding-bottom:1pt;\">RoGVS</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b\" id=\"S4.T2.5.1.5.4.2\" style=\"padding-top:1pt;padding-bottom:1pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T2.5.1.5.4.2.1\">0.967</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_b\" id=\"S4.T2.5.1.5.4.3\" style=\"padding-top:1pt;padding-bottom:1pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T2.5.1.5.4.3.1\">0.949</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_b\" id=\"S4.T2.5.1.5.4.4\" style=\"padding-top:1pt;padding-bottom:1pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T2.5.1.5.4.4.1\">0.963</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_b\" id=\"S4.T2.5.1.5.4.5\" style=\"padding-top:1pt;padding-bottom:1pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T2.5.1.5.4.5.1\">0.959</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_b\" id=\"S4.T2.5.1.5.4.6\" style=\"padding-top:1pt;padding-bottom:1pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T2.5.1.5.4.6.1\">0.959</span></td>\n</tr>\n</tbody>\n</table>\n</span></div>\n</figure>",
86
+ "capture": "Table 2: Ablation Study on Different Embedding Positions of Secret Message. Evaluation metric: Accuracy."
87
+ },
88
+ "3": {
89
+ "table_html": "<figure class=\"ltx_table\" id=\"S4.T3\">\n<figcaption class=\"ltx_caption\" style=\"font-size:90%;\"><span class=\"ltx_tag ltx_tag_table\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T3.9.2.1\">Table 3</span>: </span><span class=\"ltx_text ltx_font_bold\" id=\"S4.T3.2.1\">Ablation Study for on Extraction Accuracy. </span></figcaption>\n<div class=\"ltx_inline-block ltx_align_center ltx_transformed_outer\" id=\"S4.T3.6\" style=\"width:238.0pt;height:48.8pt;vertical-align:-0.0pt;\"><span class=\"ltx_transformed_inner\" style=\"transform:translate(31.3pt,-6.4pt) scale(1.35613646287726,1.35613646287726) ;\">\n<table class=\"ltx_tabular ltx_guessed_headers ltx_align_middle\" id=\"S4.T3.6.4\">\n<thead class=\"ltx_thead\">\n<tr class=\"ltx_tr\" id=\"S4.T3.6.4.4\">\n<th class=\"ltx_td ltx_th ltx_th_column ltx_border_r ltx_border_t\" id=\"S4.T3.6.4.4.5\" style=\"padding-top:0.9pt;padding-bottom:0.9pt;\"></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r ltx_border_t\" id=\"S4.T3.3.1.1.1\" style=\"padding-top:0.9pt;padding-bottom:0.9pt;\"></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r ltx_border_t\" id=\"S4.T3.4.2.2.2\" style=\"padding-top:0.9pt;padding-bottom:0.9pt;\"></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r ltx_border_t\" id=\"S4.T3.5.3.3.3\" style=\"padding-top:0.9pt;padding-bottom:0.9pt;\"></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"S4.T3.6.4.4.4\" style=\"padding-top:0.9pt;padding-bottom:0.9pt;\"></th>\n</tr>\n</thead>\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S4.T3.6.4.5.1\">\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r ltx_border_t\" id=\"S4.T3.6.4.5.1.1\" style=\"padding-top:0.9pt;padding-bottom:0.9pt;\"><span class=\"ltx_text\" id=\"S4.T3.6.4.5.1.1.1\" style=\"font-size:90%;\">Accuracy</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r ltx_border_t\" id=\"S4.T3.6.4.5.1.2\" style=\"padding-top:0.9pt;padding-bottom:0.9pt;\"><span class=\"ltx_text\" id=\"S4.T3.6.4.5.1.2.1\" style=\"font-size:90%;\">0.8414</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r ltx_border_t\" id=\"S4.T3.6.4.5.1.3\" style=\"padding-top:0.9pt;padding-bottom:0.9pt;\"><span class=\"ltx_text\" id=\"S4.T3.6.4.5.1.3.1\" style=\"font-size:90%;\">0.5885</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r ltx_border_t\" id=\"S4.T3.6.4.5.1.4\" style=\"padding-top:0.9pt;padding-bottom:0.9pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T3.6.4.5.1.4.1\" style=\"font-size:90%;\">0.8607</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_t\" id=\"S4.T3.6.4.5.1.5\" style=\"padding-top:0.9pt;padding-bottom:0.9pt;\"><span class=\"ltx_text\" id=\"S4.T3.6.4.5.1.5.1\" style=\"font-size:90%;\">0.5160</span></td>\n</tr>\n</tbody>\n</table>\n</span></div>\n</figure>",
90
+ "capture": "Table 3: Ablation Study on Extraction Accuracy."
91
+ },
92
+ "4": {
93
+ "table_html": "<figure class=\"ltx_table\" id=\"S4.T4\">\n<figcaption class=\"ltx_caption\" style=\"font-size:90%;\"><span class=\"ltx_tag ltx_tag_table\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T4.5.1.1\">Table 4</span>: </span><span class=\"ltx_text ltx_font_bold\" id=\"S4.T4.6.2\">Quantitative Security Analysis. </span>Evaluation metric: AUC. Closer to 0.5 indicates higher performance.</figcaption>\n<div class=\"ltx_inline-block ltx_align_center ltx_transformed_outer\" id=\"S4.T4.7\" style=\"width:227.9pt;height:62.3pt;vertical-align:-0.0pt;\"><span class=\"ltx_transformed_inner\" style=\"transform:translate(-17.8pt,4.9pt) scale(0.864676923312054,0.864676923312054) ;\">\n<table class=\"ltx_tabular ltx_guessed_headers ltx_align_middle\" id=\"S4.T4.7.1\">\n<thead class=\"ltx_thead\">\n<tr class=\"ltx_tr\" id=\"S4.T4.7.1.1.1\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r ltx_border_t\" id=\"S4.T4.7.1.1.1.1\" style=\"padding-top:0.9pt;padding-bottom:0.9pt;\"><span class=\"ltx_text\" id=\"S4.T4.7.1.1.1.1.1\" style=\"font-size:90%;\">Detection method</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r ltx_border_t\" id=\"S4.T4.7.1.1.1.2\" style=\"padding-top:0.9pt;padding-bottom:0.9pt;\"><span class=\"ltx_text\" id=\"S4.T4.7.1.1.1.2.1\" style=\"font-size:90%;\">HiDDeN</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r ltx_border_t\" id=\"S4.T4.7.1.1.1.3\" style=\"padding-top:0.9pt;padding-bottom:0.9pt;\"><span class=\"ltx_text\" id=\"S4.T4.7.1.1.1.3.1\" style=\"font-size:90%;\">LSB</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r ltx_border_t\" id=\"S4.T4.7.1.1.1.4\" style=\"padding-top:0.9pt;padding-bottom:0.9pt;\"><span class=\"ltx_text\" id=\"S4.T4.7.1.1.1.4.1\" style=\"font-size:90%;\">PWRN</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"S4.T4.7.1.1.1.5\" style=\"padding-top:0.9pt;padding-bottom:0.9pt;\"><span class=\"ltx_text\" id=\"S4.T4.7.1.1.1.5.1\" style=\"font-size:90%;\">ours</span></th>\n</tr>\n</thead>\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S4.T4.7.1.2.1\">\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T4.7.1.2.1.1\" style=\"padding-top:0.9pt;padding-bottom:0.9pt;\">\n<span class=\"ltx_text\" id=\"S4.T4.7.1.2.1.1.1\" style=\"font-size:90%;\">Zhai et al. 
\u00a0</span><cite class=\"ltx_cite ltx_citemacro_cite\"><span class=\"ltx_text\" id=\"S4.T4.7.1.2.1.1.2.1\" style=\"font-size:90%;\">[</span><a class=\"ltx_ref\" href=\"#bib.bib62\" title=\"\">62</a><span class=\"ltx_text\" id=\"S4.T4.7.1.2.1.1.3.2\" style=\"font-size:90%;\">]</span></cite>\n</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T4.7.1.2.1.2\" style=\"padding-top:0.9pt;padding-bottom:0.9pt;\"><span class=\"ltx_text\" id=\"S4.T4.7.1.2.1.2.1\" style=\"font-size:90%;\">0.5312</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T4.7.1.2.1.3\" style=\"padding-top:0.9pt;padding-bottom:0.9pt;\"><span class=\"ltx_text\" id=\"S4.T4.7.1.2.1.3.1\" style=\"font-size:90%;\">0.5423</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T4.7.1.2.1.4\" style=\"padding-top:0.9pt;padding-bottom:0.9pt;\"><span class=\"ltx_text\" id=\"S4.T4.7.1.2.1.4.1\" style=\"font-size:90%;\">0.5456</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T4.7.1.2.1.5\" style=\"padding-top:0.9pt;padding-bottom:0.9pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T4.7.1.2.1.5.1\" style=\"font-size:90%;\">0.5245</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T4.7.1.3.2\">\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T4.7.1.3.2.1\" style=\"padding-top:0.9pt;padding-bottom:0.9pt;\">\n<span class=\"ltx_text\" id=\"S4.T4.7.1.3.2.1.1\" style=\"font-size:90%;\">Li et al. \u00a0</span><cite class=\"ltx_cite ltx_citemacro_cite\"><span class=\"ltx_text\" id=\"S4.T4.7.1.3.2.1.2.1\" style=\"font-size:90%;\">[</span><a class=\"ltx_ref\" href=\"#bib.bib63\" title=\"\">63</a><span class=\"ltx_text\" id=\"S4.T4.7.1.3.2.1.3.2\" style=\"font-size:90%;\">]</span></cite>\n</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T4.7.1.3.2.2\" style=\"padding-top:0.9pt;padding-bottom:0.9pt;\"><span class=\"ltx_text\" id=\"S4.T4.7.1.3.2.2.1\" style=\"font-size:90%;\">0.5416</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T4.7.1.3.2.3\" style=\"padding-top:0.9pt;padding-bottom:0.9pt;\"><span class=\"ltx_text\" id=\"S4.T4.7.1.3.2.3.1\" style=\"font-size:90%;\">0.5467</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T4.7.1.3.2.4\" style=\"padding-top:0.9pt;padding-bottom:0.9pt;\"><span class=\"ltx_text\" id=\"S4.T4.7.1.3.2.4.1\" style=\"font-size:90%;\">0.5411</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T4.7.1.3.2.5\" style=\"padding-top:0.9pt;padding-bottom:0.9pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T4.7.1.3.2.5.1\" style=\"font-size:90%;\">0.5178</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T4.7.1.4.3\">\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r\" id=\"S4.T4.7.1.4.3.1\" style=\"padding-top:0.9pt;padding-bottom:0.9pt;\">\n<span class=\"ltx_text\" id=\"S4.T4.7.1.4.3.1.1\" style=\"font-size:90%;\">Sheng et al. 
\u00a0</span><cite class=\"ltx_cite ltx_citemacro_cite\"><span class=\"ltx_text\" id=\"S4.T4.7.1.4.3.1.2.1\" style=\"font-size:90%;\">[</span><a class=\"ltx_ref\" href=\"#bib.bib64\" title=\"\">64</a><span class=\"ltx_text\" id=\"S4.T4.7.1.4.3.1.3.2\" style=\"font-size:90%;\">]</span></cite>\n</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r\" id=\"S4.T4.7.1.4.3.2\" style=\"padding-top:0.9pt;padding-bottom:0.9pt;\"><span class=\"ltx_text\" id=\"S4.T4.7.1.4.3.2.1\" style=\"font-size:90%;\">0.5309</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r\" id=\"S4.T4.7.1.4.3.3\" style=\"padding-top:0.9pt;padding-bottom:0.9pt;\"><span class=\"ltx_text\" id=\"S4.T4.7.1.4.3.3.1\" style=\"font-size:90%;\">0.5189</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r\" id=\"S4.T4.7.1.4.3.4\" style=\"padding-top:0.9pt;padding-bottom:0.9pt;\"><span class=\"ltx_text\" id=\"S4.T4.7.1.4.3.4.1\" style=\"font-size:90%;\">0.5167</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_b\" id=\"S4.T4.7.1.4.3.5\" style=\"padding-top:0.9pt;padding-bottom:0.9pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T4.7.1.4.3.5.1\" style=\"font-size:90%;\">0.5146</span></td>\n</tr>\n</tbody>\n</table>\n</span></div>\n</figure>",
94
+ "capture": "Table 4: Quantitative Security Analysis. Evaluation metric: AUC. Closer to 0.5 indicates higher performance."
95
+ }
96
+ },
97
+ "image_paths": {
98
+ "1": {
99
+ "figure_path": "2401.00652v1_figure_1.png",
100
+ "caption": "Fig. 1: Methodology of RoGVS. We modulate semantic feature with secret message to edit videos, such as the identity feature in facial videos. Our RoGVS can generate high-quality stego videos even in the presence of various distortions.",
101
+ "url": "http://arxiv.org/html/2401.00652v1/x1.png"
102
+ },
103
+ "2": {
104
+ "figure_path": "2401.00652v1_figure_2.png",
105
+ "caption": "Fig. 2: The Framework of the Proposed RoGVS. \ud835\udc6cmsubscript\ud835\udc6c\ud835\udc5a\\bm{E}_{m}bold_italic_E start_POSTSUBSCRIPT italic_m end_POSTSUBSCRIPT is secret message encoder. \ud835\udc6ci\u2062dsubscript\ud835\udc6c\ud835\udc56\ud835\udc51\\bm{E}_{id}bold_italic_E start_POSTSUBSCRIPT italic_i italic_d end_POSTSUBSCRIPT is identity feature extractor. \ud835\udc6c\u03d5subscript\ud835\udc6citalic-\u03d5\\bm{E}_{\\phi}bold_italic_E start_POSTSUBSCRIPT italic_\u03d5 end_POSTSUBSCRIPT is video feature extractor. \ud835\udc6b\u03c6subscript\ud835\udc6b\ud835\udf11\\bm{D}_{\\varphi}bold_italic_D start_POSTSUBSCRIPT italic_\u03c6 end_POSTSUBSCRIPT represents a video decoder. \ud835\udc6ce\u2062x\u2062tsubscript\ud835\udc6c\ud835\udc52\ud835\udc65\ud835\udc61\\bm{E}_{ext}bold_italic_E start_POSTSUBSCRIPT italic_e italic_x italic_t end_POSTSUBSCRIPT represents secret message extractor.",
106
+ "url": "http://arxiv.org/html/2401.00652v1/x2.png"
107
+ },
108
+ "3": {
109
+ "figure_path": "2401.00652v1_figure_3.png",
110
+ "caption": "Fig. 3: Qualitative Analysis of Stego Videos. Original represents frames within the cover videos.",
111
+ "url": "http://arxiv.org/html/2401.00652v1/x3.png"
112
+ },
113
+ "4": {
114
+ "figure_path": "2401.00652v1_figure_4.png",
115
+ "caption": "Fig. 4: Exampled Generated Stego Video Frames. Left: Vggface2. Right: FFHQ.",
116
+ "url": "http://arxiv.org/html/2401.00652v1/extracted/5324870/figure/1.jpg"
117
+ },
118
+ "5": {
119
+ "figure_path": "2401.00652v1_figure_5.png",
120
+ "caption": "Fig. 5: Ablation Results on Attacking Layer. The horizontal axis represents distortion types, corresponding to the order listed in Table 1.",
121
+ "url": "http://arxiv.org/html/2401.00652v1/x4.png"
122
+ }
123
+ },
124
+ "validation": true,
125
+ "references": [
126
+ {
127
+ "1": {
128
+ "title": "\u201cThe frobnicatable foo filter,\u201d ACM MM 2013 submission ID 324. Supplied as additional material acmmm13.pdf.",
129
+ "author": "Authors,",
130
+ "venue": null,
131
+ "url": null
132
+ }
133
+ },
134
+ {
135
+ "2": {
136
+ "title": "\u201cFrobnication tutorial,\u201d 2012,",
137
+ "author": "Authors,",
138
+ "venue": "Supplied as additional material tr.pdf.",
139
+ "url": null
140
+ }
141
+ },
142
+ {
143
+ "3": {
144
+ "title": "\u201cAn algorithm for the machine computation of complex Fourier series,\u201d",
145
+ "author": "J. W. Cooley and J. W. Tukey,",
146
+ "venue": "Math. Comp., vol. 19, pp. 297\u2013301, Apr. 1965.",
147
+ "url": null
148
+ }
149
+ },
150
+ {
151
+ "4": {
152
+ "title": "\u201cAdaptive filter theory,\u201d",
153
+ "author": "S. Haykin,",
154
+ "venue": "Information and System Science. Prentice Hall, 4th edition, 2002.",
155
+ "url": null
156
+ }
157
+ },
158
+ {
159
+ "5": {
160
+ "title": "\u201cDos and don\u2019ts of technical writing,\u201d",
161
+ "author": "Dennis R. Morgan,",
162
+ "venue": "IEEE Potentials, vol. 24, no. 3, pp. 22\u201325, Aug. 2005.",
163
+ "url": null
164
+ }
165
+ },
166
+ {
167
+ "6": {
168
+ "title": "\u201cDesigning steganographic distortion using directional filters,\u201d",
169
+ "author": "Vojt\u011bch Holub and Jessica Fridrich,",
170
+ "venue": "in 2012 IEEE International workshop on information forensics and security (WIFS). IEEE, 2012, pp. 234\u2013239.",
171
+ "url": null
172
+ }
173
+ },
174
+ {
175
+ "7": {
176
+ "title": "\u201cUniversal distortion function for steganography in an arbitrary domain,\u201d",
177
+ "author": "Vojt\u011bch Holub, Jessica Fridrich, and Tom\u00e1\u0161 Denemark,",
178
+ "venue": "EURASIP Journal on Information Security, vol. 2014, pp. 1\u201313, 2014.",
179
+ "url": null
180
+ }
181
+ },
182
+ {
183
+ "8": {
184
+ "title": "\u201cA new cost function for spatial image steganography,\u201d",
185
+ "author": "Bin Li, Ming Wang, Jiwu Huang, and Xiaolong Li,",
186
+ "venue": "in 2014 IEEE International conference on image processing (ICIP). IEEE, 2014, pp. 4206\u20134210.",
187
+ "url": null
188
+ }
189
+ },
190
+ {
191
+ "9": {
192
+ "title": "\u201cRedmark: Framework for residual diffusion watermarking based on deep networks,\u201d",
193
+ "author": "Mahdi Ahmadi, Alireza Norouzi, Nader Karimi, Shadrokh Samavi, and Ali Emami,",
194
+ "venue": "Expert Systems with Applications, vol. 146, pp. 113157, 2020.",
195
+ "url": null
196
+ }
197
+ },
198
+ {
199
+ "10": {
200
+ "title": "\u201cHinet: deep image hiding by invertible network,\u201d",
201
+ "author": "Junpeng Jing, Xin Deng, Mai Xu, Jianyi Wang, and Zhenyu Guan,",
202
+ "venue": "in Proceedings of the IEEE/CVF international conference on computer vision, 2021, pp. 4733\u20134742.",
203
+ "url": null
204
+ }
205
+ },
206
+ {
207
+ "11": {
208
+ "title": "\u201cLarge-capacity image steganography based on invertible neural networks,\u201d",
209
+ "author": "Shao-Ping Lu, Rong Wang, Tao Zhong, and Paul L Rosin,",
210
+ "venue": "in Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, 2021, pp. 10816\u201310825.",
211
+ "url": null
212
+ }
213
+ },
214
+ {
215
+ "12": {
216
+ "title": "\u201cHidden: Hiding data with deep networks,\u201d",
217
+ "author": "Jiren Zhu, Russell Kaplan, Justin Johnson, and Li Fei-Fei,",
218
+ "venue": "in Proceedings of the European conference on computer vision (ECCV), 2018, pp. 657\u2013672.",
219
+ "url": null
220
+ }
221
+ },
222
+ {
223
+ "13": {
224
+ "title": "\u201cLarge scale gan training for high fidelity natural image synthesis,\u201d",
225
+ "author": "Andrew Brock, Jeff Donahue, and Karen Simonyan,",
226
+ "venue": "arXiv preprint arXiv:1809.11096, 2018.",
227
+ "url": null
228
+ }
229
+ },
230
+ {
231
+ "14": {
232
+ "title": "\u201cProgressive growing of gans for improved quality, stability, and variation,\u201d",
233
+ "author": "Tero Karras, Timo Aila, Samuli Laine, and Jaakko Lehtinen,",
234
+ "venue": "arXiv preprint arXiv:1710.10196, 2017.",
235
+ "url": null
236
+ }
237
+ },
238
+ {
239
+ "15": {
240
+ "title": "\u201cA style-based generator architecture for generative adversarial networks,\u201d",
241
+ "author": "Tero Karras, Samuli Laine, and Timo Aila,",
242
+ "venue": "in Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, 2019, pp. 4401\u20134410.",
243
+ "url": null
244
+ }
245
+ },
246
+ {
247
+ "16": {
248
+ "title": "\u201cGenerative steganography network,\u201d",
249
+ "author": "Ping Wei, Sheng Li, Xinpeng Zhang, Ge Luo, Zhenxing Qian, and Qing Zhou,",
250
+ "venue": "in Proceedings of the 30th ACM International Conference on Multimedia, 2022, pp. 1621\u20131629.",
251
+ "url": null
252
+ }
253
+ },
254
+ {
255
+ "17": {
256
+ "title": "\u201cSelf-attention generative adversarial networks,\u201d",
257
+ "author": "Han Zhang, Ian Goodfellow, Dimitris Metaxas, and Augustus Odena,",
258
+ "venue": "in International conference on machine learning. PMLR, 2019, pp. 7354\u20137363.",
259
+ "url": null
260
+ }
261
+ },
262
+ {
263
+ "18": {
264
+ "title": "\u201cA novel image steganography method via deep convolutional generative adversarial networks,\u201d",
265
+ "author": "Donghui Hu, Liang Wang, Wenjie Jiang, Shuli Zheng, and Bin Li,",
266
+ "venue": "IEEE access, vol. 6, pp. 38303\u201338314, 2018.",
267
+ "url": null
268
+ }
269
+ },
270
+ {
271
+ "19": {
272
+ "title": "\u201cA generative method for steganography by cover synthesis with auxiliary semantics,\u201d",
273
+ "author": "Zhuo Zhang, Guangyuan Fu, Rongrong Ni, Jia Liu, and Xiaoyuan Yang,",
274
+ "venue": "Tsinghua Science and Technology, vol. 25, no. 4, pp. 516\u2013527, 2020.",
275
+ "url": null
276
+ }
277
+ },
278
+ {
279
+ "20": {
280
+ "title": "\u201cGenerative steganography by sampling,\u201d",
281
+ "author": "Zhuo Zhang, Jia Liu, Yan Ke, Yu Lei, Jun Li, Minqing Zhang, and Xiaoyuan Yang,",
282
+ "venue": "IEEE access, vol. 7, pp. 118586\u2013118597, 2019.",
283
+ "url": null
284
+ }
285
+ },
286
+ {
287
+ "21": {
288
+ "title": "\u201cFine-grained face swapping via regional gan inversion,\u201d",
289
+ "author": "Zhian Liu, Maomao Li, Yong Zhang, Cairong Wang, Qi Zhang, Jue Wang, and Yongwei Nie,",
290
+ "venue": "in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2023, pp. 8578\u20138587.",
291
+ "url": null
292
+ }
293
+ },
294
+ {
295
+ "22": {
296
+ "title": "\u201c3d-model-based face replacement in video,\u201d",
297
+ "author": "Yi-Ting Cheng, Virginia Tzeng, Yu Liang, Chuan-Chang Wang, Bing-Yu Chen, Yung-Yu Chuang, and Ming Ouhyoung,",
298
+ "venue": "in SIGGRAPH\u201909: Posters, pp. 1\u20131. 2009.",
299
+ "url": null
300
+ }
301
+ },
302
+ {
303
+ "23": {
304
+ "title": "\u201cFace2face: Real-time face capture and reenactment of rgb videos,\u201d",
305
+ "author": "Justus Thies, Michael Zollhofer, Marc Stamminger, Christian Theobalt, and Matthias Nie\u00dfner,",
306
+ "venue": "in Proceedings of the IEEE conference on computer vision and pattern recognition, 2016, pp. 2387\u20132395.",
307
+ "url": null
308
+ }
309
+ },
310
+ {
311
+ "24": {
312
+ "title": "\u201cFsgan: Subject agnostic face swapping and reenactment,\u201d",
313
+ "author": "Yuval Nirkin, Yosi Keller, and Tal Hassner,",
314
+ "venue": "in Proceedings of the IEEE/CVF international conference on computer vision, 2019, pp. 7184\u20137193.",
315
+ "url": null
316
+ }
317
+ },
318
+ {
319
+ "25": {
320
+ "title": "\u201cFaceshifter: Towards high fidelity and occlusion aware face swapping,\u201d",
321
+ "author": "Lingzhi Li, Jianmin Bao, Hao Yang, Dong Chen, and Fang Wen,",
322
+ "venue": "arXiv preprint arXiv:1912.13457, 2019.",
323
+ "url": null
324
+ }
325
+ },
326
+ {
327
+ "26": {
328
+ "title": "\u201cSimswap: An efficient framework for high fidelity face swapping,\u201d",
329
+ "author": "Renwang Chen, Xuanhong Chen, Bingbing Ni, and Yanhao Ge,",
330
+ "venue": "in Proceedings of the 28th ACM International Conference on Multimedia, 2020, pp. 2003\u20132011.",
331
+ "url": null
332
+ }
333
+ },
334
+ {
335
+ "27": {
336
+ "title": "\u201cGenerative steganography via auto-generation of semantic object contours,\u201d",
337
+ "author": "Zhili Zhou, Xiaohua Dong, Ruohan Meng, Meimin Wang, Hongyang Yan, Keping Yu, and Kim-Kwang Raymond Choo,",
338
+ "venue": "IEEE Transactions on Information Forensics and Security, 2023.",
339
+ "url": null
340
+ }
341
+ },
342
+ {
343
+ "28": {
344
+ "title": "\u201cData hiding during image processing using capsule networks,\u201d",
345
+ "author": "Zichi Wang, Guorui Feng, Hanzhou Wu, and Xinpeng Zhang,",
346
+ "venue": "Neurocomputing, vol. 537, pp. 49\u201360, 2023.",
347
+ "url": null
348
+ }
349
+ },
350
+ {
351
+ "29": {
352
+ "title": "\u201cEnhanced least significant bit algorithm for image steganography,\u201d",
353
+ "author": "Shilpa Gupta, Geeta Gujral, and Neha Aggarwal,",
354
+ "venue": "IJCEM International Journal of Computational Engineering & Management, vol. 15, no. 4, pp. 40\u201342, 2012.",
355
+ "url": null
356
+ }
357
+ },
358
+ {
359
+ "30": {
360
+ "title": "\u201cAn improved lsb based image steganography technique for rgb images,\u201d",
361
+ "author": "Amritpal Singh and Harpal Singh,",
362
+ "venue": "in 2015 IEEE International Conference on electrical, computer and communication technologies (ICECCT). IEEE, 2015, pp. 1\u20134.",
363
+ "url": null
364
+ }
365
+ },
366
+ {
367
+ "31": {
368
+ "title": "\u201cSteganogan: High capacity image steganography with gans,\u201d",
369
+ "author": "Kevin Alex Zhang, Alfredo Cuesta-Infante, Lei Xu, and Kalyan Veeramachaneni,",
370
+ "venue": "arXiv preprint arXiv:1901.03892, 2019.",
371
+ "url": null
372
+ }
373
+ },
374
+ {
375
+ "32": {
376
+ "title": "\u201cA new steganography algorithm based on color histograms for data embedding into raw video streams,\u201d",
377
+ "author": "Ozdemir Cetin and A Turan Ozcerit,",
378
+ "venue": "computers & security, vol. 28, no. 7, pp. 670\u2013682, 2009.",
379
+ "url": null
380
+ }
381
+ },
382
+ {
383
+ "33": {
384
+ "title": "\u201cA high capacity hevc steganographic algorithm using intra prediction modes in multi-sized prediction blocks,\u201d",
385
+ "author": "Yi Dong, Tanfeng Sun, and Xinghao Jiang,",
386
+ "venue": "in Digital Forensics and Watermarking: 17th International Workshop, IWDW 2018, Jeju Island, Korea, October 22-24, 2018, Proceedings 17. Springer, 2019, pp. 233\u2013247.",
387
+ "url": null
388
+ }
389
+ },
390
+ {
391
+ "34": {
392
+ "title": "\u201cA high-performance cnn-applied hevc steganography based on diamond-coded pu partition modes,\u201d",
393
+ "author": "Jindou Liu, Zhaohong Li, Xinghao Jiang, and Zhenzhen Zhang,",
394
+ "venue": "IEEE Transactions on Multimedia, vol. 24, pp. 2084\u20132097, 2021.",
395
+ "url": null
396
+ }
397
+ },
398
+ {
399
+ "35": {
400
+ "title": "\u201cAn anti-steganalysis hevc video steganography with high performance based on cnn and pu partition modes,\u201d",
401
+ "author": "Zhonghao Li, Xinghao Jiang, Yi Dong, Laijin Meng, and Tanfeng Sun,",
402
+ "venue": "IEEE Transactions on Dependable and Secure Computing, vol. 20, no. 1, pp. 606\u2013619, 2022.",
403
+ "url": null
404
+ }
405
+ },
406
+ {
407
+ "36": {
408
+ "title": "\u201cAdaptive hevc video steganography with high performance based on attention-net and pu partition modes,\u201d",
409
+ "author": "Songhan He, Dawen Xu, Lin Yang, and Weipeng Liang,",
410
+ "venue": "IEEE Transactions on Multimedia, 2023.",
411
+ "url": null
412
+ }
413
+ },
414
+ {
415
+ "37": {
416
+ "title": "\u201cArcface: Additive angular margin loss for deep face recognition,\u201d",
417
+ "author": "Jiankang Deng, Jia Guo, Niannan Xue, and Stefanos Zafeiriou,",
418
+ "venue": "in Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, 2019, pp. 4690\u20134699.",
419
+ "url": null
420
+ }
421
+ },
422
+ {
423
+ "38": {
424
+ "title": "\u201cSpatial transformer networks,\u201d",
425
+ "author": "Max Jaderberg, Karen Simonyan, Andrew Zisserman, et al.,",
426
+ "venue": "Advances in neural information processing systems, vol. 28, 2015.",
427
+ "url": null
428
+ }
429
+ },
430
+ {
431
+ "39": {
432
+ "title": "\u201cSystematic literature review and analysis for arabic text steganography method practically,\u201d",
433
+ "author": "Nuur Alifah Roslan, Nur Izura Udzir, Ramlan Mahmod, and Adnan Gutub,",
434
+ "venue": "Egyptian Informatics Journal, 2022.",
435
+ "url": null
436
+ }
437
+ },
438
+ {
439
+ "40": {
440
+ "title": "\u201cA coverless image steganography based on robust image wavelet hashing,\u201d",
441
+ "author": "Nadia A Karim, Suhad A Ali, and Majid Jabbar Jawad,",
442
+ "venue": "TELKOMNIKA (Telecommunication Computing Electronics and Control), vol. 20, no. 6, pp. 1317\u20131325, 2022.",
443
+ "url": null
444
+ }
445
+ },
446
+ {
447
+ "41": {
448
+ "title": "\u201cRobust image steganography against general downsampling operations with lossless secret recovery,\u201d",
449
+ "author": "Sheng Li, Zichi Wang, Xiudong Zhang, and Xinpeng Zhang,",
450
+ "venue": "IEEE Transactions on Dependable and Secure Computing, 2023.",
451
+ "url": null
452
+ }
453
+ },
454
+ {
455
+ "42": {
456
+ "title": "\u201cEnhancing lsb using binary message size encoding for high capacity, transparent and secure audio steganography\u2013an innovative approach,\u201d",
457
+ "author": "Mahmoud M Mahmoud and Huwaida T Elshoush,",
458
+ "venue": "IEEE Access, vol. 10, pp. 29954\u201329971, 2022.",
459
+ "url": null
460
+ }
461
+ },
462
+ {
463
+ "43": {
464
+ "title": "\u201cArabic text steganography based on deep learning methods,\u201d",
465
+ "author": "Omer Farooq Ahmed Adeeb and Seyed Jahanshah Kabudian,",
466
+ "venue": "IEEE Access, vol. 10, pp. 94403\u201394416, 2022.",
467
+ "url": null
468
+ }
469
+ },
470
+ {
471
+ "44": {
472
+ "title": "\u201cText steganography in chat,\u201d",
473
+ "author": "M Hassan Shirali-Shahreza and Mohammad Shirali-Shahreza,",
474
+ "venue": "in 2007 3rd IEEE/IFIP International Conference in Central Asia on Internet. IEEE, 2007, pp. 1\u20135.",
475
+ "url": null
476
+ }
477
+ },
478
+ {
479
+ "45": {
480
+ "title": "\u201cInformation hiding: A new approach in text steganography,\u201d",
481
+ "author": "B Delina,",
482
+ "venue": "in Proceedings of the International Conference on Applied Computer and Applied Computational Science, World Scientific and Engineering Academy and Society (WSEAS 2008), 2008, pp. 689\u2013695.",
483
+ "url": null
484
+ }
485
+ },
486
+ {
487
+ "46": {
488
+ "title": "\u201cA view on latest audio steganography techniques,\u201d",
489
+ "author": "Fatiha Djebbar, Beghdad Ayad, Habib Hamam, and Karim Abed-Meraim,",
490
+ "venue": "in 2011 International Conference on Innovations in Information Technology. IEEE, 2011, pp. 409\u2013414.",
491
+ "url": null
492
+ }
493
+ },
494
+ {
495
+ "47": {
496
+ "title": "\u201cAudio steganography using bit modification,\u201d",
497
+ "author": "Kaliappan Gopalan,",
498
+ "venue": "in 2003 International Conference on Multimedia and Expo. ICME\u201903. Proceedings (Cat. No. 03TH8698). IEEE, 2003, vol. 1, pp. I\u2013629.",
499
+ "url": null
500
+ }
501
+ },
502
+ {
503
+ "48": {
504
+ "title": "\u201cComparative study of digital audio steganography techniques,\u201d",
505
+ "author": "Fatiha Djebbar, Beghdad Ayad, Karim Abed Meraim, and Habib Hamam,",
506
+ "venue": "EURASIP Journal on Audio, Speech, and Music Processing, vol. 2012, no. 1, pp. 1\u201316, 2012.",
507
+ "url": null
508
+ }
509
+ },
510
+ {
511
+ "49": {
512
+ "title": "\u201cImage generation network for covert transmission in online social network,\u201d",
513
+ "author": "Zhengxin You, Qichao Ying, Sheng Li, Zhenxing Qian, and Xinpeng Zhang,",
514
+ "venue": "in Proceedings of the 30th ACM International Conference on Multimedia, 2022, pp. 2834\u20132842.",
515
+ "url": null
516
+ }
517
+ },
518
+ {
519
+ "50": {
520
+ "title": "\u201cHigh-resolution image synthesis and semantic manipulation with conditional gans,\u201d",
521
+ "author": "Ting-Chun Wang, Ming-Yu Liu, Jun-Yan Zhu, Andrew Tao, Jan Kautz, and Bryan Catanzaro,",
522
+ "venue": "in Proceedings of the IEEE conference on computer vision and pattern recognition, 2018, pp. 8798\u20138807.",
523
+ "url": null
524
+ }
525
+ },
526
+ {
527
+ "51": {
528
+ "title": "\u201cSynthesizing robust adversarial examples,\u201d",
529
+ "author": "Anish Athalye, Logan Engstrom, Andrew Ilyas, and Kevin Kwok,",
530
+ "venue": "in International conference on machine learning. PMLR, 2018, pp. 284\u2013293.",
531
+ "url": null
532
+ }
533
+ },
534
+ {
535
+ "52": {
536
+ "title": "\u201cDeep charuco: Dark charuco marker pose estimation,\u201d",
537
+ "author": "Danying Hu, Daniel DeTone, and Tomasz Malisiewicz,",
538
+ "venue": "in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2019, pp. 8436\u20138444.",
539
+ "url": null
540
+ }
541
+ },
542
+ {
543
+ "53": {
544
+ "title": "\u201cJpeg-resistant adversarial images,\u201d",
545
+ "author": "Richard Shin and Dawn Song,",
546
+ "venue": "in NIPS 2017 Workshop on Machine Learning and Computer Security, 2017, vol. 1, p. 8.",
547
+ "url": null
548
+ }
549
+ },
550
+ {
551
+ "54": {
552
+ "title": "\u201cStegastamp: Invisible hyperlinks in physical photographs,\u201d",
553
+ "author": "Matthew Tancik, Ben Mildenhall, and Ren Ng,",
554
+ "venue": "in Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, 2020, pp. 2117\u20132126.",
555
+ "url": null
556
+ }
557
+ },
558
+ {
559
+ "55": {
560
+ "title": "\u201cDeep residual learning for image recognition,\u201d",
561
+ "author": "Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun,",
562
+ "venue": "in Proceedings of the IEEE conference on computer vision and pattern recognition, 2016, pp. 770\u2013778.",
563
+ "url": null
564
+ }
565
+ },
566
+ {
567
+ "56": {
568
+ "title": "\u201cArbitrary style transfer in real-time with adaptive instance normalization,\u201d",
569
+ "author": "Xun Huang and Serge Belongie,",
570
+ "venue": "in Proceedings of the IEEE international conference on computer vision, 2017, pp. 1501\u20131510.",
571
+ "url": null
572
+ }
573
+ },
574
+ {
575
+ "57": {
576
+ "title": "\u201cFew-shot unsupervised image-to-image translation,\u201d",
577
+ "author": "Ming-Yu Liu, Xun Huang, Arun Mallya, Tero Karras, Timo Aila, Jaakko Lehtinen, and Jan Kautz,",
578
+ "venue": "in Proceedings of the IEEE/CVF international conference on computer vision, 2019, pp. 10551\u201310560.",
579
+ "url": null
580
+ }
581
+ },
582
+ {
583
+ "58": {
584
+ "title": "\u201cSemantic image synthesis with spatially-adaptive normalization,\u201d",
585
+ "author": "Taesung Park, Ming-Yu Liu, Ting-Chun Wang, and Jun-Yan Zhu,",
586
+ "venue": "in Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, 2019, pp. 2337\u20132346.",
587
+ "url": null
588
+ }
589
+ },
590
+ {
591
+ "59": {
592
+ "title": "\u201cBanach wasserstein gan,\u201d",
593
+ "author": "Jonas Adler and Sebastian Lunz,",
594
+ "venue": "Advances in neural information processing systems, vol. 31, 2018.",
595
+ "url": null
596
+ }
597
+ },
598
+ {
599
+ "60": {
600
+ "title": "\u201cImproved training of wasserstein gans,\u201d",
601
+ "author": "Ishaan Gulrajani, Faruk Ahmed, Martin Arjovsky, Vincent Dumoulin, and Aaron C Courville,",
602
+ "venue": "Advances in neural information processing systems, vol. 30, 2017.",
603
+ "url": null
604
+ }
605
+ },
606
+ {
607
+ "61": {
608
+ "title": "\u201cVggface2: A dataset for recognising faces across pose and age,\u201d",
609
+ "author": "Qiong Cao, Li Shen, Weidi Xie, Omkar M Parkhi, and Andrew Zisserman,",
610
+ "venue": "in 2018 13th IEEE international conference on automatic face & gesture recognition (FG 2018). IEEE, 2018, pp. 67\u201374.",
611
+ "url": null
612
+ }
613
+ },
614
+ {
615
+ "62": {
616
+ "title": "\u201cUniversal detection of video steganography in multiple domains based on the consistency of motion vectors,\u201d",
617
+ "author": "Liming Zhai, Lina Wang, and Yanzhen Ren,",
618
+ "venue": "IEEE transactions on information forensics and security, vol. 15, pp. 1762\u20131777, 2019.",
619
+ "url": null
620
+ }
621
+ },
622
+ {
623
+ "63": {
624
+ "title": "\u201cA hevc video steganalysis algorithm based on pu partition modes.,\u201d",
625
+ "author": "Zhonghao Li, Laijing Meng, Shutong Xu, Zhaohong Li, Yunqing Shi, and Yuanchang Liang,",
626
+ "venue": "Computers, Materials & Continua, vol. 59, no. 2, 2019.",
627
+ "url": null
628
+ }
629
+ },
630
+ {
631
+ "64": {
632
+ "title": "\u201cA prediction mode steganalysis detection algorithm for hevc,\u201d",
633
+ "author": "Q Sheng, RD Wang, ML Huang, Q Li, and D Xu,",
634
+ "venue": "J Optoelectron-laser, vol. 28, no. 4, pp. 433\u2013440, 2017.",
635
+ "url": null
636
+ }
637
+ },
638
+ {
639
+ "65": {
640
+ "title": "\u201cDeepfake mnist+: a deepfake facial animation dataset,\u201d",
641
+ "author": "Jiajun Huang, Xueyu Wang, Bo Du, Pei Du, and Chang Xu,",
642
+ "venue": "in Proceedings of the IEEE/CVF International Conference on Computer Vision, 2021, pp. 1973\u20131982.",
643
+ "url": null
644
+ }
645
+ },
646
+ {
647
+ "66": {
648
+ "title": "\u201cImage disentanglement autoencoder for steganography without embedding,\u201d",
649
+ "author": "Xiyao Liu, Ziping Ma, Junxing Ma, Jian Zhang, Gerald Schaefer, and Hui Fang,",
650
+ "venue": "in Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, 2022, pp. 2303\u20132312.",
651
+ "url": null
652
+ }
653
+ },
654
+ {
655
+ "67": {
656
+ "title": "Blackboard Systems,",
657
+ "author": "Robert Engelmore and Anthony Morgan, Eds.,",
658
+ "venue": "Addison-Wesley, Reading, Mass., 1986.",
659
+ "url": null
660
+ }
661
+ },
662
+ {
663
+ "68": {
664
+ "title": "\u201cCommunication, Simulation, and Intelligent Agents: Implications of Personal Intelligent Machines for Medical Education,\u201d",
665
+ "author": "William J. Clancey,",
666
+ "venue": "in Proceedings of the Eighth International Joint Conference on Artificial Intelligence (IJCAI-83), Menlo Park, Calif, 1983, pp. 556\u2013560, IJCAI Organization.",
667
+ "url": null
668
+ }
669
+ },
670
+ {
671
+ "69": {
672
+ "title": "\u201cClassification Problem Solving,\u201d",
673
+ "author": "William J. Clancey,",
674
+ "venue": "in Proceedings of the Fourth National Conference on Artificial Intelligence, Menlo Park, Calif., 1984, pp. 45\u201354, AAAI Press.",
675
+ "url": null
676
+ }
677
+ },
678
+ {
679
+ "70": {
680
+ "title": "\u201cNew ways to make microcircuits smaller,\u201d",
681
+ "author": "Arthur L. Robinson,",
682
+ "venue": "Science, vol. 208, no. 4447, pp. 1019\u20131022, 1980.",
683
+ "url": null
684
+ }
685
+ },
686
+ {
687
+ "71": {
688
+ "title": "\u201cNew Ways to Make Microcircuits Smaller\u2014Duplicate Entry,\u201d",
689
+ "author": "Arthur L. Robinson,",
690
+ "venue": "Science, vol. 208, pp. 1019\u20131026, 1980.",
691
+ "url": null
692
+ }
693
+ },
694
+ {
695
+ "72": {
696
+ "title": "\u201cStrategic explanations for a diagnostic consultation system,\u201d",
697
+ "author": "Diane Warner Hasling, William J. Clancey, and Glenn Rennels,",
698
+ "venue": "International Journal of Man-Machine Studies, vol. 20, no. 1, pp. 3\u201319, 1984.",
699
+ "url": null
700
+ }
701
+ },
702
+ {
703
+ "73": {
704
+ "title": "\u201cStrategic Explanations in Consultation\u2014Duplicate,\u201d",
705
+ "author": "Diane Warner Hasling, William J. Clancey, Glenn R. Rennels, and Thomas Test,",
706
+ "venue": "The International Journal of Man-Machine Studies, vol. 20, no. 1, pp. 3\u201319, 1983.",
707
+ "url": null
708
+ }
709
+ },
710
+ {
711
+ "74": {
712
+ "title": "\u201cPoligon: A System for Parallel Problem Solving,\u201d",
713
+ "author": "James Rice,",
714
+ "venue": "Technical Report KSL-86-19, Dept. of Computer Science, Stanford Univ., 1986.",
715
+ "url": null
716
+ }
717
+ },
718
+ {
719
+ "75": {
720
+ "title": "Transfer of Rule-Based Expertise through a Tutorial Dialogue,",
721
+ "author": "William J. Clancey,",
722
+ "venue": "Ph.D. diss., Dept. of Computer Science, Stanford Univ., Stanford, Calif., 1979.",
723
+ "url": null
724
+ }
725
+ },
726
+ {
727
+ "76": {
728
+ "title": "\u201cThe Engineering of Qualitative Models,\u201d",
729
+ "author": "William J. Clancey,",
730
+ "venue": "Forthcoming, 2021.",
731
+ "url": null
732
+ }
733
+ },
734
+ {
735
+ "77": {
736
+ "title": "\u201cCrime and punishment in scientific research,\u201d 2008.",
737
+ "author": "Mathieu Bouville,",
738
+ "venue": null,
739
+ "url": null
740
+ }
741
+ },
742
+ {
743
+ "78": {
744
+ "title": "\u201cPluto: The \u2019other\u2019 red planet,\u201d \\urlhttps://www.nasa.gov/nh/pluto-the-other-red-planet, 2015,",
745
+ "author": "NASA,",
746
+ "venue": "Accessed: 2018-12-06.",
747
+ "url": null
748
+ }
749
+ },
750
+ {
751
+ "79": {
752
+ "title": "\u201cFaceforensics++: Learning to detect manipulated facial images,\u201d",
753
+ "author": "Andreas Rossler, Davide Cozzolino, Luisa Verdoliva, Christian Riess, Justus Thies, and Matthias Nie\u00dfner,",
754
+ "venue": "in Proceedings of the IEEE/CVF international conference on computer vision, 2019, pp. 1\u201311.",
755
+ "url": null
756
+ }
757
+ },
758
+ {
759
+ "80": {
760
+ "title": "\u201cHiding data in images by simple lsb substitution,\u201d",
761
+ "author": "Chi-Kwong Chan and Lee-Ming Cheng,",
762
+ "venue": "Pattern recognition, vol. 37, no. 3, pp. 469\u2013474, 2004.",
763
+ "url": null
764
+ }
765
+ },
766
+ {
767
+ "81": {
768
+ "title": "\u201cMaintaining rate-distortion optimization for ipm-based video steganography by constructing isolated channels in hevc,\u201d",
769
+ "author": "Yu Wang, Yun Cao, Xianfeng Zhao, Zhoujun Xu, and Meineng Zhu,",
770
+ "venue": "in Proceedings of the 6th ACM workshop on information hiding and multimedia security, 2018, pp. 97\u2013107.",
771
+ "url": null
772
+ }
773
+ },
774
+ {
775
+ "82": {
776
+ "title": "\u201cRobust image forgery detection over online social network shared images,\u201d",
777
+ "author": "Haiwei Wu, Jiantao Zhou, Jinyu Tian, and Jun Liu,",
778
+ "venue": "in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2022, pp. 13440\u201313449.",
779
+ "url": null
780
+ }
781
+ },
782
+ {
783
+ "83": {
784
+ "title": "\u201cImplicit identity leakage: The stumbling block to improving deepfake detection generalization,\u201d",
785
+ "author": "Shichao Dong, Jin Wang, Renhe Ji, and et al.,",
786
+ "venue": "in CVPR, 2023.",
787
+ "url": null
788
+ }
789
+ },
790
+ {
791
+ "84": {
792
+ "title": "\u201cDiffswap: High-fidelity and controllable face swapping via 3d-aware masked diffusion,\u201d",
793
+ "author": "Wenliang Zhao, Yongming Rao, Weikang Shi, and et al.,",
794
+ "venue": "in CVPR, 2023.",
795
+ "url": null
796
+ }
797
+ },
798
+ {
799
+ "85": {
800
+ "title": "\u201cAltfreezing for more general video face forgery detection,\u201d",
801
+ "author": "Zhendong Wang, Jianmin Bao, and et al.,",
802
+ "venue": "in CVPR, 2023.",
803
+ "url": null
804
+ }
805
+ }
806
+ ],
807
+ "url": "http://arxiv.org/html/2401.00652v1"
808
+ }
20240101/2401.00653v1.json ADDED
@@ -0,0 +1,369 @@
1
+ {
2
+ "title": "Prompt-IML: Image Manipulation Localization with pre-trained foundation models through prompt tuning",
3
+ "abstract": "Deceptive images can be shared in seconds via social networking services, posing substantial risks. Tampering traces, such as boundary artifacts and high-frequency information, have been heavily exploited by massive networks in the Image Manipulation Localization (IML) field. However, such traces are vulnerable to image post-processing operations, which limits the generalization and robustness of existing methods.\nWe present a novel Prompt-IML framework. We observe that humans tend to discern the authenticity of an image based on both semantic and high-frequency information; inspired by this, the proposed framework leverages rich semantic knowledge from pre-trained visual foundation models to assist IML.\nWe are the first to design a framework that utilizes visual foundation models specifically for the IML task.\nMoreover, we design a Feature Alignment and Fusion module to align and fuse semantic features with high-frequency features, aiming to locate tampered regions from multiple perspectives. Experimental results demonstrate that our model achieves better performance on eight typical fake-image datasets as well as outstanding robustness.",
4
+ "sections": [
5
+ {
6
+ "section_id": "1",
7
+ "parent_section_id": null,
8
+ "section_name": "Introduction",
9
+ "text": "The most commonly encountered image manipulation techniques are copy-move, splicing, and inpainting, all of which can alter the original semantic content of images. Meanwhile, the rapid advancement of image editing tools has substantially reduced the difficulty and cost of producing deceptive images. Consequently, there is a pressing need to accurately locate the manipulated regions in deceptive images.\nThe Image Manipulation Localization (IML) task is to granularly locate tampered regions within images. With the advancement of deep learning, researchers are attempting to establish massive manipulation localization networks [7 ###reference_7###].\nMany existing methods revolve around specific tampering traces, seeking the optimal feature representation through carefully designed network architectures [2 ###reference_2###, 3 ###reference_3###, 1 ###reference_1###]. However, tampering traces are prone to image post-processing operations [31 ###reference_31###, 8 ###reference_8###]. This is a contributing factor to the limited robustness and generalization of the aforementioned methods.\n###figure_1### Moreover, we notice that existing methods ignore a key element of images for achieving generalizability and robustness: the semantic information [8 ###reference_8###].\nHumans naturally observe the coherence of the semantic information within a picture to identify fake images.\nSemantic information plays a principal part in many computer vision tasks [24 ###reference_24###].\nWe believe IML is no exception.\nCompared with features related to tampering traces, semantic features are more robust to image post-processing.\nTherefore, employing the semantic features of images as a complementary cue will assist the IML task.\nHowever, training a network with rich semantic knowledge using the limited available datasets is challenging [25 ###reference_25###]. Besides, typical methods often utilize high-frequency features from images to identify manipulations, which brings the new challenge of aligning semantic features with them [26 ###reference_26###].\n###figure_2### To overcome the aforementioned challenges, we exploit pre-trained visual foundation models to acquire semantic features of images through prompt tuning.\nFig. 1 ###reference_### exhibits the difference between the proposed method and typical methods.\nWe propose Prompt-IML, which actively utilizes semantic information along with high-frequency information for manipulation localization.\nSpecifically, we use BayarConv to extract the high-frequency features of images and feed them into subsequent networks for further processing. Simultaneously, we propose a semantic feature extraction network, which adheres to the architectural design of visual foundation models and is initialized with pre-trained weights. During training, we freeze it and attach several learnable prompt embeddings to the image token sequences to adjust the semantic features. Then, we facilitate interaction between semantic features and high-frequency features through a designed Feature Alignment and Fusion (FAF) module, which involves multiple attention mechanisms to enhance features and locate tampered regions from multiple perspectives.\nOur main contributions are summarized as follows:\nWe are the first to design a framework that utilizes visual foundation models specifically for the IML task. Incorporating semantic information with high-frequency information for discernment aligns better with how humans judge image veracity.\nWe propose an FAF module that enables adapting visual foundation models to IML tasks through prompt tuning. The proposed FAF module involves multiple attention mechanisms to align and fuse semantic features with high-frequency features.\nExperiments on eight datasets demonstrate the generalizability of the proposed framework. Extensive experiments prove the robustness of the framework against image post-processing operations."
10
+ },
11
+ {
12
+ "section_id": "2",
13
+ "parent_section_id": null,
14
+ "section_name": "Related Works",
15
+ "text": "Image Manipulation Localization. With the advancement of deep learning, researchers have embarked on efforts to establish end-to-end manipulation localization networks.\nMVSSNet++[8 ###reference_8###] integrates multi-scale features, contour features, and high-frequency features of images for feature extraction and utilizes spatial-channel attention for enhanced feature fusion. PSCC-Net[7 ###reference_7###] proposes a progressive spatial-channel attention module, utilizing multi-scale features and dense cross-connections to generate tampering masks of various granularity. These works involve the meticulous design of network architectures to acquire better feature representations of tampering traces. Although these methods have achieved decent performance in the IML task, the choice of feature representation for tampering traces still significantly impacts the model\u2019s generalization and robustness.\nTuning Visual Foundation Models. Compared to fine-tuning, prompt tuning is an efficient, low-cost way of adapting an AI foundation model to new downstream tasks without retraining the model or updating its weights. This technique was first used in NLP, and VPT[27 ###reference_27###] is an efficient way to adapt it to the visual domain. Recently, EVP[10 ###reference_10###] attempted to adapt pre-trained models through prompt tuning to various downstream tasks, including IML, achieving granular manipulated-region localization by adjusting the embedding representation of images and incorporating high-frequency information. However, due to its simple feature fusion design, it performs poorly."
16
+ },
17
+ {
18
+ "section_id": "3",
19
+ "parent_section_id": null,
20
+ "section_name": "Proposed Method",
21
+ "text": ""
22
+ },
23
+ {
24
+ "section_id": "3.1",
25
+ "parent_section_id": "3",
26
+ "section_name": "Approach Overview",
27
+ "text": "Fig. 2 ###reference_### illustrates the architecture of the proposed prompt-IML. The complete pipeline consists of two phases, i.e., Feature Extraction Network (FEN) and Manipulation Localization Network (MLN).\nThe FEN comprises two parallel branches: one extracts semantic features, and the other focuses on extracting high-frequency features. Given the differences between them, we employ a carefully designed FAF module to fuse features. This module primarily utilizes various attention mechanisms to facilitate interaction between the features. The multi-scale features outputted during the FEN stage are ultimately fed into the MLN. The MLN aggregates feature information through layer-wise up-sampling and outputs the final prediction results."
28
+ },
29
+ {
30
+ "section_id": "3.2",
31
+ "parent_section_id": "3",
32
+ "section_name": "Feature Extraction Network",
33
+ "text": "Dual-Branch Architecture. We use two branches to extract multiple features of images, and both share the same structure based on Swin-Transformer [28 ###reference_28###]. The semantic branch is initialized with pre-trained weights and remains frozen during training to preserve the optimal semantic representations. Specifically, let be the input image. For the semantic branch, we get the input through partitioning the image into specified-sized patches:\nwhere represents the number of the patches, is a learnable positional embedding. For the high-frequency branch, we employ a set of BayarConvs with kernels of varying sizes, which prevents information loss caused by a fixed-size receptive field, to get the input :\nwhere symbolizes the kernel size.\nTo comprehensively consider both global and local information and mitigate information loss[8 ###reference_8###], multi-scale features are generated for the subsequent procedure in each branch. Specifically, each branch comprises four layers, namely sem layer and hfq layer, each of which consists of several blocks.\nThe forward propagation process in each block can be described below:\nwhere , denotes layer normalization, and denotes the output of the -th layer and -th block.\n###figure_3### Feature Alignment and Fusion Module.\nAttention mechanisms are widely used to enhance features in the IML task [8 ###reference_8###]. We propose an FAF module for better alignment and fusion of multiple features, which consists of channel attention, spatial attention, and deformable attention.\nFirst, we employ an average pooling operation to reduce features, denoted by an overline. Then, they are concatenated on dimension , which is denoted by , and fed into an MLP to generate the corresponding channel-attention vectors ; the above procedure can be formulated as:\nwhere is the reverse operation of . To obtain the spatial attention vector, we use two 1\u00d71 convolutions with an intermediate ReLU layer, denoted by , to aggregate spatial information; the spatial-attention vectors can be obtained:\nFinally, we align branch features by applying the attention vectors crosswise, which gives the input of the next layer by residual addition, for :\nThen, we fuse the semantic feature and high-frequency feature to get the input of the MLN.\nSince tampering operations affect a region of pixels rather than a single pixel, restricting the attention range is more advantageous in suppressing sporadic positive responses in the features. Therefore, we utilize deformable attention[29 ###reference_29###] for enhancement.\nThe fusion process can be described by the following equations:\nwhere , are learnable parameters, and DFA means Deformable Attention."
34
+ },
35
+ {
36
+ "section_id": "3.3",
37
+ "parent_section_id": "3",
38
+ "section_name": "Manipulation Localization Network",
39
+ "text": "The MLN adopts the architecture of Mask2Former[24 ###reference_24###], which involves two parts: the Pixel Decoder and the Transformer Decoder. The Pixel Decoder is primarily responsible for progressively upsampling features from low resolution to high resolution. The Transformer Decoder utilizes a single query embedding and multi-scale features as inputs. The use of multi-scale features is advantageous for locating small tampered regions, while query embeddings, combined with Masked-Attention, help restrict Cross-Attention to the tampered regions for extracting tampering-related features."
40
+ },
41
+ {
42
+ "section_id": "3.4",
43
+ "parent_section_id": "3",
44
+ "section_name": "Prompt Tuning Method",
45
+ "text": "We leverage rich semantic features from pre-trained visual foundation models through prompt tuning. For each basic block, we concatenate unique prompt embeddings and image tokens as input:\nwhere symbolizes the total number of blocks in -th layer.\nAssume input with batch size of , where . We expand after partitioning to alter dimensions to , ensuring that each window contains exactly prompt embeddings for self-attention computation. After merging windows, we average on groups of prompt tokens to reshape back as .\n###figure_4###"
46
+ },
47
+ {
48
+ "section_id": "4",
49
+ "parent_section_id": null,
50
+ "section_name": "Experiment",
51
+ "text": ""
52
+ },
53
+ {
54
+ "section_id": "4.1",
55
+ "parent_section_id": "4",
56
+ "section_name": "Experimental Setup",
57
+ "text": "Datasets. During the training phase, we utilize the CASIA2[14 ###reference_14###] and synthetic datasets used by PSCC-Net[7 ###reference_7###] as the training set. To achieve a balance between training effectiveness and efficiency, we randomly sample 30,000 images from the synthetic dataset for training. During the testing phase, we employ eight common datasets to assess our model\u2019s performance: CASIA1[15 ###reference_15###], COVER[17 ###reference_17###], IMD20[22 ###reference_22###], NIST16[18 ###reference_18###], Columbia[16 ###reference_16###], DEFACTO-12K[21 ###reference_21###], In-the-Wild[20 ###reference_20###], and Korus[19 ###reference_19###].\nImplementation Details. We train our model on two RTX 3060 GPUs with a batch size of 14. We employ weighted cross-entropy loss as the objective function. Input images are resized to . We utilize the AdamW optimizer with , , weight decay 0.05, and cosine annealing warm restarts strategy. The maximum learning rate is set to 1e-4, and the minimum learning rate is 1e-6. The model is trained for 80 epochs, including 5 warm-up epochs."
58
+ },
59
+ {
60
+ "section_id": "4.2",
61
+ "parent_section_id": "4",
62
+ "section_name": "Comparisons",
63
+ "text": "We compare our method with 13 state-of-the-art models to comprehensively assess the model\u2019s performance. We follow the metrics of previous work [13 ###reference_13###] for evaluation; the experimental results are shown in Table 1 ###reference_###, in which the best results are bolded and the second-best results are underlined.\nThe proposed model ranks either 1st or 2nd on all datasets, demonstrating its effectiveness and generalization ability.\nIt is worth noting that H-LSTM achieves favorable performance on the NIST16 dataset, primarily owing to its dedicated fine-tuning.\nMVSSNet++ is a meticulously designed network that fully leverages boundary artifacts and high-frequency information from forged images. However, by integrating the semantic and high-frequency information, we achieve a 9% F1-score improvement on average.\nEVP fails to achieve satisfactory generalization performance, possibly due to its less effective fusion strategy.\nThe manipulation localization results of various methods are illustrated in Fig. 3 ###reference_###, which shows that our approach transcends limitations associated with specific types of datasets, demonstrating its efficacy in addressing a wide array of tampering methods."
64
+ },
65
+ {
66
+ "section_id": "4.3",
67
+ "parent_section_id": "4",
68
+ "section_name": "Robustness Test",
69
+ "text": "In the real-world scenario, manipulated images may suffer from various post-processing techniques, leading to the fading or disappearance of tampering traces, which significantly compromises the model\u2019s performance.\nWe follow the setup introduced by [13 ###reference_13###], introducing six common perturbations to mimic post-processing effects of brightness, contrast, darkening, dithering, pink noise and JPEG2000 compression.\nWe evaluate the robustness of each method on the CASIA1 dataset, with pixel-level localization AUC scores presented in Fig. 4 ###reference_###.\nThe results demonstrate the necessity of incorporating semantic information, as the aligned semantic features supplement the high-frequency feature well, which contributes in the robustness of the proposed method."
70
+ },
71
+ {
72
+ "section_id": "4.4",
73
+ "parent_section_id": "4",
74
+ "section_name": "Ablation Study",
75
+ "text": "To assess the effectiveness of the modules we design, we conduct comprehensive ablation experiments. Table 2 ###reference_### presents the specific experimental settings and corresponding F1 scores testing on the CASIA1 dataset. Experiment 1 utilize only the semantic branch through prompt tuning. Experiment 2, on the other hand, solely employ a high-frequency branch trained from scratch. The results demonstrate that either semantic or high-frequency information is vital in the IML task.\nFurthermore, we investigate the effectiveness of the designed alignment and fusion method via experiments 3 to 5.\nWe ablate the deformable attention in fusion by substituting it with element-wise addition.\nThe results exhibit the effectiveness of the designed multiple attention mechanisms."
76
+ },
77
+ {
78
+ "section_id": "5",
79
+ "parent_section_id": null,
80
+ "section_name": "Conclusions",
81
+ "text": "We present Prompt-IML, which introduces semantic information of pre-trained visual foundation models into IML tasks. The semantic information is leveraged through prompt tuning and fused with high-frequency information of images. Experimental results on typical IML datasets demonstrate the effectiveness of the proposed method."
82
+ }
83
+ ],
84
+ "appendix": [],
85
+ "tables": {
86
+ "1": {
87
+ "table_html": "<figure class=\"ltx_table\" id=\"S3.T1\">\n<figcaption class=\"ltx_caption ltx_centering\" style=\"font-size:90%;\"><span class=\"ltx_tag ltx_tag_table\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T1.4.1.1\">Table 1</span>: </span>The comparison of image manipulation localization performance (F1 score with fixed threshold: 0.5). The best performance in each column are bolded and the second best underlined.</figcaption>\n<table class=\"ltx_tabular ltx_centering ltx_guessed_headers ltx_align_middle\" id=\"S3.T1.5\">\n<thead class=\"ltx_thead\">\n<tr class=\"ltx_tr\" id=\"S3.T1.5.1.1\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"S3.T1.5.1.1.1\" style=\"padding-left:5.7pt;padding-right:5.7pt;\"><span class=\"ltx_text\" id=\"S3.T1.5.1.1.1.1\" style=\"font-size:90%;\">Method</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"S3.T1.5.1.1.2\" style=\"padding-left:5.7pt;padding-right:5.7pt;\"><span class=\"ltx_text\" id=\"S3.T1.5.1.1.2.1\" style=\"font-size:90%;\">CASIA1</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"S3.T1.5.1.1.3\" style=\"padding-left:5.7pt;padding-right:5.7pt;\"><span class=\"ltx_text\" id=\"S3.T1.5.1.1.3.1\" style=\"font-size:90%;\">NIST16</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"S3.T1.5.1.1.4\" style=\"padding-left:5.7pt;padding-right:5.7pt;\"><span class=\"ltx_text\" id=\"S3.T1.5.1.1.4.1\" style=\"font-size:90%;\">COVER</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"S3.T1.5.1.1.5\" style=\"padding-left:5.7pt;padding-right:5.7pt;\"><span class=\"ltx_text\" id=\"S3.T1.5.1.1.5.1\" style=\"font-size:90%;\">IMD20</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"S3.T1.5.1.1.6\" style=\"padding-left:5.7pt;padding-right:5.7pt;\"><span class=\"ltx_text\" id=\"S3.T1.5.1.1.6.1\" style=\"font-size:90%;\">Columbia</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"S3.T1.5.1.1.7\" style=\"padding-left:5.7pt;padding-right:5.7pt;\"><span class=\"ltx_text\" id=\"S3.T1.5.1.1.7.1\" style=\"font-size:90%;\">DEF-12K</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"S3.T1.5.1.1.8\" style=\"padding-left:5.7pt;padding-right:5.7pt;\"><span class=\"ltx_text\" id=\"S3.T1.5.1.1.8.1\" style=\"font-size:90%;\">In-the-Wild</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"S3.T1.5.1.1.9\" style=\"padding-left:5.7pt;padding-right:5.7pt;\"><span class=\"ltx_text\" id=\"S3.T1.5.1.1.9.1\" style=\"font-size:90%;\">Korus</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"S3.T1.5.1.1.10\" style=\"padding-left:5.7pt;padding-right:5.7pt;\"><span class=\"ltx_text\" id=\"S3.T1.5.1.1.10.1\" style=\"font-size:90%;\">Average</span></th>\n</tr>\n</thead>\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S3.T1.5.2.1\">\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T1.5.2.1.1\" style=\"padding-left:5.7pt;padding-right:5.7pt;\">\n<span class=\"ltx_text\" id=\"S3.T1.5.2.1.1.1\" style=\"font-size:90%;\">FCN\u00a0</span><cite class=\"ltx_cite ltx_citemacro_cite\"><span class=\"ltx_text\" id=\"S3.T1.5.2.1.1.2.1\" style=\"font-size:90%;\">[</span><a class=\"ltx_ref\" href=\"#bib.bib11\" title=\"\">11</a><span class=\"ltx_text\" id=\"S3.T1.5.2.1.1.3.2\" 
style=\"font-size:90%;\">]</span></cite>\n</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T1.5.2.1.2\" style=\"padding-left:5.7pt;padding-right:5.7pt;\"><span class=\"ltx_text\" id=\"S3.T1.5.2.1.2.1\" style=\"font-size:90%;\">0.441</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T1.5.2.1.3\" style=\"padding-left:5.7pt;padding-right:5.7pt;\"><span class=\"ltx_text\" id=\"S3.T1.5.2.1.3.1\" style=\"font-size:90%;\">0.167</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T1.5.2.1.4\" style=\"padding-left:5.7pt;padding-right:5.7pt;\"><span class=\"ltx_text\" id=\"S3.T1.5.2.1.4.1\" style=\"font-size:90%;\">0.199</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T1.5.2.1.5\" style=\"padding-left:5.7pt;padding-right:5.7pt;\"><span class=\"ltx_text\" id=\"S3.T1.5.2.1.5.1\" style=\"font-size:90%;\">0.210</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T1.5.2.1.6\" style=\"padding-left:5.7pt;padding-right:5.7pt;\"><span class=\"ltx_text\" id=\"S3.T1.5.2.1.6.1\" style=\"font-size:90%;\">0.223</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T1.5.2.1.7\" style=\"padding-left:5.7pt;padding-right:5.7pt;\"><span class=\"ltx_text\" id=\"S3.T1.5.2.1.7.1\" style=\"font-size:90%;\">0.130</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T1.5.2.1.8\" style=\"padding-left:5.7pt;padding-right:5.7pt;\"><span class=\"ltx_text\" id=\"S3.T1.5.2.1.8.1\" style=\"font-size:90%;\">0.192</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T1.5.2.1.9\" style=\"padding-left:5.7pt;padding-right:5.7pt;\"><span class=\"ltx_text\" id=\"S3.T1.5.2.1.9.1\" style=\"font-size:90%;\">0.122</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T1.5.2.1.10\" style=\"padding-left:5.7pt;padding-right:5.7pt;\"><span class=\"ltx_text\" id=\"S3.T1.5.2.1.10.1\" style=\"font-size:90%;\">0.211</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T1.5.3.2\">\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.5.3.2.1\" style=\"padding-left:5.7pt;padding-right:5.7pt;\">\n<span class=\"ltx_text\" id=\"S3.T1.5.3.2.1.1\" style=\"font-size:90%;\">DeepLabv3\u00a0</span><cite class=\"ltx_cite ltx_citemacro_cite\"><span class=\"ltx_text\" id=\"S3.T1.5.3.2.1.2.1\" style=\"font-size:90%;\">[</span><a class=\"ltx_ref\" href=\"#bib.bib12\" title=\"\">12</a><span class=\"ltx_text\" id=\"S3.T1.5.3.2.1.3.2\" style=\"font-size:90%;\">]</span></cite>\n</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.5.3.2.2\" style=\"padding-left:5.7pt;padding-right:5.7pt;\"><span class=\"ltx_text\" id=\"S3.T1.5.3.2.2.1\" style=\"font-size:90%;\">0.429</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.5.3.2.3\" style=\"padding-left:5.7pt;padding-right:5.7pt;\"><span class=\"ltx_text\" id=\"S3.T1.5.3.2.3.1\" style=\"font-size:90%;\">0.237</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.5.3.2.4\" style=\"padding-left:5.7pt;padding-right:5.7pt;\"><span class=\"ltx_text\" id=\"S3.T1.5.3.2.4.1\" style=\"font-size:90%;\">0.151</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.5.3.2.5\" style=\"padding-left:5.7pt;padding-right:5.7pt;\"><span class=\"ltx_text\" id=\"S3.T1.5.3.2.5.1\" style=\"font-size:90%;\">0.216</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.5.3.2.6\" style=\"padding-left:5.7pt;padding-right:5.7pt;\"><span class=\"ltx_text\" id=\"S3.T1.5.3.2.6.1\" style=\"font-size:90%;\">0.442</span></td>\n<td class=\"ltx_td 
ltx_align_center\" id=\"S3.T1.5.3.2.7\" style=\"padding-left:5.7pt;padding-right:5.7pt;\"><span class=\"ltx_text\" id=\"S3.T1.5.3.2.7.1\" style=\"font-size:90%;\">0.068</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.5.3.2.8\" style=\"padding-left:5.7pt;padding-right:5.7pt;\"><span class=\"ltx_text\" id=\"S3.T1.5.3.2.8.1\" style=\"font-size:90%;\">0.220</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.5.3.2.9\" style=\"padding-left:5.7pt;padding-right:5.7pt;\"><span class=\"ltx_text\" id=\"S3.T1.5.3.2.9.1\" style=\"font-size:90%;\">0.120</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.5.3.2.10\" style=\"padding-left:5.7pt;padding-right:5.7pt;\"><span class=\"ltx_text\" id=\"S3.T1.5.3.2.10.1\" style=\"font-size:90%;\">0.235</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T1.5.4.3\">\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.5.4.3.1\" style=\"padding-left:5.7pt;padding-right:5.7pt;\">\n<span class=\"ltx_text\" id=\"S3.T1.5.4.3.1.1\" style=\"font-size:90%;\">MFCN\u00a0</span><cite class=\"ltx_cite ltx_citemacro_cite\"><span class=\"ltx_text\" id=\"S3.T1.5.4.3.1.2.1\" style=\"font-size:90%;\">[</span><a class=\"ltx_ref\" href=\"#bib.bib1\" title=\"\">1</a><span class=\"ltx_text\" id=\"S3.T1.5.4.3.1.3.2\" style=\"font-size:90%;\">]</span></cite>\n</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.5.4.3.2\" style=\"padding-left:5.7pt;padding-right:5.7pt;\"><span class=\"ltx_text\" id=\"S3.T1.5.4.3.2.1\" style=\"font-size:90%;\">0.346</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.5.4.3.3\" style=\"padding-left:5.7pt;padding-right:5.7pt;\"><span class=\"ltx_text\" id=\"S3.T1.5.4.3.3.1\" style=\"font-size:90%;\">0.243</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.5.4.3.4\" style=\"padding-left:5.7pt;padding-right:5.7pt;\"><span class=\"ltx_text\" id=\"S3.T1.5.4.3.4.1\" style=\"font-size:90%;\">0.148</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.5.4.3.5\" style=\"padding-left:5.7pt;padding-right:5.7pt;\"><span class=\"ltx_text\" id=\"S3.T1.5.4.3.5.1\" style=\"font-size:90%;\">0.170</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.5.4.3.6\" style=\"padding-left:5.7pt;padding-right:5.7pt;\"><span class=\"ltx_text\" id=\"S3.T1.5.4.3.6.1\" style=\"font-size:90%;\">0.184</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.5.4.3.7\" style=\"padding-left:5.7pt;padding-right:5.7pt;\"><span class=\"ltx_text\" id=\"S3.T1.5.4.3.7.1\" style=\"font-size:90%;\">0.067</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.5.4.3.8\" style=\"padding-left:5.7pt;padding-right:5.7pt;\"><span class=\"ltx_text\" id=\"S3.T1.5.4.3.8.1\" style=\"font-size:90%;\">0.161</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.5.4.3.9\" style=\"padding-left:5.7pt;padding-right:5.7pt;\"><span class=\"ltx_text\" id=\"S3.T1.5.4.3.9.1\" style=\"font-size:90%;\">0.118</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.5.4.3.10\" style=\"padding-left:5.7pt;padding-right:5.7pt;\"><span class=\"ltx_text\" id=\"S3.T1.5.4.3.10.1\" style=\"font-size:90%;\">0.180</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T1.5.5.4\">\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.5.5.4.1\" style=\"padding-left:5.7pt;padding-right:5.7pt;\">\n<span class=\"ltx_text\" id=\"S3.T1.5.5.4.1.1\" style=\"font-size:90%;\">RRU-Net\u00a0</span><cite class=\"ltx_cite ltx_citemacro_cite\"><span class=\"ltx_text\" id=\"S3.T1.5.5.4.1.2.1\" style=\"font-size:90%;\">[</span><a class=\"ltx_ref\" href=\"#bib.bib2\" 
title=\"\">2</a><span class=\"ltx_text\" id=\"S3.T1.5.5.4.1.3.2\" style=\"font-size:90%;\">]</span></cite>\n</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.5.5.4.2\" style=\"padding-left:5.7pt;padding-right:5.7pt;\"><span class=\"ltx_text\" id=\"S3.T1.5.5.4.2.1\" style=\"font-size:90%;\">0.291</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.5.5.4.3\" style=\"padding-left:5.7pt;padding-right:5.7pt;\"><span class=\"ltx_text\" id=\"S3.T1.5.5.4.3.1\" style=\"font-size:90%;\">0.200</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.5.5.4.4\" style=\"padding-left:5.7pt;padding-right:5.7pt;\"><span class=\"ltx_text\" id=\"S3.T1.5.5.4.4.1\" style=\"font-size:90%;\">0.078</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.5.5.4.5\" style=\"padding-left:5.7pt;padding-right:5.7pt;\"><span class=\"ltx_text\" id=\"S3.T1.5.5.4.5.1\" style=\"font-size:90%;\">0.159</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.5.5.4.6\" style=\"padding-left:5.7pt;padding-right:5.7pt;\"><span class=\"ltx_text\" id=\"S3.T1.5.5.4.6.1\" style=\"font-size:90%;\">0.264</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.5.5.4.7\" style=\"padding-left:5.7pt;padding-right:5.7pt;\"><span class=\"ltx_text\" id=\"S3.T1.5.5.4.7.1\" style=\"font-size:90%;\">0.033</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.5.5.4.8\" style=\"padding-left:5.7pt;padding-right:5.7pt;\"><span class=\"ltx_text\" id=\"S3.T1.5.5.4.8.1\" style=\"font-size:90%;\">0.178</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.5.5.4.9\" style=\"padding-left:5.7pt;padding-right:5.7pt;\"><span class=\"ltx_text\" id=\"S3.T1.5.5.4.9.1\" style=\"font-size:90%;\">0.097</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.5.5.4.10\" style=\"padding-left:5.7pt;padding-right:5.7pt;\"><span class=\"ltx_text\" id=\"S3.T1.5.5.4.10.1\" style=\"font-size:90%;\">0.163</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T1.5.6.5\">\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.5.6.5.1\" style=\"padding-left:5.7pt;padding-right:5.7pt;\">\n<span class=\"ltx_text\" id=\"S3.T1.5.6.5.1.1\" style=\"font-size:90%;\">HPFCN\u00a0</span><cite class=\"ltx_cite ltx_citemacro_cite\"><span class=\"ltx_text\" id=\"S3.T1.5.6.5.1.2.1\" style=\"font-size:90%;\">[</span><a class=\"ltx_ref\" href=\"#bib.bib4\" title=\"\">4</a><span class=\"ltx_text\" id=\"S3.T1.5.6.5.1.3.2\" style=\"font-size:90%;\">]</span></cite>\n</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.5.6.5.2\" style=\"padding-left:5.7pt;padding-right:5.7pt;\"><span class=\"ltx_text\" id=\"S3.T1.5.6.5.2.1\" style=\"font-size:90%;\">0.173</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.5.6.5.3\" style=\"padding-left:5.7pt;padding-right:5.7pt;\"><span class=\"ltx_text\" id=\"S3.T1.5.6.5.3.1\" style=\"font-size:90%;\">0.172</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.5.6.5.4\" style=\"padding-left:5.7pt;padding-right:5.7pt;\"><span class=\"ltx_text\" id=\"S3.T1.5.6.5.4.1\" style=\"font-size:90%;\">0.104</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.5.6.5.5\" style=\"padding-left:5.7pt;padding-right:5.7pt;\"><span class=\"ltx_text\" id=\"S3.T1.5.6.5.5.1\" style=\"font-size:90%;\">0.111</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.5.6.5.6\" style=\"padding-left:5.7pt;padding-right:5.7pt;\"><span class=\"ltx_text\" id=\"S3.T1.5.6.5.6.1\" style=\"font-size:90%;\">0.115</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.5.6.5.7\" 
style=\"padding-left:5.7pt;padding-right:5.7pt;\"><span class=\"ltx_text\" id=\"S3.T1.5.6.5.7.1\" style=\"font-size:90%;\">0.038</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.5.6.5.8\" style=\"padding-left:5.7pt;padding-right:5.7pt;\"><span class=\"ltx_text\" id=\"S3.T1.5.6.5.8.1\" style=\"font-size:90%;\">0.125</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.5.6.5.9\" style=\"padding-left:5.7pt;padding-right:5.7pt;\"><span class=\"ltx_text\" id=\"S3.T1.5.6.5.9.1\" style=\"font-size:90%;\">0.097</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.5.6.5.10\" style=\"padding-left:5.7pt;padding-right:5.7pt;\"><span class=\"ltx_text\" id=\"S3.T1.5.6.5.10.1\" style=\"font-size:90%;\">0.117</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T1.5.7.6\">\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.5.7.6.1\" style=\"padding-left:5.7pt;padding-right:5.7pt;\">\n<span class=\"ltx_text\" id=\"S3.T1.5.7.6.1.1\" style=\"font-size:90%;\">MantraNet\u00a0</span><cite class=\"ltx_cite ltx_citemacro_cite\"><span class=\"ltx_text\" id=\"S3.T1.5.7.6.1.2.1\" style=\"font-size:90%;\">[</span><a class=\"ltx_ref\" href=\"#bib.bib3\" title=\"\">3</a><span class=\"ltx_text\" id=\"S3.T1.5.7.6.1.3.2\" style=\"font-size:90%;\">]</span></cite>\n</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.5.7.6.2\" style=\"padding-left:5.7pt;padding-right:5.7pt;\"><span class=\"ltx_text\" id=\"S3.T1.5.7.6.2.1\" style=\"font-size:90%;\">0.187</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.5.7.6.3\" style=\"padding-left:5.7pt;padding-right:5.7pt;\"><span class=\"ltx_text\" id=\"S3.T1.5.7.6.3.1\" style=\"font-size:90%;\">0.158</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.5.7.6.4\" style=\"padding-left:5.7pt;padding-right:5.7pt;\"><span class=\"ltx_text\" id=\"S3.T1.5.7.6.4.1\" style=\"font-size:90%;\">0.236</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.5.7.6.5\" style=\"padding-left:5.7pt;padding-right:5.7pt;\"><span class=\"ltx_text\" id=\"S3.T1.5.7.6.5.1\" style=\"font-size:90%;\">0.164</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.5.7.6.6\" style=\"padding-left:5.7pt;padding-right:5.7pt;\"><span class=\"ltx_text\" id=\"S3.T1.5.7.6.6.1\" style=\"font-size:90%;\">0.452</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.5.7.6.7\" style=\"padding-left:5.7pt;padding-right:5.7pt;\"><span class=\"ltx_text\" id=\"S3.T1.5.7.6.7.1\" style=\"font-size:90%;\">0.067</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.5.7.6.8\" style=\"padding-left:5.7pt;padding-right:5.7pt;\"><span class=\"ltx_text\" id=\"S3.T1.5.7.6.8.1\" style=\"font-size:90%;\">0.314</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.5.7.6.9\" style=\"padding-left:5.7pt;padding-right:5.7pt;\"><span class=\"ltx_text\" id=\"S3.T1.5.7.6.9.1\" style=\"font-size:90%;\">0.110</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.5.7.6.10\" style=\"padding-left:5.7pt;padding-right:5.7pt;\"><span class=\"ltx_text\" id=\"S3.T1.5.7.6.10.1\" style=\"font-size:90%;\">0.211</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T1.5.8.7\">\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.5.8.7.1\" style=\"padding-left:5.7pt;padding-right:5.7pt;\">\n<span class=\"ltx_text\" id=\"S3.T1.5.8.7.1.1\" style=\"font-size:90%;\">H-LSTM\u00a0</span><cite class=\"ltx_cite ltx_citemacro_cite\"><span class=\"ltx_text\" id=\"S3.T1.5.8.7.1.2.1\" style=\"font-size:90%;\">[</span><a class=\"ltx_ref\" href=\"#bib.bib5\" title=\"\">5</a><span class=\"ltx_text\" 
id=\"S3.T1.5.8.7.1.3.2\" style=\"font-size:90%;\">]</span></cite>\n</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.5.8.7.2\" style=\"padding-left:5.7pt;padding-right:5.7pt;\"><span class=\"ltx_text\" id=\"S3.T1.5.8.7.2.1\" style=\"font-size:90%;\">0.156</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.5.8.7.3\" style=\"padding-left:5.7pt;padding-right:5.7pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T1.5.8.7.3.1\" style=\"font-size:90%;\">0.357</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.5.8.7.4\" style=\"padding-left:5.7pt;padding-right:5.7pt;\"><span class=\"ltx_text\" id=\"S3.T1.5.8.7.4.1\" style=\"font-size:90%;\">0.163</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.5.8.7.5\" style=\"padding-left:5.7pt;padding-right:5.7pt;\"><span class=\"ltx_text\" id=\"S3.T1.5.8.7.5.1\" style=\"font-size:90%;\">0.202</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.5.8.7.6\" style=\"padding-left:5.7pt;padding-right:5.7pt;\"><span class=\"ltx_text\" id=\"S3.T1.5.8.7.6.1\" style=\"font-size:90%;\">0.149</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.5.8.7.7\" style=\"padding-left:5.7pt;padding-right:5.7pt;\"><span class=\"ltx_text\" id=\"S3.T1.5.8.7.7.1\" style=\"font-size:90%;\">0.059</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.5.8.7.8\" style=\"padding-left:5.7pt;padding-right:5.7pt;\"><span class=\"ltx_text\" id=\"S3.T1.5.8.7.8.1\" style=\"font-size:90%;\">0.173</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.5.8.7.9\" style=\"padding-left:5.7pt;padding-right:5.7pt;\"><span class=\"ltx_text\" id=\"S3.T1.5.8.7.9.1\" style=\"font-size:90%;\">0.143</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.5.8.7.10\" style=\"padding-left:5.7pt;padding-right:5.7pt;\"><span class=\"ltx_text\" id=\"S3.T1.5.8.7.10.1\" style=\"font-size:90%;\">0.175</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T1.5.9.8\">\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.5.9.8.1\" style=\"padding-left:5.7pt;padding-right:5.7pt;\">\n<span class=\"ltx_text\" id=\"S3.T1.5.9.8.1.1\" style=\"font-size:90%;\">SPAN\u00a0</span><cite class=\"ltx_cite ltx_citemacro_cite\"><span class=\"ltx_text\" id=\"S3.T1.5.9.8.1.2.1\" style=\"font-size:90%;\">[</span><a class=\"ltx_ref\" href=\"#bib.bib6\" title=\"\">6</a><span class=\"ltx_text\" id=\"S3.T1.5.9.8.1.3.2\" style=\"font-size:90%;\">]</span></cite>\n</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.5.9.8.2\" style=\"padding-left:5.7pt;padding-right:5.7pt;\"><span class=\"ltx_text\" id=\"S3.T1.5.9.8.2.1\" style=\"font-size:90%;\">0.143</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.5.9.8.3\" style=\"padding-left:5.7pt;padding-right:5.7pt;\"><span class=\"ltx_text\" id=\"S3.T1.5.9.8.3.1\" style=\"font-size:90%;\">0.211</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.5.9.8.4\" style=\"padding-left:5.7pt;padding-right:5.7pt;\"><span class=\"ltx_text\" id=\"S3.T1.5.9.8.4.1\" style=\"font-size:90%;\">0.144</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.5.9.8.5\" style=\"padding-left:5.7pt;padding-right:5.7pt;\"><span class=\"ltx_text\" id=\"S3.T1.5.9.8.5.1\" style=\"font-size:90%;\">0.145</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.5.9.8.6\" style=\"padding-left:5.7pt;padding-right:5.7pt;\"><span class=\"ltx_text\" id=\"S3.T1.5.9.8.6.1\" style=\"font-size:90%;\">0.503</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.5.9.8.7\" style=\"padding-left:5.7pt;padding-right:5.7pt;\"><span 
class=\"ltx_text\" id=\"S3.T1.5.9.8.7.1\" style=\"font-size:90%;\">0.036</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.5.9.8.8\" style=\"padding-left:5.7pt;padding-right:5.7pt;\"><span class=\"ltx_text\" id=\"S3.T1.5.9.8.8.1\" style=\"font-size:90%;\">0.196</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.5.9.8.9\" style=\"padding-left:5.7pt;padding-right:5.7pt;\"><span class=\"ltx_text\" id=\"S3.T1.5.9.8.9.1\" style=\"font-size:90%;\">0.086</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.5.9.8.10\" style=\"padding-left:5.7pt;padding-right:5.7pt;\"><span class=\"ltx_text\" id=\"S3.T1.5.9.8.10.1\" style=\"font-size:90%;\">0.183</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T1.5.10.9\">\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.5.10.9.1\" style=\"padding-left:5.7pt;padding-right:5.7pt;\">\n<span class=\"ltx_text\" id=\"S3.T1.5.10.9.1.1\" style=\"font-size:90%;\">PSCC\u00a0</span><cite class=\"ltx_cite ltx_citemacro_cite\"><span class=\"ltx_text\" id=\"S3.T1.5.10.9.1.2.1\" style=\"font-size:90%;\">[</span><a class=\"ltx_ref\" href=\"#bib.bib7\" title=\"\">7</a><span class=\"ltx_text\" id=\"S3.T1.5.10.9.1.3.2\" style=\"font-size:90%;\">]</span></cite>\n</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.5.10.9.2\" style=\"padding-left:5.7pt;padding-right:5.7pt;\"><span class=\"ltx_text\" id=\"S3.T1.5.10.9.2.1\" style=\"font-size:90%;\">0.335</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.5.10.9.3\" style=\"padding-left:5.7pt;padding-right:5.7pt;\"><span class=\"ltx_text\" id=\"S3.T1.5.10.9.3.1\" style=\"font-size:90%;\">0.173</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.5.10.9.4\" style=\"padding-left:5.7pt;padding-right:5.7pt;\"><span class=\"ltx_text\" id=\"S3.T1.5.10.9.4.1\" style=\"font-size:90%;\">0.220</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.5.10.9.5\" style=\"padding-left:5.7pt;padding-right:5.7pt;\"><span class=\"ltx_text\" id=\"S3.T1.5.10.9.5.1\" style=\"font-size:90%;\">0.197</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.5.10.9.6\" style=\"padding-left:5.7pt;padding-right:5.7pt;\"><span class=\"ltx_text\" id=\"S3.T1.5.10.9.6.1\" style=\"font-size:90%;\">0.503</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.5.10.9.7\" style=\"padding-left:5.7pt;padding-right:5.7pt;\"><span class=\"ltx_text\" id=\"S3.T1.5.10.9.7.1\" style=\"font-size:90%;\">0.072</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.5.10.9.8\" style=\"padding-left:5.7pt;padding-right:5.7pt;\"><span class=\"ltx_text\" id=\"S3.T1.5.10.9.8.1\" style=\"font-size:90%;\">0.303</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.5.10.9.9\" style=\"padding-left:5.7pt;padding-right:5.7pt;\"><span class=\"ltx_text\" id=\"S3.T1.5.10.9.9.1\" style=\"font-size:90%;\">0.114</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.5.10.9.10\" style=\"padding-left:5.7pt;padding-right:5.7pt;\"><span class=\"ltx_text\" id=\"S3.T1.5.10.9.10.1\" style=\"font-size:90%;\">0.240</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T1.5.11.10\">\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.5.11.10.1\" style=\"padding-left:5.7pt;padding-right:5.7pt;\">\n<span class=\"ltx_text\" id=\"S3.T1.5.11.10.1.1\" style=\"font-size:90%;\">EVP\u00a0</span><cite class=\"ltx_cite ltx_citemacro_cite\"><span class=\"ltx_text\" id=\"S3.T1.5.11.10.1.2.1\" style=\"font-size:90%;\">[</span><a class=\"ltx_ref\" href=\"#bib.bib10\" title=\"\">10</a><span class=\"ltx_text\" id=\"S3.T1.5.11.10.1.3.2\" 
style=\"font-size:90%;\">]</span></cite>\n</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.5.11.10.2\" style=\"padding-left:5.7pt;padding-right:5.7pt;\"><span class=\"ltx_text\" id=\"S3.T1.5.11.10.2.1\" style=\"font-size:90%;\">0.483</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.5.11.10.3\" style=\"padding-left:5.7pt;padding-right:5.7pt;\"><span class=\"ltx_text\" id=\"S3.T1.5.11.10.3.1\" style=\"font-size:90%;\">0.210</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.5.11.10.4\" style=\"padding-left:5.7pt;padding-right:5.7pt;\"><span class=\"ltx_text\" id=\"S3.T1.5.11.10.4.1\" style=\"font-size:90%;\">0.114</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.5.11.10.5\" style=\"padding-left:5.7pt;padding-right:5.7pt;\"><span class=\"ltx_text\" id=\"S3.T1.5.11.10.5.1\" style=\"font-size:90%;\">0.233</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.5.11.10.6\" style=\"padding-left:5.7pt;padding-right:5.7pt;\"><span class=\"ltx_text\" id=\"S3.T1.5.11.10.6.1\" style=\"font-size:90%;\">0.277</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.5.11.10.7\" style=\"padding-left:5.7pt;padding-right:5.7pt;\"><span class=\"ltx_text\" id=\"S3.T1.5.11.10.7.1\" style=\"font-size:90%;\">0.090</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.5.11.10.8\" style=\"padding-left:5.7pt;padding-right:5.7pt;\"><span class=\"ltx_text\" id=\"S3.T1.5.11.10.8.1\" style=\"font-size:90%;\">0.231</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.5.11.10.9\" style=\"padding-left:5.7pt;padding-right:5.7pt;\"><span class=\"ltx_text\" id=\"S3.T1.5.11.10.9.1\" style=\"font-size:90%;\">0.113</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.5.11.10.10\" style=\"padding-left:5.7pt;padding-right:5.7pt;\"><span class=\"ltx_text\" id=\"S3.T1.5.11.10.10.1\" style=\"font-size:90%;\">0.219</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T1.5.12.11\">\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.5.12.11.1\" style=\"padding-left:5.7pt;padding-right:5.7pt;\">\n<span class=\"ltx_text\" id=\"S3.T1.5.12.11.1.1\" style=\"font-size:90%;\">CAT-Net\u00a0</span><cite class=\"ltx_cite ltx_citemacro_cite\"><span class=\"ltx_text\" id=\"S3.T1.5.12.11.1.2.1\" style=\"font-size:90%;\">[</span><a class=\"ltx_ref\" href=\"#bib.bib9\" title=\"\">9</a><span class=\"ltx_text\" id=\"S3.T1.5.12.11.1.3.2\" style=\"font-size:90%;\">]</span></cite>\n</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.5.12.11.2\" style=\"padding-left:5.7pt;padding-right:5.7pt;\"><span class=\"ltx_text\" id=\"S3.T1.5.12.11.2.1\" style=\"font-size:90%;\">0.237</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.5.12.11.3\" style=\"padding-left:5.7pt;padding-right:5.7pt;\"><span class=\"ltx_text\" id=\"S3.T1.5.12.11.3.1\" style=\"font-size:90%;\">0.102</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.5.12.11.4\" style=\"padding-left:5.7pt;padding-right:5.7pt;\"><span class=\"ltx_text\" id=\"S3.T1.5.12.11.4.1\" style=\"font-size:90%;\">0.210</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.5.12.11.5\" style=\"padding-left:5.7pt;padding-right:5.7pt;\"><span class=\"ltx_text\" id=\"S3.T1.5.12.11.5.1\" style=\"font-size:90%;\">0.257</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.5.12.11.6\" style=\"padding-left:5.7pt;padding-right:5.7pt;\"><span class=\"ltx_text\" id=\"S3.T1.5.12.11.6.1\" style=\"font-size:90%;\">0.206</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.5.12.11.7\" 
style=\"padding-left:5.7pt;padding-right:5.7pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T1.5.12.11.7.1\" style=\"font-size:90%;\">0.206</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.5.12.11.8\" style=\"padding-left:5.7pt;padding-right:5.7pt;\"><span class=\"ltx_text\" id=\"S3.T1.5.12.11.8.1\" style=\"font-size:90%;\">0.217</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.5.12.11.9\" style=\"padding-left:5.7pt;padding-right:5.7pt;\"><span class=\"ltx_text\" id=\"S3.T1.5.12.11.9.1\" style=\"font-size:90%;\">0.085</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.5.12.11.10\" style=\"padding-left:5.7pt;padding-right:5.7pt;\"><span class=\"ltx_text\" id=\"S3.T1.5.12.11.10.1\" style=\"font-size:90%;\">0.190</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T1.5.13.12\">\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.5.13.12.1\" style=\"padding-left:5.7pt;padding-right:5.7pt;\">\n<span class=\"ltx_text\" id=\"S3.T1.5.13.12.1.1\" style=\"font-size:90%;\">MVSS-Net++\u00a0</span><cite class=\"ltx_cite ltx_citemacro_cite\"><span class=\"ltx_text\" id=\"S3.T1.5.13.12.1.2.1\" style=\"font-size:90%;\">[</span><a class=\"ltx_ref\" href=\"#bib.bib8\" title=\"\">8</a><span class=\"ltx_text\" id=\"S3.T1.5.13.12.1.3.2\" style=\"font-size:90%;\">]</span></cite>\n</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.5.13.12.2\" style=\"padding-left:5.7pt;padding-right:5.7pt;\"><span class=\"ltx_text\" id=\"S3.T1.5.13.12.2.1\" style=\"font-size:90%;\">0.513</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.5.13.12.3\" style=\"padding-left:5.7pt;padding-right:5.7pt;\"><span class=\"ltx_text\" id=\"S3.T1.5.13.12.3.1\" style=\"font-size:90%;\">0.304</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.5.13.12.4\" style=\"padding-left:5.7pt;padding-right:5.7pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T1.5.13.12.4.1\" style=\"font-size:90%;\">0.482</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.5.13.12.5\" style=\"padding-left:5.7pt;padding-right:5.7pt;\"><span class=\"ltx_text\" id=\"S3.T1.5.13.12.5.1\" style=\"font-size:90%;\">0.270</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.5.13.12.6\" style=\"padding-left:5.7pt;padding-right:5.7pt;\"><span class=\"ltx_text\" id=\"S3.T1.5.13.12.6.1\" style=\"font-size:90%;\">0.660</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.5.13.12.7\" style=\"padding-left:5.7pt;padding-right:5.7pt;\"><span class=\"ltx_text\" id=\"S3.T1.5.13.12.7.1\" style=\"font-size:90%;\">0.095</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.5.13.12.8\" style=\"padding-left:5.7pt;padding-right:5.7pt;\"><span class=\"ltx_text\" id=\"S3.T1.5.13.12.8.1\" style=\"font-size:90%;\">0.295</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.5.13.12.9\" style=\"padding-left:5.7pt;padding-right:5.7pt;\"><span class=\"ltx_text\" id=\"S3.T1.5.13.12.9.1\" style=\"font-size:90%;\">0.102</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.5.13.12.10\" style=\"padding-left:5.7pt;padding-right:5.7pt;\"><span class=\"ltx_text\" id=\"S3.T1.5.13.12.10.1\" style=\"font-size:90%;\">0.340</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T1.5.14.13\">\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.5.14.13.1\" style=\"padding-left:5.7pt;padding-right:5.7pt;\">\n<span class=\"ltx_text\" id=\"S3.T1.5.14.13.1.1\" style=\"font-size:90%;\">PIM\u00a0</span><cite class=\"ltx_cite ltx_citemacro_cite\"><span class=\"ltx_text\" id=\"S3.T1.5.14.13.1.2.1\" 
style=\"font-size:90%;\">[</span><a class=\"ltx_ref\" href=\"#bib.bib13\" title=\"\">13</a><span class=\"ltx_text\" id=\"S3.T1.5.14.13.1.3.2\" style=\"font-size:90%;\">]</span></cite>\n</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.5.14.13.2\" style=\"padding-left:5.7pt;padding-right:5.7pt;\"><span class=\"ltx_text ltx_framed_underline\" id=\"S3.T1.5.14.13.2.1\" style=\"font-size:90%;\">0.566</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.5.14.13.3\" style=\"padding-left:5.7pt;padding-right:5.7pt;\"><span class=\"ltx_text\" id=\"S3.T1.5.14.13.3.1\" style=\"font-size:90%;\">0.280</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.5.14.13.4\" style=\"padding-left:5.7pt;padding-right:5.7pt;\"><span class=\"ltx_text\" id=\"S3.T1.5.14.13.4.1\" style=\"font-size:90%;\">0.251</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.5.14.13.5\" style=\"padding-left:5.7pt;padding-right:5.7pt;\"><span class=\"ltx_text ltx_framed_underline\" id=\"S3.T1.5.14.13.5.1\" style=\"font-size:90%;\">0.419</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.5.14.13.6\" style=\"padding-left:5.7pt;padding-right:5.7pt;\"><span class=\"ltx_text ltx_framed_underline\" id=\"S3.T1.5.14.13.6.1\" style=\"font-size:90%;\">0.680</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.5.14.13.7\" style=\"padding-left:5.7pt;padding-right:5.7pt;\"><span class=\"ltx_text\" id=\"S3.T1.5.14.13.7.1\" style=\"font-size:90%;\">0.167</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.5.14.13.8\" style=\"padding-left:5.7pt;padding-right:5.7pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T1.5.14.13.8.1\" style=\"font-size:90%;\">0.418</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.5.14.13.9\" style=\"padding-left:5.7pt;padding-right:5.7pt;\"><span class=\"ltx_text ltx_framed_underline\" id=\"S3.T1.5.14.13.9.1\" style=\"font-size:90%;\">0.234</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.5.14.13.10\" style=\"padding-left:5.7pt;padding-right:5.7pt;\"><span class=\"ltx_text ltx_framed_underline\" id=\"S3.T1.5.14.13.10.1\" style=\"font-size:90%;\">0.377</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T1.5.15.14\">\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_t\" id=\"S3.T1.5.15.14.1\" style=\"padding-left:5.7pt;padding-right:5.7pt;\"><span class=\"ltx_text\" id=\"S3.T1.5.15.14.1.1\" style=\"font-size:90%;\">Ours</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_t\" id=\"S3.T1.5.15.14.2\" style=\"padding-left:5.7pt;padding-right:5.7pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T1.5.15.14.2.1\" style=\"font-size:90%;\">0.581</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_t\" id=\"S3.T1.5.15.14.3\" style=\"padding-left:5.7pt;padding-right:5.7pt;\"><span class=\"ltx_text ltx_framed_underline\" id=\"S3.T1.5.15.14.3.1\" style=\"font-size:90%;\">0.343</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_t\" id=\"S3.T1.5.15.14.4\" style=\"padding-left:5.7pt;padding-right:5.7pt;\"><span class=\"ltx_text ltx_framed_underline\" id=\"S3.T1.5.15.14.4.1\" style=\"font-size:90%;\">0.414</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_t\" id=\"S3.T1.5.15.14.5\" style=\"padding-left:5.7pt;padding-right:5.7pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T1.5.15.14.5.1\" style=\"font-size:90%;\">0.423</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_t\" id=\"S3.T1.5.15.14.6\" 
style=\"padding-left:5.7pt;padding-right:5.7pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T1.5.15.14.6.1\" style=\"font-size:90%;\">0.801</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_t\" id=\"S3.T1.5.15.14.7\" style=\"padding-left:5.7pt;padding-right:5.7pt;\"><span class=\"ltx_text ltx_framed_underline\" id=\"S3.T1.5.15.14.7.1\" style=\"font-size:90%;\">0.194</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_t\" id=\"S3.T1.5.15.14.8\" style=\"padding-left:5.7pt;padding-right:5.7pt;\"><span class=\"ltx_text ltx_framed_underline\" id=\"S3.T1.5.15.14.8.1\" style=\"font-size:90%;\">0.414</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_t\" id=\"S3.T1.5.15.14.9\" style=\"padding-left:5.7pt;padding-right:5.7pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T1.5.15.14.9.1\" style=\"font-size:90%;\">0.266</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_t\" id=\"S3.T1.5.15.14.10\" style=\"padding-left:5.7pt;padding-right:5.7pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T1.5.15.14.10.1\" style=\"font-size:90%;\">0.430</span></td>\n</tr>\n</tbody>\n</table>\n</figure>",
88
+ "capture": "Table 1: The comparison of image manipulation localization performance (F1 score with fixed threshold: 0.5). The best performance in each column are bolded and the second best underlined."
89
+ },
90
+ "2": {
91
+ "table_html": "<figure class=\"ltx_table\" id=\"S4.T2\">\n<figcaption class=\"ltx_caption ltx_centering\" style=\"font-size:90%;\"><span class=\"ltx_tag ltx_tag_table\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T2.4.1.1\">Table 2</span>: </span>Image Manipulation Localization Performance(F1 score with fixed threshold: 0.5)</figcaption>\n<table class=\"ltx_tabular ltx_centering ltx_guessed_headers ltx_align_middle\" id=\"S4.T2.5\">\n<thead class=\"ltx_thead\">\n<tr class=\"ltx_tr\" id=\"S4.T2.5.1.1\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_th_row ltx_border_t\" id=\"S4.T2.5.1.1.1\" style=\"padding-left:5.7pt;padding-right:5.7pt;\"><span class=\"ltx_text\" id=\"S4.T2.5.1.1.1.1\" style=\"font-size:90%;\">Setting</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"S4.T2.5.1.1.2\" style=\"padding-left:5.7pt;padding-right:5.7pt;\"><span class=\"ltx_text\" id=\"S4.T2.5.1.1.2.1\" style=\"font-size:90%;\">Sem</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"S4.T2.5.1.1.3\" style=\"padding-left:5.7pt;padding-right:5.7pt;\"><span class=\"ltx_text\" id=\"S4.T2.5.1.1.3.1\" style=\"font-size:90%;\">HP</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"S4.T2.5.1.1.4\" style=\"padding-left:5.7pt;padding-right:5.7pt;\"><span class=\"ltx_text\" id=\"S4.T2.5.1.1.4.1\" style=\"font-size:90%;\">F.Align</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"S4.T2.5.1.1.5\" style=\"padding-left:5.7pt;padding-right:5.7pt;\"><span class=\"ltx_text\" id=\"S4.T2.5.1.1.5.1\" style=\"font-size:90%;\">F.Fuse</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"S4.T2.5.1.1.6\" style=\"padding-left:5.7pt;padding-right:5.7pt;\"><span class=\"ltx_text\" id=\"S4.T2.5.1.1.6.1\" style=\"font-size:90%;\">F1-score</span></th>\n</tr>\n</thead>\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S4.T2.5.2.1\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_t\" id=\"S4.T2.5.2.1.1\" style=\"padding-left:5.7pt;padding-right:5.7pt;\"><span class=\"ltx_text\" id=\"S4.T2.5.2.1.1.1\" style=\"font-size:90%;\">1</span></th>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T2.5.2.1.2\" style=\"padding-left:5.7pt;padding-right:5.7pt;\"><span class=\"ltx_text\" id=\"S4.T2.5.2.1.2.1\" style=\"font-size:90%;\">\u2713</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T2.5.2.1.3\" style=\"padding-left:5.7pt;padding-right:5.7pt;\"><span class=\"ltx_text\" id=\"S4.T2.5.2.1.3.1\" style=\"font-size:90%;\">-</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T2.5.2.1.4\" style=\"padding-left:5.7pt;padding-right:5.7pt;\"><span class=\"ltx_text\" id=\"S4.T2.5.2.1.4.1\" style=\"font-size:90%;\">-</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T2.5.2.1.5\" style=\"padding-left:5.7pt;padding-right:5.7pt;\"><span class=\"ltx_text\" id=\"S4.T2.5.2.1.5.1\" style=\"font-size:90%;\">-</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T2.5.2.1.6\" style=\"padding-left:5.7pt;padding-right:5.7pt;\"><span class=\"ltx_text\" id=\"S4.T2.5.2.1.6.1\" style=\"font-size:90%;\">0.481</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T2.5.3.2\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row\" id=\"S4.T2.5.3.2.1\" style=\"padding-left:5.7pt;padding-right:5.7pt;\"><span class=\"ltx_text\" id=\"S4.T2.5.3.2.1.1\" 
style=\"font-size:90%;\">2</span></th>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.5.3.2.2\" style=\"padding-left:5.7pt;padding-right:5.7pt;\"><span class=\"ltx_text\" id=\"S4.T2.5.3.2.2.1\" style=\"font-size:90%;\">-</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.5.3.2.3\" style=\"padding-left:5.7pt;padding-right:5.7pt;\"><span class=\"ltx_text\" id=\"S4.T2.5.3.2.3.1\" style=\"font-size:90%;\">\u2713</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.5.3.2.4\" style=\"padding-left:5.7pt;padding-right:5.7pt;\"><span class=\"ltx_text\" id=\"S4.T2.5.3.2.4.1\" style=\"font-size:90%;\">-</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.5.3.2.5\" style=\"padding-left:5.7pt;padding-right:5.7pt;\"><span class=\"ltx_text\" id=\"S4.T2.5.3.2.5.1\" style=\"font-size:90%;\">-</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.5.3.2.6\" style=\"padding-left:5.7pt;padding-right:5.7pt;\"><span class=\"ltx_text\" id=\"S4.T2.5.3.2.6.1\" style=\"font-size:90%;\">0.392</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T2.5.4.3\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row\" id=\"S4.T2.5.4.3.1\" style=\"padding-left:5.7pt;padding-right:5.7pt;\"><span class=\"ltx_text\" id=\"S4.T2.5.4.3.1.1\" style=\"font-size:90%;\">3</span></th>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.5.4.3.2\" style=\"padding-left:5.7pt;padding-right:5.7pt;\"><span class=\"ltx_text\" id=\"S4.T2.5.4.3.2.1\" style=\"font-size:90%;\">\u2713</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.5.4.3.3\" style=\"padding-left:5.7pt;padding-right:5.7pt;\"><span class=\"ltx_text\" id=\"S4.T2.5.4.3.3.1\" style=\"font-size:90%;\">\u2713</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.5.4.3.4\" style=\"padding-left:5.7pt;padding-right:5.7pt;\"><span class=\"ltx_text\" id=\"S4.T2.5.4.3.4.1\" style=\"font-size:90%;\">-</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.5.4.3.5\" style=\"padding-left:5.7pt;padding-right:5.7pt;\"><span class=\"ltx_text\" id=\"S4.T2.5.4.3.5.1\" style=\"font-size:90%;\">-</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.5.4.3.6\" style=\"padding-left:5.7pt;padding-right:5.7pt;\"><span class=\"ltx_text\" id=\"S4.T2.5.4.3.6.1\" style=\"font-size:90%;\">0.505</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T2.5.5.4\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row\" id=\"S4.T2.5.5.4.1\" style=\"padding-left:5.7pt;padding-right:5.7pt;\"><span class=\"ltx_text\" id=\"S4.T2.5.5.4.1.1\" style=\"font-size:90%;\">4</span></th>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.5.5.4.2\" style=\"padding-left:5.7pt;padding-right:5.7pt;\"><span class=\"ltx_text\" id=\"S4.T2.5.5.4.2.1\" style=\"font-size:90%;\">\u2713</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.5.5.4.3\" style=\"padding-left:5.7pt;padding-right:5.7pt;\"><span class=\"ltx_text\" id=\"S4.T2.5.5.4.3.1\" style=\"font-size:90%;\">\u2713</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.5.5.4.4\" style=\"padding-left:5.7pt;padding-right:5.7pt;\"><span class=\"ltx_text\" id=\"S4.T2.5.5.4.4.1\" style=\"font-size:90%;\">\u2713</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.5.5.4.5\" style=\"padding-left:5.7pt;padding-right:5.7pt;\"><span class=\"ltx_text\" id=\"S4.T2.5.5.4.5.1\" style=\"font-size:90%;\">-</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.5.5.4.6\" style=\"padding-left:5.7pt;padding-right:5.7pt;\"><span class=\"ltx_text\" id=\"S4.T2.5.5.4.6.1\" 
style=\"font-size:90%;\">0.555</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T2.5.6.5\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row\" id=\"S4.T2.5.6.5.1\" style=\"padding-left:5.7pt;padding-right:5.7pt;\"><span class=\"ltx_text\" id=\"S4.T2.5.6.5.1.1\" style=\"font-size:90%;\">5</span></th>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.5.6.5.2\" style=\"padding-left:5.7pt;padding-right:5.7pt;\"><span class=\"ltx_text\" id=\"S4.T2.5.6.5.2.1\" style=\"font-size:90%;\">\u2713</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.5.6.5.3\" style=\"padding-left:5.7pt;padding-right:5.7pt;\"><span class=\"ltx_text\" id=\"S4.T2.5.6.5.3.1\" style=\"font-size:90%;\">\u2713</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.5.6.5.4\" style=\"padding-left:5.7pt;padding-right:5.7pt;\"><span class=\"ltx_text\" id=\"S4.T2.5.6.5.4.1\" style=\"font-size:90%;\">-</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.5.6.5.5\" style=\"padding-left:5.7pt;padding-right:5.7pt;\"><span class=\"ltx_text\" id=\"S4.T2.5.6.5.5.1\" style=\"font-size:90%;\">\u2713</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.5.6.5.6\" style=\"padding-left:5.7pt;padding-right:5.7pt;\"><span class=\"ltx_text\" id=\"S4.T2.5.6.5.6.1\" style=\"font-size:90%;\">0.517</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T2.5.7.6\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_b\" id=\"S4.T2.5.7.6.1\" style=\"padding-left:5.7pt;padding-right:5.7pt;\"><span class=\"ltx_text\" id=\"S4.T2.5.7.6.1.1\" style=\"font-size:90%;\">6</span></th>\n<td class=\"ltx_td ltx_align_center ltx_border_b\" id=\"S4.T2.5.7.6.2\" style=\"padding-left:5.7pt;padding-right:5.7pt;\"><span class=\"ltx_text\" id=\"S4.T2.5.7.6.2.1\" style=\"font-size:90%;\">\u2713</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_b\" id=\"S4.T2.5.7.6.3\" style=\"padding-left:5.7pt;padding-right:5.7pt;\"><span class=\"ltx_text\" id=\"S4.T2.5.7.6.3.1\" style=\"font-size:90%;\">\u2713</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_b\" id=\"S4.T2.5.7.6.4\" style=\"padding-left:5.7pt;padding-right:5.7pt;\"><span class=\"ltx_text\" id=\"S4.T2.5.7.6.4.1\" style=\"font-size:90%;\">\u2713</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_b\" id=\"S4.T2.5.7.6.5\" style=\"padding-left:5.7pt;padding-right:5.7pt;\"><span class=\"ltx_text\" id=\"S4.T2.5.7.6.5.1\" style=\"font-size:90%;\">\u2713</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_b\" id=\"S4.T2.5.7.6.6\" style=\"padding-left:5.7pt;padding-right:5.7pt;\"><span class=\"ltx_text\" id=\"S4.T2.5.7.6.6.1\" style=\"font-size:90%;\">0.581</span></td>\n</tr>\n</tbody>\n</table>\n</figure>",
92
+ "capture": "Table 2: Image Manipulation Localization Performance(F1 score with fixed threshold: 0.5)"
93
+ }
94
+ },
95
+ "image_paths": {
96
+ "1": {
97
+ "figure_path": "2401.00653v1_figure_1.png",
98
+ "caption": "Fig. 1: Prompt-IML incorporates semantic information with high-frequency information to improve the performance. Semantic information may notice specific objects (circled red region) to assist the manipulation localization.",
99
+ "url": "http://arxiv.org/html/2401.00653v1/x1.png"
100
+ },
101
+ "2": {
102
+ "figure_path": "2401.00653v1_figure_2.png",
103
+ "caption": "Fig. 2: Architecture overview. The Channel, Spatial, Deformable represents the procedure of Eq. 4, Eq. 5, and Eq. 7.",
104
+ "url": "http://arxiv.org/html/2401.00653v1/x2.png"
105
+ },
106
+ "3": {
107
+ "figure_path": "2401.00653v1_figure_3.png",
108
+ "caption": "Fig. 3: Manipulation localization results on images originating from multiple datasets. The 3-rd column represents the results of our method, while columns 4 to 10 depict the results of another six SOTA methods.",
109
+ "url": "http://arxiv.org/html/2401.00653v1/x3.png"
110
+ },
111
+ "4": {
112
+ "figure_path": "2401.00653v1_figure_4.png",
113
+ "caption": "Fig. 4: Robustness evaluation against 6 different perturbations. Test dataset is CASIA1, and AUC is the evaluation metric. The x-axis symbolizes the perturbation severity level from 0 to 9 with 0 being no perturbation.",
114
+ "url": "http://arxiv.org/html/2401.00653v1/x4.png"
115
+ }
116
+ },
117
+ "validation": true,
118
+ "references": [
119
+ {
120
+ "1": {
121
+ "title": "\u201cImage splicing localization using a multi-task fully convolutional network (mfcn),\u201d",
122
+ "author": "Ronald Salloum, Yuzhuo Ren, and C-C Jay Kuo,",
123
+ "venue": "Journal of Visual Communication and Image Representation, vol. 51, pp. 201\u2013209, 2018.",
124
+ "url": null
125
+ }
126
+ },
127
+ {
128
+ "2": {
129
+ "title": "\u201cRru-net: The ringed residual u-net for image splicing forgery detection,\u201d",
130
+ "author": "Xiuli Bi, Yang Wei, Bin Xiao, and Weisheng Li,",
131
+ "venue": "in Proceedings of the IEEE/CVF Conference on CVPR Workshops, 2019, pp. 0\u20130.",
132
+ "url": null
133
+ }
134
+ },
135
+ {
136
+ "3": {
137
+ "title": "\u201cManTra-Net: Manipulation tracing network for detection and localization of image forgeries with anomalous features,\u201d",
138
+ "author": "Yue Wu, Wael AbdAlmageed, and Premkumar Natarajan,",
139
+ "venue": "in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2019, pp. 9543\u20139552.",
140
+ "url": null
141
+ }
142
+ },
143
+ {
144
+ "4": {
145
+ "title": "\u201cLocalization of deep inpainting using high-pass fully convolutional network,\u201d",
146
+ "author": "Haodong Li and Jiwu Huang,",
147
+ "venue": "in proceedings of the IEEE/CVF international conference on computer vision, 2019, pp. 8301\u20138310.",
148
+ "url": null
149
+ }
150
+ },
151
+ {
152
+ "5": {
153
+ "title": "\u201cHybrid lstm and encoder\u2013decoder architecture for detection of image forgeries,\u201d",
154
+ "author": "Jawadul H Bappy, Cody Simons, Lakshmanan Nataraj, BS Manjunath, and Amit K Roy-Chowdhury,",
155
+ "venue": "IEEE Transactions on Image Processing, vol. 28, no. 7, pp. 3286\u20133300, 2019.",
156
+ "url": null
157
+ }
158
+ },
159
+ {
160
+ "6": {
161
+ "title": "\u201cSpan: Spatial pyramid attention network for image manipulation localization,\u201d",
162
+ "author": "Xuefeng Hu, Zhihan Zhang, Zhenye Jiang, Syomantak Chaudhuri, Zhenheng Yang, and Ram Nevatia,",
163
+ "venue": "in Computer Vision\u2013ECCV 2020: 16th European Conference, Glasgow, UK, August 23\u201328, 2020, Proceedings, Part XXI 16. Springer, 2020, pp. 312\u2013328.",
164
+ "url": null
165
+ }
166
+ },
167
+ {
168
+ "7": {
169
+ "title": "\u201cPscc-net: Progressive spatio-channel correlation network for image manipulation detection and localization,\u201d",
170
+ "author": "Xiaohong Liu, Yaojie Liu, Jun Chen, and Xiaoming Liu,",
171
+ "venue": "IEEE Transactions on Circuits and Systems for Video Technology, vol. 32, pp. 7505\u20137517, 2022.",
172
+ "url": null
173
+ }
174
+ },
175
+ {
176
+ "8": {
177
+ "title": "\u201cMvss-net: Multi-view multi-scale supervised networks for image manipulation detection,\u201d",
178
+ "author": "Chengbo Dong, Xinru Chen, Ruohan Hu, Juan Cao, and Xirong Li,",
179
+ "venue": "IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 45, no. 3, pp. 3539\u20133553, 2022.",
180
+ "url": null
181
+ }
182
+ },
183
+ {
184
+ "9": {
185
+ "title": "\u201cLearning jpeg compression artifacts for image manipulation detection and localization,\u201d",
186
+ "author": "Myung-Joon Kwon, Seung-Hun Nam, In-Jae Yu, Heung-Kyu Lee, and Changick Kim,",
187
+ "venue": "International Journal of Computer Vision, vol. 130, no. 8, pp. 1875\u20131895, 2022.",
188
+ "url": null
189
+ }
190
+ },
191
+ {
192
+ "10": {
193
+ "title": "\u201cExplicit visual prompting for low-level structure segmentations,\u201d",
194
+ "author": "Weihuang Liu, Xi Shen, Chi-Man Pun, and Xiaodong Cun,",
195
+ "venue": "in Proceedings of the IEEE/CVF Conference on CVPR, 2023, pp. 19434\u201319445.",
196
+ "url": null
197
+ }
198
+ },
199
+ {
200
+ "11": {
201
+ "title": "\u201cFully convolutional networks for semantic segmentation,\u201d",
202
+ "author": "Jonathan Long, Evan Shelhamer, and Trevor Darrell,",
203
+ "venue": "in Proceedings of the IEEE conference on CVPR, 2015, pp. 3431\u20133440.",
204
+ "url": null
205
+ }
206
+ },
207
+ {
208
+ "12": {
209
+ "title": "\u201cDeeplab: Semantic image segmentation with deep convolutional nets, atrous convolution, and fully connected crfs,\u201d",
210
+ "author": "Liang-Chieh Chen, George Papandreou, Iasonas Kokkinos, Kevin Murphy, and Alan L Yuille,",
211
+ "venue": "IEEE transactions on pattern analysis and machine intelligence, vol. 40, no. 4, pp. 834\u2013848, 2017.",
212
+ "url": null
213
+ }
214
+ },
215
+ {
216
+ "13": {
217
+ "title": "\u201cPixel-inconsistency modeling for image manipulation localization,\u201d",
218
+ "author": "Chenqi Kong, Anwei Luo, Shiqi Wang, Haoliang Li, Anderson Rocha, and Alex C Kot,",
219
+ "venue": "arXiv preprint arXiv:2310.00234, 2023.",
220
+ "url": null
221
+ }
222
+ },
223
+ {
224
+ "14": {
225
+ "title": "\u201cCasia image tampering detection evaluation database,\u201d",
226
+ "author": "Jing Dong, Wei Wang, and Tieniu Tan,",
227
+ "venue": "in 2013 IEEE China summit and international conference on signal and information processing. IEEE, 2013, pp. 422\u2013426.",
228
+ "url": null
229
+ }
230
+ },
231
+ {
232
+ "15": {
233
+ "title": "\u201cCasia image tampering detection evaluation database,\u201d",
234
+ "author": "Jing Dong, Wei Wang, and Tieniu Tan,",
235
+ "venue": "in 2013 IEEE China summit and international conference on signal and information processing. IEEE, 2013, pp. 422\u2013426.",
236
+ "url": null
237
+ }
238
+ },
239
+ {
240
+ "16": {
241
+ "title": "\u201cColumbia image splicing detection evaluation dataset,\u201d",
242
+ "author": "Tian-Tsong Ng, Jessie Hsu, and Shih-Fu Chang,",
243
+ "venue": "DVMM lab. Columbia Univ CalPhotos Digit Libr, 2009.",
244
+ "url": null
245
+ }
246
+ },
247
+ {
248
+ "17": {
249
+ "title": "\u201cCoverage\u2014a novel database for copy-move forgery detection,\u201d",
250
+ "author": "Bihan Wen, Ye Zhu, Ramanathan Subramanian, Tian-Tsong Ng, Xuanjing Shen, and Stefan Winkler,",
251
+ "venue": "in 2016 IEEE international conference on image processing (ICIP). IEEE, 2016, pp. 161\u2013165.",
252
+ "url": null
253
+ }
254
+ },
255
+ {
256
+ "18": {
257
+ "title": "\u201cMfc datasets: Large-scale benchmark datasets for media forensic challenge evaluation,\u201d",
258
+ "author": "Haiying Guan, Mark Kozak, Eric Robertson, Yooyoung Lee, Amy N Yates, Andrew Delgado, Daniel Zhou, Timothee Kheyrkhah, Jeff Smith, and Jonathan Fiscus,",
259
+ "venue": "in 2019 IEEE Winter Applications of Computer Vision Workshops. IEEE, 2019, pp. 63\u201372.",
260
+ "url": null
261
+ }
262
+ },
263
+ {
264
+ "19": {
265
+ "title": "\u201cEvaluation of random field models in multi-modal unsupervised tampering localization,\u201d",
266
+ "author": "Pawe\u0142 Korus and Jiwu Huang,",
267
+ "venue": "in 2016 IEEE international workshop on information forensics and security (WIFS). IEEE, 2016, pp. 1\u20136.",
268
+ "url": null
269
+ }
270
+ },
271
+ {
272
+ "20": {
273
+ "title": "\u201cFighting fake news: Image splice detection via learned self-consistency,\u201d",
274
+ "author": "Minyoung Huh, Andrew Liu, Andrew Owens, and Alexei A Efros,",
275
+ "venue": "in Proceedings of the European conference on computer vision (ECCV), 2018, pp. 101\u2013117.",
276
+ "url": null
277
+ }
278
+ },
279
+ {
280
+ "21": {
281
+ "title": "\u201cDefacto: Image and face manipulation dataset,\u201d",
282
+ "author": "Ga\u00ebl Mahfoudi, Badr Tajini, Florent Retraint, Frederic Morain-Nicolier, Jean Luc Dugelay, and PIC Marc,",
283
+ "venue": "in 2019 27Th european signal processing conference (EUSIPCO). IEEE, 2019, pp. 1\u20135.",
284
+ "url": null
285
+ }
286
+ },
287
+ {
288
+ "22": {
289
+ "title": "\u201cImd2020: A large-scale annotated dataset tailored for detecting manipulated images,\u201d",
290
+ "author": "Adam Novozamsky, Babak Mahdian, and Stanislav Saic,",
291
+ "venue": "in Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision Workshops, 2020, pp. 71\u201380.",
292
+ "url": null
293
+ }
294
+ },
295
+ {
296
+ "23": {
297
+ "title": "\u201cRobust image forgery detection over online social network shared images,\u201d",
298
+ "author": "Haiwei Wu, Jiantao Zhou, Jinyu Tian, and Jun Liu,",
299
+ "venue": "in Proceedings of the IEEE/CVF Conference on CVPR, 2022, pp. 13440\u201313449.",
300
+ "url": null
301
+ }
302
+ },
303
+ {
304
+ "24": {
305
+ "title": "\u201cMasked-attention mask transformer for universal image segmentation,\u201d",
306
+ "author": "Bowen Cheng, Ishan Misra, Alexander G Schwing, Alexander Kirillov, and Rohit Girdhar,",
307
+ "venue": "in Proceedings of the IEEE/CVF conference on CVPR, 2022, pp. 1290\u20131299.",
308
+ "url": null
309
+ }
310
+ },
311
+ {
312
+ "25": {
313
+ "title": "\u201cMasked autoencoders are scalable vision learners,\u201d",
314
+ "author": "Kaiming He, Xinlei Chen, Saining Xie, Yanghao Li, Piotr Doll\u00e1r, and Ross Girshick,",
315
+ "venue": "in Proceedings of the IEEE/CVF conference on CVPR, 2022, pp. 16000\u201316009.",
316
+ "url": null
317
+ }
318
+ },
319
+ {
320
+ "26": {
321
+ "title": "\u201cCmx: Cross-modal fusion for rgb-x semantic segmentation with transformers,\u201d",
322
+ "author": "Jiaming Zhang, Huayao Liu, Kailun Yang, Xinxin Hu, Ruiping Liu, and Rainer Stiefelhagen,",
323
+ "venue": "IEEE Transactions on Intelligent Transportation Systems, 2023.",
324
+ "url": null
325
+ }
326
+ },
327
+ {
328
+ "27": {
329
+ "title": "\u201cVisual prompt tuning,\u201d",
330
+ "author": "Menglin Jia, Luming Tang, Bor-Chun Chen, Claire Cardie, Serge Belongie, Bharath Hariharan, and Ser-Nam Lim,",
331
+ "venue": "in European Conference on Computer Vision, 2022, pp. 709\u2013727.",
332
+ "url": null
333
+ }
334
+ },
335
+ {
336
+ "28": {
337
+ "title": "\u201cSwin transformer: Hierarchical vision transformer using shifted windows,\u201d",
338
+ "author": "Ze Liu, Yutong Lin, Yue Cao, Han Hu, Yixuan Wei, Zheng Zhang, Stephen Lin, and Baining Guo,",
339
+ "venue": "in Proceedings of the IEEE/CVF international conference on computer vision, 2021, pp. 10012\u201310022.",
340
+ "url": null
341
+ }
342
+ },
343
+ {
344
+ "29": {
345
+ "title": "\u201cDeformable detr: Deformable transformers for end-to-end object detection,\u201d",
346
+ "author": "Xizhou Zhu, Weijie Su, Lewei Lu, Bin Li, Xiaogang Wang, and Jifeng Dai,",
347
+ "venue": "arXiv preprint arXiv:2010.04159, 2020.",
348
+ "url": null
349
+ }
350
+ },
351
+ {
352
+ "30": {
353
+ "title": "\u201cLearning to immunize images for tamper localization and self-recovery,\u201d",
354
+ "author": "Qichao Ying, Hang Zhou, Zhenxing Qian, Sheng Li, and Xinpeng Zhang,",
355
+ "venue": "IEEE Transactions on Pattern Analysis and Machine Intelligence, 2023.",
356
+ "url": null
357
+ }
358
+ },
359
+ {
360
+ "31": {
361
+ "title": "\u201cDraw: Defending camera-shooted raw against image manipulation,\u201d",
362
+ "author": "Xiaoxiao Hu, Qichao Ying, Zhenxing Qian, Sheng Li, and Xinpeng Zhang,",
363
+ "venue": "in Proceedings of the IEEE/CVF International Conference on Computer Vision, 2023, pp. 22434\u201322444.",
364
+ "url": null
365
+ }
366
+ }
367
+ ],
368
+ "url": "http://arxiv.org/html/2401.00653v1"
369
+ }
20240101/2401.00657v1.json ADDED
@@ -0,0 +1,384 @@
1
+ {
2
+ "title": "Optimizing ADMM and Over-Relaxed ADMM Parameters for Linear Quadratic Problems",
3
+ "abstract": "The Alternating Direction Method of Multipliers (ADMM) has gained significant attention across a broad spectrum of machine learning applications. Incorporating the over-relaxation technique shows potential for enhancing the convergence rate of ADMM. However, determining optimal algorithmic parameters, including both the associated penalty and relaxation parameters, often relies on empirical approaches tailored to specific problem domains and contextual scenarios. Incorrect parameter selection can significantly hinder ADMM\u2019s convergence rate. To address this challenge, in this paper we first propose a general approach to optimize the value of penalty parameter, followed by a novel closed-form formula to compute the optimal relaxation parameter in the context of linear quadratic problems (LQPs). We then experimentally validate our parameter selection methods through random instantiations and diverse imaging applications, encompassing diffeomorphic image registration, image deblurring, and MRI reconstruction.",
4
+ "sections": [
5
+ {
6
+ "section_id": "1",
7
+ "parent_section_id": null,
8
+ "section_name": "Introduction",
9
+ "text": "ADMM is a versatile algorithm with applications spanning various domains, including compressed sensing (Hou, Li, and Zhang 2022 ###reference_16###; Liu et al. 2023 ###reference_19###), image processing (Chan, Wang, and Elgendy 2016 ###reference_8###; Yazaki, Tanaka, and Chan 2019 ###reference_31###), and machine learning (Li et al. 2022 ###reference_18###; Zhou and Li 2023 ###reference_33###). Although introduced in the 1970s for optimization, its roots can be traced back to the 1950s as a method to solve elliptic and parabolic partial difference equations (Boyd et al. 2011 ###reference_5###). ADMM leverages the convergence strengths of the method of multipliers and the decomposability property of dual ascent. It is particularly useful in addressing convex optimization of considerable scale, beyond the capacity of conventional solvers. The ongoing research and outstanding algorithmic performance have significantly contributed to its widespread adoption, highlighting the growing importance of exploring its theoretical properties, particularly regarding parameter selection (Ghadimi et al. 2014 ###reference_14###; Wang et al. 2019 ###reference_28###).\nADMM, from a technical viewpoint, decomposes complex optimization problems into manageable sub-problems, often solvable using point-wise, closed-form solvers (Cand\u00e8s et al. 2011 ###reference_6###; Lu et al. 2016 ###reference_20###; Thorley et al. 2021 ###reference_27###; Jia et al. 2021 ###reference_17###; Duan et al. 2023 ###reference_11###). It proceeds by iteratively updating these sub-problems alternately until a solution meeting the original problem\u2019s objectives and constraints is attained. Within ADMM, the augmented Lagrange function incorporates penalty terms associated with the constraints. The penalty parameters determine the strength of these penalty terms. As highlighted in (Deng and Yin 2016 ###reference_10###), the convergence rate of ADMM is directly impacted by these penalty parameters. The optimal selection of such parameters can significantly enhance the algorithm\u2019s convergence rate. However, the lack of a universal method to compute these parameters optimally remains a challenge.\nThe convergence rate of ADMM can be further accelerated by leveraging information from prior iterations during the computation of subsequent iterations. Such a technique is known as over-relaxation and often used in conjunction with ADMM (De Pierro and Iusem 1986 ###reference_9###; Zhang et al. 2020 ###reference_32###). Numerous research endeavors have been devoted to defining appropriate values for the resultant relaxation parameter. Notably, in the study conducted by (Eckstein 1994 ###reference_12###), the authors proposed a widely acknowledged empirical range of values, typically falling within , which however is not always the case according to our findings in this paper. Despite a multitude of papers presenting specific guidelines for selecting this parameter, many real-world application papers (Stellato et al. 2020 ###reference_25###; Duan et al. 2023 ###reference_11###) still resort to empirically determined values. This reliance on empirical choices is due to the absence of a straightforward and efficient method that can promptly and optimally determine this relaxation parameter.\nThe objective of this paper is to introduce novel methods for the selection of optimal parameters within both ADMM and over-relaxed ADMM. 
As an example, we focus on linear quadratic problems (LQPs), particularly with applications tailored to image processing. The theories developed in this paper could offer valuable insights for addressing other non-quadratic problems, such as non-smooth optimization. More specifically, we have identified four key contributions of this paper, summarized as follows:\nWe perform a comprehensive convergence analysis of the ADMM algorithm as applied to LQPs, effectively demonstrating its unconditional convergence within the context of LQPs. This is achieved by initially converting the ADMM iterations into its fixed-point iterations, which facilitates the derivation of the iteration matrix. Subsequently, we theoretically show that the spectral radius of the iteration matrix is bounded by , regardless of the value of the penalty parameter.\nWe propose a general optimization method for the selection of the optimal penalty parameter in ADMM. We achieve this by utilizing numerical gradient descent to minimize the spectral radius of the iteration matrix. Moreover, in specific scenarios like image deblurring and MRI reconstruction, we show the existence of an closed-form solution for accurately determining the optimal penalty parameter within ADMM.\nWe establish, for the first time, the existence of an closed-form solution for determining the relaxation parameter in over-relaxed ADMM. We find that for any arbitrary value of the penalty parameter, there exists a corresponding relaxation parameter, computed from the closed-form solution, that minimizes the spectral radius of the iteration matrix. Consequently, we can transform the original joint optimization problem, with respect to both penalty and relaxation parameters, into a single-variable optimization problem focused only on the penalty parameter.\nWe verify our proposed parameter selection methods through random instantiations and practical real-world imaging applications, encompassing diffeomorphic image registration, image deblurring, MRI reconstruction. This approach sets us apart from previous methods, e.g., (Ghadimi et al. 2014 ###reference_14###), that only depend on simulated data for validation purpose."
10
+ },
11
+ {
12
+ "section_id": "2",
13
+ "parent_section_id": null,
14
+ "section_name": "Related Works",
15
+ "text": "(Boley 2013 ###reference_4###) studied the convergence rate of ADMM for both quadratic and linear programs via the spectral analysis based on a novel matrix recurrence. While acknowledging that the penalty parameters of ADMM can influence its convergence rate, they did not offer guidance on how to select these parameters. To address this issue, (Ghadimi et al. 2014 ###reference_14###) reformulated ADMM into a fixed-point iteration system to analyze the impact of parameters on the convergence rate of ADMM and over-relaxed ADMM. By minimizing the spectral radius of the iteration matrix, they successfully derived optimal penalty and relaxation parameters for quadratic programming. (Teixeira et al. 2015 ###reference_26###) extended the applicability of Ghadimi\u2019s theory by transforming the distributed quadratic programming into an equivalent constrained quadratic programming. (Fran\u00e7a and Bento 2016 ###reference_13###) introduced a method that determines the relaxation parameter for semi-definite programming through the analysis of the problem\u2019s condition number.\n(Boyd et al. 2011 ###reference_5###) suggested an empirical parameter update strategy for ADMM\u2019s penalty parameters. The idea is to maintain a proportional relationship between the norms of primal and dual residuals, ensuring their convergence to zero within a specified factor. (Xu, Figueiredo, and Goldstein 2017 ###reference_30###) proposed an adaptive ADMM approach by applying the Barzilai-Borwein spectral method to the original ADMM algorithm. Their method allows to dynamically update penalty parameters in each iteration based on primal and dual residuals. Inspired by this work, (Mavromatis, Foti, and Vavalis 2020 ###reference_21###) introduced a weighted penalty parameter ADMM algorithm for solving optimal power flow problems. Their approach involves the computation of absolute values from the admittance matrix and the Hessian matrix in each ADMM iteration. These values are then used to recalibrate the penalty parameters, aiming to refine the accuracy of parameter estimation.\nHowever, certain limitations exist in the current research landscape. Firstly, many methods (Boyd et al. 2011 ###reference_5###; Xu, Figueiredo, and Goldstein 2017 ###reference_30###; Wohlberg 2017 ###reference_29###; Mhanna, Verbi\u010d, and Chapman 2018 ###reference_22###) rely on primal and dual residuals for estimating optimal parameters during iterations, but there often lack closed-form or explicit pre-iteration parameter selection approaches. Secondly, existing parameter selection techniques, based on the spectral analysis of the iteration matrix (Ghadimi et al. 2014 ###reference_14###; Fran\u00e7a and Bento 2016 ###reference_13###), predominantly focus on specific problem types (e.g., standard quadratic problem with being an identity matrix). These methods requiring the spectral radius of the iteration matrix to be computable in an explicit form, which restricts their applicability and generalization ability (Stellato et al. 2020 ###reference_25###). In this paper, we will propose effective methods to address these two challenges."
16
+ },
17
+ {
18
+ "section_id": "3",
19
+ "parent_section_id": null,
20
+ "section_name": "Methodology",
21
+ "text": "This section starts with the introduction of essential notations utilized in the subsequent formulations. We proceed by presenting the concept of fixed-point iterations, which serves as a foundational element for both the convergence analysis and parameter selection processes. Following this, we proceed to apply both ADMM and its over-relaxed variant to address LQPs. In the final stages, we propose novel methods for selecting the penalty and relaxation parameters. This is accomplished through the conversion of ADMM and over-relaxed ADMM into the form of fixed-point iterations, followed by the utilization of spectral radius analysis."
22
+ },
23
+ {
24
+ "section_id": "3.1",
25
+ "parent_section_id": "3",
26
+ "section_name": "Notations and Fixed-Point Iterations",
27
+ "text": "Let and denote respectively the set of real and complex numbers, denote the set of positive numbers, denote the set of matrices, and (or ) be the identity matrix. For the square matrix and its corresponding eigenvalues , we define the th smallest eigenvalue of as , and the spectral radius of as .\nFixed-point iterations involve the iterative process below\nwhere is known as the iteration matrix, , and . It was shown in (Ghadimi et al. 2014 ###reference_14###) that the convergence factor of this fixed-point iteration system is equal to . Here, the convergence factor is defined as\nwhere represents the norm, and denotes the optimal solution (i.e., so-called ground truth). The sequence is -sublinear if , -linear if , and -superlinear if . Throughout this paper, the letter has been omitted when referring to the convergence rate. For linearly convergence sequences with , if we define as the smallest iteration count to ensure for all , then can be calculated by , where denotes the worst case distance between and , i.e., . This suggests that by reducing the value of the the convergence factor , the iteration count can be decreased, leading to a faster convergence rate."
28
+ },
29
+ {
30
+ "section_id": "3.2",
31
+ "parent_section_id": "3",
32
+ "section_name": "ADMM for LQPs",
33
+ "text": "The LQPs for image processing we study in this paper have the following structure\nwhere is the regularization parameter; or is an encoding matrix; or is the unknown vector; or is the input vector; and is a regularization matrix. The value of determines the output quality, whereas smaller values of tend to yield smoother results. By differentiating (1 ###reference_###) with respect to and setting the respective derivative to zero, we have the following linear system\nWhen addressing the solution of Equation (2 ###reference_###), two primary challenges arise: Firstly, in certain scenarios like our MRI reconstruction and diffeomorphic image registration, where may be positive semi-definite, the process of inverting such a matrix becomes unfeasible. Secondly, in the context of higher-dimensional cases like 3D medical image registration (Thorley et al. 2021 ###reference_27###), even if the matrix remains positive definite, the process of matrix inversion becomes computationally expensive. To address these two issues, we propose to use ADMM to handle the original problem (1 ###reference_###), as an alternative to using the normal equation to solve (2 ###reference_###).\nTo apply ADMM, we introduce an auxiliary variable , a Lagrangian multiplier , and a penalty parameter , transforming (1 ###reference_###) into the following augmented Lagrange function\nTo optimize (3 ###reference_###) with ADMM, we need to decompose it into two sub-problems with respect to and and then update the Lagrangian multipliers until the process converges. The following Algorithm 1 outlines the optimization process using ADMM.\nIn Algorithm 1, we have and . It is worth noting that while matrix inversion is applied to both variables and , fast solvers exist in specific cases due to the distinctive structure of and . For instance, in diffeomorphic image registration, takes on a rank-1 form, allowing efficient inversion through the Morris-Sherman equation (Bartlett 1951 ###reference_2###; Thorley et al. 2021 ###reference_27###). Similarly, in MRI reconstruction and diffeomorphic image registration, can be effectively diagonalized using the discrete Fourier transformation basis functions (Goldstein and Osher 2009 ###reference_15###; Duan et al. 2023 ###reference_11###). Consequently, the application of ADMM to solve LQPs offers distinct advantages.\nIn order to determine the optimal penalty parameter in ADMM automatically, we need to transform the ADMM iterations in Algorithm 1 into the following fixed-point iteration system, solely with respect to the variable\nwhere is the iteration matrix with defined as\nNext, given a value of , we can prove\nregardless of the value of . As per Section 3.1, we know that the convergence factor of Algorithm 1 is equal to the spectral radius of the iteration matrix. As such, is bounded by 1, meaning Algorithm 1 or (4 ###reference_###) is unconditionally convergent.\nDetailed derivations proving the equivalence between Algorithm 1 and the fixed-point iteration system (4 ###reference_###), as well as Inequality (6 ###reference_###), have been provided in Appendix 1 of the arXiv version of this paper.\n\u220e\nNext, we search the optimal parameter that minimizes the convergence rate of Algorithm 1. Since is dependent on the penalty parameter , the objective is to identify a value for that minimizes the convergence factor . For this, we define the following minimization problem\nwhere . From Inequality (6 ###reference_###) in Theorem 1 we have , and we can also easily derive . 
As such, we have , with which the minimization problem (7 ###reference_###) can be converted to\nThough the minimization problem (8 ###reference_###) is a one-dimensional optimization problem with respect to only , computing directly is however not trivial. This is the reason why a general applicable method for optimizing is still lacking. Previous works (Ghadimi et al. 2014 ###reference_14###; Teixeira et al. 2015 ###reference_26###) were based on the assumption that can be explicitly written for spectral analysis. However, in practical applications such as diffeomorphic registration in Section 4.2, this is a significant limitation. To address this challenge, we propose to use numerical gradient descent to optimize\nwhere denotes the step size. In this study, we employed the central finite difference scheme to compute gradients. Compared to the one-sided finite difference method, this scheme offers better numerical stability. It also provides more accurate estimation of gradients. It is important to note that this gradient descent method is general, as it does not need to know the explicit form of the eigenvalues of matrix . The definition for the central finite difference is given by\nwhere represents a small value. In our experiments, we set this value within the range of to , which led to a satisfactory convergence of the gradient descent (9 ###reference_###)."
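As a concrete illustration of the numerical search described above, here is a hedged Python sketch. The splitting (an auxiliary z = Lu with a half-scaled objective) and all names are assumptions standing in for the paper's Algorithm 1 and Equation (5); the iteration matrix is recovered numerically by probing the affine ADMM sweep with basis vectors, and the penalty parameter theta is then tuned by gradient descent with central finite differences.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 6
A = rng.standard_normal((n, n))
L = rng.standard_normal((n, n))
f = rng.standard_normal(n)
lam = 1.0  # regularization parameter (fixed, as in the paper)

def admm_step(zb, theta):
    """One sweep of scaled ADMM for min 0.5||Au-f||^2 + 0.5*lam*||z||^2 s.t. z = Lu."""
    z, b = zb[:n], zb[n:]
    u = np.linalg.solve(A.T @ A + theta * (L.T @ L),
                        A.T @ f + theta * L.T @ (z - b))
    z_new = theta / (lam + theta) * (L @ u + b)
    b_new = b + L @ u - z_new
    return np.concatenate([z_new, b_new])

def build_T(theta):
    """Linear part of the affine fixed-point map, recovered by probing basis vectors."""
    base = admm_step(np.zeros(2 * n), theta)
    return np.stack([admm_step(e, theta) - base for e in np.eye(2 * n)], axis=1)

def rho(theta):
    return max(abs(np.linalg.eigvals(build_T(theta))))

def central_diff(g, x, eps=1e-5):
    return (g(x + eps) - g(x - eps)) / (2.0 * eps)

theta = 1.0
for _ in range(100):                      # gradient descent on the spectral radius
    theta = max(theta - 0.5 * central_diff(rho, theta), 1e-6)
print("theta* ~", theta, " rho(T) =", rho(theta))
```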
34
+ },
35
+ {
36
+ "section_id": "3.3",
37
+ "parent_section_id": "3",
38
+ "section_name": "Over-Relaxed ADMM",
39
+ "text": "Over-relaxation technique can be used in the ADMM algorithm and further accelerate the convergence rate of ADMM. This method is achieved by introducing an additional relaxation parameter and replacing in Algorithm 1 with . Algorithm 2 outlines the optimization process of the augmented Lagrange function (3 ###reference_###) using over-relaxed ADMM.\nTo investigate the influence of relaxation parameter on convergence, we transform Algorithm 2 into its fixed-point iteration system. Such a conversion approach is in line with Proof of Theorem 1 in Appendix. The resulting fixed-point iteration system is given as follows\nAfter obtaining the iteration matrix , we can analyze the spectral radius of this matrix to determine the optimal relaxation parameter .\nThe optimal can be directly calculated using the following closed-form formula\nwhere is a matrix whose entries reply on the value of . As per Equation (10 ###reference_###), we can compute the optimal relaxation parameter as long as a value of is given.\n###figure_1### To prove Theorem 2, we begin with the following two-dimensional joint optimization problem\nwhere . In order to express the spectral radius in terms of the eigenvalue structure, we first derive the equality , and the spectral radius is then defined as\nFrom Inequality (6 ###reference_###) in Theorem 1, we know . Based on this and (12 ###reference_###), we plot Figure 1 ###reference_### to demonstrate the correlation between the absolute eigenvalue of the iteration matrix and the relaxation parameter. From this figure, it is straightforward to express the spectral radius as the following piecewise function\nwhere with a slight abuse of notation, we use to represent .\nOnce we have (13 ###reference_###), our objective is to minimize it in order to enhance the convergence rate. From Figure 1 ###reference_### again, it becomes evident that the smallest spectral radius is located at the intersection point where the following equality holds\nfrom which can be computed using a closed-form solution as follows\nwhich exactly verifies the validity of Equation (10 ###reference_###).\n\u220e\nIf now we plug the optimal into (13 ###reference_###), we can convert the joint minimization problem (11 ###reference_###) into the following minimization problem\nwhich is a single-variable optimization problem with respect to only . This problem can be minimized with a numerical gradient descent method similar to Equation (9 ###reference_###). Once is found, can be computed using the closed-form solution (10 ###reference_###). It is worth noting that even if is not optimal, computed via (10 ###reference_###) can still accelerate convergence."
40
+ },
41
+ {
42
+ "section_id": "4",
43
+ "parent_section_id": null,
44
+ "section_name": "Experiments",
45
+ "text": "In this section, we will first test the generalization ability of our proposed parameter selection method through random instantiations. Following that, we will apply the proposed parameter selection methods to diffeomorphic image registration, image deblurring, and MRI reconstruction. We will compare our optimal ADMM algorithm and over-relaxed variant (oADMM) with gradient descent (GD), gradient descent with Nesterov\u2019s acceleration (GD-N) (Nesterov 1983 ###reference_23###; Bartlett and Duan 2021 ###reference_1###), gradient descent with Nesterov\u2019s acceleration and restart (GD-NR) (O\u2019donoghue and Candes 2015 ###reference_24###; Bartlett and Duan 2021 ###reference_1###), as well as conjugate gradient (CG). In all of our experiments, we chose the step size in gradient-based methods using the Lipschitz constant of the corresponding problem. It is worth noting that optimal values for penalty parameters can be determined analytically for image deblurring and MRI reconstruction problems. However, for image registration numerical gradient descent is required to compute these parameters."
46
+ },
47
+ {
48
+ "section_id": "4.1",
49
+ "parent_section_id": "4",
50
+ "section_name": "Generalization Ability",
51
+ "text": "Emphasizing that our approach is model-driven, the selection of parameters111The parameter selection also relies on the regularization parameter which we fix as a constant in this paper. relies on the matrices and in the minimization problem (1 ###reference_###). As such, the measure of generalization ability lies in how effectively our method performs as and undergo variations, which is in contrast to data-driven methods, where the generalization ability is often examined using multiple different datasets.\nWe presented Figure 2 ###reference_### to demonstrate the generalization ability of our approach, where the analysis is based on 50 random instantiations of and while keeping and fixed. For ADMM, we employed numerical gradient descent to minimize (8 ###reference_###) with respect to . For oADMM, we utilized numerical gradient descent to minimize (36 ###reference_###) with regard to , whilst the optimal value of was calculated using (10 ###reference_###) once was found. We note that the optimal values of for both ADMM and oADMM are similar and that the optimal values of are not within as suggested in (Eckstein 1994 ###reference_12###). As evident from Figure 2 ###reference_###, the calculated optimal values consistently result in faster convergence rates for both ADMM and oADMM, reaffirming the generalization ability of our proposed parameter selection methods.\n###figure_2###"
52
+ },
53
+ {
54
+ "section_id": "4.2",
55
+ "parent_section_id": "4",
56
+ "section_name": "Diffeomorphic Image Registration",
57
+ "text": "Computing a diffeomorphic deformation can be treated as modelling a dynamical system (Beg et al. 2005 ###reference_3###), given by an ordinary differential equation (ODE): , where is the identity transformation and indicates the velocity field at time (). The ODE can be solved by Euler integration, in which the deformation field is calculated as the compositions of a series of small deformations, defined as . If the velocity fields are sufficiently small whilst satisfying some smoothness constraints, the resulting composition is a diffeomorphic deformation.\nTo compute the velocity fields whilst satisfying these diffeomorphic constraints, we minimize the following linear quadratic problem (Thorley et al. 2021 ###reference_27###)\nwhere denote the spatial derivatives of the image; represents the temporal derivative of the image; and denote the velocity field in and directions. In this case, by setting\nand\nwe can use numerical gradient descent to compute optimal parameters for both ADMM and oADMM.\nIn Figure 3 ###reference_###, we show results obtained through the introduced diffeomorphic registration technique. We examine the impact of the penalty parameter in both ADMM and oADMM, and then evaluate the convergence efficiency of different algorithms. Given a pair of images (depicted as source and target in the figure), we can compute a deformation (shown in the bottom left panel) that ensures a positive Jacobian determinant (shown in the bottom middle panel) for all pixel positions. In the top right panel, we show the correlation between the spectral radius of the iteration matrix and in both ADMM and oADMM. As can be seen, there exists an unique optimal value where the spectral radius is minimized. As such, when using numerical gradient descent, it is possible to find the optimal value of that can considerably reduce iteration counts. This panel also illustrates that , producing the smallest spectral radius for oADMM, closely aligns with that of ADMM. Furthermore, due to the two-loop222We did not use the pyramid implementation as in (Thorley et al. 2021 ###reference_27###), so we ended up with a two-loop algorithm comprising inner ADMM/oADMM iterations and outer warping iterations. iterative nature of diffeomorphic image registration, the data term of undergoes slight changes at each iteration of the outer loop. These changes however do not significantly influence the value of , as evident from the top right panel. Therefore, given a specific value of , it is sufficient to use gradient descent to search for each outer iteration. Finally, in the bottom right panel, convergence rates among different algorithms are compared. As is evident, the parameter-optimized oADMM algorithm remains the fastest in terms of convergence rate.\n###figure_3###"
58
+ },
59
+ {
60
+ "section_id": "4.3",
61
+ "parent_section_id": "4",
62
+ "section_name": "Image Deblurring",
63
+ "text": "In this application, we look at a phantom test image. The image went through a Gaussian blur of size and standard deviation 2, followed by an additive zero-mean white Gaussian noise with standard deviation . The top left and middle panels of Figure 4 ###reference_### depict the original and blurred images, respectively. To deblur the image we minimize the following problem\nwhere is the matrix representing the blur operator, is the vectorized unknown clean image, and is the vectorized input image. By setting and , the matrix in (5 ###reference_###) for this application has the form of\nSince is a convolution matrix derived from the Gaussian kernel function, the eigenvalues of can be calculated using the two-dimensional discrete Fourier transform (Capus and Brown 2003 ###reference_7###). With , we can derive the maximum eigenvalues of as\nwhere is either or . Since in this case can be explicitly written, we can derive closed-form solutions for the parameters in ADMM and over-relaxed ADMM. In Theorem 3, we give their optimal parameters.\nFirstly, to tackle the optimization problem (16 ###reference_###) using ADMM, given a regularization parameter , the optimal value of the penalty parameter in ADMM can be expressed in closed form\nwhich was derived by minimizing the value of in (17 ###reference_###).\nIf over-relaxed ADMM is used to tackle the optimization problem (16 ###reference_###), the optimal penalty and relaxation parameters are given by\namong which was determined by minimizing the problem (36 ###reference_###) with and defined in (17 ###reference_###), and then was computed using (10 ###reference_###) with .\nDetailed derivations have been given in Appendix 2 of the arXiv version of this paper.\n\u220e\n###figure_4### In Figure 4 ###reference_###, the top right panel displays the deblurred image from oADMM (comparable results were achieved with ADMM), which closely resembles the original image. Note that we set the regularization parameter to for this experiment. The bottom left and middle panels demonstrate that with the optimal and there is a clear enhancement over the model\u2019s convergence, and that the optimal values of for both ADMM and oADMM are the same in this case. The bottom right panel shows that ADMM, CG, and oADMM exhibit superior performance than GD, GD-N, and GD-NR. Upon a detailed examination from the zoomed-in window, CG addresses this quadratic problem very well, albeit still needing multiple iterations to attain convergence. In contrast, oADMM achieves convergence in a single step333The spectral radius in this case is close to zero, leading to a superlinear convergence rate., outperforming all compared algorithms."
64
+ },
65
+ {
66
+ "section_id": "4.4",
67
+ "parent_section_id": "4",
68
+ "section_name": "MRI Reconstruction",
69
+ "text": "To reconstruct MR images we minimize the problem\nwhere is the sampling matrix; is the Fourier transform matrix; is a complex-valued MR image stacked as a column vector; is the undersampled -space data; denotes the first-order gradient operator.\nBy setting and , the matrix in (5 ###reference_###) for this application has the form of\nwhere . Due to the use of periodic boundary conditions, can be efficiently diagonalized in the form of , where is a diagonal matrix. Equation (19 ###reference_###) can be simplified to , where is given as\nwhich is a diagonal matrix. The eigenvalues of are simply the values along the diagonal of . If we define as the th smallest eigenvalue of , and as the diagonal value of at the position where is indexed from , the maximum eigenvalues of can be derived as\nwhere . Since can be written explicitly, we can derive the closed-form solution for in ADMM. In Theorem 3, we present the optimal value for this parameter. If over-relaxed ADMM is used to solve (18 ###reference_###), a closed-form solution still exists for . It is however too cumbersome to derive them in this case. As such, the penalty parameter in over-relaxed ADMM was searched by gradient descent, and once was found the optimal relaxation parameter can be directly obtained using Equation (10 ###reference_###) with .\nTo tackle the optimization problem (18 ###reference_###) using ADMM, given a regularization parameter , the optimal value of the penalty parameter in ADMM can be expressed in closed form\nwhere , and are defined as follows\nwhere is the hadamard product; denotes the smallest eigenvalue of , excluding the eigenvalues corresponding to zero entries on the diagonal of ;\n denotes the smallest eigenvalue of , excluding the eigenvalues corresponding to zero entries on the diagonal ; and\n represents the largest eigenvalue of , excluding the eigenvalues corresponding to zero entries along the diagonal of .\nDetailed derivations have been given in Appendix 3 of the arXiv version of this paper.\n\u220e\nIn Figure 5 ###reference_###, we reconstruct a cardiac MR image from -space. The original image (displayed in the top left panel) was first transformed into -space using the Fourier transformation. Then of the data there was taken using a cartesian sampling mask, displayed in the original image. This undersampled data was then corrupted by an additive zero-mean white Gaussian noise with standard deviation to form in (18 ###reference_###). The reconstruction (top right), despite some slight blurring due to the smooth regularization, clearly enhances image quality compared to that displayed in the top middle panel, which is a direct reconstruction of using the inverse Fourier transformation. The bottom left and middle panels of this figure illustrate that the choice of and has a significant impact on the convergence rate and that our proposed methods result in faster convergence. Thanks to the utilization of these optimal parameters, we observe a clear superiority of our ADMM and oADMM over GD, its accelerated variants, and CG in terms of convergence efficiency.\n###figure_5###"
70
+ },
71
+ {
72
+ "section_id": "5",
73
+ "parent_section_id": null,
74
+ "section_name": "Conclusion",
75
+ "text": "In this paper, we presented automated techniques for selecting optimal penalty and relaxation parameters within the framework of ADMM and over-relaxed ADMM for linear quadratic problems. Our approaches involve a numerical gradient descent method for estimating the penalty parameter and a novel closed-form solution for determining the optimal relaxation parameter. We verified the generalizability and efficacy of these approaches through random instantiations and real-world imaging applications."
76
+ }
77
+ ],
78
+ "appendix": [],
79
+ "tables": {
80
+ "1": {
81
+ "table_html": "<figure class=\"ltx_table\" id=\"S3.SS2.9\">\n<div class=\"ltx_inline-block ltx_align_center ltx_transformed_outer\" id=\"S3.SS2.9.9\" style=\"width:237.8pt;height:171.4pt;vertical-align:-0.0pt;\"><span class=\"ltx_transformed_inner\" style=\"transform:translate(19.0pt,-13.7pt) scale(1.1904790390615,1.1904790390615) ;\">\n<table class=\"ltx_tabular ltx_align_middle\" id=\"S3.SS2.9.9.9\">\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S3.SS2.9.9.9.10.1\">\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S3.SS2.9.9.9.10.1.1\">\n<span class=\"ltx_text ltx_font_bold\" id=\"S3.SS2.9.9.9.10.1.1.1\">Algorithm 1:</span> ADMM for LQPs</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.SS2.4.4.4.4\">\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S3.SS2.4.4.4.4.4\">\n<span class=\"ltx_text ltx_font_bold\" id=\"S3.SS2.4.4.4.4.4.1\">Input:</span> matrices and ; parameter and \n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.SS2.6.6.6.6\">\n<td class=\"ltx_td ltx_align_left\" id=\"S3.SS2.6.6.6.6.2\">\n<span class=\"ltx_text ltx_font_bold\" id=\"S3.SS2.6.6.6.6.2.1\">Initialize:</span> and \n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.SS2.9.9.9.11.2\">\n<td class=\"ltx_td ltx_align_left\" id=\"S3.SS2.9.9.9.11.2.1\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.SS2.9.9.9.11.2.1.1\">Repeat:</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.SS2.7.7.7.7\">\n<td class=\"ltx_td ltx_align_left\" id=\"S3.SS2.7.7.7.7.1\">\u2003\u2003\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.SS2.8.8.8.8\">\n<td class=\"ltx_td ltx_align_left\" id=\"S3.SS2.8.8.8.8.1\">\u2003\u2003\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.SS2.9.9.9.9\">\n<td class=\"ltx_td ltx_align_left\" id=\"S3.SS2.9.9.9.9.1\">\u2003\u2003\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.SS2.9.9.9.12.3\">\n<td class=\"ltx_td ltx_align_left ltx_border_b\" id=\"S3.SS2.9.9.9.12.3.1\">\n<span class=\"ltx_text ltx_font_bold\" id=\"S3.SS2.9.9.9.12.3.1.1\">until</span> some stopping criterion is met</td>\n</tr>\n</tbody>\n</table>\n</span></div>\n</figure>",
82
+ "capture": "Figure 1: Relationship between and the value of . The slope of each line before reflection is . The spectral radius before the intersection point is governed by the green line, while after the reflection, it is determined by the reflected red line. The intersection point corresponds to the optimal as well as the minimum spectral radius of the iteration matrix ."
83
+ },
84
+ "2": {
85
+ "table_html": "<figure class=\"ltx_table\" id=\"S3.SS3.10\">\n<div class=\"ltx_inline-block ltx_align_center ltx_transformed_outer\" id=\"S3.SS3.10.10\" style=\"width:237.8pt;height:160.5pt;vertical-align:-0.0pt;\"><span class=\"ltx_transformed_inner\" style=\"transform:translate(12.2pt,-8.2pt) scale(1.11425135418051,1.11425135418051) ;\">\n<table class=\"ltx_tabular ltx_align_middle\" id=\"S3.SS3.10.10.10\">\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S3.SS3.10.10.10.11.1\">\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S3.SS3.10.10.10.11.1.1\">\n<span class=\"ltx_text ltx_font_bold\" id=\"S3.SS3.10.10.10.11.1.1.1\">Algorithm 2:</span> Over-relaxed ADMM for LQPs</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.SS3.5.5.5.5\">\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S3.SS3.5.5.5.5.5\">\n<span class=\"ltx_text ltx_font_bold\" id=\"S3.SS3.5.5.5.5.5.1\">Input:</span> matrices and ; parameter , and \n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.SS3.7.7.7.7\">\n<td class=\"ltx_td ltx_align_left\" id=\"S3.SS3.7.7.7.7.2\">\n<span class=\"ltx_text ltx_font_bold\" id=\"S3.SS3.7.7.7.7.2.1\">Initialize:</span> and \n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.SS3.10.10.10.12.2\">\n<td class=\"ltx_td ltx_align_left\" id=\"S3.SS3.10.10.10.12.2.1\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.SS3.10.10.10.12.2.1.1\">Repeat:</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.SS3.8.8.8.8\">\n<td class=\"ltx_td ltx_align_left\" id=\"S3.SS3.8.8.8.8.1\">\u2003\u2003\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.SS3.9.9.9.9\">\n<td class=\"ltx_td ltx_align_left\" id=\"S3.SS3.9.9.9.9.1\">\u2003\u2003\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.SS3.10.10.10.10\">\n<td class=\"ltx_td ltx_align_left\" id=\"S3.SS3.10.10.10.10.1\">\u2003\u2003\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.SS3.10.10.10.13.3\">\n<td class=\"ltx_td ltx_align_left ltx_border_b\" id=\"S3.SS3.10.10.10.13.3.1\">\n<span class=\"ltx_text ltx_font_bold\" id=\"S3.SS3.10.10.10.13.3.1.1\">until</span> some stopping criterion is met</td>\n</tr>\n</tbody>\n</table>\n</span></div>\n</figure>",
86
+ "capture": "Figure 1: Relationship between and the value of . The slope of each line before reflection is . The spectral radius before the intersection point is governed by the green line, while after the reflection, it is determined by the reflected red line. The intersection point corresponds to the optimal as well as the minimum spectral radius of the iteration matrix ."
87
+ }
88
+ },
89
+ "image_paths": {
90
+ "1": {
91
+ "figure_path": "2401.00657v1_figure_1.png",
92
+ "caption": "Figure 1: Relationship between |1+\u03b1\u2062\u03bbi\u2062(Q)|1\ud835\udefcsubscript\ud835\udf06\ud835\udc56\ud835\udc44|1+\\alpha\\lambda_{i}(Q)|| 1 + italic_\u03b1 italic_\u03bb start_POSTSUBSCRIPT italic_i end_POSTSUBSCRIPT ( italic_Q ) | and the value of \u03b1\ud835\udefc\\alphaitalic_\u03b1. The slope of each line before reflection is \u03bbi\u2062(Q)subscript\ud835\udf06\ud835\udc56\ud835\udc44\\lambda_{i}\\left(Q\\right)italic_\u03bb start_POSTSUBSCRIPT italic_i end_POSTSUBSCRIPT ( italic_Q ). The spectral radius before the intersection point is governed by the green line, while after the reflection, it is determined by the reflected red line. The intersection point corresponds to the optimal \u03b1*superscript\ud835\udefc\\alpha^{*}italic_\u03b1 start_POSTSUPERSCRIPT * end_POSTSUPERSCRIPT as well as the minimum spectral radius of the iteration matrix I+\u03b1\u2062Q\ud835\udc3c\ud835\udefc\ud835\udc44I+\\alpha Qitalic_I + italic_\u03b1 italic_Q.",
93
+ "url": "http://arxiv.org/html/2401.00657v1/x1.png"
94
+ },
95
+ "2": {
96
+ "figure_path": "2401.00657v1_figure_2.png",
97
+ "caption": "Figure 2: Left: Convergence rates of different methods and parameter values based on 1 random instantiation of A\ud835\udc34Aitalic_A and L\ud835\udc3fLitalic_L. Right: Convergence rates based on 50 random instantiations of A\ud835\udc34Aitalic_A and L\ud835\udc3fLitalic_L. The solid lines represent the average over 50 instantiations. The algorithm is ADMM when \u03b1=1\ud835\udefc1\\alpha=1italic_\u03b1 = 1, and oADMM when \u03b1=\u03b1*\ud835\udefcsuperscript\ud835\udefc\\alpha=\\alpha^{*}italic_\u03b1 = italic_\u03b1 start_POSTSUPERSCRIPT * end_POSTSUPERSCRIPT.",
98
+ "url": "http://arxiv.org/html/2401.00657v1/x2.png"
99
+ },
100
+ "3": {
101
+ "figure_path": "2401.00657v1_figure_3.png",
102
+ "caption": "Figure 3: Illustration of diffeomorphic image registration results, visualization of the correlation between spectral radius and \u03b8\ud835\udf03\\thetaitalic_\u03b8, and comparison of convergence rates of algorithms. The x\ud835\udc65xitalic_x-axes of the two plots in the third column represent the values of \u03b8\ud835\udf03\\thetaitalic_\u03b8 and iteration numbers, respectively.",
103
+ "url": "http://arxiv.org/html/2401.00657v1/x3.png"
104
+ },
105
+ "4": {
106
+ "figure_path": "2401.00657v1_figure_4.png",
107
+ "caption": "Figure 4: Demonstration of image deblurring effects and convergence rates of different algorithms. The x\ud835\udc65xitalic_x-axis and y\ud835\udc66yitalic_y-axis of each plot in the second row represent iteration numbers and log\u2062(\u2016uk\u2212u*\u2016)lognormsuperscript\ud835\udc62\ud835\udc58superscript\ud835\udc62{\\rm{log}}(\\|u^{k}-u^{*}\\|)roman_log ( \u2225 italic_u start_POSTSUPERSCRIPT italic_k end_POSTSUPERSCRIPT - italic_u start_POSTSUPERSCRIPT * end_POSTSUPERSCRIPT \u2225 ), respectively.",
108
+ "url": "http://arxiv.org/html/2401.00657v1/x4.png"
109
+ },
110
+ "5": {
111
+ "figure_path": "2401.00657v1_figure_5.png",
112
+ "caption": "Figure 5: Demonstration of MRI reconstruction results and comparison of convergence rates among algorithms. The x\ud835\udc65xitalic_x-axis and y\ud835\udc66yitalic_y-axis of each plot in the second row represent iteration numbers and log\u2062(\u2016uk\u2212u*\u2016)lognormsuperscript\ud835\udc62\ud835\udc58superscript\ud835\udc62{\\rm{log}}(\\|u^{k}-u^{*}\\|)roman_log ( \u2225 italic_u start_POSTSUPERSCRIPT italic_k end_POSTSUPERSCRIPT - italic_u start_POSTSUPERSCRIPT * end_POSTSUPERSCRIPT \u2225 ), respectively.",
113
+ "url": "http://arxiv.org/html/2401.00657v1/x5.png"
114
+ }
115
+ },
116
+ "validation": true,
117
+ "references": [
118
+ {
119
+ "1": {
120
+ "title": "Accelerated first order methods for variational imaging.",
121
+ "author": "Bartlett, J.; and Duan, J. 2021.",
122
+ "venue": "arXiv preprint arXiv:2110.02813.",
123
+ "url": null
124
+ }
125
+ },
126
+ {
127
+ "2": {
128
+ "title": "An inverse matrix adjustment arising in discriminant analysis.",
129
+ "author": "Bartlett, M. S. 1951.",
130
+ "venue": "The Annals of Mathematical Statistics, 22(1): 107\u2013111.",
131
+ "url": null
132
+ }
133
+ },
134
+ {
135
+ "3": {
136
+ "title": "Computing large deformation metric mappings via geodesic flows of\ndiffeomorphisms.",
137
+ "author": "Beg, M. F.; Miller, M. I.; Trouv\u00e9, A.; and Younes, L. 2005.",
138
+ "venue": "International Journal of Computer Vision, 61: 139\u2013157.",
139
+ "url": null
140
+ }
141
+ },
142
+ {
143
+ "4": {
144
+ "title": "Local linear convergence of the alternating direction method of\nmultipliers on quadratic or linear programs.",
145
+ "author": "Boley, D. 2013.",
146
+ "venue": "SIAM Journal on Optimization, 23(4): 2183\u20132207.",
147
+ "url": null
148
+ }
149
+ },
150
+ {
151
+ "5": {
152
+ "title": "Distributed optimization and statistical learning via the alternating\ndirection method of multipliers.",
153
+ "author": "Boyd, S.; Parikh, N.; Chu, E.; Peleato, B.; Eckstein, J.; et al. 2011.",
154
+ "venue": "Foundations and Trends\u00ae in Machine learning,\n3(1): 1\u2013122.",
155
+ "url": null
156
+ }
157
+ },
158
+ {
159
+ "6": {
160
+ "title": "Robust principal component analysis?",
161
+ "author": "Cand\u00e8s, E. J.; Li, X.; Ma, Y.; and Wright, J. 2011.",
162
+ "venue": "Journal of the ACM (JACM), 58(3): 1\u201337.",
163
+ "url": null
164
+ }
165
+ },
166
+ {
167
+ "7": {
168
+ "title": "Fractional Fourier transform of the Gaussian and fractional domain\nsignal support.",
169
+ "author": "Capus, C.; and Brown, K. 2003.",
170
+ "venue": "IEE Proceedings-Vision, Image and Signal Processing, 150(2):\n99\u2013106.",
171
+ "url": null
172
+ }
173
+ },
174
+ {
175
+ "8": {
176
+ "title": "Plug-and-play ADMM for image restoration: fixed-point convergence and\napplications.",
177
+ "author": "Chan, S. H.; Wang, X.; and Elgendy, O. A. 2016.",
178
+ "venue": "IEEE Transactions on Computational Imaging, 3(1): 84\u201398.",
179
+ "url": null
180
+ }
181
+ },
182
+ {
183
+ "9": {
184
+ "title": "A relaxed version of Bregman\u2019s method for convex programming.",
185
+ "author": "De Pierro, A. R.; and Iusem, A. 1986.",
186
+ "venue": "Journal of Optimization Theory and Applications, 51: 421\u2013440.",
187
+ "url": null
188
+ }
189
+ },
190
+ {
191
+ "10": {
192
+ "title": "On the global and linear convergence of the generalized alternating\ndirection method of multipliers.",
193
+ "author": "Deng, W.; and Yin, W. 2016.",
194
+ "venue": "Journal of Scientific Computing, 66: 889\u2013916.",
195
+ "url": null
196
+ }
197
+ },
198
+ {
199
+ "11": {
200
+ "title": "Arbitrary order total variation for deformable image registration.",
201
+ "author": "Duan, J.; Jia, X.; Bartlett, J.; Lu, W.; and Qiu, Z. 2023.",
202
+ "venue": "Pattern Recognition, 109318.",
203
+ "url": null
204
+ }
205
+ },
206
+ {
207
+ "12": {
208
+ "title": "Parallel alternating direction multiplier decomposition of convex\nprograms.",
209
+ "author": "Eckstein, J. 1994.",
210
+ "venue": "Journal of Optimization Theory and Applications, 80(1):\n39\u201362.",
211
+ "url": null
212
+ }
213
+ },
214
+ {
215
+ "13": {
216
+ "title": "An explicit rate bound for over-relaxed ADMM.",
217
+ "author": "Fran\u00e7a, G.; and Bento, J. 2016.",
218
+ "venue": "In 2016 IEEE International Symposium on Information Theory\n(ISIT), 2104\u20132108. IEEE.",
219
+ "url": null
220
+ }
221
+ },
222
+ {
223
+ "14": {
224
+ "title": "Optimal parameter selection for the alternating direction method of\nmultipliers (ADMM): quadratic problems.",
225
+ "author": "Ghadimi, E.; Teixeira, A.; Shames, I.; and Johansson, M. 2014.",
226
+ "venue": "IEEE Transactions on Automatic Control, 60(3): 644\u2013658.",
227
+ "url": null
228
+ }
229
+ },
230
+ {
231
+ "15": {
232
+ "title": "The split Bregman method for L1-regularized problems.",
233
+ "author": "Goldstein, T.; and Osher, S. 2009.",
234
+ "venue": "SIAM Journal on Imaging Sciences, 2(2): 323\u2013343.",
235
+ "url": null
236
+ }
237
+ },
238
+ {
239
+ "16": {
240
+ "title": "Truncated residual based plug-and-play ADMM algorithm for MRI\nreconstruction.",
241
+ "author": "Hou, R.; Li, F.; and Zhang, G. 2022.",
242
+ "venue": "IEEE Transactions on Computational Imaging, 8: 96\u2013108.",
243
+ "url": null
244
+ }
245
+ },
246
+ {
247
+ "17": {
248
+ "title": "Learning a model-driven variational network for deformable image\nregistration.",
249
+ "author": "Jia, X.; Thorley, A.; Chen, W.; Qiu, H.; Shen, L.; Styles, I. B.; Chang, H. J.;\nLeonardis, A.; De Marvao, A.; O\u2019Regan, D. P.; et al. 2021.",
250
+ "venue": "IEEE Transactions on Medical Imaging, 41(1): 199\u2013212.",
251
+ "url": null
252
+ }
253
+ },
254
+ {
255
+ "18": {
256
+ "title": "Robust decentralized learning using ADMM with unreliable agents.",
257
+ "author": "Li, Q.; Kailkhura, B.; Goldhahn, R.; Ray, P.; and Varshney, P. K. 2022.",
258
+ "venue": "IEEE Transactions on Signal Processing, 70: 2743\u20132757.",
259
+ "url": null
260
+ }
261
+ },
262
+ {
263
+ "19": {
264
+ "title": "Distributed network reconstruction based on binary compressed sensing\nvia ADMM.",
265
+ "author": "Liu, Y.; Huang, K.; Yang, C.; and Wang, Z. 2023.",
266
+ "venue": "IEEE Transactions on Network Science and Engineering.",
267
+ "url": null
268
+ }
269
+ },
270
+ {
271
+ "20": {
272
+ "title": "Implementation of high-order variational models made easy for image\nprocessing.",
273
+ "author": "Lu, W.; Duan, J.; Qiu, Z.; Pan, Z.; Liu, R. W.; and Bai, L. 2016.",
274
+ "venue": "Mathematical Methods in the Applied Sciences, 39(14):\n4208\u20134233.",
275
+ "url": null
276
+ }
277
+ },
278
+ {
279
+ "21": {
280
+ "title": "Auto-tuned weighted-penalty parameter ADMM for distributed optimal\npower flow.",
281
+ "author": "Mavromatis, C.; Foti, M.; and Vavalis, M. 2020.",
282
+ "venue": "IEEE Transactions on Power Systems, 36(2): 970\u2013978.",
283
+ "url": null
284
+ }
285
+ },
286
+ {
287
+ "22": {
288
+ "title": "Adaptive ADMM for distributed AC optimal power flow.",
289
+ "author": "Mhanna, S.; Verbi\u010d, G.; and Chapman, A. C. 2018.",
290
+ "venue": "IEEE Transactions on Power Systems, 34(3): 2025\u20132035.",
291
+ "url": null
292
+ }
293
+ },
294
+ {
295
+ "23": {
296
+ "title": "A method of solving a convex programming problem with convergence\nrate .",
297
+ "author": "Nesterov, Y. E. 1983.",
298
+ "venue": "In Doklady Akademii Nauk, volume 269, 543\u2013547. Russian\nAcademy of Sciences.",
299
+ "url": null
300
+ }
301
+ },
302
+ {
303
+ "24": {
304
+ "title": "Adaptive restart for accelerated gradient schemes.",
305
+ "author": "O\u2019donoghue, B.; and Candes, E. 2015.",
306
+ "venue": "Foundations of computational mathematics, 15: 715\u2013732.",
307
+ "url": null
308
+ }
309
+ },
310
+ {
311
+ "25": {
312
+ "title": "OSQP: An operator splitting solver for quadratic programs.",
313
+ "author": "Stellato, B.; Banjac, G.; Goulart, P.; Bemporad, A.; and Boyd, S. 2020.",
314
+ "venue": "Mathematical Programming Computation, 12(4): 637\u2013672.",
315
+ "url": null
316
+ }
317
+ },
318
+ {
319
+ "26": {
320
+ "title": "The ADMM algorithm for distributed quadratic problems: parameter\nselection and constraint preconditioning.",
321
+ "author": "Teixeira, A.; Ghadimi, E.; Shames, I.; Sandberg, H.; and Johansson, M. 2015.",
322
+ "venue": "IEEE Transactions on Signal Processing, 64(2): 290\u2013305.",
323
+ "url": null
324
+ }
325
+ },
326
+ {
327
+ "27": {
328
+ "title": "Nesterov accelerated ADMM for fast diffeomorphic image registration.",
329
+ "author": "Thorley, A.; Jia, X.; Chang, H. J.; Liu, B.; Bunting, K.; Stoll, V.; de Marvao,\nA.; O\u2019Regan, D. P.; Gkoutos, G.; Kotecha, D.; et al. 2021.",
330
+ "venue": "In Medical Image Computing and Computer Assisted\nIntervention\u2013MICCAI 2021: 24th International Conference, Strasbourg, France,\nSeptember 27\u2013October 1, 2021, Proceedings, Part IV 24, 150\u2013160.",
331
+ "url": null
332
+ }
333
+ },
334
+ {
335
+ "28": {
336
+ "title": "Admm for efficient deep learning with global convergence.",
337
+ "author": "Wang, J.; Yu, F.; Chen, X.; and Zhao, L. 2019.",
338
+ "venue": "In Proceedings of the 25th ACM SIGKDD International Conference\non Knowledge Discovery & Data Mining, 111\u2013119.",
339
+ "url": null
340
+ }
341
+ },
342
+ {
343
+ "29": {
344
+ "title": "ADMM penalty parameter selection by residual balancing.",
345
+ "author": "Wohlberg, B. 2017.",
346
+ "venue": "arXiv preprint arXiv:1704.06209.",
347
+ "url": null
348
+ }
349
+ },
350
+ {
351
+ "30": {
352
+ "title": "Adaptive ADMM with spectral penalty parameter selection.",
353
+ "author": "Xu, Z.; Figueiredo, M.; and Goldstein, T. 2017.",
354
+ "venue": "In Artificial Intelligence and Statistics, 718\u2013727. PMLR.",
355
+ "url": null
356
+ }
357
+ },
358
+ {
359
+ "31": {
360
+ "title": "Interpolation and denoising of graph signals using plug-and-play\nADMM.",
361
+ "author": "Yazaki, Y.; Tanaka, Y.; and Chan, S. H. 2019.",
362
+ "venue": "In ICASSP 2019-2019 IEEE International Conference on Acoustics,\nSpeech and Signal Processing (ICASSP), 5431\u20135435.",
363
+ "url": null
364
+ }
365
+ },
366
+ {
367
+ "32": {
368
+ "title": "Privacy-preserving decentralized power system economic dispatch\nconsidering carbon capture power plants and carbon emission trading scheme\nvia over-relaxed ADMM.",
369
+ "author": "Zhang, R.; Yan, K.; Li, G.; Jiang, T.; Li, X.; and Chen, H. 2020.",
370
+ "venue": "International Journal of Electrical Power & Energy Systems,\n121: 106094.",
371
+ "url": null
372
+ }
373
+ },
374
+ {
375
+ "33": {
376
+ "title": "Federated learning via inexact ADMM.",
377
+ "author": "Zhou, S.; and Li, G. Y. 2023.",
378
+ "venue": "IEEE Transactions on Pattern Analysis and Machine\nIntelligence.",
379
+ "url": null
380
+ }
381
+ }
382
+ ],
383
+ "url": "http://arxiv.org/html/2401.00657v1"
384
+ }
20240101/2401.00658v1.json ADDED
@@ -0,0 +1,127 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ {
2
+ "title": "Point Cloud in the Air",
3
+ "abstract": "Acquisition and processing of point clouds (PCs) is a crucial enabler for many emerging applications reliant on 3D spatial data, such as robot navigation, autonomous vehicles, and augmented reality. In most scenarios, PCs acquired by remote sensors must be transmitted to an edge server for fusion, segmentation, or inference. Wireless transmission of PCs not only puts on increased burden on the already congested wireless spectrum, but also confronts a unique set of challenges arising from the irregular and unstructured nature of PCs. In this paper, we meticulously delineate these challenges and offer a comprehensive examination of existing solutions while candidly acknowledging their inherent limitations. In response to these intricacies, we proffer four pragmatic solution frameworks, spanning advanced techniques, hybrid schemes, and distributed data aggregation approaches. In doing so, our goal is to chart a path toward efficient, reliable, and low-latency wireless PC transmission.",
4
+ "sections": [
5
+ {
6
+ "section_id": "1",
7
+ "parent_section_id": null,
8
+ "section_name": "Introduction",
9
+ "text": "A point cloud (PC) is a set of points that collectively depict a physical object or scene [1 ###reference_1###, 2 ###reference_2###, 3 ###reference_3###, 4 ###reference_4###, 5 ###reference_5###]. It encapsulates both the geometry, conveyed through the three-dimensional (3D) coordinates of the points, and various attributes associated with each point, including color, reflectance, transparency, curvature, among others. PCs offer a distinct advantage over other 3D representations by providing a highly detailed and accurate depiction of an object\u2019s shape, structure, and spatial characteristics, particularly in representing complex non-manifold geometries.\nAs a bridge between the physical and virtual worlds, PCs find extensive applications across various fields, such as robotics, autonomous driving, digital twin, virtual/augmented reality (VR/AR), cultural heritage, telepresence, etc. [3 ###reference_3###, 4 ###reference_4###]. For instance, the facial recognition technology commonly found on smartphones relies on PC recognition. By utilizing 3D PC data of facial features, smartphones can accurately identify and authenticate users. In autonomous driving, PCs play a pivotal role in enhancing the perception capabilities of autonomous vehicles. By providing detailed spatial information about the surrounding environment, PCs enable accurate detection and interpretation of objects, facilitating safer navigation and decision-making. PCs have also been leveraged in cutting-edge products like the Apple Vision Pro, which incorporates PC technology to create realistic and interactive virtual environments and deliver immersive VR/AR experiences to users.\nPCs are typically captured using various sensing technologies, such as light detection and ranging (LIDAR), 3D scanning, depth sensors, and photogrammetry. LIDAR, for example, emits laser beams and measures the time it takes for the light to reflect back from objects in the environment. By scanning the surroundings and recording the 3D coordinates of objects, a LIDAR can construct PCs with high accuracy and intricate details. In dynamic environments, it becomes necessary to continuously capture multiple frames of PCs. This continuous acquisition of successive frames adds an additional temporal dimension to the 3D data, resulting in the formation of 4D PCs, also known as dynamic PCs.\n###figure_1### PCs exhibit three key characteristics [1 ###reference_1###, 3 ###reference_3###, 4 ###reference_4###, 5 ###reference_5###]:\nLarge data volume. To represent complex spatial objects and scenes, PCs can comprise several million points, leading to a substantial amount of data. Considering only the representation of 3D coordinates, a PC with 1 million points requires 48Mb data. If we further consider dynamic PCs, the data volume sampled at a rate of 30 frames per second escalates to 1.44Gbps.\nNon-uniformly distributed and non-ordered. In contrast to traditional 2D data like images or videos, PC data exhibit irregular spatial distribution, primarily caused by the shape characteristics of the objects being represented. Furthermore, the points in a PC lack inherent order, resulting in limited semantic information among these points.\nMulti-view acquisition. The raw PCs captured from sensors exhibit another form of irregularity referred to as the \u201cintense in proximity, sparse in distance\u201d. 
In this phenomenon, there is a higher concentration of points in close proximity, while points become sparser as the distance increases due to factors like reduced reflectivity or occlusion. Consequently, the deployment of distributed sensors for multi-view scanning, as shown in Fig. 1 ###reference_###, has become indispensable to address this irregularity and obtain a more comprehensive and detailed depiction of the environment.\nTo date, the study of PCs has primarily been conducted within the computer science community, with a predominant emphasis on the semantic segmentation and classification of readily available PCs. However, in practical applications, the acquisition and processing of PCs may not be co-located. Particularly in the case of multi-view acquisitions, the fusion and stitching of PCs sensed by distributed sensors are carried out at a remote server. This highlights the criticality and inevitability of wireless transmission of PCs in real-world scenarios, serving as a fundamental requirement for transitioning algorithms from laboratory experiments to practical systems. For carefully-fused high-fidelity PCs, wireless communication provides enhanced mobility and flexibility, making it essential for applications demanding real-time collaborative data processing and decision making.\nIn this paper, our primary focus centers on the pivotal issue of wireless transmission of PCs. Our main contributions are distilled as follows:\nWe emphasize the profound importance of wireless PC transmission and elucidate four major challenges it confronts. This underscores the compelling necessity for the development of dedicated wireless transmission systems tailored specifically to PCs.\nWe conduct a comprehensive review and in-depth analysis of existing works and solutions, pinpointing their strengths and limitations. The analysis yields invaluable insights, paving the way for potential systemic solutions.\nWe put forth four pragmatic solution frameworks in response to the intricate challenges inherent in PC transmission. These encompass a semantic communication framework grounded in deep joint source-channel coding (DeepJSCC) [6 ###reference_6###], a representational compression framework drawing inspiration from neural radiance fields (NeRF) [7 ###reference_7###], an uplink PC feature aggregation framework, and a distributed broadcast framework tailored for applications with stringent delay constraints."
10
+ },
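The data-volume figures quoted in the introduction follow from simple arithmetic, reproduced in the Python sketch below. The 16-bit-per-coordinate quantization depth is our assumption, chosen because it matches the 48 Mb and 1.44 Gbps numbers in the text.

```python
# Back-of-envelope point-cloud data-rate arithmetic from the introduction.
# Assumption (ours): each of the 3 coordinates is quantized to 16 bits,
# which reproduces the 48 Mb / 1.44 Gbps figures quoted in the text.

BITS_PER_COORD = 16      # assumed quantization depth per x/y/z coordinate
COORDS_PER_POINT = 3     # x, y, z

def frame_bits(num_points: int) -> int:
    """Bits needed for the geometry of one PC frame (no attributes)."""
    return num_points * COORDS_PER_POINT * BITS_PER_COORD

def stream_bps(num_points: int, fps: int) -> float:
    """Raw geometry bit rate of a dynamic PC sampled at `fps` frames/s."""
    return frame_bits(num_points) * fps

if __name__ == "__main__":
    n = 1_000_000
    print(f"one frame : {frame_bits(n) / 1e6:.0f} Mb")        # -> 48 Mb
    print(f"30 fps    : {stream_bps(n, 30) / 1e9:.2f} Gbps")  # -> 1.44 Gbps
```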
11
+ {
12
+ "section_id": "2",
13
+ "parent_section_id": null,
14
+ "section_name": "II Main Challenges of Wireless PC Transmission",
15
+ "text": "To start with, this section summarizes the challenges encountered by wireless PC transmission.\nLimited bandwidth. The scarcity of spectrum in wireless communications presents a significant challenge in efficiently utilizing the available bandwidth for reliable transmission of PCs. This challenge becomes particularly prominent in scenarios involving distributed fusion, where multiple sensors need to transmit captured PCs to a common access point (AP), and dynamic PC transmission, which necessitates real-time continuous updates of spatial information.\nEfficient PC compression (PCC). By removing redundancies in both coordinates and attributes, PCC transforms the PC to a compressed bitstream, thereby alleviating the burden of wireless transmission and reducing sensors\u2019 energy expenditure. Yet, PCs present a unique challenge because they consist of unstructured, unordered, and irregularly distributed points that often lack semantic information. The intricacy is compounded when it comes to attribute compression, as these attributes are tied to the irregular geometry. The task at hand involves not only reducing data volume but also preserving the critical information that makes PCs meaningful. Thus, the art of extracting these meaningful features and efficiently compressing PCs remains an active and challenging frontier in current research and development.\nCliff and leveling effects. While compression reduces the data volume, it also renders the compressed data more vulnerable to bit errors and packet losses. Mitigating these issues necessitates the adoption of channel coding. However, two notable hurdles naturally arise: the cliff and levelling effects. The cliff effect materializes as a precipitous drop in transmission rate when the channel quality dips below a specific threshold. Meanwhile, the levelling effect embodies a scenario where the transmission rate stubbornly resists improvement despite enhancements in channel quality, unless the modulation and coding scheme (MCS) can be adaptively reconfigured in harmony with real-time channel conditions. These inherent intricacies underscore the challenge of ensuring robust and reliable PC transmission, particularly over time-varying channels.\nDelay sensitive applications. Many PC applications, including autonomous vehicles, real-time AR/VR, remote meetings, and remote surgery, heavily depend on the seamless flow of real-time processing, visualization, and critical decision-making. In these contexts, the tolerance for delays is exceedingly low, making it imperative to strike a delicate equilibrium between the complexity and performance of compression and decompression algorithms."
16
+ },
17
+ {
18
+ "section_id": "3",
19
+ "parent_section_id": null,
20
+ "section_name": "III Existing Solutions and Their Limitations",
21
+ "text": "Within the existing literature, considerable research efforts have been dedicated to tackling the challenges highlighted in Section II ###reference_### and facilitating effective wireless PC transmission. Table I ###reference_### provides an overview of these solutions and highlights their key features. Depending on the modulation schemes employed for data transmission, these approaches can be broadly categorized as digital, analog, and hybrid schemes. In the following, we will comprehensively explore these three categories of strategies, delving into their respective merits and limitations."
22
+ },
23
+ {
24
+ "section_id": "3.1",
25
+ "parent_section_id": "3",
26
+ "section_name": "III-A Digital schemes",
27
+ "text": "Digital transmission is the classical approach for PC transmission. This method involves compressing redundancies to convert PCs into bitstreams, followed by well-established entropy coding and channel coding. The channel-coded bitstream is then modulated onto constellations for transmission. In this approach, the most challenging issue is PCC.\n###figure_2###"
28
+ },
29
+ {
30
+ "section_id": "3.1.1",
31
+ "parent_section_id": "3.1",
32
+ "section_name": "III-A1 PCC standards",
33
+ "text": "As early as 2014, PCC has captured the attention of the MPEG 3D Graphics Coding Group. After thorough and iterative discussions, the first standardized PCC protocols were introduced in 2020 [3 ###reference_3###]. The final standards encompass two distinct compression approaches: video-based PCC (V-PCC, ISO/IEC 23090-5) and geometry-based PCC (G-PCC, ISO/IEC 23090-9).\nV-PCC is primarily designed for dense PCs exhibiting a relatively uniform distribution of points. Its fundamental concept involves projecting the 3D spatial data points into a collection of 2D images, and compressing the images using 2D video codec, which is widely available on most devices nowadays. In contrast, G-PCC is designed with a focus on sparse PCs, characterized by a non-uniform distribution of points in space. As opposed to projection, G-PCC leverages the octree data structure for direct 3D space encoding. This entails dividing the space into nested cubic cells, each accommodating a specific point or set of points, as shown in Fig. 2 ###reference_###. Both V-PCC and G-PCC support dynamic PC transmission, but only V-PCC exploits the temporal correlations among consecutive PC frames for efficient compression."
34
+ },
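To make the octree idea concrete, here is a minimal, illustrative Python sketch of breadth-first occupancy coding over the unit cube. It is not the G-PCC codec itself (which adds context modeling and entropy coding); the function name and the depth parameter are ours.

```python
import numpy as np

def octree_occupancy(points: np.ndarray, depth: int) -> list[int]:
    """Breadth-first 8-bit occupancy codes for points in [0, 1)^3.
    Each byte says which of a cell's 8 child octants contain points --
    the core idea behind G-PCC-style geometry coding (illustrative only)."""
    cells = [points]                      # cells at the current level
    codes = []
    for _ in range(depth):
        next_cells = []
        for pts in cells:
            # Octant index in {0..7} from the leading bit of each coordinate.
            oct_idx = (pts[:, 0] >= 0.5) * 4 + (pts[:, 1] >= 0.5) * 2 + (pts[:, 2] >= 0.5)
            byte = 0
            for k in range(8):
                child = pts[oct_idx == k]
                if len(child):
                    byte |= 1 << k
                    # Map the child cell back to the unit cube for the next level.
                    offset = np.array([k >> 2 & 1, k >> 1 & 1, k & 1]) * 0.5
                    next_cells.append((child - offset) * 2.0)
            codes.append(byte)
        cells = next_cells
    return codes

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    pc = rng.random((1000, 3))
    codes = octree_occupancy(pc, depth=4)
    print(f"{len(codes)} occupancy bytes for 1000 points at depth 4")
```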
35
+ {
36
+ "section_id": "3.1.2",
37
+ "parent_section_id": "3.1",
38
+ "section_name": "III-A2 Traditional approaches",
39
+ "text": "The difficulty of PCC is rooted in the irregular spatial distribution of points. To tackle this hurdle, traditional methods primarily focus on converting the irregular data structure into a more regular form. This conversion can be achieved through techniques such as projection, voxelization, or graph transformation.\nProjection, as demonstrated in V-PCC, aims to reduce the 3D PCs into more manageable 2D images through multi-view projection. However, the loss of intricate 3D geometric information during projection can be a concern. Additionally, the inability to fully utilize the inherent sparsity in PCs can result in increased computational complexity and suboptimal representations.\nVoxelization is a technique that discretizes 3D space into voxels, providing a structured framework for representing PCs. It often involves the use of tree structures like octrees and KD-trees. While this method has its advantages, it can introduce significant computational and memory demands, often without effectively leveraging the sparsity of PCs. Furthermore, the voxelization process itself may lead to a loss of geometric intricacies due to quantization onto a voxel grid.\nGraph-based approaches prove efficient in representing complex data structures like PCs thanks to their ability to capture relationships and connections among data points. In this methodology, the geometry of PCs serves as the basis for constructing vertices and edges in a graph. Attributes associated with each data point become signals on the corresponding graph vertices. Leveraging this graph representation, we can employ graph signal processing techniques, such as graph Fourier transform (GFT) and graph wavelet transform, for efficient feature extraction and compression."
40
+ },
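The graph-based pipeline described above can be sketched in a few lines: build a kNN graph over the geometry, form its Laplacian, and project attribute signals onto the Laplacian eigenbasis (the GFT). The dense eigensolver and unit edge weights below are simplifications chosen for brevity; practical coders use sparse, distance-weighted graphs.

```python
import numpy as np

def knn_graph_laplacian(xyz: np.ndarray, k: int = 8) -> np.ndarray:
    """Combinatorial Laplacian L = D - W of a kNN graph over PC geometry.
    A toy stand-in for the graph construction used by GFT-based attribute
    coding; real systems use sparse solvers and distance-weighted edges."""
    n = len(xyz)
    d2 = ((xyz[:, None, :] - xyz[None, :, :]) ** 2).sum(-1)
    W = np.zeros((n, n))
    for i in range(n):
        nbrs = np.argsort(d2[i])[1:k + 1]          # position 0 is the point itself
        W[i, nbrs] = W[nbrs, i] = 1.0              # symmetrize
    return np.diag(W.sum(1)) - W

def gft(attributes: np.ndarray, L: np.ndarray):
    """Graph Fourier transform: project the attribute signal onto Laplacian
    eigenvectors. Energy compacts into low-frequency coefficients, which is
    what makes the transform useful for compression."""
    _, U = np.linalg.eigh(L)                       # eigenvectors, ascending frequency
    return U.T @ attributes, U                     # coefficients + basis

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    xyz = rng.random((200, 3))
    color = xyz @ rng.random((3, 3))               # smooth toy attribute
    coeffs, U = gft(color, knn_graph_laplacian(xyz))
    kept = 32                                      # keep low frequencies only
    recon = U[:, :kept] @ coeffs[:kept]
    print("MSE with 32/200 coefficients:", float(((recon - color) ** 2).mean()))
```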
41
+ {
42
+ "section_id": "3.1.3",
43
+ "parent_section_id": "3.1",
44
+ "section_name": "III-A3 Deep learning (DL)-aided approaches",
45
+ "text": "DL enables end-to-end PCC frameworks that seamlessly encompass data transformation, feature extraction, encoding, and decoding [8 ###reference_8###, 9 ###reference_9###, 10 ###reference_10###, 11 ###reference_11###, 5 ###reference_5###]. In the following, we examine three representative works.\nIn [8 ###reference_8###], the authors present a comprehensive approach to compress LiDAR streams by exploiting spatio-temporal redundancies through a learned deep entropy model. The approach involves quantizing and encoding spatial coordinates into an octree representation, incorporating a deep entropy model to predict occupancies and intensity values, and entropy coding for bitstream generation. They demonstrate that DL-aided schemes can efficiently compress both spatial coordinates and intensity values while preserving reconstruction quality.\nRef. [9 ###reference_9###] introduces Deep-PCAC, a DL-assisted approach for lossy PC attribute compression. Unlike previous methods that voxelize or project points, Deep-PCAC directly encodes and decodes attributes using geometry. It employs second-order point convolution for better spatial correlation utilization, a dense point-inception block for enhanced feature propagation, and a multiscale loss for improved optimization. While it falls short in performance compared to G-PCC, Deep-PCAC serves as a foundational work in DL-based attribute compression.\nAnother representative PCC approach is DeepPCC [10 ###reference_10###].\nUnlike existing methods, DeepPCC offers a unified framework for both geometry and attribute compression. It leverages multiscale neighborhood information aggregation, sparse convolution, and KNN self-attention to efficiently capture spatial correlations in PCs. Moreover, DeepPCC is computationally efficient in that the computations are limited to positively-occupied voxels."
46
+ },
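The role of a learned entropy model can be illustrated without any network: an ideal entropy coder spends about -log2 p(symbol) bits per symbol, so sharper probability estimates directly shorten the bitstream. In the toy below, a fitted histogram stands in for the context-conditioned deep model of [8]; the distribution and sizes are made up.

```python
import numpy as np

rng = np.random.default_rng(2)

def code_length_bits(symbols: np.ndarray, probs: np.ndarray) -> float:
    """Ideal entropy-coded length: -sum log2 p(symbol). An arithmetic coder
    approaches this bound, so a sharper probability model (e.g. a deep
    entropy model conditioned on octree context) directly shortens the
    bitstream."""
    return float(-np.log2(probs[symbols]).sum())

# Toy source: 8-bit octree occupancy codes with a skewed distribution.
true_p = rng.dirichlet(np.ones(256) * 0.05)
codes = rng.choice(256, size=10_000, p=true_p)

uniform = np.full(256, 1 / 256)                    # context-free baseline
empirical = np.bincount(codes, minlength=256) / len(codes)
empirical = np.clip(empirical, 1e-9, None)         # avoid log(0)

print("uniform model :", code_length_bits(codes, uniform) / 1e3, "kbits")
print("learned model :", code_length_bits(codes, empirical) / 1e3, "kbits")
```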
47
+ {
48
+ "section_id": "3.2",
49
+ "parent_section_id": "3",
50
+ "section_name": "III-B Analog and hybrid schemes",
51
+ "text": "The theoretical foundation of digital transmission rests on Shannon\u2019s separation theorem that relies on idealized assumptions, such as ergodic sources and channels, and infinite block lengths. In practice, however, we are confronted with non-ergodic sources and channels, and operate over limited source and channel block lengths. This disparity between theoretical assumptions and real-world restrictions renders the digital approach suboptimal.\nOn the other hand, digital transmission suffers from the cliff and levelling effects, which become notably pronounced in time-varying and broadcast channels. Picture a typical downlink scenario in wireless networks, where an AP endeavors to broadcast a PC to multiple users. Here, the entire communication system\u2019s performance is at the mercy of its weakest link \u2013 the channel with the poorest quality. This limitation, unfortunately, leads to missed opportunities as the potential of other, better-performing channels remain untapped.\nTo overcome the above limitations, [13 ###reference_13###] proposed a new paradigm called semantic PC transmission (SEPT). Two schemes that underpin SEPT are DeepJSCC and discrete-time analog transmission, each offering distinctive advantages.\nThe design of JSCC systems has traditionally focused on specific sources and channels. However, DeepJSCC heralds a paradigm shift by leveraging DL to enable end-to-end learning of DeepJSCC encoder and decoder for arbitrary sources and channels as long as sufficiently rich data are available. This revolutionary approach vastly broadens the applicability of JSCC across a spectrum of scenarios.\nWhen coupled with discrete-time analog transmission, SEPT introduces a favorable feature whereby each user can reconstruct the input signal at a quality allowed by its channel noise level. This capability effectively mitigates the cliff and leveling effects. Consider the context of broadcast channels. With SEPT, the decoding performance of each user correlates directly with their individual channel quality. This eliminates the shortcoming that the overall system performance is dictated by the worst channel quality.\nAnother noteworthy approach that capitalizes on analog transmission is HostCast [4 ###reference_4###]. In contrast to SEPT, HostCast embraces a graph-based methodology, where the PC is conceptualized as a graph, with its geometric components serving as vertices and attributes functioning as signals attributed to these vertices. HostCast leverages GFT to process these signals, subsequently transmitting them directly in an analog fashion without the intermediaries of digital quantization and entropy coding. This distinctive approach grants HostCast the capability to enhance reconstruction quality gracefully as the wireless channel quality improves.\nExpanding upon HostCast, the authors further introduced HoloCast+ [12 ###reference_12###]. This iteration employs a hybrid digital-analog coding scheme, seamlessly integrating octree-based digital compression with GFT-based analog coding. This combined approach not only outperforms HostCast in terms of reconstruction quality but also effectively mitigates the cliff and leveling effects."
52
+ },
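A minimal sketch of the analog/DeepJSCC behavior discussed above, with PCA standing in for the learned encoder and decoder (our simplification; SEPT uses a point transformer). The reconstruction error shrinks smoothly with SNR instead of collapsing at a threshold, which is the graceful-degradation property at stake.

```python
import numpy as np

rng = np.random.default_rng(3)

def awgn(z: np.ndarray, snr_db: float) -> np.ndarray:
    """Discrete-time analog channel: unit-power symbols plus Gaussian noise."""
    sigma = 10 ** (-snr_db / 20)
    return z + sigma * rng.standard_normal(z.shape)

# Stand-in for a trained DeepJSCC autoencoder: a PCA projection of the point
# coordinates (a real SEPT-style system would learn this end to end). The
# point here is the channel behavior, not the codec.
pc = rng.random((1024, 3))
mean = pc.mean(0)
_, _, Vt = np.linalg.svd(pc - mean, full_matrices=False)
encode = lambda x: (x - mean) @ Vt.T        # latent "analog symbols"
decode = lambda z: z @ Vt + mean

z = encode(pc)
power = np.sqrt((z ** 2).mean())
for snr in (0, 10, 20):
    z_hat = awgn(z / power, snr) * power    # normalize, transmit, rescale
    mse = ((decode(z_hat) - pc) ** 2).mean()
    print(f"SNR {snr:2d} dB -> reconstruction MSE {mse:.5f}")
```

Note that the error decreases continuously as the SNR improves: there is no threshold below which decoding fails outright (no cliff) and no ceiling beyond which extra SNR is wasted (no leveling).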
53
+ {
54
+ "section_id": "4",
55
+ "parent_section_id": null,
56
+ "section_name": "IV Potential Solutions and Research Directions",
57
+ "text": "In the previous two sections, we have highlighted the challenges associated with wireless PC transmission and provided a concise overview of existing works. Clearly, there is a compelling need to rethink and redesign communication systems meticulously crafted for the nuances of PCs, considering their pivotal role in the impending digital landscape.\nIn this section, we will delve into the challenges, harness insights gleaned from the latest strides in the field, and offer our perspectives for future research.\n###figure_3###"
58
+ },
59
+ {
60
+ "section_id": "4.1",
61
+ "parent_section_id": "4",
62
+ "section_name": "IV-A Semantic communication",
63
+ "text": "Semantic communication, an emerging communication system design approach, has gained prominence in recent years [14 ###reference_14###, 6 ###reference_6###]. It broadly encompasses a category of end-to-end communication system designs rooted in DL.\nDiverging from traditional communication systems fixated on achieving low bit error rates, semantic communication centers its focus on overall system performance and offers two main advantages.\nFirst, semantic communication leverages more efficient DL-based compression methods. Over recent years, DL has demonstrated superior performance in compressing various data sources, spanning voice, image, and video signals. Given the irregular nature of PCs, we foresee DL-based PCC as a major future trend and a crucial research avenue. Existing DL-based PCC endeavors [9 ###reference_9###, 10 ###reference_10###] have already shown advantages over traditional methods like G-PCC.\nSecond, semantic communication leverages DeepJSCC and analog transmission to mitigate the cliff and leveling effects. SEPT [13 ###reference_13###] represents an attempt within semantic communication for PC transmission. Leveraging point transformer\u2019s feature extraction capability and the pooling layer\u2019s feature summarization capabilities, SEPT progressively reduces the number of points within a PC, culminating in its transformation into a latent vector for analog transmission to the receiver, thereby effectively addressing the cliff and leveling effects, as shown in Fig. 3 ###reference_###(a).\nWhile SEPT\u2019s success serves as a promising precedent for applying semantic communication principles to PC transmission, several challenges remain on the horizon. A key concern is that the current SEPT framework is primarily suited for small-scale PCs.\nAs the number of points within a PC grows, surpassing G-PCC with SEPT becomes a formidable task.\nFig. 3 ###reference_###(b) evaluates SEPT on a large-scale dataset. As shown, the performance of SEPT degrades significantly compared with G-PCC. This raises a pivotal question: How can we effectively employ semantic communication for large-scale PCs? Here, we propose several potential solutions:\nCraft new DL-based feature extraction and compression framework tailored for large PCs.\nExplore hybrid digital and analog coding frameworks. In this approach, alongside transmitting the latent vector through analog means, we can also transmit key points within the large PC to the receiver in a digital format. Once these key points, which characterize the structure of the PC, are known to the receiver, the transmitter can describe the neighboring points (including both geometry and attributes) around these key points as part of the latent vector.\nOur preliminary experiments in Fig. 3 ###reference_###(b) indicate that this hybrid coding approach can remarkably enhance SEPT\u2019s performance on large PCs.\nRepresentational compression. Another intriguing avenue is to explore representational compression methods inspired by NeRF, which we will delve into later in Section IV-B ###reference_###.\nOverall, the integration of DL and semantic communication holds significant promise for enhancing the efficiency and reliability of PC transmission, offering exciting opportunities for future research in this field.\nIn various PC applications, the receiver often possesses a static environmental map. 
Leveraging this static map, the receiver can train a PC generative model, thereby gaining the ability to generate or fill in missing parts of the PCs. Exploiting this generative capability at the receiver, we can develop corresponding conditional semantic encoders at the transmitter and significantly reduce the amount of data that needs to be transmitted.\n###figure_4###"
64
+ },
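For the hybrid digital-analog idea above, the key points sent digitally could be chosen by farthest point sampling; the sketch below is our illustrative reading, not the scheme evaluated in Fig. 3(b).

```python
import numpy as np

def farthest_point_sampling(xyz: np.ndarray, m: int) -> np.ndarray:
    """Pick m well-spread 'key points'. In the hybrid scheme sketched above,
    these could be sent digitally while neighborhoods around them are
    summarized in an analog latent vector (our illustrative reading)."""
    chosen = [0]
    dist = ((xyz - xyz[0]) ** 2).sum(1)
    for _ in range(m - 1):
        nxt = int(dist.argmax())                   # farthest from the chosen set
        chosen.append(nxt)
        dist = np.minimum(dist, ((xyz - xyz[nxt]) ** 2).sum(1))
    return np.array(chosen)

if __name__ == "__main__":
    rng = np.random.default_rng(4)
    pc = rng.random((5000, 3))
    keys = farthest_point_sampling(pc, 64)
    print("64 key points selected; first five indices:", keys[:5])
```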
65
+ {
66
+ "section_id": "4.2",
67
+ "parent_section_id": "4",
68
+ "section_name": "IV-B NeRF and representational compression",
69
+ "text": "NeRF is an innovative approach for constructing 3D scenes and objects from 2D images [7 ###reference_7###]. Unlike traditional methods, NeRF takes a new route by instructing the DNN model on how light interacts with objects within a scene, effectively constructing a 3D model by capturing their appearance from multiple perspectives.\nThe success of NeRF not only provides a means to construct 3D PCs from 2D images but also offers an inspiring concept: when contemplating the compression of a PC, we can parametrically represent the relationship between spatial point positions and their attributes as a function (e.g., DNNs). In doing so, the transmitter only needs to transmit the parameters of this function to the receiver. The receiver can then input individual coordinates into the function to determine whether there are points at those coordinates and what attributes those points possess.\nIn essence, this method shifts the focus from transmitting extensive spatial data and attributes to transmitting a compact function that can efficiently represent the original PC.\nWhen we employ DNN as the representative function, the input and output of the DNN can be written as\nwhere denotes a 3D coordinate and is the DNN parameterized by . The training of occurs at the transmitter, and subsequently, is transmitted to the receiver. With , the receiver can deduce the presence of a point at any given position and retrieve associated attributes if a point indeed exists.\nThe simulation results in Fig. 4 ###reference_### confirm the efficacy of representational compression. As shown, the DNN architecture plays a pivotal role. When we opt for a convolutional neural network (CNN), representational compression outperforms G-PCC in terms of geometry reconstruction. On the other hand, the exploration of more efficient DNN designs that can surpass G-PCC in terms of both geometry and attribute reconstruction presents an enticing research direction.\nWhen we employ DNNs as the representative functions, our challenge shifts from transmitting PCs to transmitting DNNs.\nDNN parameters are a set of real numbers. This implies that representational compression effectively transforms the PC into a latent vector. At this juncture, we can employ a semantic communication approach to transmit DNN parameters, thereby achieving higher efficiency and mitigating the cliff and leveling effects. This approach corresponds to the third solution described in Section IV-A ###reference_### for large-scale PCs."
70
+ },
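A toy instance of representational compression, assuming a linear model on random Fourier features in place of the coordinate DNN f_theta (our substitution, chosen for brevity): fit the parameters at the transmitter, ship only those parameters, and let the receiver query occupancy and an attribute at arbitrary coordinates. The scene and all sizes are made up.

```python
import numpy as np

rng = np.random.default_rng(5)

# Representational compression (our sketch): instead of the points themselves,
# transmit the parameters of a function f_theta(x) -> (occupancy, attribute).
# Here f_theta is ridge regression on random Fourier features -- a linear
# stand-in for the coordinate MLP discussed above, not the paper's model.

B = rng.standard_normal((3, 128)) * 6.0            # fixed feature frequencies
feats = lambda x: np.concatenate([np.sin(x @ B), np.cos(x @ B)], axis=1)

# Toy scene: occupied voxels lie inside a sphere; attribute = distance shade.
grid = np.stack(np.meshgrid(*[np.linspace(0, 1, 24)] * 3), -1).reshape(-1, 3)
occ = (np.linalg.norm(grid - 0.5, axis=1) < 0.3).astype(float)
attr = 1.0 - np.linalg.norm(grid - 0.5, axis=1)

Phi = feats(grid)
Y = np.stack([occ, attr], 1)
theta = np.linalg.solve(Phi.T @ Phi + 1e-3 * np.eye(Phi.shape[1]), Phi.T @ Y)

# "Receiver": queries arbitrary coordinates through the transmitted theta.
pred = feats(grid) @ theta
print("occupancy accuracy:", float(((pred[:, 0] > 0.5) == (occ > 0.5)).mean()))
print("parameters sent   :", theta.size)
print("voxels in scene   :", grid.shape[0])
```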
71
+ {
72
+ "section_id": "4.3",
73
+ "parent_section_id": "4",
74
+ "section_name": "IV-C Uplink PC aggregation",
75
+ "text": "So far, much of our discussion has revolved around the downlink aspect, emphasizing the efficient transmission of PCs from an AP to downlink users. Yet, another intriguing and crucial scenario is how the AP acquires PCs in the uplink. Recall that PC collection exhibits the characteristic of being intense in proximity but sparse in the distance. Therefore, an AP needs to utilize a distributed deployment of sensors to gather environmental information and construct spatial PCs. The specific solutions in practice depend on the types of sensors deployed.\nWhen the available sensors in the environment are cameras, NeRF offers a ready-made solution. In this setup, multiple sensors transmit captured images from various perspectives to the AP. The AP then trains a DNN using these multi-view images to represent the entire 3D environment, subsequently rendering the 3D PC. From a wireless communication standpoint, the upside is that this approach can leverage well-established image encoders and decoders. The downside, however, is that it places additional responsibilities on the AP, as training the DNN and rendering the PC can consume substantial computational resources and time.\nIn scenarios where sensors in the environment directly collect PCs, the central challenge becomes the transmission and aggregation of distributed multi-view PCs. Unlike in point-to-point and broadcast channels, the primary objective here is to aggregate all the PCs. Thus, having each sensor transmit its PC to the AP might prove inefficient. Instead, a promising approach is feature aggregation. Here, PCs collected by multiple sensors are first transformed into feature space, with feature vectors subsequently transmitted to the AP for aggregation.\nThis approach raises two key questions.\nFirst, should the transmission and aggregation of feature vectors occur separately, i.e., employing orthogonal multiple access followed by aggregation, or should they take place jointly via an over-the-air computation approach? Which approach strikes a better trade-off between feature space size and the reconstruction performance at the AP? Second, should the features extracted by multiple sensors reinforce each other or complement each other? These questions pose intriguing challenges and opportunities for optimizing the transmission of PC data in a distributed sensing environment.\nConsidering distinct channel conditions and the diverse information resolution needs for downstream tasks across various spatial directions, we can adopt a flexible multi-resolution multi-user feature aggregation approach. In this context, sensors have the liberty to downsample their PCs, with each sensor emphasizing the transmission of features at different scales. This prioritization can be determined by the significance of the transmitted information and the dynamic channel conditions. Leveraging multi-resolution feature aggregation can result in more efficient spectrum utilization without sacrificing the performance of downstream tasks.\nAnother captivating scenario arises with multimodal aggregation. In environments featuring a range of sensors like cameras for visual data and LiDAR for depth sensing, the AP has the opportunity to harness information from multiple modalities. This approach opens up exciting possibilities for integrating data from diverse sensor types, potentially leading to richer and more accurate representations of the environment. 
It also brings forth the challenge of efficiently transmitting and fusing data from these different sources, which is a compelling area for future exploration.\n###figure_5###"
76
+ },
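The orthogonal-versus-over-the-air question raised above can be illustrated numerically. The sketch below assumes ideal synchronization and unit channel gains (i.e., perfect pre-compensation), which is optimistic; it only shows why analog superposition saves channel uses when the AP wants a sum of feature vectors.

```python
import numpy as np

rng = np.random.default_rng(6)

# Over-the-air aggregation sketch (our illustration): K sensors want the AP
# to obtain the SUM of their feature vectors. With analog superposition the
# channel itself performs the addition in one shot, instead of K orthogonal
# uplink slots followed by digital summation.

K, d, snr_db = 8, 64, 10
features = rng.standard_normal((K, d))
sigma = 10 ** (-snr_db / 20)

# Orthogonal access: K separate noisy transmissions, summed at the AP.
orth = sum(f + sigma * rng.standard_normal(d) for f in features)

# Over-the-air computation: simultaneous transmission, one noise realization.
ota = features.sum(0) + sigma * rng.standard_normal(d)

target = features.sum(0)
print("orthogonal MSE  :", float(((orth - target) ** 2).mean()))
print("over-the-air MSE:", float(((ota - target) ** 2).mean()))
print("channel uses    : orthogonal =", K * d, ", over-the-air =", d)
```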
77
+ {
78
+ "section_id": "4.4",
79
+ "parent_section_id": "4",
80
+ "section_name": "IV-D Delay critical applications",
81
+ "text": "In the previous analysis and discussion, we have not yet incorporated the critical consideration of delay constraints. In practical applications, information freshness stands as a paramount metric. Outdated data loses its relevance, rendering delay a critical concern in wireless PC transmission.\nPC transmission involves three main steps: compression, air interface transmission, and reconstruction. Among these, air interface transmission typically consumes the least amount of time. More often than not, compression and reconstruction are the most time-intensive steps [5 ###reference_5###]. Thus, for applications with stringent time constraints, particularly those necessitating real-time responses, devising lightweight compression or JSCC schemes becomes pivotal in real-world scenarios.\nIn exceptionally critical, life-dependent situations, it might become necessary to forego compression and reconstruction, opting instead to directly downsample the raw PC data for transmission. That said, this choice does not imply a lack of optimization opportunities. It is important to recognize that PCs are a genuine reflection of 3D space, and a substantial portion of the PC data collected by distributed sensors often exhibits overlaps. This insight inspires us to craft a transmission strategy with the aim of minimizing the overall number of transmissions (and consequently, transmission time).\nConsider a scenario in vehicular networks with vehicles equipped with LIDAR sensors. We can partition the complete PC of the space into distinct small regions or cubes. With LIDAR, each of the vehicles can sense a subset of these cubes. Our objective is to ensure that each vehicle can recover all cubes through wireless communication.\nWhen a Roadside Unit (RSU) is present, the solution to this problem becomes relatively clear. Specifically, the communication process can be divided into two phases: the uplink and downlink. During the uplink phase, the RSU collects the data of all cubes. At this stage, the vehicles only need to transmit data for the cubes that the RSU fails to perceive.\nIn the downlink phase, the RSU broadcasts the entire PC to vehicles. At this juncture, the transmission problem can be modeled as an index coding problem [15 ###reference_15###], where the RSU holds files to distribute, and each vehicle possesses its own side information.\nA more intricate scenario arises when the RSU is absent, and vehicles have to broadcast to each other, exchanging information in a self-organized manner to ensure that each vehicle ultimately obtains the data it could not perceive on its own. We refer to this problem as the distributed broadcasting problem.\nFinding the optimal transmission scheme for the distributed broadcasting problem remains a formidable challenge. In this paper, we put forth a new hypergraph approach to address this challenge. To illustrate the concept, consider the scenario depicted in Fig. 5 ###reference_###, where and . Each vehicle captures some regions of the scene, leading to overlapping observations. It is the intricate relationships between these observed regions that render this problem challenging.\nOur hypergraph framework, however, offers a means to streamline this intricacy.\nWithin the hypergraph, vehicles are represented by vertices, while the observed regions are represented by edges. Given the potential for multiple vehicles to observe the same region, these edges can connect multiple vertices, giving rise to a hypergraph representation. 
Our approach not only streamlines the problem but also uncovers fresh analytical avenues. As an example, for a hypergraph , a lower bound of the optimal number of transmissions is found to be"
82
+ },
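A toy scheduler for the distributed broadcasting problem, contrasting uncoded broadcasts with simple pairwise XOR (index) coding. The vehicle sets and cube IDs are made up, and the greedy pairing is our illustration; it does not implement the paper's hypergraph bound.

```python
# Toy distributed-broadcast scheduler (our illustration): vehicles v1..v3
# and the cube IDs are made up. Uncoded, every cube missing somewhere costs
# one broadcast; XOR (index) coding can merge two broadcasts when each
# receiver already holds the other cube as side information.

observed = {                    # vehicle -> set of cubes it sensed (toy data)
    "v1": {0, 1, 2},
    "v2": {0, 2, 3},
    "v3": {1, 3},
}
all_cubes = set().union(*observed.values())
needed = {v: all_cubes - s for v, s in observed.items()}

def xor_ok(a: int, b: int) -> bool:
    """XOR(a, b) works if some vehicle can form it and every vehicle missing
    one of the two cubes already holds the other (so it can cancel it)."""
    has_both = any({a, b} <= s for s in observed.values())
    decodable = all(b in observed[v] for v in observed if a in needed[v]) and \
                all(a in observed[v] for v in observed if b in needed[v])
    return has_both and decodable

remaining = sorted(set().union(*needed.values()))
uncoded = len(remaining)        # baseline: one broadcast per missing cube
coded = 0
while remaining:
    a = remaining.pop(0)
    pair = next((b for b in remaining if xor_ok(a, b)), None)
    if pair is not None:
        remaining.remove(pair)
    coded += 1

print("uncoded broadcasts:", uncoded)   # -> 4
print("with XOR coding   :", coded)     # -> 2
```

In this toy instance, v1 can broadcast XOR(0, 1) and v2 can broadcast XOR(2, 3), so two coded transmissions replace four uncoded ones.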
83
+ {
84
+ "section_id": "5",
85
+ "parent_section_id": null,
86
+ "section_name": "Conclusion",
87
+ "text": "This paper undertook a thorough investigation into the challenges surrounding wireless PC transmission and introduced innovative solution frameworks that hold the promise of reshaping the landscape of efficient 3D spatial data transmission. Our findings illuminate several critical research directions, encompassing semantic transmission of large-scale PCs, representational compression inspired by NeRF, uplink PC aggregation, and the optimization for delay-critical applications. In view of these challenges and opportunities, our work underscores the imperative for sustained innovation and interdisciplinary collaboration to seamlessly integrate 3D spatial data into the interconnected fabric of the future."
88
+ }
89
+ ],
90
+ "appendix": [],
91
+ "tables": {
92
+ "1": {
93
+ "table_html": "<figure class=\"ltx_table\" id=\"S3.T1\">\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\">TABLE I: </span>A summary of existing solutions for wireless PC transmission.</figcaption>\n<table class=\"ltx_tabular ltx_centering ltx_guessed_headers ltx_align_middle\" id=\"S3.T1.36\">\n<thead class=\"ltx_thead\">\n<tr class=\"ltx_tr\" id=\"S3.T1.36.37.1\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"S3.T1.36.37.1.1\" style=\"padding:-0.5pt 5.7pt;\">Schemes</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"S3.T1.36.37.1.2\" style=\"padding:-0.5pt 5.7pt;\">PC data</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"S3.T1.36.37.1.3\" style=\"padding:-0.5pt 5.7pt;\">Backbone</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"S3.T1.36.37.1.4\" style=\"padding:-0.5pt 5.7pt;\">Modulation</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"S3.T1.36.37.1.5\" style=\"padding:-0.5pt 5.7pt;\">lossless</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"S3.T1.36.37.1.6\" style=\"padding:-0.5pt 5.7pt;\">\n<table class=\"ltx_tabular ltx_align_middle\" id=\"S3.T1.36.37.1.6.1\">\n<tr class=\"ltx_tr\" id=\"S3.T1.36.37.1.6.1.1\">\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.36.37.1.6.1.1.1\" style=\"padding:-0.5pt 5.7pt;\">Exploit temporal</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T1.36.37.1.6.1.2\">\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.36.37.1.6.1.2.1\" style=\"padding:-0.5pt 5.7pt;\">Correlations</td>\n</tr>\n</table>\n</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"S3.T1.36.37.1.7\" style=\"padding:-0.5pt 5.7pt;\">\n<table class=\"ltx_tabular ltx_align_middle\" id=\"S3.T1.36.37.1.7.1\">\n<tr class=\"ltx_tr\" id=\"S3.T1.36.37.1.7.1.1\">\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.36.37.1.7.1.1.1\" style=\"padding:-0.5pt 5.7pt;\">Prevent cliff &amp;</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T1.36.37.1.7.1.2\">\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.36.37.1.7.1.2.1\" style=\"padding:-0.5pt 5.7pt;\">levelling effects</td>\n</tr>\n</table>\n</th>\n</tr>\n</thead>\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S3.T1.3.3\">\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T1.3.3.4\" style=\"padding:-0.5pt 5.7pt;\">\n<cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"#bib.bib1\" title=\"\">1</a>]</cite> PCL</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T1.3.3.5\" style=\"padding:-0.5pt 5.7pt;\">Geo. 
&amp; Attr.</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T1.3.3.6\" style=\"padding:-0.5pt 5.7pt;\">Octree</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T1.3.3.7\" style=\"padding:-0.5pt 5.7pt;\">Digital</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T1.1.1.1\" style=\"padding:-0.5pt 5.7pt;\"><svg class=\"ltx_picture\" height=\"10.09\" id=\"S3.T1.1.1.1.pic1\" overflow=\"visible\" version=\"1.1\" width=\"10.09\"><g fill=\"#000000\" stroke=\"#000000\" transform=\"translate(0,10.09) matrix(1 0 0 -1 0 0) translate(0.55,0) translate(0,0.55)\"><g color=\"#000000\" stroke-linecap=\"round\" stroke-width=\"0.7pt\"><path d=\"M 2.26 0 C 4.26 3.95 5.82 6.03 9.05 9.05\" style=\"fill:none\"></path></g><g color=\"#000000\" stroke-linecap=\"round\" stroke-width=\"0.8pt\"><path d=\"M 0 3.17 C 0.79 1.91 1.25 1.23 2.08 0\" style=\"fill:none\"></path></g></g></svg>\n</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T1.2.2.2\" style=\"padding:-0.5pt 5.7pt;\"><svg class=\"ltx_picture\" height=\"10.02\" id=\"S3.T1.2.2.2.pic1\" overflow=\"visible\" version=\"1.1\" width=\"10.02\"><g fill=\"#000000\" stroke=\"#000000\" transform=\"translate(0,10.02) matrix(1 0 0 -1 0 0) translate(0.48,0) translate(0,0.48)\"><g color=\"#000000\" stroke-linecap=\"round\" stroke-width=\"0.7pt\"><path d=\"M 0 0 C 3.16 3.9 5.16 5.9 9.05 9.05\" style=\"fill:none\"></path></g><g color=\"#000000\" stroke-linecap=\"round\" stroke-width=\"0.7pt\"><path d=\"M 1.81 8.6 C 3.77 5.3 4.95 3.53 7.24 0.45\" style=\"fill:none\"></path></g></g></svg>\n</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T1.3.3.3\" style=\"padding:-0.5pt 5.7pt;\"><svg class=\"ltx_picture\" height=\"10.02\" id=\"S3.T1.3.3.3.pic1\" overflow=\"visible\" version=\"1.1\" width=\"10.02\"><g fill=\"#000000\" stroke=\"#000000\" transform=\"translate(0,10.02) matrix(1 0 0 -1 0 0) translate(0.48,0) translate(0,0.48)\"><g color=\"#000000\" stroke-linecap=\"round\" stroke-width=\"0.7pt\"><path d=\"M 0 0 C 3.16 3.9 5.16 5.9 9.05 9.05\" style=\"fill:none\"></path></g><g color=\"#000000\" stroke-linecap=\"round\" stroke-width=\"0.7pt\"><path d=\"M 1.81 8.6 C 3.77 5.3 4.95 3.53 7.24 0.45\" style=\"fill:none\"></path></g></g></svg>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T1.6.6\">\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.6.6.4\" style=\"padding:-0.5pt 5.7pt;\">\n<cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"#bib.bib2\" title=\"\">2</a>]</cite> Draco</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.6.6.5\" style=\"padding:-0.5pt 5.7pt;\">Geo. 
&amp; Attr.</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.6.6.6\" style=\"padding:-0.5pt 5.7pt;\">KD-tree</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.6.6.7\" style=\"padding:-0.5pt 5.7pt;\">Digital</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.4.4.1\" style=\"padding:-0.5pt 5.7pt;\"><svg class=\"ltx_picture\" height=\"10.09\" id=\"S3.T1.4.4.1.pic1\" overflow=\"visible\" version=\"1.1\" width=\"10.09\"><g fill=\"#000000\" stroke=\"#000000\" transform=\"translate(0,10.09) matrix(1 0 0 -1 0 0) translate(0.55,0) translate(0,0.55)\"><g color=\"#000000\" stroke-linecap=\"round\" stroke-width=\"0.7pt\"><path d=\"M 2.26 0 C 4.26 3.95 5.82 6.03 9.05 9.05\" style=\"fill:none\"></path></g><g color=\"#000000\" stroke-linecap=\"round\" stroke-width=\"0.8pt\"><path d=\"M 0 3.17 C 0.79 1.91 1.25 1.23 2.08 0\" style=\"fill:none\"></path></g></g></svg>\n</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.5.5.2\" style=\"padding:-0.5pt 5.7pt;\"><svg class=\"ltx_picture\" height=\"10.02\" id=\"S3.T1.5.5.2.pic1\" overflow=\"visible\" version=\"1.1\" width=\"10.02\"><g fill=\"#000000\" stroke=\"#000000\" transform=\"translate(0,10.02) matrix(1 0 0 -1 0 0) translate(0.48,0) translate(0,0.48)\"><g color=\"#000000\" stroke-linecap=\"round\" stroke-width=\"0.7pt\"><path d=\"M 0 0 C 3.16 3.9 5.16 5.9 9.05 9.05\" style=\"fill:none\"></path></g><g color=\"#000000\" stroke-linecap=\"round\" stroke-width=\"0.7pt\"><path d=\"M 1.81 8.6 C 3.77 5.3 4.95 3.53 7.24 0.45\" style=\"fill:none\"></path></g></g></svg>\n</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.6.6.3\" style=\"padding:-0.5pt 5.7pt;\"><svg class=\"ltx_picture\" height=\"10.02\" id=\"S3.T1.6.6.3.pic1\" overflow=\"visible\" version=\"1.1\" width=\"10.02\"><g fill=\"#000000\" stroke=\"#000000\" transform=\"translate(0,10.02) matrix(1 0 0 -1 0 0) translate(0.48,0) translate(0,0.48)\"><g color=\"#000000\" stroke-linecap=\"round\" stroke-width=\"0.7pt\"><path d=\"M 0 0 C 3.16 3.9 5.16 5.9 9.05 9.05\" style=\"fill:none\"></path></g><g color=\"#000000\" stroke-linecap=\"round\" stroke-width=\"0.7pt\"><path d=\"M 1.81 8.6 C 3.77 5.3 4.95 3.53 7.24 0.45\" style=\"fill:none\"></path></g></g></svg>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T1.9.9\">\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.9.9.4\" style=\"padding:-0.5pt 5.7pt;\">\n<cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"#bib.bib3\" title=\"\">3</a>]</cite> V-PCC</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.9.9.5\" style=\"padding:-0.5pt 5.7pt;\">Geo. 
&amp; Attr.</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.9.9.6\" style=\"padding:-0.5pt 5.7pt;\">Video codec</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.9.9.7\" style=\"padding:-0.5pt 5.7pt;\">Digital</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.7.7.1\" style=\"padding:-0.5pt 5.7pt;\"><svg class=\"ltx_picture\" height=\"10.09\" id=\"S3.T1.7.7.1.pic1\" overflow=\"visible\" version=\"1.1\" width=\"10.09\"><g fill=\"#000000\" stroke=\"#000000\" transform=\"translate(0,10.09) matrix(1 0 0 -1 0 0) translate(0.55,0) translate(0,0.55)\"><g color=\"#000000\" stroke-linecap=\"round\" stroke-width=\"0.7pt\"><path d=\"M 2.26 0 C 4.26 3.95 5.82 6.03 9.05 9.05\" style=\"fill:none\"></path></g><g color=\"#000000\" stroke-linecap=\"round\" stroke-width=\"0.8pt\"><path d=\"M 0 3.17 C 0.79 1.91 1.25 1.23 2.08 0\" style=\"fill:none\"></path></g></g></svg>\n</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.8.8.2\" style=\"padding:-0.5pt 5.7pt;\"><svg class=\"ltx_picture\" height=\"10.09\" id=\"S3.T1.8.8.2.pic1\" overflow=\"visible\" version=\"1.1\" width=\"10.09\"><g fill=\"#000000\" stroke=\"#000000\" transform=\"translate(0,10.09) matrix(1 0 0 -1 0 0) translate(0.55,0) translate(0,0.55)\"><g color=\"#000000\" stroke-linecap=\"round\" stroke-width=\"0.7pt\"><path d=\"M 2.26 0 C 4.26 3.95 5.82 6.03 9.05 9.05\" style=\"fill:none\"></path></g><g color=\"#000000\" stroke-linecap=\"round\" stroke-width=\"0.8pt\"><path d=\"M 0 3.17 C 0.79 1.91 1.25 1.23 2.08 0\" style=\"fill:none\"></path></g></g></svg>\n</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.9.9.3\" style=\"padding:-0.5pt 5.7pt;\"><svg class=\"ltx_picture\" height=\"10.02\" id=\"S3.T1.9.9.3.pic1\" overflow=\"visible\" version=\"1.1\" width=\"10.02\"><g fill=\"#000000\" stroke=\"#000000\" transform=\"translate(0,10.02) matrix(1 0 0 -1 0 0) translate(0.48,0) translate(0,0.48)\"><g color=\"#000000\" stroke-linecap=\"round\" stroke-width=\"0.7pt\"><path d=\"M 0 0 C 3.16 3.9 5.16 5.9 9.05 9.05\" style=\"fill:none\"></path></g><g color=\"#000000\" stroke-linecap=\"round\" stroke-width=\"0.7pt\"><path d=\"M 1.81 8.6 C 3.77 5.3 4.95 3.53 7.24 0.45\" style=\"fill:none\"></path></g></g></svg>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T1.12.12\">\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.12.12.4\" style=\"padding:-0.5pt 5.7pt;\">\n<cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"#bib.bib3\" title=\"\">3</a>]</cite> G-PCC</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.12.12.5\" style=\"padding:-0.5pt 5.7pt;\">Geo. 
&amp; Attr.</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.12.12.6\" style=\"padding:-0.5pt 5.7pt;\">Octree</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.12.12.7\" style=\"padding:-0.5pt 5.7pt;\">Digital</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.10.10.1\" style=\"padding:-0.5pt 5.7pt;\"><svg class=\"ltx_picture\" height=\"10.09\" id=\"S3.T1.10.10.1.pic1\" overflow=\"visible\" version=\"1.1\" width=\"10.09\"><g fill=\"#000000\" stroke=\"#000000\" transform=\"translate(0,10.09) matrix(1 0 0 -1 0 0) translate(0.55,0) translate(0,0.55)\"><g color=\"#000000\" stroke-linecap=\"round\" stroke-width=\"0.7pt\"><path d=\"M 2.26 0 C 4.26 3.95 5.82 6.03 9.05 9.05\" style=\"fill:none\"></path></g><g color=\"#000000\" stroke-linecap=\"round\" stroke-width=\"0.8pt\"><path d=\"M 0 3.17 C 0.79 1.91 1.25 1.23 2.08 0\" style=\"fill:none\"></path></g></g></svg>\n</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.11.11.2\" style=\"padding:-0.5pt 5.7pt;\"><svg class=\"ltx_picture\" height=\"10.02\" id=\"S3.T1.11.11.2.pic1\" overflow=\"visible\" version=\"1.1\" width=\"10.02\"><g fill=\"#000000\" stroke=\"#000000\" transform=\"translate(0,10.02) matrix(1 0 0 -1 0 0) translate(0.48,0) translate(0,0.48)\"><g color=\"#000000\" stroke-linecap=\"round\" stroke-width=\"0.7pt\"><path d=\"M 0 0 C 3.16 3.9 5.16 5.9 9.05 9.05\" style=\"fill:none\"></path></g><g color=\"#000000\" stroke-linecap=\"round\" stroke-width=\"0.7pt\"><path d=\"M 1.81 8.6 C 3.77 5.3 4.95 3.53 7.24 0.45\" style=\"fill:none\"></path></g></g></svg>\n</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.12.12.3\" style=\"padding:-0.5pt 5.7pt;\"><svg class=\"ltx_picture\" height=\"10.02\" id=\"S3.T1.12.12.3.pic1\" overflow=\"visible\" version=\"1.1\" width=\"10.02\"><g fill=\"#000000\" stroke=\"#000000\" transform=\"translate(0,10.02) matrix(1 0 0 -1 0 0) translate(0.48,0) translate(0,0.48)\"><g color=\"#000000\" stroke-linecap=\"round\" stroke-width=\"0.7pt\"><path d=\"M 0 0 C 3.16 3.9 5.16 5.9 9.05 9.05\" style=\"fill:none\"></path></g><g color=\"#000000\" stroke-linecap=\"round\" stroke-width=\"0.7pt\"><path d=\"M 1.81 8.6 C 3.77 5.3 4.95 3.53 7.24 0.45\" style=\"fill:none\"></path></g></g></svg>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T1.15.15\">\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.15.15.4\" style=\"padding:-0.5pt 5.7pt;\">\n<cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"#bib.bib8\" title=\"\">8</a>]</cite> Muscle</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.15.15.5\" style=\"padding:-0.5pt 5.7pt;\">Geo. 
&amp; Attr.</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.15.15.6\" style=\"padding:-0.5pt 5.7pt;\">Octree &amp; deep entropy model</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.15.15.7\" style=\"padding:-0.5pt 5.7pt;\">Digital</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.13.13.1\" style=\"padding:-0.5pt 5.7pt;\"><svg class=\"ltx_picture\" height=\"10.02\" id=\"S3.T1.13.13.1.pic1\" overflow=\"visible\" version=\"1.1\" width=\"10.02\"><g fill=\"#000000\" stroke=\"#000000\" transform=\"translate(0,10.02) matrix(1 0 0 -1 0 0) translate(0.48,0) translate(0,0.48)\"><g color=\"#000000\" stroke-linecap=\"round\" stroke-width=\"0.7pt\"><path d=\"M 0 0 C 3.16 3.9 5.16 5.9 9.05 9.05\" style=\"fill:none\"></path></g><g color=\"#000000\" stroke-linecap=\"round\" stroke-width=\"0.7pt\"><path d=\"M 1.81 8.6 C 3.77 5.3 4.95 3.53 7.24 0.45\" style=\"fill:none\"></path></g></g></svg>\n</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.14.14.2\" style=\"padding:-0.5pt 5.7pt;\"><svg class=\"ltx_picture\" height=\"10.09\" id=\"S3.T1.14.14.2.pic1\" overflow=\"visible\" version=\"1.1\" width=\"10.09\"><g fill=\"#000000\" stroke=\"#000000\" transform=\"translate(0,10.09) matrix(1 0 0 -1 0 0) translate(0.55,0) translate(0,0.55)\"><g color=\"#000000\" stroke-linecap=\"round\" stroke-width=\"0.7pt\"><path d=\"M 2.26 0 C 4.26 3.95 5.82 6.03 9.05 9.05\" style=\"fill:none\"></path></g><g color=\"#000000\" stroke-linecap=\"round\" stroke-width=\"0.8pt\"><path d=\"M 0 3.17 C 0.79 1.91 1.25 1.23 2.08 0\" style=\"fill:none\"></path></g></g></svg>\n</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.15.15.3\" style=\"padding:-0.5pt 5.7pt;\"><svg class=\"ltx_picture\" height=\"10.02\" id=\"S3.T1.15.15.3.pic1\" overflow=\"visible\" version=\"1.1\" width=\"10.02\"><g fill=\"#000000\" stroke=\"#000000\" transform=\"translate(0,10.02) matrix(1 0 0 -1 0 0) translate(0.48,0) translate(0,0.48)\"><g color=\"#000000\" stroke-linecap=\"round\" stroke-width=\"0.7pt\"><path d=\"M 0 0 C 3.16 3.9 5.16 5.9 9.05 9.05\" style=\"fill:none\"></path></g><g color=\"#000000\" stroke-linecap=\"round\" stroke-width=\"0.7pt\"><path d=\"M 1.81 8.6 C 3.77 5.3 4.95 3.53 7.24 0.45\" style=\"fill:none\"></path></g></g></svg>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T1.18.18\">\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.18.18.4\" style=\"padding:-0.5pt 5.7pt;\">\n<cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"#bib.bib9\" title=\"\">9</a>]</cite> Deep-PCAC</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.18.18.5\" style=\"padding:-0.5pt 5.7pt;\">Attr.</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.18.18.6\" style=\"padding:-0.5pt 5.7pt;\">Point Con. 
&amp; point-inception block</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.18.18.7\" style=\"padding:-0.5pt 5.7pt;\">Digital</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.16.16.1\" style=\"padding:-0.5pt 5.7pt;\"><svg class=\"ltx_picture\" height=\"10.02\" id=\"S3.T1.16.16.1.pic1\" overflow=\"visible\" version=\"1.1\" width=\"10.02\"><g fill=\"#000000\" stroke=\"#000000\" transform=\"translate(0,10.02) matrix(1 0 0 -1 0 0) translate(0.48,0) translate(0,0.48)\"><g color=\"#000000\" stroke-linecap=\"round\" stroke-width=\"0.7pt\"><path d=\"M 0 0 C 3.16 3.9 5.16 5.9 9.05 9.05\" style=\"fill:none\"></path></g><g color=\"#000000\" stroke-linecap=\"round\" stroke-width=\"0.7pt\"><path d=\"M 1.81 8.6 C 3.77 5.3 4.95 3.53 7.24 0.45\" style=\"fill:none\"></path></g></g></svg>\n</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.17.17.2\" style=\"padding:-0.5pt 5.7pt;\"><svg class=\"ltx_picture\" height=\"10.02\" id=\"S3.T1.17.17.2.pic1\" overflow=\"visible\" version=\"1.1\" width=\"10.02\"><g fill=\"#000000\" stroke=\"#000000\" transform=\"translate(0,10.02) matrix(1 0 0 -1 0 0) translate(0.48,0) translate(0,0.48)\"><g color=\"#000000\" stroke-linecap=\"round\" stroke-width=\"0.7pt\"><path d=\"M 0 0 C 3.16 3.9 5.16 5.9 9.05 9.05\" style=\"fill:none\"></path></g><g color=\"#000000\" stroke-linecap=\"round\" stroke-width=\"0.7pt\"><path d=\"M 1.81 8.6 C 3.77 5.3 4.95 3.53 7.24 0.45\" style=\"fill:none\"></path></g></g></svg>\n</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.18.18.3\" style=\"padding:-0.5pt 5.7pt;\"><svg class=\"ltx_picture\" height=\"10.02\" id=\"S3.T1.18.18.3.pic1\" overflow=\"visible\" version=\"1.1\" width=\"10.02\"><g fill=\"#000000\" stroke=\"#000000\" transform=\"translate(0,10.02) matrix(1 0 0 -1 0 0) translate(0.48,0) translate(0,0.48)\"><g color=\"#000000\" stroke-linecap=\"round\" stroke-width=\"0.7pt\"><path d=\"M 0 0 C 3.16 3.9 5.16 5.9 9.05 9.05\" style=\"fill:none\"></path></g><g color=\"#000000\" stroke-linecap=\"round\" stroke-width=\"0.7pt\"><path d=\"M 1.81 8.6 C 3.77 5.3 4.95 3.53 7.24 0.45\" style=\"fill:none\"></path></g></g></svg>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T1.21.21\">\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.21.21.4\" style=\"padding:-0.5pt 5.7pt;\">\n<cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"#bib.bib10\" title=\"\">10</a>]</cite> DeepPCC</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.21.21.5\" style=\"padding:-0.5pt 5.7pt;\">Geo. &amp; Attr.</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.21.21.6\" style=\"padding:-0.5pt 5.7pt;\">Sparse conv. 
&amp; octree &amp; local attention</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.21.21.7\" style=\"padding:-0.5pt 5.7pt;\">Digital</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.19.19.1\" style=\"padding:-0.5pt 5.7pt;\"><svg class=\"ltx_picture\" height=\"10.02\" id=\"S3.T1.19.19.1.pic1\" overflow=\"visible\" version=\"1.1\" width=\"10.02\"><g fill=\"#000000\" stroke=\"#000000\" transform=\"translate(0,10.02) matrix(1 0 0 -1 0 0) translate(0.48,0) translate(0,0.48)\"><g color=\"#000000\" stroke-linecap=\"round\" stroke-width=\"0.7pt\"><path d=\"M 0 0 C 3.16 3.9 5.16 5.9 9.05 9.05\" style=\"fill:none\"></path></g><g color=\"#000000\" stroke-linecap=\"round\" stroke-width=\"0.7pt\"><path d=\"M 1.81 8.6 C 3.77 5.3 4.95 3.53 7.24 0.45\" style=\"fill:none\"></path></g></g></svg>\n</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.20.20.2\" style=\"padding:-0.5pt 5.7pt;\"><svg class=\"ltx_picture\" height=\"10.02\" id=\"S3.T1.20.20.2.pic1\" overflow=\"visible\" version=\"1.1\" width=\"10.02\"><g fill=\"#000000\" stroke=\"#000000\" transform=\"translate(0,10.02) matrix(1 0 0 -1 0 0) translate(0.48,0) translate(0,0.48)\"><g color=\"#000000\" stroke-linecap=\"round\" stroke-width=\"0.7pt\"><path d=\"M 0 0 C 3.16 3.9 5.16 5.9 9.05 9.05\" style=\"fill:none\"></path></g><g color=\"#000000\" stroke-linecap=\"round\" stroke-width=\"0.7pt\"><path d=\"M 1.81 8.6 C 3.77 5.3 4.95 3.53 7.24 0.45\" style=\"fill:none\"></path></g></g></svg>\n</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.21.21.3\" style=\"padding:-0.5pt 5.7pt;\"><svg class=\"ltx_picture\" height=\"10.02\" id=\"S3.T1.21.21.3.pic1\" overflow=\"visible\" version=\"1.1\" width=\"10.02\"><g fill=\"#000000\" stroke=\"#000000\" transform=\"translate(0,10.02) matrix(1 0 0 -1 0 0) translate(0.48,0) translate(0,0.48)\"><g color=\"#000000\" stroke-linecap=\"round\" stroke-width=\"0.7pt\"><path d=\"M 0 0 C 3.16 3.9 5.16 5.9 9.05 9.05\" style=\"fill:none\"></path></g><g color=\"#000000\" stroke-linecap=\"round\" stroke-width=\"0.7pt\"><path d=\"M 1.81 8.6 C 3.77 5.3 4.95 3.53 7.24 0.45\" style=\"fill:none\"></path></g></g></svg>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T1.24.24\">\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.24.24.4\" style=\"padding:-0.5pt 5.7pt;\">\n<cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"#bib.bib11\" title=\"\">11</a>]</cite> AITransfer</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.24.24.5\" style=\"padding:-0.5pt 5.7pt;\">Geo.</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.24.24.6\" style=\"padding:-0.5pt 5.7pt;\">PointNet++</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.24.24.7\" style=\"padding:-0.5pt 5.7pt;\">Digital</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.22.22.1\" style=\"padding:-0.5pt 5.7pt;\"><svg class=\"ltx_picture\" height=\"10.09\" id=\"S3.T1.22.22.1.pic1\" overflow=\"visible\" version=\"1.1\" width=\"10.09\"><g fill=\"#000000\" stroke=\"#000000\" transform=\"translate(0,10.09) matrix(1 0 0 -1 0 0) translate(0.55,0) translate(0,0.55)\"><g color=\"#000000\" stroke-linecap=\"round\" stroke-width=\"0.7pt\"><path d=\"M 2.26 0 C 4.26 3.95 5.82 6.03 9.05 9.05\" style=\"fill:none\"></path></g><g color=\"#000000\" stroke-linecap=\"round\" stroke-width=\"0.8pt\"><path d=\"M 0 3.17 C 0.79 1.91 1.25 1.23 2.08 0\" style=\"fill:none\"></path></g></g></svg>\n</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.23.23.2\" style=\"padding:-0.5pt 5.7pt;\"><svg class=\"ltx_picture\" height=\"10.02\" id=\"S3.T1.23.23.2.pic1\" 
overflow=\"visible\" version=\"1.1\" width=\"10.02\"><g fill=\"#000000\" stroke=\"#000000\" transform=\"translate(0,10.02) matrix(1 0 0 -1 0 0) translate(0.48,0) translate(0,0.48)\"><g color=\"#000000\" stroke-linecap=\"round\" stroke-width=\"0.7pt\"><path d=\"M 0 0 C 3.16 3.9 5.16 5.9 9.05 9.05\" style=\"fill:none\"></path></g><g color=\"#000000\" stroke-linecap=\"round\" stroke-width=\"0.7pt\"><path d=\"M 1.81 8.6 C 3.77 5.3 4.95 3.53 7.24 0.45\" style=\"fill:none\"></path></g></g></svg>\n</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.24.24.3\" style=\"padding:-0.5pt 5.7pt;\"><svg class=\"ltx_picture\" height=\"10.02\" id=\"S3.T1.24.24.3.pic1\" overflow=\"visible\" version=\"1.1\" width=\"10.02\"><g fill=\"#000000\" stroke=\"#000000\" transform=\"translate(0,10.02) matrix(1 0 0 -1 0 0) translate(0.48,0) translate(0,0.48)\"><g color=\"#000000\" stroke-linecap=\"round\" stroke-width=\"0.7pt\"><path d=\"M 0 0 C 3.16 3.9 5.16 5.9 9.05 9.05\" style=\"fill:none\"></path></g><g color=\"#000000\" stroke-linecap=\"round\" stroke-width=\"0.7pt\"><path d=\"M 1.81 8.6 C 3.77 5.3 4.95 3.53 7.24 0.45\" style=\"fill:none\"></path></g></g></svg>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T1.27.27\">\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.27.27.4\" style=\"padding:-0.5pt 5.7pt;\">\n<cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"#bib.bib5\" title=\"\">5</a>]</cite> PCV delivery</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.27.27.5\" style=\"padding:-0.5pt 5.7pt;\">Geo.</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.27.27.6\" style=\"padding:-0.5pt 5.7pt;\">PointNet++ &amp; PU-GAN</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.27.27.7\" style=\"padding:-0.5pt 5.7pt;\">Digital</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.25.25.1\" style=\"padding:-0.5pt 5.7pt;\"><svg class=\"ltx_picture\" height=\"10.09\" id=\"S3.T1.25.25.1.pic1\" overflow=\"visible\" version=\"1.1\" width=\"10.09\"><g fill=\"#000000\" stroke=\"#000000\" transform=\"translate(0,10.09) matrix(1 0 0 -1 0 0) translate(0.55,0) translate(0,0.55)\"><g color=\"#000000\" stroke-linecap=\"round\" stroke-width=\"0.7pt\"><path d=\"M 2.26 0 C 4.26 3.95 5.82 6.03 9.05 9.05\" style=\"fill:none\"></path></g><g color=\"#000000\" stroke-linecap=\"round\" stroke-width=\"0.8pt\"><path d=\"M 0 3.17 C 0.79 1.91 1.25 1.23 2.08 0\" style=\"fill:none\"></path></g></g></svg>\n</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.26.26.2\" style=\"padding:-0.5pt 5.7pt;\"><svg class=\"ltx_picture\" height=\"10.02\" id=\"S3.T1.26.26.2.pic1\" overflow=\"visible\" version=\"1.1\" width=\"10.02\"><g fill=\"#000000\" stroke=\"#000000\" transform=\"translate(0,10.02) matrix(1 0 0 -1 0 0) translate(0.48,0) translate(0,0.48)\"><g color=\"#000000\" stroke-linecap=\"round\" stroke-width=\"0.7pt\"><path d=\"M 0 0 C 3.16 3.9 5.16 5.9 9.05 9.05\" style=\"fill:none\"></path></g><g color=\"#000000\" stroke-linecap=\"round\" stroke-width=\"0.7pt\"><path d=\"M 1.81 8.6 C 3.77 5.3 4.95 3.53 7.24 0.45\" style=\"fill:none\"></path></g></g></svg>\n</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.27.27.3\" style=\"padding:-0.5pt 5.7pt;\"><svg class=\"ltx_picture\" height=\"10.02\" id=\"S3.T1.27.27.3.pic1\" overflow=\"visible\" version=\"1.1\" width=\"10.02\"><g fill=\"#000000\" stroke=\"#000000\" transform=\"translate(0,10.02) matrix(1 0 0 -1 0 0) translate(0.48,0) translate(0,0.48)\"><g color=\"#000000\" stroke-linecap=\"round\" stroke-width=\"0.7pt\"><path d=\"M 0 0 C 3.16 3.9 5.16 5.9 9.05 9.05\" 
style=\"fill:none\"></path></g><g color=\"#000000\" stroke-linecap=\"round\" stroke-width=\"0.7pt\"><path d=\"M 1.81 8.6 C 3.77 5.3 4.95 3.53 7.24 0.45\" style=\"fill:none\"></path></g></g></svg>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T1.30.30\">\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.30.30.4\" style=\"padding:-0.5pt 5.7pt;\">\n<cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"#bib.bib4\" title=\"\">4</a>]</cite> HoloCast</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.30.30.5\" style=\"padding:-0.5pt 5.7pt;\">Geo. &amp; Attr.</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.30.30.6\" style=\"padding:-0.5pt 5.7pt;\">GFT</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.30.30.7\" style=\"padding:-0.5pt 5.7pt;\">Analog</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.28.28.1\" style=\"padding:-0.5pt 5.7pt;\"><svg class=\"ltx_picture\" height=\"10.09\" id=\"S3.T1.28.28.1.pic1\" overflow=\"visible\" version=\"1.1\" width=\"10.09\"><g fill=\"#000000\" stroke=\"#000000\" transform=\"translate(0,10.09) matrix(1 0 0 -1 0 0) translate(0.55,0) translate(0,0.55)\"><g color=\"#000000\" stroke-linecap=\"round\" stroke-width=\"0.7pt\"><path d=\"M 2.26 0 C 4.26 3.95 5.82 6.03 9.05 9.05\" style=\"fill:none\"></path></g><g color=\"#000000\" stroke-linecap=\"round\" stroke-width=\"0.8pt\"><path d=\"M 0 3.17 C 0.79 1.91 1.25 1.23 2.08 0\" style=\"fill:none\"></path></g></g></svg>\n</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.29.29.2\" style=\"padding:-0.5pt 5.7pt;\"><svg class=\"ltx_picture\" height=\"10.02\" id=\"S3.T1.29.29.2.pic1\" overflow=\"visible\" version=\"1.1\" width=\"10.02\"><g fill=\"#000000\" stroke=\"#000000\" transform=\"translate(0,10.02) matrix(1 0 0 -1 0 0) translate(0.48,0) translate(0,0.48)\"><g color=\"#000000\" stroke-linecap=\"round\" stroke-width=\"0.7pt\"><path d=\"M 0 0 C 3.16 3.9 5.16 5.9 9.05 9.05\" style=\"fill:none\"></path></g><g color=\"#000000\" stroke-linecap=\"round\" stroke-width=\"0.7pt\"><path d=\"M 1.81 8.6 C 3.77 5.3 4.95 3.53 7.24 0.45\" style=\"fill:none\"></path></g></g></svg>\n</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.30.30.3\" style=\"padding:-0.5pt 5.7pt;\"><svg class=\"ltx_picture\" height=\"10.09\" id=\"S3.T1.30.30.3.pic1\" overflow=\"visible\" version=\"1.1\" width=\"10.09\"><g fill=\"#000000\" stroke=\"#000000\" transform=\"translate(0,10.09) matrix(1 0 0 -1 0 0) translate(0.55,0) translate(0,0.55)\"><g color=\"#000000\" stroke-linecap=\"round\" stroke-width=\"0.7pt\"><path d=\"M 2.26 0 C 4.26 3.95 5.82 6.03 9.05 9.05\" style=\"fill:none\"></path></g><g color=\"#000000\" stroke-linecap=\"round\" stroke-width=\"0.8pt\"><path d=\"M 0 3.17 C 0.79 1.91 1.25 1.23 2.08 0\" style=\"fill:none\"></path></g></g></svg>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T1.33.33\">\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.33.33.4\" style=\"padding:-0.5pt 5.7pt;\">\n<cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"#bib.bib12\" title=\"\">12</a>]</cite> HoloCast+</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.33.33.5\" style=\"padding:-0.5pt 5.7pt;\">Geo. 
&amp; Attr.</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.33.33.6\" style=\"padding:-0.5pt 5.7pt;\">Octree &amp; GFT</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.33.33.7\" style=\"padding:-0.5pt 5.7pt;\">Hybrid</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.31.31.1\" style=\"padding:-0.5pt 5.7pt;\"><svg class=\"ltx_picture\" height=\"10.09\" id=\"S3.T1.31.31.1.pic1\" overflow=\"visible\" version=\"1.1\" width=\"10.09\"><g fill=\"#000000\" stroke=\"#000000\" transform=\"translate(0,10.09) matrix(1 0 0 -1 0 0) translate(0.55,0) translate(0,0.55)\"><g color=\"#000000\" stroke-linecap=\"round\" stroke-width=\"0.7pt\"><path d=\"M 2.26 0 C 4.26 3.95 5.82 6.03 9.05 9.05\" style=\"fill:none\"></path></g><g color=\"#000000\" stroke-linecap=\"round\" stroke-width=\"0.8pt\"><path d=\"M 0 3.17 C 0.79 1.91 1.25 1.23 2.08 0\" style=\"fill:none\"></path></g></g></svg>\n</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.32.32.2\" style=\"padding:-0.5pt 5.7pt;\"><svg class=\"ltx_picture\" height=\"10.02\" id=\"S3.T1.32.32.2.pic1\" overflow=\"visible\" version=\"1.1\" width=\"10.02\"><g fill=\"#000000\" stroke=\"#000000\" transform=\"translate(0,10.02) matrix(1 0 0 -1 0 0) translate(0.48,0) translate(0,0.48)\"><g color=\"#000000\" stroke-linecap=\"round\" stroke-width=\"0.7pt\"><path d=\"M 0 0 C 3.16 3.9 5.16 5.9 9.05 9.05\" style=\"fill:none\"></path></g><g color=\"#000000\" stroke-linecap=\"round\" stroke-width=\"0.7pt\"><path d=\"M 1.81 8.6 C 3.77 5.3 4.95 3.53 7.24 0.45\" style=\"fill:none\"></path></g></g></svg>\n</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.33.33.3\" style=\"padding:-0.5pt 5.7pt;\"><svg class=\"ltx_picture\" height=\"10.09\" id=\"S3.T1.33.33.3.pic1\" overflow=\"visible\" version=\"1.1\" width=\"10.09\"><g fill=\"#000000\" stroke=\"#000000\" transform=\"translate(0,10.09) matrix(1 0 0 -1 0 0) translate(0.55,0) translate(0,0.55)\"><g color=\"#000000\" stroke-linecap=\"round\" stroke-width=\"0.7pt\"><path d=\"M 2.26 0 C 4.26 3.95 5.82 6.03 9.05 9.05\" style=\"fill:none\"></path></g><g color=\"#000000\" stroke-linecap=\"round\" stroke-width=\"0.8pt\"><path d=\"M 0 3.17 C 0.79 1.91 1.25 1.23 2.08 0\" style=\"fill:none\"></path></g></g></svg>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T1.36.36\">\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S3.T1.36.36.4\" style=\"padding:-0.5pt 5.7pt;\">\n<cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"#bib.bib13\" title=\"\">13</a>]</cite> SEPT</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S3.T1.36.36.5\" style=\"padding:-0.5pt 5.7pt;\">Geo.</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S3.T1.36.36.6\" style=\"padding:-0.5pt 5.7pt;\">Point transformer</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S3.T1.36.36.7\" style=\"padding:-0.5pt 5.7pt;\">Analog</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S3.T1.34.34.1\" style=\"padding:-0.5pt 5.7pt;\"><svg class=\"ltx_picture\" height=\"10.02\" id=\"S3.T1.34.34.1.pic1\" overflow=\"visible\" version=\"1.1\" width=\"10.02\"><g fill=\"#000000\" stroke=\"#000000\" transform=\"translate(0,10.02) matrix(1 0 0 -1 0 0) translate(0.48,0) translate(0,0.48)\"><g color=\"#000000\" stroke-linecap=\"round\" stroke-width=\"0.7pt\"><path d=\"M 0 0 C 3.16 3.9 5.16 5.9 9.05 9.05\" style=\"fill:none\"></path></g><g color=\"#000000\" stroke-linecap=\"round\" stroke-width=\"0.7pt\"><path d=\"M 1.81 8.6 C 3.77 5.3 4.95 3.53 7.24 0.45\" style=\"fill:none\"></path></g></g></svg>\n</td>\n<td 
class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S3.T1.35.35.2\" style=\"padding:-0.5pt 5.7pt;\"><svg class=\"ltx_picture\" height=\"10.02\" id=\"S3.T1.35.35.2.pic1\" overflow=\"visible\" version=\"1.1\" width=\"10.02\"><g fill=\"#000000\" stroke=\"#000000\" transform=\"translate(0,10.02) matrix(1 0 0 -1 0 0) translate(0.48,0) translate(0,0.48)\"><g color=\"#000000\" stroke-linecap=\"round\" stroke-width=\"0.7pt\"><path d=\"M 0 0 C 3.16 3.9 5.16 5.9 9.05 9.05\" style=\"fill:none\"></path></g><g color=\"#000000\" stroke-linecap=\"round\" stroke-width=\"0.7pt\"><path d=\"M 1.81 8.6 C 3.77 5.3 4.95 3.53 7.24 0.45\" style=\"fill:none\"></path></g></g></svg>\n</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S3.T1.36.36.3\" style=\"padding:-0.5pt 5.7pt;\"><svg class=\"ltx_picture\" height=\"10.09\" id=\"S3.T1.36.36.3.pic1\" overflow=\"visible\" version=\"1.1\" width=\"10.09\"><g fill=\"#000000\" stroke=\"#000000\" transform=\"translate(0,10.09) matrix(1 0 0 -1 0 0) translate(0.55,0) translate(0,0.55)\"><g color=\"#000000\" stroke-linecap=\"round\" stroke-width=\"0.7pt\"><path d=\"M 2.26 0 C 4.26 3.95 5.82 6.03 9.05 9.05\" style=\"fill:none\"></path></g><g color=\"#000000\" stroke-linecap=\"round\" stroke-width=\"0.8pt\"><path d=\"M 0 3.17 C 0.79 1.91 1.25 1.23 2.08 0\" style=\"fill:none\"></path></g></g></svg>\n</td>\n</tr>\n</tbody>\n</table>\n</figure>",
94
+ "capture": "TABLE I: A summary of existing solutions for wireless PC transmission."
95
+ }
96
+ },
97
+ "image_paths": {
98
+ "1": {
99
+ "figure_path": "2401.00658v1_figure_1.png",
100
+ "caption": "Figure 1: PC-aided collaborative environmental awareness for autonomous vehicles. Acquiring a comprehensive PC for a scene from distributed sensors through uplink transmission, which is then fed to users through downlink transmission.",
101
+ "url": "http://arxiv.org/html/2401.00658v1/x1.png"
102
+ },
103
+ "2": {
104
+ "figure_path": "2401.00658v1_figure_2.png",
105
+ "caption": "Figure 2: Octree encoding subdivides the 3D space into hierarchical cubic regions, storing points of the PC as leaf nodes for efficient spatial representation.",
106
+ "url": "http://arxiv.org/html/2401.00658v1/x2.png"
107
+ },
108
+ "3": {
109
+ "figure_path": "2401.00658v1_figure_3.png",
110
+ "caption": "Figure 3: (a) SEPT efficiently addresses the cliff and leveling effects on the downsampled ShapeNet dataset. (b) The rate-distortion performance of SEPT and G-PCC on the large-scale SemanticKITTI dataset. While the raw SEPT (analog) exhibits performance degradation, the SEPT (hybrid) scheme outperforms G-PCC when transmitting certain key points to the receiver.",
111
+ "url": "http://arxiv.org/html/2401.00658v1/x3.png"
112
+ },
113
+ "4": {
114
+ "figure_path": "2401.00658v1_figure_4.png",
115
+ "caption": "Figure 4: The rate-distortion performance of representational compression benchmarked against G-PCC on the 8iVFB dataset (loot). The DNN is designed to be multilayer perceptron (MLP) and CNN, respectively.",
116
+ "url": "http://arxiv.org/html/2401.00658v1/x4.png"
117
+ },
118
+ "5": {
119
+ "figure_path": "2401.00658v1_figure_5.png",
120
+ "caption": "Figure 5: A hypergraph approach for the distributed broadcasting of PCs.",
121
+ "url": "http://arxiv.org/html/2401.00658v1/x5.png"
122
+ }
123
+ },
124
+ "validation": true,
125
+ "references": [],
126
+ "url": "http://arxiv.org/html/2401.00658v1"
127
+ }
20240101/2401.00661v1.json ADDED
The diff for this file is too large to render. See raw diff
 
20240101/2401.00662v1.json ADDED
The diff for this file is too large to render. See raw diff
 
20240101/2401.00663v1.json ADDED
@@ -0,0 +1,261 @@
1
+ {
2
+ "title": "1st Place Solution for 5th LSVOS Challenge: Referring Video Object Segmentation",
3
+ "abstract": "The recent transformer-based models have dominated the Referring Video Object Segmentation (RVOS) task due to the superior performance. Most prior works adopt unified DETR framework to generate segmentation masks in query-to-instance manner. In this work, we integrate strengths of that leading RVOS models to build up an effective paradigm. We first obtain binary mask sequences from the RVOS models. To improve the consistency and quality of masks, we propose Two-Stage Multi-Model Fusion strategy. Each stage rationally ensembles RVOS models based on framework design as well as training strategy, and leverages different video object segmentation (VOS) models to enhance mask coherence by object propagation mechanism. Our method achieves on Ref-Youtube-VOS validation set and on test set, which ranks 1st place on 5th Large-scale Video Object Segmentation Challenge (ICCV 2023) track 3. Code is available at https://github.com/RobertLuo1/iccv2023_RVOS_Challenge.",
4
+ "sections": [
5
+ {
6
+ "section_id": "1",
7
+ "parent_section_id": null,
8
+ "section_name": "Introduction",
9
+ "text": "Referring Video Object Segmentation aims to segment and track the target object referred by the given text description in a video.\nThis emerging field has garnered attention due to its potential applications in video editing and human-robot interaction.\nThe critical challenge in RVOS lies in the pixel-level alignment between different modalities and time steps, primarily due to the varied nature of video content and unrestricted language expression.\nMost early approaches [1 ###reference_1###, 7 ###reference_7###, 8 ###reference_8###] adopt multi-stage and complex pipelines that take the bottom-up or top-down paradigms to segment each frame separately, while recent works MTTR [2 ###reference_2###], Referformer [15 ###reference_15###] propose to unify cross-modal interaction with pixel-level understanding into transformer structure.\nFor example, 2022 first winner [6 ###reference_6###] simply employs fine-tuned Referformer as backbone to generate a series of high quality masks.\nHowever, these methods may lose the perception of target objects for language descriptions expressing temporal variations of objects due to the lack of video-level multi-modal understanding.\nTo address this issue, SOC [9 ###reference_9###], MUTR [17 ###reference_17###] efficiently aggregate inter and intra-frame information.\nMeanwhile, UNINEXT [16 ###reference_16###] proposes a unified prompt-guided formulation for universal instance perception, reuniting previously fragmented instance-level sub-tasks into a whole and achieve good performance for the RVOS task.\n###figure_1### In our work, we incorporate benefits of the previous mainstream works to provide an effective paradigm.\nBy utilizing the model ensemble strategy as well as semi-supervised VOS approaches as post-process to enhance the masks quality in each stage, we develop a Two-Stage Multi-model Fusion strategy.\nSpecifically, we select AOT [18 ###reference_18###] to preliminary improve the masks quality in the first stage, but with the increase of propagation layers and number of RVOS models that are processed by that, it will inevitably lead to loss of information unrelated to the object, which may weaken the effect of the consequent model fusion.\nTherefore, on the basis of the high quality mask sequences from the first stage, we further exploit the potential of multi-model fusion by utilizing DeAOT [19 ###reference_19###] in the second stage.\nThe final leaderboard shows that our method ranks 1st place in the 5th Large-scale Video Object Segmentation Challenge (ICCV 2023): Referring Video Object Segmentation track."
10
+ },
11
+ {
12
+ "section_id": "2",
13
+ "parent_section_id": null,
14
+ "section_name": "Related Work",
15
+ "text": ""
16
+ },
17
+ {
18
+ "section_id": "3",
19
+ "parent_section_id": null,
20
+ "section_name": "Method",
21
+ "text": "Given frames of video clip , where and a referring text expression , where denotes the i-th word in the text.\nRVOS task is to generate a series of binary segmentation masks , of the referred object."
22
+ },
23
+ {
24
+ "section_id": "3.1",
25
+ "parent_section_id": "3",
26
+ "section_name": "Backbone",
27
+ "text": "We adopt SOC [9 ###reference_9###], MUTR [17 ###reference_17###], Referformer [15 ###reference_15###] and UNINEXT [16 ###reference_16###], the current prevalent RVOS models, as our backbones to respectively generate binary segmentation masks .\nwhere indicates the corresponding backbone.\nWe train SOC jointly on RefCOCO and Ref-Youtube-VOS datasets, while we directly use checkpoints with the highest performance without training for other models."
28
+ },
29
+ {
30
+ "section_id": "3.2",
31
+ "parent_section_id": "3",
32
+ "section_name": "Post-process",
33
+ "text": "The video object segmentation has been proved to improve the segmentation mask consistency by object propagation mechanism. Specifically, [6 ###reference_6###] adopts AOT [18 ###reference_18###] as post-process to enhance the quality of mask results generated by RVOS models, which brings a clear improvement in accuracy. The general procedure are first selecting the key-frame index of mask sequences probability from RVOS model, then using VOS model to perform forward and backward propagation. It can be formulated as:\nwhere denotes the VOS model for post-process.\nIn our experiment, we find that although AOT can facilitate the temporal quality of mask results, the benefit decreases when conducting the Stage II fusions (which is elaborate on Sec. 3.3 ###reference_###). It is hypothesized that AOT potentially lead to loss of object-agnostic visual information in deep propagation layer. Consequently, the advantage of ensemble is degraded due to the aggregated object information loss from different RVOS models that are post-processed by the the same VOS model. Intuitively, we propose to use two VOS models for post-processing in different stages to alleviate the problem."
34
+ },
35
+ {
36
+ "section_id": "3.3",
37
+ "parent_section_id": "3",
38
+ "section_name": "Two-Stage Multi-model Fusion",
39
+ "text": "We find that models with different frameworks process the object referred by the textual description in different perspectives. SOC unifies temporal modeling and cross modal alignment to achieve video-level understanding, which comprehends expressions containing temporal variations well. Due to the object discovery and retrieval paradigm, UNINEXT has strong ability of localizing and tracking same objects referred by different textual descriptions. MUTR introduces temporal transformer to facilitate objects interaction across frames, improving consistency of masks. In order to make full use of advantages of different frameworks, we propose two-stage multi-model fusion strategy. Similar to [6 ###reference_6###], we fuse the masks predicted by different referring expressions that describe the same target from different models, which is formulated as:\nwhere denotes the number of different textual descriptions referred to the same object and indicates the models. where , are the width and height of the mask respectively.\nStage I Referformer treats the task as sequence prediction problem and perform cross modal interaction in each frame.\nIts simple framework could serve as a baseline to segment the referred object but easily fails to capture the temporal variation of object across frames.\nWe believe that SOC and MUTR can explicitly increase the inter-frame interaction, which is a reasonable compensation for Referformer.\nTherefore, in the first stage, we fuse three models and use AOT as post-process to enhance the mask quality. For clarity, the fused model is denoted as SMR.\nStage II UNINEXT is jointly trained with prevalent datasets of instance perception tasks.\nIt is capable of perceiving diverse objects referred by different descriptions, thanks to static object queries which absorb rich information from data in different domain.\nAlthough it achieve high performance with VIT-Huge backbone [4 ###reference_4###] by feeding large scale of data, the lack of global view of object may cause the inconsistency when generating masks across frames.\nTherefore, we solve this problem by two-fold. (1) Employ DeAOT to propagate the object information from the key frame to another. (2) Ensemble with the SMR fused model to integrate information from inter-frame interaction."
40
+ },
41
+ {
42
+ "section_id": "4",
43
+ "parent_section_id": null,
44
+ "section_name": "Experiment",
45
+ "text": ""
46
+ },
47
+ {
48
+ "section_id": "4.1",
49
+ "parent_section_id": "4",
50
+ "section_name": "Dataset and Metrics",
51
+ "text": "We evaluate our model on Ref-Youtube-VOS dataset of 2023 Referring Youtube-VOS challenge. It contains 3,978 high-resolution YouTube videos with about 15K language expressions. These videos are divided into 3,471 training videos, 202 validation videos and 305 test videos.\nwe adopt standard evaluation metrics: region similarity (), contour accuracy () and\ntheir average value () on Ref-Youtube-VOS."
52
+ },
53
+ {
54
+ "section_id": "4.2",
55
+ "parent_section_id": "4",
56
+ "section_name": "Training Detail",
57
+ "text": "We train SOC with pretrained Video Swin Transformer and RoBERTa as the encoder for 30 epochs. The model is optimized by Adam optimizer with the initial learning rate of 1e-4. During training, we apply RandomResize and Horizontal Flip for data augmentation.\nSpecifically, all frames are downsampled to 360\u00d7640. In post-process, we follow [6 ###reference_6###] retrain DeAOT network with Swin-L backbone using default parameters in [19 ###reference_19###]."
58
+ },
59
+ {
60
+ "section_id": "4.3",
61
+ "parent_section_id": "4",
62
+ "section_name": "Main Results",
63
+ "text": "Our method achieves on test set which outperforms the next team by and rank 1st place in Large-scale Video Object Segmentation Challenge (ICCV 2023): Referring Video Object Segmentation track."
64
+ },
65
+ {
66
+ "section_id": "4.4",
67
+ "parent_section_id": "4",
68
+ "section_name": "Ablation Study",
69
+ "text": "To validate the effectiveness of each module, we conduct simple ablation studies. As we mention above that we use SOC [9 ###reference_9###], MUTR [17 ###reference_17###], Referformer [15 ###reference_15###] and UNINEXT [16 ###reference_16###] as RVOS models to generate mask results for post-processing and fusion. It is noted that SOC is the main model that are included in two stages, and for simplicity, we set it as the baseline. As shown in Tab. 1 ###reference_###, the preliminary model fusion and post-process with AOT in stage I, brings an improvement of . While the fused model achieve with the second stage model ensemble, demonstrating the rationality and significance of our proposed two-stage multi models fusion."
70
+ },
71
+ {
72
+ "section_id": "4.5",
73
+ "parent_section_id": "4",
74
+ "section_name": "Qualitative Results",
75
+ "text": "Fig. 2 ###reference_### shows the prediction of our method for complex scenarios segmentation, i.e., similar appearance, occlusion and large variations.\nIt can be seen that our method precisely segments the referred object."
76
+ }
77
+ ],
78
+ "appendix": [],
79
+ "tables": {
80
+ "1": {
81
+ "table_html": "<figure class=\"ltx_table\" id=\"S3.T1\">\n<table class=\"ltx_tabular ltx_centering ltx_guessed_headers ltx_align_middle\" id=\"S3.T1.3\">\n<thead class=\"ltx_thead\">\n<tr class=\"ltx_tr\" id=\"S3.T1.3.3\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_th_row ltx_border_rr ltx_border_tt\" id=\"S3.T1.3.3.4\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T1.3.3.4.1\">Model</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"S3.T1.3.3.3\">\n &amp; \n</th>\n</tr>\n</thead>\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S3.T1.3.4.1\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_rr ltx_border_tt\" id=\"S3.T1.3.4.1.1\">SOC</th>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S3.T1.3.4.1.2\">67.5</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T1.3.5.2\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_rr ltx_border_t\" id=\"S3.T1.3.5.2.1\">+AOT</th>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T1.3.5.2.2\">69.5 <span class=\"ltx_text\" id=\"S3.T1.3.5.2.2.1\" style=\"color:#FF0000;\">(+2.0)</span>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T1.3.6.3\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_rr\" id=\"S3.T1.3.6.3.1\">+Multi-model Fusion (Stage I) &amp; AOT</th>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.3.6.3.2\">72.4 <span class=\"ltx_text\" id=\"S3.T1.3.6.3.2.1\" style=\"color:#FF0000;\">(+2.9)</span>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T1.3.7.4\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_b ltx_border_rr\" id=\"S3.T1.3.7.4.1\">+Multi-model Fusion (Stage II) &amp; DeAOT</th>\n<td class=\"ltx_td ltx_align_center ltx_border_b\" id=\"S3.T1.3.7.4.2\">75.7 <span class=\"ltx_text\" id=\"S3.T1.3.7.4.2.1\" style=\"color:#FF0000;\">(+3.3)</span>\n</td>\n</tr>\n</tbody>\n</table>\n<figcaption class=\"ltx_caption\"><span class=\"ltx_tag ltx_tag_table\"><span class=\"ltx_text\" id=\"S3.T1.6.1.1\" style=\"font-size:90%;\">Table 1</span>: </span><span class=\"ltx_text\" id=\"S3.T1.7.2\" style=\"font-size:90%;\">Ablation study of each module on our model\u2019s performance on <span class=\"ltx_text ltx_font_bold\" id=\"S3.T1.7.2.1\">validation set</span>.</span></figcaption>\n</figure>",
82
+ "capture": "Table 1: Ablation study of each module on our model\u2019s performance on validation set."
83
+ }
84
+ },
85
+ "image_paths": {
86
+ "1": {
87
+ "figure_path": "2401.00663v1_figure_1.png",
88
+ "caption": "Figure 1: The overall architecture of our method.",
89
+ "url": "http://arxiv.org/html/2401.00663v1/x1.png"
90
+ },
91
+ "2": {
92
+ "figure_path": "2401.00663v1_figure_2.png",
93
+ "caption": "Figure 2: Visualization results on Ref-Youtube-VOS.",
94
+ "url": "http://arxiv.org/html/2401.00663v1/x2.png"
95
+ }
96
+ },
97
+ "validation": true,
98
+ "references": [
99
+ {
100
+ "1": {
101
+ "title": "Refvos: A closer look at referring expressions for video object\nsegmentation.",
102
+ "author": "Miriam Bellver, Carles Ventura, Carina Silberer, Ioannis Kazakos, Jordi Torres,\nand Xavier Gir\u00f3-i-Nieto.",
103
+ "venue": "CoRR, abs/2010.00263, 2020.",
104
+ "url": null
105
+ }
106
+ },
107
+ {
108
+ "2": {
109
+ "title": "End-to-end referring video object segmentation with multimodal\ntransformers.",
110
+ "author": "Adam Botach, Evgenii Zheltonozhskii, and Chaim Baskin.",
111
+ "venue": "In CVPR, pages 4975\u20134985, 2022.",
112
+ "url": null
113
+ }
114
+ },
115
+ {
116
+ "3": {
117
+ "title": "Rethinking space-time networks with improved memory coverage for\nefficient video object segmentation.",
118
+ "author": "Ho Kei Cheng, Yu-Wing Tai, and Chi-Keung Tang.",
119
+ "venue": "In NIPS, pages 11781\u201311794, 2021.",
120
+ "url": null
121
+ }
122
+ },
123
+ {
124
+ "4": {
125
+ "title": "An image is worth 16x16 words: Transformers for image recognition at\nscale.",
126
+ "author": "Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn,\nXiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg\nHeigold, Sylvain Gelly, Jakob Uszkoreit, and Neil Houlsby.",
127
+ "venue": "In ICLR, 2021.",
128
+ "url": null
129
+ }
130
+ },
131
+ {
132
+ "5": {
133
+ "title": "Actor and action video segmentation from a sentence.",
134
+ "author": "Kirill Gavrilyuk, Amir Ghodrati, Zhenyang Li, and Cees G. M. Snoek.",
135
+ "venue": "In CVPR, pages 5958\u20135966, 2018.",
136
+ "url": null
137
+ }
138
+ },
139
+ {
140
+ "6": {
141
+ "title": "1st place solution for youtubevos challenge 2022: Referring video\nobject segmentation.",
142
+ "author": "Zhiwei Hu, Bo Chen, Yuan Gao, Zhilong Ji, and Jinfeng Bai.",
143
+ "venue": "arXiv preprint arXiv:2212.14679, 2022.",
144
+ "url": null
145
+ }
146
+ },
147
+ {
148
+ "7": {
149
+ "title": "Video object segmentation with language referring expressions.",
150
+ "author": "Anna Khoreva, Anna Rohrbach, and Bernt Schiele.",
151
+ "venue": "In ACCV, volume 11364, pages 123\u2013141. Springer, 2018.",
152
+ "url": null
153
+ }
154
+ },
155
+ {
156
+ "8": {
157
+ "title": "Rethinking cross-modal interaction from a top-down perspective for\nreferring video object segmentation.",
158
+ "author": "Chen Liang, Yu Wu, Tianfei Zhou, Wenguan Wang, Zongxin Yang, Yunchao Wei, and\nYi Yang.",
159
+ "venue": "CoRR, abs/2106.01061, 2021.",
160
+ "url": null
161
+ }
162
+ },
163
+ {
164
+ "9": {
165
+ "title": "SOC: semantic-assisted object cluster for referring video object\nsegmentation.",
166
+ "author": "Zhuoyan Luo, Yicheng Xiao, Yong Liu, Shuyan Li, Yitong Wang, Yansong Tang, Xiu\nLi, and Yujiu Yang.",
167
+ "venue": "CoRR, abs/2305.17011, 2023.",
168
+ "url": null
169
+ }
170
+ },
171
+ {
172
+ "10": {
173
+ "title": "Video object segmentation using space-time memory networks.",
174
+ "author": "Seoung Wug Oh, Joon-Young Lee, Ning Xu, and Seon Joo Kim.",
175
+ "venue": "In ICCV, pages 9226\u20139235, 2019.",
176
+ "url": null
177
+ }
178
+ },
179
+ {
180
+ "11": {
181
+ "title": "Video object segmentation using space-time memory networks.",
182
+ "author": "Seoung Wug Oh, Joon-Young Lee, Ning Xu, and Seon Joo Kim.",
183
+ "venue": "In ICCV, pages 9226\u20139235, 2019.",
184
+ "url": null
185
+ }
186
+ },
187
+ {
188
+ "12": {
189
+ "title": "URVOS: unified referring video object segmentation network with a\nlarge-scale benchmark.",
190
+ "author": "Seonguk Seo, Joon-Young Lee, and Bohyung Han.",
191
+ "venue": "In ECCV, pages 208\u2013223, 2020.",
192
+ "url": null
193
+ }
194
+ },
195
+ {
196
+ "13": {
197
+ "title": "Feelvos: Fast end-to-end embedding learning for video object\nsegmentation.",
198
+ "author": "Paul Voigtlaender, Yuning Chai, Florian Schroff, Hartwig Adam, Bastian Leibe,\nand Liang-Chieh Chen.",
199
+ "venue": "In CVPR, pages 9481\u20139490, 2019.",
200
+ "url": null
201
+ }
202
+ },
203
+ {
204
+ "14": {
205
+ "title": "Context modulated dynamic networks for actor and action video\nsegmentation with language queries.",
206
+ "author": "Hao Wang, Cheng Deng, Fan Ma, and Yi Yang.",
207
+ "venue": "In AAAI, pages 12152\u201312159, 2020.",
208
+ "url": null
209
+ }
210
+ },
211
+ {
212
+ "15": {
213
+ "title": "Language as queries for referring video object segmentation.",
214
+ "author": "Jiannan Wu, Yi Jiang, Peize Sun, Zehuan Yuan, and Ping Luo.",
215
+ "venue": "In CVPR, pages 4964\u20134974. IEEE, 2022.",
216
+ "url": null
217
+ }
218
+ },
219
+ {
220
+ "16": {
221
+ "title": "Universal instance perception as object discovery and retrieval.",
222
+ "author": "Bin Yan, Yi Jiang, Jiannan Wu, Dong Wang, Zehuan Yuan, Ping Luo, and Huchuan\nLu.",
223
+ "venue": "In CVPR, 2023.",
224
+ "url": null
225
+ }
226
+ },
227
+ {
228
+ "17": {
229
+ "title": "Referred by multi-modality: A unified temporal transformer for\nvideo object segmentation.",
230
+ "author": "Shilin Yan, Renrui Zhang, Ziyu Guo, Wenchao Chen, Wei Zhang, Hongyang Li, Yu\nQiao, Zhongjiang He, and Peng Gao.",
231
+ "venue": "CoRR, abs/2305.16318, 2023.",
232
+ "url": null
233
+ }
234
+ },
235
+ {
236
+ "18": {
237
+ "title": "Associating objects with transformers for video object segmentation.",
238
+ "author": "Zongxin Yang, Yunchao Wei, and Yi Yang.",
239
+ "venue": "NIPS, 34:2491\u20132502, 2021.",
240
+ "url": null
241
+ }
242
+ },
243
+ {
244
+ "19": {
245
+ "title": "Decoupling features in hierarchical propagation for video object\nsegmentation.",
246
+ "author": "Zongxin Yang and Yi Yang.",
247
+ "venue": "NIPS, 35:36324\u201336336, 2022.",
248
+ "url": null
249
+ }
250
+ },
251
+ {
252
+ "20": {
253
+ "title": "Discriminative bimodal networks for visual localization and detection\nwith natural language queries.",
254
+ "author": "Yuting Zhang, Luyao Yuan, Yijie Guo, Zhiyuan He, I-An Huang, and Honglak Lee.",
255
+ "venue": "In CVPR, pages 1090\u20131099, 2017.",
256
+ "url": null
257
+ }
258
+ }
259
+ ],
260
+ "url": "http://arxiv.org/html/2401.00663v1"
261
+ }
20240101/2401.00678v1.json ADDED
@@ -0,0 +1,659 @@
1
+ {
2
+ "title": "General-purpose foundation models for increased autonomy in robot-assisted surgery",
3
+ "abstract": "The dominant paradigm for end-to-end robot learning focuses on optimizing task-specific objectives that solve a single robotic problem such as picking up an object or reaching a target position. However, recent work on high-capacity models in robotics has shown promise toward being trained on large collections of diverse and task-agnostic datasets of video demonstrations [1, 2, 3, 4, 5].\nThese models have shown impressive levels of generalization to unseen circumstances, especially as the amount of data and the model complexity scale.\nSurgical robot systems that learn from data have struggled to advance as quickly as other fields of robot learning for a few reasons[6, 7, 8, 9, 10]: (1) there is a lack of existing large-scale open-source data to train models, (2) it is challenging to model the soft-body deformations that these robots work with during surgery because simulation cannot match the physical and visual complexity of biological tissue, and (3) surgical robots risk harming patients when tested in clinical trials and require more extensive safety measures.\nThis perspective article aims to provide a path toward increasing robot autonomy in robot-assisted surgery through the development of a multi-modal, multi-task, vision-language-action model for surgical robots.\nUltimately, we argue that surgical robots are uniquely positioned to benefit from general-purpose models and provide three guiding actions toward increased autonomy in robot-assisted surgery.",
4
+ "sections": [
5
+ {
6
+ "section_id": "1",
7
+ "parent_section_id": null,
8
+ "section_name": "Surgical robot learning",
9
+ "text": "Robot learning is a relatively new paradigm aimed at developing techniques that enable a robot to acquire novel skills through learning algorithms.\nOne of the most common approaches in robot learning is to optimize a model through Deep Reinforcement Learning (DRL)[20 ###reference_20###] toward solving task-specific objectives, such as picking up an object or moving to a goal position.\nThe fundamental concept behind DRL is trial-and-error search, i.e., the agent selects an action when given a particular state , receives a reward based on that state, and then transitions to a new state . The goal of the agent is to maximize the expected cumulative reward over time.\nThis is similar to the classical robotic approach to RAS with the difference that robot learning arrives at a solution by gathering data and improving its ability to perform that task.\nDRL solutions are often optimized in simulated environments and are then transferred to real robotic hardware through various methods referred to as crossing the sim-to-real gap.\nWhile much progress has been made in developing locomotion[21 ###reference_21###, 22 ###reference_22###] and manipulation[23 ###reference_23###, 24 ###reference_24###] controllers that can cross this gap, these techniques struggle to generalize in RAS applications primarily because it is challenging to model soft-body deformations, such as tearing, cutting, and stretching.\nIn addition, it has been shown that a controller designed for one task is often hard to generalize to broader problems[25 ###reference_25###], which has remained a major open problem in robotics[26 ###reference_26###].\nAnother technique used in RAS is demonstration-guided learning[27 ###reference_27###, 28 ###reference_28###, 29 ###reference_29###], or imitation-learning (IL), which aims to emulate expert-like behaviors from recorded data.\nThere are two primary approaches for IL: the first uses supervised learning to shape a controller to imitate expert behavior and the second combines supervised and reinforcement learning, where IL reduces the search space for DRL, but does not constrain it as with the supervised only approach.\nWhile IL with small datasets has been used successfully on simple problems, these approaches have been shown to suffer from distribution shift problems and hence exhibit poor generalization[30 ###reference_30###, 31 ###reference_31###].\nMost existing techniques used in robot learning suggest robots are good specialists, but poor generalists.\nHowever, recent work has demonstrated that the brittleness of imitation learning can be overcome with a sufficiently large amount of data together with a high-capacity model[1 ###reference_1###, 2 ###reference_2###, 3 ###reference_3###, 4 ###reference_4###, 5 ###reference_5###, 32 ###reference_32###].\n###figure_1###"
10
+ },
11
+ {
12
+ "section_id": "2",
13
+ "parent_section_id": null,
14
+ "section_name": "General-purpose models in robotics",
15
+ "text": "In the field of natural language processing (NLP), pretraining high-capacity models on large datasets (termed foundation models) and further fine-tuning on particular tasks has revolutionized many applications[33 ###reference_33###, 34 ###reference_34###, 35 ###reference_35###].\nThese models are trained using self-supervised learning, a technique where the model trains itself to learn one part of the input from another part of the input.\nFor example, language models can be trained to predict the next word in a sequence given all the previous words that come before it.\nThis is advantageous because it does not require human labels and enables training on much larger sources of data than previously accessible.\nWhen high-capacity models are trained on large-scale datasets, they learn a remarkable breadth of knowledge and exhibit a wide range of capabilities, from text completion to translation and question-answering. Most impressively, these models generalize well to a variety of tasks they were not explicitly trained on[36 ###reference_36###].\nRecently, promising results have been demonstrated by the introduction of the robot transformer (RT) architecture[2 ###reference_2###, 3 ###reference_3###, 4 ###reference_4###]. The RT architecture follows the same structure as the high-capacity models used in NLP, however, they differ in that they are multimodal, taking as input natural language commands, visual input from the robot camera, and sensor readings such as joint positions and velocities enabled by the vision-language-action transformer architecture.\nUnlike previous models for robot learning, RT models are trained on large offline robotic datasets of task demonstrations using imitation learning.\nOffline learning means that the robot is trained on a static dataset as opposed to online learning, where training occurs when the robot is directly interacting with its environment.\nThe demonstrations can be a wide range of data sources from tele-operators to videos of humans performing the task; it is intuitive to see why online learning is frequently utilized in the field of robot learning\u2014the robot interacts with its environment, it is given feedback on how well it performed the task, then it improves its performance.\nThe path toward leveraging offline datasets is not as clear.\nFor example, how can a robot learn to execute a task when its given demonstrations from humans with intuitive user controllability?\nThis is precisely what RTs make possible.\nRTs take as input a sequence of images and a task description in natural language and output an action, which is executed at each timestep.\nThe foundation of an RT model is a transformer, which, at a high level, can be described as a sequence model which maps an input sequence to an output sequence using self-attention layers and fully-connected neural networks[36 ###reference_36###].\nVision transformers enable images to share a common representation space with language by condensing discrete spatial segments of the image into language tokens and propagating these tokens through the language model[37 ###reference_37###].\nRTs take this a step further by first mapping the input data (images and language instructions) to a sequence representation.\nThis is done by processing the images through a pre-trained convolutional network and conditioning it on a pre-trained embedding of the instruction, which is used to produce action tokens.\nAction tokens can be then converted to any control modality that a given robot 
supports.\nThree major patterns occur when RTs are trained on large offline datasets[1 ###reference_1###, 2 ###reference_2###, 3 ###reference_3###, 4 ###reference_4###, 5 ###reference_5###]: (1) generalization to unseen tasks scales with the number of diverse tasks trained on, (2) skills can be extrapolated from heterogeneous data sources, such as simulated data, data from different robots, and human demonstrations, and (3) learned policies generalize well enough to be used in long-horizon problems.\nWe believe these discoveries present an opportunity for surgical robots to benefit from RT architectures, perhaps more significantly than robotic applications in other fields.\n###figure_2###"
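A schematic of one control step in an RT-style vision-language-action policy is sketched below. The object interface (`image_tokenizer`, `text_encoder`, `fuse`, `detokenize`) is hypothetical shorthand for the components described above, not the published implementation.

```python
def rt_control_step(policy, images, instruction):
    """One schematic decision step of a vision-language-action policy."""
    img_tokens = policy.image_tokenizer(images)    # e.g. ViT-style patch tokens
    txt_embed = policy.text_encoder(instruction)   # pretrained language embedding
    tokens = policy.fuse(img_tokens, txt_embed)    # condition vision on the command
    action_tokens = policy.transformer(tokens)     # sequence model over tokens
    return policy.detokenize(action_tokens)        # discrete tokens -> robot commands
```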
16
+ },
17
+ {
18
+ "section_id": "3",
19
+ "parent_section_id": null,
20
+ "section_name": "The unique position of surgical robots",
21
+ "text": "Despite these recent advancements, there are several limitations that must be addressed to justify the practical use of RTs on robotic systems.\nThe proposed RT architecture thus far operates on a slow timescale (2-3 Hz) due to the immense compute demand of running a high-capacity model.\nBecause of this, design sacrifices were made on the model capacity, potentially reducing the efficacy of the controller[2 ###reference_2###].\nIn addition, this method is unlikely to be computationally tractable on embedded computing devices even with the current design, as would be necessary for mobile robots.\nEven larger mobile robots equipped with extensive on-board computers would quickly run out of battery supply from model inference, limiting their extended operation.\nThis is in contrast with surgical robots, which are uniquely positioned to benefit from RT architectures for a few reasons.\nUnlike mobile robots, surgical robots operate at slower rates and do not require conserving energy because of battery limitations.\nThese robots are not required to perform computations with embedded computing devices, rather they can be directly integrated with large-scale computing clusters because they remain stationary during operation.\nThis could even allow for much higher capacity models to be used for surgical robots in place of the current resource constrained RT architectures.\nIn addition, rather than requiring large teams of people dedicated to manually collecting robot data, like in the existing RT works, there are thousands of robotic surgeries that occur every day[38 ###reference_38###], which could be used as training data for the robot.\nThere also exist high-quality phantom models of many organ systems[39 ###reference_39###, 40 ###reference_40###] which can be used for collecting training data and safely testing the autonomous surgical robot.\nThis can be supplemented by the already existing abundance of high-quality surgical demonstrations across many different procedures with language instructions aimed at training junior surgeons (see Section Unification of medical data).\nWhile many of the primary challenges associated with RT methods may not be a problem for surgical robots, integrating these models in practice comes with its own difficulties. We suggest that there are three major challenges toward the development of an RT model for RAS (RT-RAS): (1) developing built-in systems for risk avoidant behavior and determining when control should be handed over to the surgeon, (2) the unification of medical data across universities, hospitals, and industry, and (3) improving safety of the RT-RAS beyond demonstration data.\nIn the following sections, we provide an outline for how to address these issues, and ultimately describe a path toward increased autonomy in RAS with general-purpose models."
22
+ },
23
+ {
24
+ "section_id": "4",
25
+ "parent_section_id": null,
26
+ "section_name": "Risk avoidance",
27
+ "text": "The most immediate concern when providing further autonomy to a surgical robot is in determining when and where the robot lacks the confidence to perform a particular step of the process and when to hand over control to a surgeon.\nThese situations can happen due to an irregularity during surgery that is far outside of its training set that would require a human surgeon to step in.\nIn traditional engineering, these concerns are typically addressed by rigorously testing edge cases and going through all possible scenarios.\nHowever, in surgical operations, it is generally not possible to be prepared for all possible events before a surgery occurs\u2014surgical operations require the ability to adapt to events that have never been encountered.\nIn the case of an RT-RAS, we cannot expect the same level of adaptation as humans, especially with early prototypes of the device.\nOne solution is to train the autonomous robot to avoid situations outside of what it observed in the dataset.\nThis can be achieved through the implementation of conservative Q-learning (CQL)[41 ###reference_41###].\nCQL is an algorithm that learns a value function to prevent overestimation in offline DRL.\nCQL learns a conservative Q-function so that the expected value of a policy under this function lower-bounds its true value.\nThis prevents overestimation that can occur due to actions that lead the robot into scenarios that are out of the training distribution.\nThis technique has been adapted successfully in large-scale RT models[42 ###reference_42###] demonstrating much higher success rates on novel tasks.\nIn RAS, one of the benefits of CQL is that it can be used to directly provide behavior certainty metrics, which can be relayed to a surgical teleoperator who can manually override if the robot remains in a state of high uncertainty for too long.\nAnother technique which could address this problem is conformal prediction[43 ###reference_43###]. Conformal prediction offers a viable approach by providing a measure of certainty for every decision made by the robot. This method aims to provide a set of probable outcomes, instead of giving just a single prediction, thereby offering users insights into the uncertainty or confidence level associated with each prediction. Deploying conformal prediction works as follows: the process begins with the separation of the incoming data into two subsets\u2013the training set and the calibration set. The training set is used to develop an initial model, while the calibration set is used to fine-tune the confidence measures associated with each prediction. Conformal prediction has been used recently to develop RT architectures that provide statistical guarantees on task completion[44 ###reference_44###] , such that the model \"knows when they do not know and ask for help when needed.\"\nThis work aimed to develop two principles into their controller, (1) calibrated confidence: the robot aims to find help to ensure a statistically guaranteed level of success, and (2) minimal help: the robot minimizes the overall amount of help it requests by reducing ambiguities.\nMerging the principles of CQL and conformal prediction could prove instrumental for the RT to address the problem of risk avoidance and switching autonomy to the surgeon."
28
+ },
29
+ {
30
+ "section_id": "5",
31
+ "parent_section_id": null,
32
+ "section_name": "Unification of medical data",
33
+ "text": "To date, there is a significant amount of surgical video data publicly accessible from various procedures such as cataracts[45 ###reference_45###, 46 ###reference_46###], neurosurgery[47 ###reference_47###], cholecystectomy [48 ###reference_48###, 49 ###reference_49###, 50 ###reference_50###], and proctocolectomy [51 ###reference_51###, 52 ###reference_52###] as well as more general manipulation skills such as peg transferring with laproscopic tools[53 ###reference_53###, 54 ###reference_54###, 55 ###reference_55###].\nThere are also 206 demonstrations performed by twelve different subjects operating a surgical robot, with the recorded data being robot actions chosen by the operator over time[56 ###reference_56###].\nThere also are many surgical videos on publicly accessible sites, such as YouTube, including a curated dataset of 2000 videos of open-surgery demonstrations from YouTube[57 ###reference_57###].\nWhile there exists a wide collection of open data, it still stands that medical data is hard to acquire for large-scale machine learning projects for a variety of reasons[58 ###reference_58###].\nThis is primarily because of patient privacy ethics as well as the relatively small sample sizes used in medical studies.\nA recent study demonstrated that among authors in medicine who provide data availability statements with their work, as is required by journals, only 6.8% were actually willing to share the data upon request[59 ###reference_59###].\nIn addition, code sharing in medicine was as low as 0% to 23%[60 ###reference_60###], which is in contrast to the field of artificial intelligence which has rates from 35% to 51%[61 ###reference_61###].\nRecently, a collaboration between 21 institutions assembled a dataset in order to train a large-scale RT model that can control 22 different robots[4 ###reference_4###] on 160,000 different tasks.\nThis work demonstrates quite clearly that high-capacity models continue to improve their ability to execute tasks with more data, even outside of the tasks they are expected to solve and the robotic hardware they are expected to control.\nMore importantly, this work demonstrates that building a successful RT model requires an amount of data beyond what individual labs or sometimes even organizations can collect on their own.\nHowever, sharing data at this scale is especially challenging in medical applications where data is collected much more slowly and comes with additional complications.\nIt is additionally challenging to find data from medical failure, which is often important for steering the model away from harmful decisions.\nDespite these challenges, large-scale collaborations in medicine still occur and are often proven tremendously successful[62 ###reference_62###, 63 ###reference_63###, 64 ###reference_64###, 65 ###reference_65###].\nIt is clear that if an RT-RAS were to be realized, universities, industry, and hospitals would have to share data and openly collaborate as many have done in the past."
34
+ },
35
+ {
36
+ "section_id": "6",
37
+ "parent_section_id": null,
38
+ "section_name": "Beyond imitation-quality safety",
39
+ "text": "Collecting sufficient data does not solve every problem of designing a successful RT-RAS system.\nSince the proposed model is trained directly on surgical demonstrations, the performance is bounded by the quality of data it is provided with given that RT architectures are fundamentally imitating the demonstrations[2 ###reference_2###].\nThis can be limiting since the ultimate objective would be to produce robotic systems that are more safe than human teleoperators.\nEssentially, we would like the robot to continually attain better performance as it gathers experience from procedures.\nHere, we outline several paths toward achieving this.\nAn important observation is that an RT-RAS will passively collect data in diverse conditions from various surgical procedures simply by performing operations.\nThis data can be used to improve the predictive capability of the foundation model underlying the RT-RAS via self-supervised learning.\nHowever, self-supervision will not necessarily improve the performance of the robot during surgery\u2014it is desirable to fine-tune on data that provides feedback.\nToward this, one possibility is to use short- and long-term patient outcomes as feedback for training the model on its performance during surgery.\nThis feedback does not need to come from exactly the same robot that performed the surgery, but can come from any robot performing similar operations as long as the surgical data is stored[4 ###reference_4###].\nWhile surgical outcomes can be affected by factors outside of the quality of surgery itself[66 ###reference_66###, 67 ###reference_67###, 68 ###reference_68###], such as the patient\u2019s general health, this feedback becomes less noisy as more data is collected.\nAnother possibility is to develop a model which takes surgical demonstrations and provides a quality rating for each part of the procedure using data curated by expert surgeons.\nPrevious work has already built models to rate the quality of surgical operations with relatively high consistency aligning with surgeon opinions[69 ###reference_69###, 70 ###reference_70###].\nThis rating could be used as a signal for the RT-RAS as it performs surgery, getting immediate feedback after a surgery which can be used to reinforce high-quality behaviors and discourage mistakes and inefficiencies.\nLike the previous solution, quality assessment is also prone to noise since different surgeons have differing opinions about which behaviors are high-quality.\nHowever, it is also likely that this feedback becomes less noisy as more expert opinions are used to train the model.\nShort- and long-term patient outcomes combined with quality assessments can be used to improve beyond surgeon quality performance. However, integrating these types of feedback into a cohesive learning paradigm is not trivial and will likely be an open problem for building an effective RT-RAS."
40
+ },
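The quality-rating feedback described in the section above can be made concrete with a small sketch. The function below is purely illustrative (its name, signature, and the per-step rating input are assumptions, not part of the paper): it weights a behaviour-cloning loss by a learned skill score so that high-quality segments are reinforced and rated mistakes are down-weighted.

```python
import numpy as np

def weighted_imitation_loss(pred_actions, expert_actions, quality):
    """Hypothetical quality-weighted behaviour-cloning loss.

    pred_actions, expert_actions: (T, action_dim) arrays of robot actions.
    quality: (T,) array of per-step skill ratings in [0, 1], e.g. from a
    learned assessor trained on expert-surgeon opinions.
    """
    per_step = np.mean((pred_actions - expert_actions) ** 2, axis=-1)  # (T,)
    return float(np.mean(quality * per_step))
```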
41
+ {
42
+ "section_id": "7",
43
+ "parent_section_id": null,
44
+ "section_name": "Improving surgical education",
45
+ "text": "While autonomous RT-RAS has much potential for increasing surgeon productivity, it may also allow for both an increase in the number of trained surgeons as well as improved surgeon training. This is because one of the majors challenges of training junior surgeons is getting consistent feedback from expert surgeons during their development, who are often very time constrained.\nHaving an imitation-trained RT-RAS could enable the development of training simulators that utilize two components of the model: the imitation-trained policy and the risk avoidance system. The imitation policy could allow for on-demand demonstrations shaped by expert surgeons that junior surgeons could watch and then execute themselves. The risk avoidance system (e.g. CQL uncertainty predictor) can also be used to prevent trainees from taking actions that are too far away from what an expert surgeon would do. This allows for immediate targeted feedback, which has been demonstrated to increase learning during demonstrations[71 ###reference_71###].\nIn addition, these systems could be implemented for trainees as they first begin performing surgeries on real patients as a safety mechanism which is the most likely time that mistakes would be made[72 ###reference_72###].\nThe safety mechanism could provide soft constraints, warning the surgeon either during or after a mistake has been made, or hard constraints, physically preventing dangerous actions from being taken.\nThe RT-RAS controller could also be used as a way to assist junior surgeons when their confidence is low or when more than two robotic arms are needed to be controlled for a particular operation."
46
+ },
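The confidence-gated handover that this section (and the Figure 2 caption) describes can be sketched as a single control tick. Everything below is a hedged illustration: `StubPolicy`, its `predict` signature, and the threshold value are placeholders, not an actual RT-RAS API.

```python
import numpy as np

class StubPolicy:
    """Stand-in for an RT-style policy; a real model would return an action
    plus a calibrated confidence (e.g. a CQL-style uncertainty estimate)."""
    def predict(self, frame, command):
        return np.zeros(6), 0.9  # (end-effector delta, confidence)

def control_step(policy, frame, command, tau=0.8):
    """One tick of the confidence-gated loop: act autonomously while
    confidence exceeds tau, otherwise hand control to the surgeon."""
    action, confidence = policy.predict(frame, command)
    if confidence >= tau:
        return "autonomous", action   # robot executes the predicted action
    return "surgeon", None            # autonomy is switched to the surgeon

mode, action = control_step(StubPolicy(), np.zeros((224, 224, 3)), "clip the vessel")
```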
47
+ {
48
+ "section_id": "8",
49
+ "parent_section_id": null,
50
+ "section_name": "Conclusions",
51
+ "text": "Building autonomous RAS has remained challenging since the dominant paradigm for robot learning requires either interacting with simulated models of the task or training low-capacity models on expert demonstrations. However, recent progress in applying high-capacity RTs to robotic systems has opened an opportunity to further autonomy in RAS through the use of large-scale surgeon demonstration datasets. Surgical robots are uniquely positioned to benefit from RTs because they do not require operating at fast timescales, do not have energy or embedded device limitations, and there is a plentiful source of demonstrations from real surgeons to use as training data. Building these models has the potential to increase the consistency of surgical procedures, as well as reduce the need for supervision and hence the cost of procedures altogether.\nThis work outlined three major challenges toward the development of an RT-RAS: (1) developing built-in systems for risk avoidant behavior and determining when control should be handed over to the surgeon, (2) the unification of medical data across universities, hospitals, and industry, and (3) improving safety of the RT-RAS beyond demonstration data. Then, each challenge was addressed with guiding actions. The actions outlined require the coordination of many institutions, from universities, hospitals, and industry in order to be realized.\nWe hope this work inspires the development of a general-purpose model for autonomous surgery and invites the collaboration of many different institutions to accomplish this worthwhile task."
52
+ },
53
+ {
54
+ "section_id": "9",
55
+ "parent_section_id": null,
56
+ "section_name": "Acknowledgements",
57
+ "text": "This material is based upon work supported by the National Science Foundation Graduate Research Fellowship for Comp/IS/Eng-Robotics under Grant No. DGE 2139757 and NSF/FRR 2144348."
58
+ }
59
+ ],
60
+ "appendix": [],
61
+ "tables": {},
62
+ "image_paths": {
63
+ "1": {
64
+ "figure_path": "2401.00678v1_figure_1.png",
65
+ "caption": "Figure 1: An architecture diagram of the proposed vision-language-action robot transformer. Video frames are taken as input, flattened, and passed through a linear projection to be used as input tokens along with a word embedding. The transformer encoder outputs action tokens which are de-tokenized to produce a robot action, from which the robot end-effector position is updated.",
66
+ "url": "http://arxiv.org/html/2401.00678v1/extracted/5325004/surgicaltransformer.jpg"
67
+ },
68
+ "2": {
69
+ "figure_path": "2401.00678v1_figure_2.png",
70
+ "caption": "Figure 2: A proposed control loop for the autonomous robot transformer-RAS (RT-RAS). Surgeon provides action commands as text input. The RT-RAS executes these commands while maintaining high confidence, otherwise autonomy is switched to the surgeon.",
71
+ "url": "http://arxiv.org/html/2401.00678v1/extracted/5325004/Figure3.jpg"
72
+ },
73
+ "3": {
74
+ "figure_path": "2401.00678v1_figure_3.png",
75
+ "caption": "Figure 3: Outline of the two step pre-training for the RT-RAS. The first involves fine-tuning a vision-language model on captioned surgical demonstrations (e.g. from surgical training videos). The second involves pre-training a vision-language-action model on surgical demonstrations with kinematics.",
76
+ "url": "http://arxiv.org/html/2401.00678v1/extracted/5325004/Figure2.jpg"
77
+ }
78
+ },
79
+ "validation": true,
80
+ "references": [
81
+ {
82
+ "1": {
83
+ "title": "A generalist agent.",
84
+ "author": "Reed, S. et al.",
85
+ "venue": "\\JournalTitlearXiv preprint arXiv:2205.06175 (2022).",
86
+ "url": null
87
+ }
88
+ },
89
+ {
90
+ "2": {
91
+ "title": "Rt-1: Robotics transformer for real-world control at scale.",
92
+ "author": "Brohan, A. et al.",
93
+ "venue": "\\JournalTitlearXiv preprint arXiv:2212.06817 (2022).",
94
+ "url": null
95
+ }
96
+ },
97
+ {
98
+ "3": {
99
+ "title": "Rt-2: Vision-language-action models transfer web knowledge to robotic control.",
100
+ "author": "Brohan, A. et al.",
101
+ "venue": "\\JournalTitlearXiv preprint arXiv:2307.15818 (2023).",
102
+ "url": null
103
+ }
104
+ },
105
+ {
106
+ "4": {
107
+ "title": "Open X-Embodiment: Robotic learning datasets and RT-X models.",
108
+ "author": "Collaboration, O. X.-E. et al.",
109
+ "venue": "https://robotics-transformer-x.github.io (2023).",
110
+ "url": null
111
+ }
112
+ },
113
+ {
114
+ "5": {
115
+ "title": "Toward general-purpose robots via foundation models: A survey and meta-analysis.",
116
+ "author": "Hu, Y. et al.",
117
+ "venue": "\\JournalTitlearxiv (2023).",
118
+ "url": null
119
+ }
120
+ },
121
+ {
122
+ "6": {
123
+ "title": "State of the art in surgical robotics: clinical applications and technology challenges.",
124
+ "author": "Cleary, K. & Nguyen, C.",
125
+ "venue": "\\JournalTitleComputer Aided Surgery 6, 312\u2013328 (2001).",
126
+ "url": null
127
+ }
128
+ },
129
+ {
130
+ "7": {
131
+ "title": "Open-sourced reinforcement learning environments for surgical robotics.",
132
+ "author": "Richter, F., Orosco, R. K. & Yip, M. C.",
133
+ "venue": "\\JournalTitlearXiv preprint arXiv:1903.02090 (2019).",
134
+ "url": null
135
+ }
136
+ },
137
+ {
138
+ "8": {
139
+ "title": "Deep reinforcement learning for soft, flexible robots: Brief review with impending challenges.",
140
+ "author": "Bhagat, S., Banerjee, H., Ho Tse, Z. T. & Ren, H.",
141
+ "venue": "\\JournalTitleRobotics 8, 4 (2019).",
142
+ "url": null
143
+ }
144
+ },
145
+ {
146
+ "9": {
147
+ "title": "Reinforcement learning in surgery.",
148
+ "author": "Datta, S. et al.",
149
+ "venue": "\\JournalTitleSurgery 170, 329\u2013332 (2021).",
150
+ "url": null
151
+ }
152
+ },
153
+ {
154
+ "10": {
155
+ "title": "Lapgym\u2013an open source framework for reinforcement learning in robot-assisted laparoscopic surgery.",
156
+ "author": "Scheikl, P. M. et al.",
157
+ "venue": "\\JournalTitlearXiv preprint arXiv:2302.09606 (2023).",
158
+ "url": null
159
+ }
160
+ },
161
+ {
162
+ "11": {
163
+ "title": "L.b. surgeon uses robot in operation.",
164
+ "author": "La Ganga, M. L.",
165
+ "venue": "\\JournalTitleLos Angeles Times (1985).",
166
+ "url": null
167
+ }
168
+ },
169
+ {
170
+ "12": {
171
+ "title": "Comparison of robot-assisted radical prostatectomy and open radical prostatectomy outcomes: a systematic review and meta-analysis.",
172
+ "author": "Seo, H.-J. et al.",
173
+ "venue": "\\JournalTitleYonsei medical journal 57, 1165\u20131177 (2016).",
174
+ "url": null
175
+ }
176
+ },
177
+ {
178
+ "13": {
179
+ "title": "Trends in the adoption of robotic surgery for common surgical procedures.",
180
+ "author": "Sheetz, K. H., Claflin, J. & Dimick, J. B.",
181
+ "venue": "\\JournalTitleJAMA network open 3, e1918911\u2013e1918911 (2020).",
182
+ "url": null
183
+ }
184
+ },
185
+ {
186
+ "14": {
187
+ "title": "The evidence behind robot-assisted abdominopelvic surgery: a systematic review.",
188
+ "author": "Dhanani, N. H. et al.",
189
+ "venue": "\\JournalTitleAnnals of internal medicine 174, 1110\u20131117 (2021).",
190
+ "url": null
191
+ }
192
+ },
193
+ {
194
+ "15": {
195
+ "title": "Is robotic surgery cost-effective: no.",
196
+ "author": "Lotan, Y.",
197
+ "venue": "\\JournalTitleCurrent opinion in urology 22, 66\u201369 (2012).",
198
+ "url": null
199
+ }
200
+ },
201
+ {
202
+ "16": {
203
+ "title": "Supervised autonomous robotic soft tissue surgery.",
204
+ "author": "Shademan, A. et al.",
205
+ "venue": "\\JournalTitleScience translational medicine 8, 337ra64\u2013337ra64 (2016).",
206
+ "url": null
207
+ }
208
+ },
209
+ {
210
+ "17": {
211
+ "title": "Autonomous robotic laparoscopic surgery for intestinal anastomosis.",
212
+ "author": "Saeidi, H. et al.",
213
+ "venue": "\\JournalTitleScience robotics 7, eabj2908 (2022).",
214
+ "url": null
215
+ }
216
+ },
217
+ {
218
+ "18": {
219
+ "title": "Autonomous medical needle steering in vivo.",
220
+ "author": "Kuntz, A. et al.",
221
+ "venue": "\\JournalTitleScience Robotics 8, eadf7614 (2023).",
222
+ "url": null
223
+ }
224
+ },
225
+ {
226
+ "19": {
227
+ "title": "Autonomous robotic suction to clear the surgical field for hemostasis using image-based blood flow detection.",
228
+ "author": "Richter, F. et al.",
229
+ "venue": "\\JournalTitleIEEE Robotics and Automation Letters 6, 1383\u20131390 (2021).",
230
+ "url": null
231
+ }
232
+ },
233
+ {
234
+ "20": {
235
+ "title": "A brief survey of deep reinforcement learning.",
236
+ "author": "Arulkumaran, K., Deisenroth, M. P., Brundage, M. & Bharath, A. A.",
237
+ "venue": "\\JournalTitlearXiv preprint arXiv:1708.05866 (2017).",
238
+ "url": null
239
+ }
240
+ },
241
+ {
242
+ "21": {
243
+ "title": "Learning quadrupedal locomotion over challenging terrain.",
244
+ "author": "Lee, J., Hwangbo, J., Wellhausen, L., Koltun, V. & Hutter, M.",
245
+ "venue": "\\JournalTitleScience robotics 5, eabc5986 (2020).",
246
+ "url": null
247
+ }
248
+ },
249
+ {
250
+ "22": {
251
+ "title": "Legged locomotion in challenging terrains using egocentric vision.",
252
+ "author": "Agarwal, A., Kumar, A., Malik, J. & Pathak, D.",
253
+ "venue": "In Conference on Robot Learning, 403\u2013415 (PMLR, 2023).",
254
+ "url": null
255
+ }
256
+ },
257
+ {
258
+ "23": {
259
+ "title": "Deep reinforcement learning for the control of robotic manipulation: a focussed mini-review.",
260
+ "author": "Liu, R., Nageotte, F., Zanne, P., de Mathelin, M. & Dresp-Langley, B.",
261
+ "venue": "\\JournalTitleRobotics 10, 22 (2021).",
262
+ "url": null
263
+ }
264
+ },
265
+ {
266
+ "24": {
267
+ "title": "Learning fine-grained bimanual manipulation with low-cost hardware.",
268
+ "author": "Zhao, T. Z., Kumar, V., Levine, S. & Finn, C.",
269
+ "venue": "\\JournalTitlearXiv preprint arXiv:2304.13705 (2023).",
270
+ "url": null
271
+ }
272
+ },
273
+ {
274
+ "25": {
275
+ "title": "Robot autonomy for surgery.",
276
+ "author": "Yip, M. & Das, N.",
277
+ "venue": "In The Encyclopedia of MEDICAL ROBOTICS: Volume 1 Minimally Invasive Surgical Robotics, 281\u2013313 (World Scientific, 2019).",
278
+ "url": null
279
+ }
280
+ },
281
+ {
282
+ "26": {
283
+ "title": "A study on overfitting in deep reinforcement learning.",
284
+ "author": "Zhang, C., Vinyals, O., Munos, R. & Bengio, S.",
285
+ "venue": "\\JournalTitlearXiv preprint arXiv:1804.06893 (2018).",
286
+ "url": null
287
+ }
288
+ },
289
+ {
290
+ "27": {
291
+ "title": "Superhuman performance of surgical tasks by robots using iterative learning from human-guided demonstrations.",
292
+ "author": "Van Den Berg, J. et al.",
293
+ "venue": "In 2010 IEEE International Conference on Robotics and Automation, 2074\u20132081 (IEEE, 2010).",
294
+ "url": null
295
+ }
296
+ },
297
+ {
298
+ "28": {
299
+ "title": "Model predictive optimization for imitation learning from demonstrations.",
300
+ "author": "Hu, Y. et al.",
301
+ "venue": "\\JournalTitleRobotics and Autonomous Systems 163, 104381 (2023).",
302
+ "url": null
303
+ }
304
+ },
305
+ {
306
+ "29": {
307
+ "title": "Guided reinforcement learning with efficient exploration for task automation of surgical robot.",
308
+ "author": "Huang, T., Chen, K., Li, B., Liu, Y.-H. & Dou, Q.",
309
+ "venue": "\\JournalTitlearXiv preprint arXiv:2302.09772 (2023).",
310
+ "url": null
311
+ }
312
+ },
313
+ {
314
+ "30": {
315
+ "title": "An algorithmic perspective on imitation learning.",
316
+ "author": "Osa, T. et al.",
317
+ "venue": "\\JournalTitleFoundations and Trends in Robotics 7, 1\u2013179 (2018).",
318
+ "url": null
319
+ }
320
+ },
321
+ {
322
+ "31": {
323
+ "title": "How to train your robot with deep reinforcement learning: lessons we have learned.",
324
+ "author": "Ibarz, J. et al.",
325
+ "venue": "\\JournalTitleThe International Journal of Robotics Research 40, 698\u2013721 (2021).",
326
+ "url": null
327
+ }
328
+ },
329
+ {
330
+ "32": {
331
+ "title": "Octo: An open-source generalist robot policy.",
332
+ "author": "Octo Model Team et al.",
333
+ "venue": "https://octo-models.github.io (2023).",
334
+ "url": null
335
+ }
336
+ },
337
+ {
338
+ "33": {
339
+ "title": "On the opportunities and risks of foundation models.",
340
+ "author": "Bommasani, R. et al.",
341
+ "venue": "\\JournalTitlearXiv preprint arXiv:2108.07258 (2021).",
342
+ "url": null
343
+ }
344
+ },
345
+ {
346
+ "34": {
347
+ "title": "Foundation models for generalist medical artificial intelligence.",
348
+ "author": "Moor, M. et al.",
349
+ "venue": "\\JournalTitleNature 616, 259\u2013265 (2023).",
350
+ "url": null
351
+ }
352
+ },
353
+ {
354
+ "35": {
355
+ "title": "Llama 2: Open foundation and fine-tuned chat models.",
356
+ "author": "Touvron, H. et al.",
357
+ "venue": "\\JournalTitlearXiv preprint arXiv:2307.09288 (2023).",
358
+ "url": null
359
+ }
360
+ },
361
+ {
362
+ "36": {
363
+ "title": "Attention is all you need.",
364
+ "author": "Vaswani, A. et al.",
365
+ "venue": "\\JournalTitleAdvances in neural information processing systems 30 (2017).",
366
+ "url": null
367
+ }
368
+ },
369
+ {
370
+ "37": {
371
+ "title": "An image is worth 16x16 words: Transformers for image recognition at scale.",
372
+ "author": "Dosovitskiy, A. et al.",
373
+ "venue": "\\JournalTitlearXiv preprint arXiv:2010.11929 (2020).",
374
+ "url": null
375
+ }
376
+ },
377
+ {
378
+ "38": {
379
+ "title": "The rise of robots in surgical environments during covid-19.",
380
+ "author": "Zemmar, A., Lozano, A. M. & Nelson, B. J.",
381
+ "venue": "\\JournalTitleNature Machine Intelligence 2, 566\u2013572 (2020).",
382
+ "url": null
383
+ }
384
+ },
385
+ {
386
+ "39": {
387
+ "title": "A review on the 3d printing of functional structures for medical phantoms and regenerated tissue and organ applications.",
388
+ "author": "Wang, K., Ho, C.-C., Zhang, C. & Wang, B.",
389
+ "venue": "\\JournalTitleEngineering 3, 653\u2013662 (2017).",
390
+ "url": null
391
+ }
392
+ },
393
+ {
394
+ "40": {
395
+ "title": "A call for change. can 3d printing replace cadavers for surgical training?",
396
+ "author": "Ghazi, A.",
397
+ "venue": "\\JournalTitleUrologic Clinics 49, 39\u201356 (2022).",
398
+ "url": null
399
+ }
400
+ },
401
+ {
402
+ "41": {
403
+ "title": "Conservative q-learning for offline reinforcement learning.",
404
+ "author": "Kumar, A., Zhou, A., Tucker, G. & Levine, S.",
405
+ "venue": "\\JournalTitleAdvances in Neural Information Processing Systems 33, 1179\u20131191 (2020).",
406
+ "url": null
407
+ }
408
+ },
409
+ {
410
+ "42": {
411
+ "title": "Q-transformer: Scalable offline reinforcement learning via autoregressive q-functions.",
412
+ "author": "Chebotar, Y. et al.",
413
+ "venue": "\\JournalTitlearXiv preprint arXiv:2309.10150 (2023).",
414
+ "url": null
415
+ }
416
+ },
417
+ {
418
+ "43": {
419
+ "title": "A gentle introduction to conformal prediction and distribution-free uncertainty quantification.",
420
+ "author": "Angelopoulos, A. N. & Bates, S.",
421
+ "venue": "\\JournalTitlearXiv preprint arXiv:2107.07511 (2021).",
422
+ "url": null
423
+ }
424
+ },
425
+ {
426
+ "44": {
427
+ "title": "Robots that ask for help: Uncertainty alignment for large language model planners.",
428
+ "author": "Ren, A. Z. et al.",
429
+ "venue": "\\JournalTitlearXiv preprint arXiv:2307.01928 (2023).",
430
+ "url": null
431
+ }
432
+ },
433
+ {
434
+ "45": {
435
+ "title": "Cataracts, DOI: 10.21227/ac97-8m18 (2021).",
436
+ "author": "ALHAJJ, H., Lamard, M., Conze, P.-h., Cochener, B. & Quellec, G.",
437
+ "venue": null,
438
+ "url": null
439
+ }
440
+ },
441
+ {
442
+ "46": {
443
+ "title": "Cataract-101: video dataset of 101 cataract surgeries.",
444
+ "author": "Schoeffmann, K. et al.",
445
+ "venue": "In Proceedings of the 9th ACM multimedia systems conference, 421\u2013425 (2018).",
446
+ "url": null
447
+ }
448
+ },
449
+ {
450
+ "47": {
451
+ "title": "Detecting surgical tools by modelling local appearance and global shape.",
452
+ "author": "Bouget, D. et al.",
453
+ "venue": "\\JournalTitleIEEE transactions on medical imaging 34, 2603\u20132617 (2015).",
454
+ "url": null
455
+ }
456
+ },
457
+ {
458
+ "48": {
459
+ "title": "Endonet: a deep architecture for recognition tasks on laparoscopic videos.",
460
+ "author": "Twinanda, A. P. et al.",
461
+ "venue": "\\JournalTitleIEEE transactions on medical imaging 36, 86\u201397 (2016).",
462
+ "url": null
463
+ }
464
+ },
465
+ {
466
+ "49": {
467
+ "title": "Cholecseg8k: a semantic segmentation dataset for laparoscopic cholecystectomy based on cholec80.",
468
+ "author": "Hong, W.-Y. et al.",
469
+ "venue": "\\JournalTitlearXiv preprint arXiv:2012.12453 (2020).",
470
+ "url": null
471
+ }
472
+ },
473
+ {
474
+ "50": {
475
+ "title": "Rendezvous: Attention mechanisms for the recognition of surgical action triplets in endoscopic videos.",
476
+ "author": "Nwoye, C. I. et al.",
477
+ "venue": "\\JournalTitleMedical Image Analysis 78, 102433 (2022).",
478
+ "url": null
479
+ }
480
+ },
481
+ {
482
+ "51": {
483
+ "title": "Heidelberg colorectal data set for surgical data science in the sensor operating room.",
484
+ "author": "Maier-Hein, L. et al.",
485
+ "venue": "\\JournalTitleScientific data 8, 101 (2021).",
486
+ "url": null
487
+ }
488
+ },
489
+ {
490
+ "52": {
491
+ "title": "Towards holistic surgical scene understanding.",
492
+ "author": "Valderrama, N. et al.",
493
+ "venue": "In International conference on medical image computing and computer-assisted intervention, 442\u2013452 (Springer, 2022).",
494
+ "url": null
495
+ }
496
+ },
497
+ {
498
+ "53": {
499
+ "title": "Jhu-isi gesture and skill assessment working set (jigsaws): A surgical activity dataset for human motion modeling.",
500
+ "author": "Gao, Y. et al.",
501
+ "venue": "In MICCAI workshop: M2cai, vol. 3 (2014).",
502
+ "url": null
503
+ }
504
+ },
505
+ {
506
+ "54": {
507
+ "title": "Desk: A robotic activity dataset for dexterous surgical skills transfer to medical robots.",
508
+ "author": "Madapana, N. et al.",
509
+ "venue": "In 2019 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 6928\u20136934 (IEEE, 2019).",
510
+ "url": null
511
+ }
512
+ },
513
+ {
514
+ "55": {
515
+ "title": "Peg transfer workflow recognition challenge report: Does multi-modal data improve recognition?",
516
+ "author": "Huaulm\u00e9, A. et al.",
517
+ "venue": "\\JournalTitlearXiv preprint arXiv:2202.05821 (2022).",
518
+ "url": null
519
+ }
520
+ },
521
+ {
522
+ "56": {
523
+ "title": "A surgical dataset from the da vinci research kit for task automation and recognition.",
524
+ "author": "Rivas-Blanco, I., Del-Pulgar, C. J. P., Mariani, A., Tortora, G. & Reina, A. J.",
525
+ "venue": "In 2023 3rd International Conference on Electrical, Computer, Communications and Mechatronics Engineering (ICECCME), 1\u20136 (IEEE, 2023).",
526
+ "url": null
527
+ }
528
+ },
529
+ {
530
+ "57": {
531
+ "title": "A real-time spatiotemporal ai model analyzes skill in open surgical videos.",
532
+ "author": "Goodman, E. D. et al.",
533
+ "venue": "\\JournalTitlearXiv preprint arXiv:2112.07219 (2021).",
534
+ "url": null
535
+ }
536
+ },
537
+ {
538
+ "58": {
539
+ "title": "Medical big data is not yet available: why we need realism rather than exaggeration.",
540
+ "author": "Kim, H.-S., Kim, D.-J. & Yoon, K.-H.",
541
+ "venue": "\\JournalTitleEndocrinology and Metabolism 34, 349\u2013354 (2019).",
542
+ "url": null
543
+ }
544
+ },
545
+ {
546
+ "59": {
547
+ "title": "Many researchers were not compliant with their published data sharing statement: a mixed-methods study.",
548
+ "author": "Gabelica, M., Boj\u010di\u0107, R. & Puljak, L.",
549
+ "venue": "\\JournalTitleJournal of Clinical Epidemiology 150, 33\u201341 (2022).",
550
+ "url": null
551
+ }
552
+ },
553
+ {
554
+ "60": {
555
+ "title": "Prevalence and predictors of data and code sharing in the medical and health sciences: systematic review with meta-analysis of individual participant data.",
556
+ "author": "Hamilton, D. G. et al.",
557
+ "venue": "\\JournalTitlebmj 382 (2023).",
558
+ "url": null
559
+ }
560
+ },
561
+ {
562
+ "61": {
563
+ "title": "Automatic analysis of available source code of top artificial intelligence conference papers.",
564
+ "author": "Lin, J. et al.",
565
+ "venue": "\\JournalTitleInternational Journal of Software Engineering and Knowledge Engineering 32, 947\u2013970 (2022).",
566
+ "url": null
567
+ }
568
+ },
569
+ {
570
+ "62": {
571
+ "title": "Biological structure and function emerge from scaling unsupervised learning to 250 million protein sequences.",
572
+ "author": "Rives, A. et al.",
573
+ "venue": "\\JournalTitleProceedings of the National Academy of Sciences 118, e2016239118 (2021).",
574
+ "url": null
575
+ }
576
+ },
577
+ {
578
+ "63": {
579
+ "title": "Highly accurate protein structure prediction with alphafold.",
580
+ "author": "Jumper, J. et al.",
581
+ "venue": "\\JournalTitleNature 596, 583\u2013589 (2021).",
582
+ "url": null
583
+ }
584
+ },
585
+ {
586
+ "64": {
587
+ "title": "Towards generalist foundation model for radiology.",
588
+ "author": "Wu, C., Zhang, X., Zhang, Y., Wang, Y. & Xie, W.",
589
+ "venue": "\\JournalTitlearXiv preprint arXiv:2308.02463 (2023).",
590
+ "url": null
591
+ }
592
+ },
593
+ {
594
+ "65": {
595
+ "title": "Medfmc: A real-world dataset and benchmark for foundation model adaptation in medical image classification.",
596
+ "author": "Wang, D. et al.",
597
+ "venue": "\\JournalTitlearXiv preprint arXiv:2306.09579 (2023).",
598
+ "url": null
599
+ }
600
+ },
601
+ {
602
+ "66": {
603
+ "title": "Nonsurgical factors that influence the outcome of bariatric surgery: a review.",
604
+ "author": "Hsu, L. G. et al.",
605
+ "venue": "\\JournalTitlePsychosomatic medicine 60, 338\u2013346 (1998).",
606
+ "url": null
607
+ }
608
+ },
609
+ {
610
+ "67": {
611
+ "title": "Impact of obesity on surgical outcomes after colorectal resection.",
612
+ "author": "Benoist, S., Panis, Y., Alves, A. & Valleur, P.",
613
+ "venue": "\\JournalTitleThe American journal of surgery 179, 275\u2013281 (2000).",
614
+ "url": null
615
+ }
616
+ },
617
+ {
618
+ "68": {
619
+ "title": "Psychosocial factors and surgical outcomes: an evidence-based literature review.",
620
+ "author": "Rosenberger, P. H., Jokl, P. & Ickovics, J.",
621
+ "venue": "\\JournalTitleJAAOS-Journal of the American Academy of Orthopaedic Surgeons 14, 397\u2013405 (2006).",
622
+ "url": null
623
+ }
624
+ },
625
+ {
626
+ "69": {
627
+ "title": "Machine learning for technical skill assessment in surgery: a systematic review.",
628
+ "author": "Lam, K. et al.",
629
+ "venue": "\\JournalTitleNPJ digital medicine 5, 24 (2022).",
630
+ "url": null
631
+ }
632
+ },
633
+ {
634
+ "70": {
635
+ "title": "Evaluation of deep learning models for identifying surgical actions and measuring performance.",
636
+ "author": "Khalid, S., Goldenberg, M., Grantcharov, T., Taati, B. & Rudzicz, F.",
637
+ "venue": "\\JournalTitleJAMA network open 3, e201664\u2013e201664 (2020).",
638
+ "url": null
639
+ }
640
+ },
641
+ {
642
+ "71": {
643
+ "title": "An assessment tool to provide targeted feedback to robotic surgical trainees: development and validation of the end-to-end assessment of suturing expertise (ease).",
644
+ "author": "Haque, T. F. et al.",
645
+ "venue": "\\JournalTitleUrology practice 9, 532\u2013539 (2022).",
646
+ "url": null
647
+ }
648
+ },
649
+ {
650
+ "72": {
651
+ "title": "Early-and late-career surgeon deficiencies in complex cases.",
652
+ "author": "Moon, M. R.",
653
+ "venue": "\\JournalTitleThe Journal of Thoracic and Cardiovascular Surgery 164, 1023\u20131025 (2022).",
654
+ "url": null
655
+ }
656
+ }
657
+ ],
658
+ "url": "http://arxiv.org/html/2401.00678v1"
659
+ }
20240101/2401.00682v1.json ADDED
@@ -0,0 +1,110 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ {
2
+ "title": "The Smooth Trajectory Estimator for LMB Filters This research has been funded by the Australian Research Council through the Linkage project LP200301507.",
3
+ "abstract": "This paper proposes a smooth-trajectory estimator for the labelled multi-Bernoulli (LMB) filter by exploiting the special structure of the generalised labelled multi-Bernoulli (GLMB) filter. We devise a simple and intuitive approach to store the best association map when approximating the GLMB random finite set (RFS) to the LMB RFS. In particular, we construct a smooth-trajectory estimator (i.e., an estimator over the entire trajectories of labelled estimates) for the LMB filter based on the history of the best association map and all of the measurements up to the current time. Experimental results under two challenging scenarios demonstrate significant tracking accuracy improvements with negligible additional computational time compared to the conventional LMB filter. The source code is publicly available at https://tinyurl.com/ste-lmb, aimed at promoting advancements in MOT algorithms.",
4
+ "sections": [
5
+ {
6
+ "section_id": "1",
7
+ "parent_section_id": null,
8
+ "section_name": "Introduction",
9
+ "text": "Multi-object tracking (MOT)\naims to identify varying numbers of objects and their trajectories in the presence of noisy data. Because of noisy sensors resulting in misdetections and false alarms, as well as the randomness of object\ndisappearances and appearances (i.e., the object\u2019s birth and death processes), solving the MOT problems is extremely more challenging than the single-object tracking problem [1 ###reference_1###, 2 ###reference_2###]. Additionally, MOT plays crucial roles in various applications ranging from aerospace [3 ###reference_3###], robotics [4 ###reference_4###, 5 ###reference_5###], surveillance [1 ###reference_1###, 2 ###reference_2###], to cell biology [6 ###reference_6###, 7 ###reference_7###]. Although there are various approaches to MOT, most align with three principal frameworks: joint probabilistic data association (JPDA)[8 ###reference_8###, 2 ###reference_2###, 9 ###reference_9###], multiple hypothesis tracking (MHT)[1 ###reference_1###, 2 ###reference_2###, 3 ###reference_3###], and random finite set (RFS) [10 ###reference_10###, 11 ###reference_11###].\nThe RFS framework, a recent development in MOT, has gained significant attention in the past twenty years due to its capacity to manage intricate tracking scenarios. This approach considers the multi-object state as a finite set, utilising finite set statistics techniques for temporal estimations. Given its robust mathematical foundation, several RFS-based filters have emerged, including the probability hypothesis density (PHD)[12 ###reference_12###], cardinalised probability hypothesis density (CPHD)[13 ###reference_13###], multi-Bernoulli (MB)[10 ###reference_10###, 14 ###reference_14###], and Poisson multi-Bernoulli mixture filter (PMBM)[15 ###reference_15###]. Notably, these filters estimate only the multi-object states, omitting trajectory details.\nFor estimating the multi-object trajectories (i.e., the history of multi-object states), one can utilise the labelled RFSs by augmenting unique labels/identities to individual object states [1 ###reference_1###, 10 ###reference_10###]. Significantly, trajectories play a pivotal role in capturing the motion and interaction of objects within a setting.\nConcurrently, labels are essential in differentiating individual trajectories and conveying trajectory-related information.\nIn particular, based on the theory of labelled RFSs, the generalised labelled multi-Bernoulli (GLMB) filter [16 ###reference_16###, 17 ###reference_17###] stands as the inaugural exact closed-form solution for multi-object tracking, efficiently approximated using Gibbs sampling [18 ###reference_18###, 19 ###reference_19###]. Owing to its reliability and adaptability, the GLMB filter has been utilised in diverse applications like lineage tracking [7 ###reference_7###, 20 ###reference_20###], track-before-detect [21 ###reference_21###, 22 ###reference_22###], distributed MOT [23 ###reference_23###, 24 ###reference_24###], path planning [25 ###reference_25###, 26 ###reference_26###, 27 ###reference_27###], multi-sensor [28 ###reference_28###], multi-scan [29 ###reference_29###], and large-scale [30 ###reference_30###] MOT. The LMB filter [31 ###reference_31###], an approximation of GLMB\u2019s first moment, significantly curtails association hypotheses by categorising tracks and measurements into distinct, statistically independent groups. 
Yet, being a GLMB filter derivative, the LMB filter can encounter issues like track fragmentation and label switching.\nSmoothing produces superior tracking performance compared to filtering since smoothing considers the full history of the states up to the current time, whereas filtering only considers the most recent state [32 ###reference_32###, 33 ###reference_33###]. Another weakness of filtering compared to smoothing in the context of MOT is the track fragmentation and label switching since low existence probability tracks might be killed and re-born with new track labels [29 ###reference_29###]. GLMB smoothing has been proposed in [29 ###reference_29###] by solving multi-scan MOT problems via Gibbs sampling, and further generalised to multi-scan multi-sensor GLMB in [34 ###reference_34###]. However, computing an exact multi-object posterior in the multi-scan MOT is an NP-hard problem and is typically approximated via Gibbs sampling to solve the multi-dimensional assignment problems. An alternative approach entails a proficient GLMB smoothing algorithm centred on multi-object trajectory estimates rather than multi-object posteriors, as presented in [35 ###reference_35###]. This method maintains computational complexity akin to the conventional GLMB filter but offers considerable enhancements in tracking performance.\nIn this work, drawing inspiration from [35 ###reference_35###], we introduce a novel smooth trajectory estimator algorithm for LMB, termed STE-LMB. This algorithm performs on multi-object trajectory estimates rather than multi-object posteriors, leveraging the unique architecture of the GLMB filter. Specifically, during the LMB filter\u2019s update phase, while converting the GLMB density as an LMB density, we concurrently record the best association map linking each labelled track to the measurements at the present time step. This process yields a comprehensive association history for each labelled track, enabling the crafting of a smooth trajectory estimator across each labelled estimate\u2019s full trajectory, anchored on all associated measurements up to the present time. Thus, our smooth trajectory estimator executes forward filtering from an object\u2019s birth time to the present and then backward smoothing from the current time to its inception for every labelled track. This methodology not only mitigates track fragmentation and label switching but also curtails localisation discrepancies typically seen in the conventional LMB filter.\nThis paper is structured as follows: Section II delivers essential background on labelled RFSs and the LMB/GLMB filters. Our proposed smooth trajectory estimator algorithm is detailed in Section III. Numerical experiments and a comparative analysis with the standard LMB filter are covered in Section IV. Finally, Section V summarises our conclusions."
10
+ },
11
+ {
12
+ "section_id": "2",
13
+ "parent_section_id": null,
14
+ "section_name": "II Background",
15
+ "text": "The section offers foundational knowledge on labelled RFSs, and an overview of the associated LMB/GLMB filter."
16
+ },
17
+ {
18
+ "section_id": "2.1",
19
+ "parent_section_id": "2",
20
+ "section_name": "II-A Notations",
21
+ "text": "Using the notation from [31 ###reference_31###], lowercase characters like denote single-object states, whereas uppercase ones like symbolise multi-object states. Boldface characters such as represent labelled states and their densities, and blackboard characters like stand for spaces. For any set , signifies the class of its finite subsets. The indicator function of is represented as and its cardinality as . For a function , its multi-object exponential is defined as , with . We also introduce the generalised Kronecker delta function , which is one if and zero otherwise. The inner product is concisely represented as ."
22
+ },
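The HTML-to-JSON extraction stripped the symbols from the notation paragraph above. As a hedged reconstruction of the standard finite-set-statistics notation used in the cited labelled-RFS literature (not a verbatim copy of the paper's equations), the multi-object exponential, generalised Kronecker delta, and inner product read:

```latex
h^{X} \triangleq \prod_{x \in X} h(x), \qquad h^{\emptyset} \triangleq 1, \qquad
\delta_{Y}[X] \triangleq
\begin{cases}
  1, & X = Y, \\
  0, & \text{otherwise},
\end{cases}
\qquad
\langle f, g \rangle \triangleq \int f(x)\, g(x)\, \mathrm{d}x .
```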
23
+ {
24
+ "section_id": "2.2",
25
+ "parent_section_id": "2",
26
+ "section_name": "II-B Labelled Random Finite Sets",
27
+ "text": "Labelled multi-object representation. At time , a surviving object is described by a labelled state . Here, the state , while the distinct label in combines its birth time, , and a unique identifier, , distinguishing objects with identical birth times. The trajectory for the shared label from time to is the time-sequential sequence . The collection of surviving objects, each with a unique label, at time is encapsulated by the labelled multi-object state . The set of labels from is symbolised as .\nWithin the window interval for a sequence of labelled multi-object states , the trajectory corresponding to the label is defined as follows [29 ###reference_29###]:\nwhere the start and end times of label within the window interval are denoted by and , respectively. As a result, the sequence can be represented by\nLabelled multi-Bernoulli (LMB) RFS. An LMB RFS denoted as is characterised by the parameter set . Here, signifies the existence probability of label , whilst represents the spatial distribution corresponding to label , ensuring that . The LMB density is detailed in [31 ###reference_31###]:\nwhere is a distinct label indicator; ; ; . For clarity, we represent the LMB density as .\n-Generalised labelled multi-Bernoulli (-GLMB) RFS. A -GLMB RFS delineates the statistical interrelations between objects by accounting for multiple hypotheses. These consist of a set of track labels, denoted as , and a respective association history symbolised as . The -GLMB density is detailed in[18 ###reference_18###]\nFor simplicity, we denote the -GLMB density as ."
28
+ },
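The LMB and delta-GLMB densities referenced above also lost their symbols in extraction. The following are their standard forms from the labelled-RFS literature (e.g. the works cited as [31] and [18]); they are consistent with the verbal definitions in the text but are a reconstruction rather than the paper's own numbered equations. The LMB RFS has parameter set $\{(r^{(\ell)}, p^{(\ell)})\}_{\ell \in \mathbb{L}}$:

```latex
% LMB density
\boldsymbol{\pi}(\mathbf{X}) = \Delta(\mathbf{X})\, w\big(\mathcal{L}(\mathbf{X})\big)\, p^{\mathbf{X}},
\qquad
w(L) = \prod_{i \in \mathbb{L}} \big(1 - r^{(i)}\big)
       \prod_{\ell \in L} \frac{1_{\mathbb{L}}(\ell)\, r^{(\ell)}}{1 - r^{(\ell)}},
\qquad
p(x, \ell) = p^{(\ell)}(x),

% delta-GLMB density: a mixture over hypotheses (I, \xi)
\boldsymbol{\pi}(\mathbf{X}) = \Delta(\mathbf{X})
  \sum_{(I, \xi) \in \mathcal{F}(\mathbb{L}) \times \Xi}
  w^{(I, \xi)}\, \delta_{I}\big[\mathcal{L}(\mathbf{X})\big]\, \big[p^{(\xi)}\big]^{\mathbf{X}},
```

where $\Delta(\mathbf{X}) = \delta_{|\mathbf{X}|}[|\mathcal{L}(\mathbf{X})|]$ is the distinct label indicator.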
29
+ {
30
+ "section_id": "2.3",
31
+ "parent_section_id": "2",
32
+ "section_name": "II-C Generalised Labelled Multi-Bernoulli (GLMB) Filter",
33
+ "text": "Given the current -GLMB filter density at time as in (6 ###reference_###) and the LMB birth model, the -GLMB density at time (indicated by ) based on measurement set is defined accordingly [18 ###reference_18###]:\nwhere , , , and\nHere, represents the collection of positive 1-1 mappings . The survival probability of the label is given by , and the single-object state transition density is defined by . The birth space at is . The birth probability associated with the label is , and its related spatial distribution is ;\nrepresents the detection probability corresponding to the label . The spatial clutter intensity, distributed as Poisson, is denoted by . Lastly, the likelihood of producing measurement from the single-object state associated with label is given by ."
34
+ },
35
+ {
36
+ "section_id": "3",
37
+ "parent_section_id": null,
38
+ "section_name": "III The Proposed Method",
39
+ "text": ""
40
+ },
41
+ {
42
+ "section_id": "3.1",
43
+ "parent_section_id": "3",
44
+ "section_name": "III-A LMB Filtering Recursion",
45
+ "text": "The LMB filter approximates the GLMB filter by matching the first moment (PHD); hence, the LMB filter is often referred to as the PHD filter for multi-object trajectory estimation.\nSuppose the LMB filtering density at the current time is . Since the LMB filter is closed under the prediction but not the update step, given the LMB birth model, a joint prediction and update step with measurement for the LMB density yields the GLMB density,\nwhich is then approximated as an LMB density:\nwith the same first moment as by choosing"
46
+ },
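The moment-matching choice in this subsection (its equations were also stripped in extraction) standardly reads as follows, matching the first moment of the updated GLMB density; this reconstruction follows the LMB filter of [31] and may differ cosmetically from the paper's own numbering:

```latex
r^{(\ell)} = \sum_{(I,\xi) \in \mathcal{F}(\mathbb{L}) \times \Xi}
             w^{(I,\xi)}(Z)\, 1_{I}(\ell),
\qquad
p^{(\ell)}(x) = \frac{1}{r^{(\ell)}}
  \sum_{(I,\xi) \in \mathcal{F}(\mathbb{L}) \times \Xi}
  w^{(I,\xi)}(Z)\, 1_{I}(\ell)\, p^{(\xi)}(x, \ell).
```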
47
+ {
48
+ "section_id": "3.2",
49
+ "parent_section_id": "3",
50
+ "section_name": "III-B Smooth Trajectory Estimator for LMB Filter",
51
+ "text": "Given the LMB density, extracting multi-object estimates using optimal methods like the joint or marginal multi-object estimators is intractable [11 ###reference_11###]. Typically, a less-than-optimal estimator is used by determining the maximum a posterior (MAP) cardinality estimate from the cardinality distribution, denoted as , given by\nFrom the cardinality distribution , we can compute the estimated cardinality , given by\nBy sorting the existence probability vector in descending order, we can choose the foremost labels based on the highest existence probability within and let be the smallest existence probability from these top labels. Therefore, the standard MAP multi-object state estimates at time is computed as follows:\nNotably, in the standard LMB filter, the association map was marginalised over the association space and discarded (see (16 ###reference_###) and (17 ###reference_###)). However, this association map is crucial to constructing the smooth trajectory estimator for estimating trajectory using all measurements and should not be discarded. In this work, we propose to store the best weighted association map of each label , i.e.,\nand the set of the best association map of all labels in is denoted as .\nThe association history of each label is recursively stored from the birth time to the current time , i.e.,\nLet be the set of the best association history of all labels in . As a result, by utilising the entire association history of the trajectory with label , we can efficiently estimate the entire history of this trajectory. The detailed algorithm is provided in Algorithm 1 ###reference_### where is denoted as the recursive multi-object trajectory estimates from time to . In particular, to estimate the entire history of each label , we apply forward filtering via any applicable single-object-tracking filter (e.g., Kalman Filter, Unscented Kalman Filter, or Particle Filter) in lines and then backward smoothing via Rauch\u2013Tung\u2013Striebel (RTS) smoother [36 ###reference_36###] in lines ."
52
+ },
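Algorithm 1's per-label forward filtering and RTS backward smoothing, together with the suboptimal MAP estimator described above, can be sketched in the linear-Gaussian case as follows (in the non-linear scenario the paper substitutes the unscented Kalman filter and unscented RTS smoother). All names and the convention of `None` marking a misdetection are illustrative assumptions:

```python
import numpy as np

def map_label_selection(existence, cardinality_pmf):
    """MAP cardinality estimate, then the top-n labels by existence
    probability (the suboptimal LMB estimator described above).
    existence: dict mapping label -> existence probability."""
    n_hat = int(np.argmax(cardinality_pmf))
    return sorted(existence, key=existence.get, reverse=True)[:n_hat]

def smooth_track(zs, F, Q, H, R, m0, P0):
    """Forward Kalman pass over one label's associated measurements
    (None marks a misdetection), then a Rauch-Tung-Striebel backward pass."""
    m, P = m0, P0
    m_p, P_p, m_f, P_f = [], [], [], []
    for z in zs:
        mp, Pp = F @ m, F @ P @ F.T + Q                 # predict
        if z is None:                                   # misdetection
            m, P = mp, Pp
        else:                                           # Kalman update
            S = H @ Pp @ H.T + R
            K = Pp @ H.T @ np.linalg.inv(S)
            m = mp + K @ (z - H @ mp)
            P = (np.eye(len(mp)) - K @ H) @ Pp
        m_p.append(mp); P_p.append(Pp); m_f.append(m); P_f.append(P)
    m_s, P_s = m_f[:], P_f[:]                           # RTS backward pass
    for k in range(len(zs) - 2, -1, -1):
        G = P_f[k] @ F.T @ np.linalg.inv(P_p[k + 1])
        m_s[k] = m_f[k] + G @ (m_s[k + 1] - m_p[k + 1])
        P_s[k] = P_f[k] + G @ (P_s[k + 1] - P_p[k + 1]) @ G.T
    return m_s, P_s
```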
53
+ {
54
+ "section_id": "4",
55
+ "parent_section_id": null,
56
+ "section_name": "IV Experiments",
57
+ "text": ""
58
+ },
59
+ {
60
+ "section_id": "4.1",
61
+ "parent_section_id": "4",
62
+ "section_name": "IV-A Scenario 1 - Linear",
63
+ "text": "We examine a straightforward linear setting from [17 ###reference_17###], where we track a variable number of mobile objects (as many as ) with different birth and death instances. This occurs within a 2D region measuring m by m. The duration of this scenario stands at s.\n###figure_1### ###figure_2### Object dynamic model: Employing a linear constant velocity model for object dynamics in a 2D setting, each object\u2019s state, , encapsulates its kinematic status. The dynamic density is expressed as , where symbolises a Gaussian density. Given that and , with being a 2x2 identity matrix, denoting the Kronecker tensor product, a sampling interval of s, and m/s2. Each object has a 0.99 survival probability, represented as . For every time increment, the birth density is represented by the LMB density , comprising . Here, and , , , , , .\nMeasurement model: Each detected object with a detection probability generates a noisy 2D position measurement with measurement likelihood , where , is the zero matrix, , m. The measurement set at each time also contains the clutters that follow the Poisson model with a uniform clutter density resulting in an average of clutters per time step. Importantly, due to the linear nature of the dynamic and measurement models, we employ the Kalman filter alongside the conventional RTS smoother for STE-LMB as detailed in Algorithm 1 ###reference_###.\nFig. 1 ###reference_### depicts the estimated trajectories versus true trajectories using a) LMB and b) STE-LMB. Comprehensive comparative analysis, averaged across 100 Monte-Carlo (MC) trials between LMB and STE-LMB, is depicted in Fig. 2 ###reference_###. The results confirm that using the smooth trajectory estimator, we can estimate the multi-object trajectories correctly in terms of cardinality (i.e., the number of time-varying objects), OSPA [37 ###reference_37###] and OSPA(2) [38 ###reference_38###, 30 ###reference_30###] errors."
64
+ },
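A constant-velocity model of the kind described above can be assembled with the Kronecker construction the text mentions. The numeric values below are placeholders (the originals were stripped during extraction), so treat this as a shape-correct sketch rather than the paper's exact configuration:

```python
import numpy as np

T = 1.0          # assumed sampling interval (s); original value stripped
sigma_v = 5.0    # assumed process-noise level; original value stripped

F = np.kron(np.eye(2), np.array([[1.0, T], [0.0, 1.0]]))   # CV transition
B = np.array([[T**2 / 2.0], [T]])
Q = sigma_v**2 * np.kron(np.eye(2), B @ B.T)               # one common discretisation
H = np.kron(np.eye(2), np.array([[1.0, 0.0]]))             # position-only observation
R = 10.0**2 * np.eye(2)                                    # assumed measurement noise
```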
65
+ {
66
+ "section_id": "4.2",
67
+ "parent_section_id": "4",
68
+ "section_name": "IV-B Scenario 2 - Non-Linear",
69
+ "text": "In the following section, we examine a non-linear setting from [31 ###reference_31###] wherein we track an uncertain, fluctuating count of mobile objects (maximally ) that possess different birth and death timings. This takes place within a 2D space spanning m by m. The duration of this scenario is seconds.\n###figure_3### ###figure_4### ###figure_5### Object dynamic model: Given the non-linear dynamics of the object, we adopt the coordinated turn (CT) model. The single object state is denoted by , wherein represents the kinematic state and signifies the turning rate. The transition density for the CT model is defined as . Here, . The matrix is represented as , with m/s2 and rad/s.\nwhere , s is the sampling interval. At every interval, each object progresses to the subsequent time through the dynamic density , having a survival probability of . We adopt an LMB birth model denoted by . Specifically, , , , , , , , .\nMeasurement model: For each detected object , a range-and-bearing measurement is produced with a detection probability . The measurement likelihood is given by where . The matrix is represented by , with m and rad. Each measurement at any given time step is further disrupted with clutters (false alarms). The clutter follows a Poisson distribution with a uniform clutter density , resulting in an average of clutters per observation. Crucially, due to the non-linear characteristics of both the dynamic and measurement models, we utilise the Unscented Kalman filter and the Unscented RTS smoother for the STE-LMB filter, as detailed in Algorithm 1 ###reference_###.\nFig. 3 ###reference_### illustrates the comparison between estimated trajectories and true trajectories in a specific run, using both the LMB and STE-LMB approaches. The detailed comparison results averaged over 100 MC trials, are depicted in Fig. 4 ###reference_###. These results validate that employing the smooth trajectory estimator enables accurate estimation of multi-object trajectories in terms of cardinality (i.e., the number of objects that change over time), OSPA, and OSPA(2) errors. Fig. 5 ###reference_### presents the percentage of the smooth trajectory estimator\u2019s computational time in relation to the total filtering time for both two considered scenarios. The additional computational time from the smooth trajectory estimator is insignificant (i.e., less than ) compared to the total filtering time, which demonstrates the efficiency of our method."
70
+ },
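The coordinated-turn propagation used in this scenario has a standard closed form; the sketch below (the state ordering and the near-zero turn-rate guard are assumptions) is the kind of transition function an unscented filter/smoother would propagate sigma points through:

```python
import numpy as np

def ct_transition(x, T=1.0):
    """Standard coordinated-turn step for state [px, vx, py, vy, omega]."""
    px, vx, py, vy, w = x
    if abs(w) < 1e-9:                        # near-zero turn rate: straight line
        return np.array([px + T * vx, vx, py + T * vy, vy, w])
    s, c = np.sin(w * T), np.cos(w * T)
    return np.array([
        px + (s / w) * vx - ((1.0 - c) / w) * vy,
        c * vx - s * vy,
        py + ((1.0 - c) / w) * vx + (s / w) * vy,
        s * vx + c * vy,
        w,                                   # turn rate evolves via process noise
    ])
```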
71
+ {
72
+ "section_id": "5",
73
+ "parent_section_id": null,
74
+ "section_name": "Conclusion",
75
+ "text": "We have devised an innovative and efficient smooth trajectory estimator for the LMB filter. By adopting the intuitive strategy of retaining the optimal association map during the conversion from the GLMB density to the LMB density, our approach offers an efficient smooth trajectory estimator for the LMB filter. This facilitates accurate detection and tracking of a fluctuating number of mobile objects amidst noisy measurements, whilst substantially reducing label switching and track fragmentation. Experimental outcomes highlight our method\u2019s superiority over the existing LMB filter, achieved with only a marginal increase in computational time."
76
+ }
77
+ ],
78
+ "appendix": [],
79
+ "tables": {},
80
+ "image_paths": {
81
+ "1": {
82
+ "figure_path": "2401.00682v1_figure_1.png",
83
+ "caption": "Figure 1: Scenario 1 (Linear): Truth vs Estimates using a) LMB filter, and b) STE-LMB filter.",
84
+ "url": "http://arxiv.org/html/2401.00682v1/x1.png"
85
+ },
86
+ "2": {
87
+ "figure_path": "2401.00682v1_figure_2.png",
88
+ "caption": "Figure 2: Scenario 1 (Linear) - performance comparison results averaged over 100100100100 Monte Carlo trials: a) Carnality estimation, b) OSPA distance, and c) OSPA(2) distance.",
89
+ "url": "http://arxiv.org/html/2401.00682v1/x2.png"
90
+ },
91
+ "3": {
92
+ "figure_path": "2401.00682v1_figure_3.png",
93
+ "caption": "Figure 3: Scenario 2 (Non-Linear): Truth vs Estimates using a) LMB filter, and b) STE-LMB filter.",
94
+ "url": "http://arxiv.org/html/2401.00682v1/x3.png"
95
+ },
96
+ "4": {
97
+ "figure_path": "2401.00682v1_figure_4.png",
98
+ "caption": "Figure 4: Scenario 2 (Non-Linear) - performance comparison results averaged over 100100100100 Monte Carlo trials: a) Carnality estimation, b) OSPA distance, and c) OSPA(2) distance.",
99
+ "url": "http://arxiv.org/html/2401.00682v1/x4.png"
100
+ },
101
+ "5": {
102
+ "figure_path": "2401.00682v1_figure_5.png",
103
+ "caption": "Figure 5: Percentage of the smooth trajectory estimator\u2019s computational time in relation to the total filtering time averaged over 100100100100 Monte Carlo trials.",
104
+ "url": "http://arxiv.org/html/2401.00682v1/x5.png"
105
+ }
106
+ },
107
+ "validation": true,
108
+ "references": [],
109
+ "url": "http://arxiv.org/html/2401.00682v1"
110
+ }