| text (string, lengths 0–1.92k) | source (string, lengths 32–167) |
|---|---|
In order to investigate the possibility of the recently observed $X(5568)$
being a $0^{+}$ tetraquark state, we improve the study of the various related
configuration states in the framework of the QCD sum rules.
Particularly, to ensure the quality of the analysis, condensates up to
dimension $12$ are included to inspect the convergence of operator product
expansion (OPE) and improve the final results of the studied states. We note
that some condensate contributions could play an important role on the OPE
side. By relaxing the rigid OPE convergence criterion, we arrive at the
numerical value $5.57^{+0.35}_{-0.23}~\mbox{GeV}$ for the scalar-scalar
diquark-antidiquark $0^{+}$ state, which agrees with the experimental data for
the $X(5568)$ and could support its interpretation in terms of a $0^{+}$
tetraquark state with the scalar-scalar configuration. The corresponding result
for the axial-axial current is calculated to be
$5.77^{+0.44}_{-0.33}~\mbox{GeV}$, which is still consistent with the mass of
$X(5568)$ in view of the uncertainty. The feasibility of $X(5568)$ being a
tetraquark state with the axial-axial configuration therefore cannot be
definitely excluded. For the pseudoscalar-pseudoscalar and the vector-vector
cases, the unsatisfactory OPE convergence makes it difficult to find
reasonable working windows to extract the hadronic information.
|
http://arxiv.org/abs/1705.03741v2
|
In this paper we consider zeroth-order pseudodifferential operators on the
circle. We show that inside any interval disjoint from critical values of the
principal symbol, the spectrum is absolutely continuous with possibly finitely
many embedded eigenvalues. We also give an example of embedded eigenvalues.
|
http://arxiv.org/abs/1909.06316v1
|
Here, we study the neutrinoless double-$\beta$ ($0\nu\beta\beta$) decay between the ground state and the first $2^+$ state of the $^{76}\mbox{Ge} \rightarrow {}^{76}\mbox{Se}$, $^{82}\mbox{Se} \rightarrow{}^{82}\mbox{Kr}$, $^{130}\mbox{Te} \rightarrow {}^{130}\mbox{Xe}$ and $^{136}\mbox{Xe} \rightarrow {}^{136}\mbox{Ba}$ systems. The relevant nuclear matrix elements (NMEs) involved in the process are calculated within the formalism of the microscopic interacting boson model (IBM-2). The IBM-2 has been widely used to obtain predictions for nuclear observables, such as the spectrum, but also to explore the possible emergence of beyond-the-Standard-Model effects in the weak interactions of nuclei. Our calculations are carried out by considering the exchange of a Majorana neutrino between two nucleons ($2N$-mechanism). In addition to the NMEs, we calculate the associated leptonic phase-space factors (PSFs) using electron radial wave functions, which are obtained by numerically solving the Dirac equation for a screened Coulomb potential that takes into account the finite nuclear size. By combining our IBM-2 results for the NMEs with those for the PSFs, along with experimental half-life limits, we can set limits on the $\langle \lambda \rangle$ and $\langle \eta \rangle$ couplings of left-right (L-R) models.
|
https://arxiv.org/abs/2301.02007v1
|
We review our results in \cite{ANR22} for the masses and couplings of $T_{ccqq'}\, (J^P=0^+)$ states from (inverse) QCD Laplace sum rules (LSR), their ratios ${\cal R}$ and double ratios of sum rules (DRSR) within stability criteria, including Factorized Next-to-Leading Order (FNLO) Perturbative (PT) corrections and Lowest Order (LO) QCD condensates up to $\langle G^3 \rangle$. We show that combining ${\cal R}$ and DRSR can provide more precise results. Calibrated to the observed $X_c(3872)$ and $T^{1^+}_{cc}(3875)$, ${\cal R}$ combined with DRSR leads to a more precise prediction of $M_{T^{0^+}_{cc}}=3883(3)~\rm{MeV}$. In a similar way, calibrated to the new prediction for $T^{0^+}_{cc}$, ${\cal R}\oplus$DRSR leads to the improved mass predictions $M_{T^{0^+}_{cc\bar{s}\bar{u}}}=3927(6)~\rm{MeV}$ and $M_{T^{0^+}_{cc\bar{s}\bar{s}}}=3993(11)~\rm{MeV}$. We extend our analysis to the bottom sector and compare our results with the ones from different LSR predictions and some other determinations (lattice, quark and potential models, ...) in the literature.
|
https://arxiv.org/abs/2212.10184v1
|
The uncertainty in the nuclear matrix elements (NMEs) of $0\nu\beta\beta$ decay for $^{76}$Ge, $^{82}$Se, $^{128}$Te, $^{130}$Te, and $^{136}$Xe in the self-consistent quasiparticle random phase approximation (QRPA) method is investigated by using eighteen Skyrme interactions supplemented with either a volume- or a surface-type pairing interaction. The NMEs for the isotopes concerned (except $^{136}$Xe) are less sensitive to the particle-hole ($ph$) interactions, while depending strongly on the employed isovector particle-particle ($pp$) pairing interactions, even though the pairing strengths are optimized to the same pairing gap. The results indicate that a precise determination of the isovector $pp$ pairing interaction in the Skyrme energy density functional is important for reducing the uncertainty in the NMEs within the QRPA framework.
|
https://arxiv.org/abs/2302.04423v1
|
We develop the formalism for calculating the decay rate of neutrinoless double beta decay to the $2^+$ excited states within the L-R symmetric model. We consider the effects from induced hadronic currents up to NLO. The QRPA method in a spherical basis is adopted for the nuclear many-body calculation, and the corresponding nuclear matrix elements are given. The phase space factors are also obtained with numerical electron wave functions. Our results suggest that the nuclear matrix elements are nucleus dependent and generally smaller than those for the decay to the ground states. Finally, we give a naive analysis of how current experimental data constrain the L-R symmetric model.
|
https://arxiv.org/abs/2208.08595v2
|
Quantum states are usually fragile, which makes quantum computation less stable than classical computation. Quantum error-correction codes can protect quantum states but need a large number of physical qubits to encode a single logical qubit. Alternatively, protection at the hardware level has recently been developed to maintain the coherence of the quantum information by using symmetries. However, it generally comes at the expense of increased complexity of the quantum devices. In this work, we show that protection at the hardware level can be achieved without increasing the complexity of the devices. The interplay between the spin-orbit coupling and the Zeeman splitting in the semiconductor allows us to tune the Josephson coupling in terms of the spin degree of freedom of Cooper pairs, the hallmark of superconducting spintronics. This leads to the implementation of the parity-protected 0-$\pi$ superconducting qubit with only one highly transparent superconductor-semiconductor Josephson junction, which makes our proposal immune to various fabrication imperfections.
|
https://arxiv.org/abs/2110.07516v1
|
We demonstrate an approach that allows taking videos at very high speeds of
over 100,000 frames per second (fps) by exploiting the fast sampling rate of
the standard rolling-shutter readout mechanism, common to most conventional
sensors, and a compressive-sampling acquisition scheme. Our approach is
directly applied to a conventional imaging system by the simple addition of a
diffuser to the pupil plane, randomly encoding the entire field-of-view to each
camera row, while maintaining diffraction-limited resolution. A short video is
reconstructed from a single camera frame via a compressed-sensing
reconstruction algorithm, exploiting inherent sparsity of the imaged scene.
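As a hedged illustration of that final reconstruction step (the paper's actual algorithm, sensing model, and sparsity basis are not specified here), the sketch below recovers a sparse vector from random compressive measurements with plain iterative soft-thresholding (ISTA); the random matrix A merely stands in for the diffuser-based row encoding, and all names are illustrative.

```python
# Minimal ISTA sketch of compressed-sensing recovery: solve
# min_x 0.5*||A x - y||^2 + lam*||x||_1 by gradient steps plus soft-thresholding.
import numpy as np

def ista(A, y, lam=0.1, n_iter=200):
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - y)           # gradient of the data-fidelity term
        z = x - grad / L                   # gradient step
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold
    return x

# Toy example: recover a sparse signal from compressive "row" measurements.
rng = np.random.default_rng(0)
A = rng.standard_normal((64, 256))         # random encoding (diffuser stand-in)
x_true = np.zeros(256)
x_true[rng.choice(256, 8, replace=False)] = 1.0
x_hat = ista(A, A @ x_true)
```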
|
http://arxiv.org/abs/2004.09614v1
|
Podcasts are a large and growing repository of spoken audio. As an audio format, podcasts are more varied in style and production type than broadcast news, contain more genres than typically studied in video data, and are more varied in style and format than previous corpora of conversations. When transcribed with automatic speech recognition they represent a noisy but fascinating collection of documents which can be studied through the lens of natural language processing, information retrieval, and linguistics. Paired with the audio files, they are also a resource for speech processing and the study of paralinguistic, sociolinguistic, and acoustic aspects of the domain. We introduce the Spotify Podcast Dataset, a new corpus of 100,000 podcasts. We demonstrate the complexity of the domain with a case study of two tasks: (1) passage search and (2) summarization. This is orders of magnitude larger than previous speech corpora used for search and summarization. Our results show that the size and variability of this corpus opens up new avenues for research.
|
https://aclanthology.org/2020.coling-main.519
|
Abstract: Modern electronic and photonic devices rely on single-crystalline thin film semiconductors for high performance and reproducibility. The emerging halide perovskites have extraordinary electronic and photonic properties and can be synthesized via low-cost solution-based methods. They have been used in a variety of devices with performance approaching or exceeding that of devices based on conventional materials. However, their solution-based growth method makes it intrinsically challenging to grow large-scale single-crystalline thin films due to the random nucleation and isotropic growth of the crystal. Here, we report the growth of centimeter-scale perovskite single-crystalline thin films by controlling the nucleation density and growth rate of the crystal under a spatially confined growth condition. The hydrophobic treatment of the substrates inhibits nucleation and accelerates the growth of the single-crystalline thin film, providing enough space for the initial nuclei to grow quickly without touching each other. A single-crystalline perovskite thin film with an aspect ratio of 1000 (1 cm in side length, 10 μm in thickness) has been successfully grown. The low trap density and the high mobility of the as-grown thin film indicate high crystallinity. A photodetector based on the perovskite thin film has achieved a gain of ~10^4, benefiting from the short transit time of the carriers due to the high mobility and small thickness of the active layer. Our work opens up a new route to grow large-scale perovskite single-crystalline thin films, providing a platform to develop high-performance devices.
Cite this article: Deng, YH., Yang, ZQ. & Ma, RM. Growth of centimeter-scale perovskite single-crystalline thin film via surface engineering. Nano Convergence 7, 25 (2020). https://doi.org/10.1186/s40580-020-00236-5
DOI: https://doi.org/10.1186/s40580-020-00236-5
|
https://link.springer.com/article/10.1186/s40580-020-00236-5#Fig1
|
Real-time operation of a software-defined, GPU-based optical receiver is demonstrated over a 100-span straight-line optical link. The performance of minimum-phase Kramers-Kronig 4-, 8-, 16-, 32-, and 64-QAM signals is evaluated at various distances.
|
https://arxiv.org/abs/2104.06311v1
|
We introduce a benchmark of 10,000 instances with heterogeneous characteristics for the capacitated vehicle routing problem. We also provide optimal solutions for almost all of them, along with a generator to produce additional training and validation data. This benchmark aims to permit a more systematic comparison of machine learning based search algorithms on this important problem. We also make recommendations regarding the correct use of this dataset.
|
https://openreview.net/forum?id=yHiMXKN6nTl
|
Subset selection from massive data with noisy information is increasingly
popular for various applications. This problem remains highly challenging, as
current methods are generally slow and sensitive to outliers. To
address the above two issues, we propose an accelerated robust subset selection
(ARSS) method. Specifically in the subset selection area, this is the first
attempt to employ the $\ell_{p}(0<p\leq1)$-norm based measure for the
representation loss, preventing large errors from dominating our objective. As
a result, the robustness against outlier elements is greatly enhanced.
In practice, the data size is generally much larger than the feature length,
i.e. $N\gg L$. Based on this observation, we propose a speedup solver (via ALM
and equivalent derivations) to greatly reduce the computational cost,
theoretically from $O(N^{4})$ to $O(N^{2}L)$. Extensive experiments on ten
benchmark datasets verify that our method not only outperforms
state-of-the-art methods,
but also runs 10,000+ times faster than the most related method.
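As a hedged illustration of the $\ell_{p}$-norm loss named above (not the authors' ALM-based solver, which is not reproduced here), the sketch below contrasts the $\ell_{2}$ and $\ell_{0.5}$ representation losses on a toy matrix with one gross outlier; all function names and dimensions are illustrative.

```python
# Hedged sketch of the l_p (0 < p <= 1) representation loss: residuals enter
# with exponent p, so one outlying sample cannot dominate as it does for p = 2.
import numpy as np

def representation_loss(X, S, C, p=0.5):
    """sum of |X - S C|^p over all entries, i.e. sum_i ||x_i - S c_i||_p^p."""
    R = X - S @ C                          # residual of representing X by subset S
    return np.sum(np.abs(R) ** p)

# Toy check: compare how a gross outlier affects the two losses.
rng = np.random.default_rng(0)
X = rng.standard_normal((5, 100))
X[:, 0] += 100.0                           # corrupt one sample
S = X[:, rng.choice(100, 10, replace=False)]          # a candidate subset
C, *_ = np.linalg.lstsq(S, X, rcond=None)             # least-squares codes
print(representation_loss(X, S, C, p=2.0), representation_loss(X, S, C, p=0.5))
```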
|
http://arxiv.org/abs/1409.3660v4
|
The advent of the James Webb Space Telescope has revealed a wealth of new galaxies just a few hundred Myr after the Big Bang. Some of these galaxies exhibit unusual elemental abundances that are difficult to explain with stellar populations today. While Wolf-Rayet stars in multiple-burst populations, very massive or rapidly-rotating primordial stars, general relativistic explosions of metal-enriched supermassive stars, or the precursors of globular clusters can in principle account for the supersolar nitrogen to oxygen ratios in the galaxies GN-z11 and CEERS 1019, no known stars or supernovae can explain the far higher N/O ratio of 0.46 in GS 3073 at redshift $z =$ 5.55. Here we show that the extreme nitrogen abundances in GS 3073 can be produced by 1000 - 10,000 M$_{\odot}$ primordial (Pop III) stars. We find that these are the only candidates that can account for its large N/O ratios and its C/O and Ne/O ratios. GS 3073 is thus the first conclusive evidence in the fossil abundance record of the existence of supermassive Pop III stars at cosmic Dawn.
|
https://arxiv.org/abs/2502.04435v2
|
Recent advances in speech synthesis have enabled many useful applications like audio directions in Google Maps, screen readers, and automated content generation on platforms like TikTok. However, these systems are mostly dominated by voices sourced from data-rich geographies with personas representative of their source data. Although 3000 of the world's languages are domiciled in Africa, African voices and personas are under-represented in these systems. As speech synthesis becomes increasingly democratized, it is desirable to increase the representation of African English accents. We present Afro-TTS, the first pan-African accented English speech synthesis system able to generate speech in 86 African accents, with 1000 personas representing the rich phonological diversity across the continent for downstream application in Education, Public Health, and Automated Content Creation. Speaker interpolation retains naturalness and accentedness, enabling the creation of new voices.
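The speaker-interpolation step can be pictured as a linear blend in speaker-embedding space; a minimal sketch follows, with the embedding dimension and function names purely illustrative (this is not the Afro-TTS API).

```python
# Minimal sketch of speaker interpolation: blend two speaker embeddings to
# obtain a new voice. Dimensions and names are hypothetical placeholders.
import numpy as np

def interpolate_speakers(emb_a, emb_b, alpha=0.5):
    """alpha=0 gives speaker A, alpha=1 gives speaker B, values in between
    give blended new voices fed to the synthesizer as a speaker embedding."""
    return (1.0 - alpha) * emb_a + alpha * emb_b

emb_a = np.random.default_rng(0).standard_normal(256)   # speaker A embedding
emb_b = np.random.default_rng(1).standard_normal(256)   # speaker B embedding
new_voice = interpolate_speakers(emb_a, emb_b, alpha=0.3)
```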
|
https://arxiv.org/abs/2406.11727v2
|
We report the lowest frequency measurements of gamma-ray burst (GRB) 171205A with the upgraded Giant Metrewave Radio Telescope (uGMRT), covering a frequency range of 250--1450 MHz and a period of $4-937$ days. It is the first GRB afterglow detected in the 250--500 MHz frequency range and the second brightest GRB detected with the uGMRT. Even though the GRB was observed for nearly 1000 days, there is no evidence of a transition to the non-relativistic regime. We also analyse the archival ${\it Chandra}$ X-ray data on day $\sim 70$ and day $\sim 200$, and we find no evidence of a jet break from the analysis of the combined data. We fit synchrotron afterglow emission arising from a relativistic, isotropic, self-similar deceleration as well as from a shock breakout of a wide-angle cocoon. Our data also allow us to discern the nature and the density of the circumburst medium. We find that the density profile deviates from a standard constant-density medium and suggests that the GRB exploded in a stratified, wind-like medium. Our analysis shows that the lowest frequency measurements, covering the absorbed part of the light curves, are critical to unravel the GRB environment. Our data, combined with other published measurements, indicate that the radio afterglow has contributions from two components: a weak, possibly slightly off-axis jet and a surrounding wider cocoon, consistent with the results of Izzo et al. (2019). The cocoon emission likely dominates at early epochs, whereas the jet starts to dominate at later epochs, resulting in flatter radio light curves.
|
https://arxiv.org/abs/2012.05166v1
|
Negotiations began in 1968 for a telescope facility at Perth Observatory for NASA's International Planetary Patrol Network. 1,000 days later the telescope saw first light. The facility bears no resemblance to other observatories. Inside a dome, the telescope sits on a 42 ft tall concrete pier with a wrap-around staircase and concrete legs. The surrounding forest is similar in height to the dome, the design of which is counter-intuitive. This study investigated why, at the risk of compromising performance, there was a departure from standard design, and sought to identify the drivers for the decision making. Observatory visitors learn of a government architect, Tadeusz Andrzejaczek, who made whimsical, successive increases to the height of the structure. Though designed in collaboration with the Acting Government Astronomer, Bertrand Harris, it is improbable that a public-servant architect would have such influence over a scientific installation. Vibration amelioration was met by designing massive strength and rigidity into the structure. Thermal expansion and wind stresses were reduced using features such as shade fins and protective walls, and ground thermal disturbance was addressed by simply making it tall. Seeing measurements were not a significant design consideration. The facility exists with its current floor height because of successive approvals for modification. The initial design was by Harris, and requests for redesigns came from him, but in close negotiation with Andrzejaczek, who desired a structure of futuristic shape and proportions. Harris's designs were influenced by his personal English background and the Old Perth Observatory where he worked as an astronomer. Andrzejaczek's design was influenced by an observatory in his birth city, his alignment with contemporary designers, and his artistic flair.
|
https://arxiv.org/abs/2008.05146v1
|
Strategies for ultrafast optical control of magnetism have been a topic of
intense research for several decades because of the potential impact in
technologies such as magnetic memory, spintronics, and quantum computation, as
well as the opportunities for non-linear optical control and modulation in
applications such as optical isolation and non-reciprocity. Here we report the
first experimental quantification of optically induced magnetization in
plasmonic Au nanoparticles due to the inverse Faraday effect (IFE). The induced
magnetic moment in nanoparticles is found to be ~1,000x larger than that
observed in bulk Au, and ~20x larger than the magnetic moment from optimized
magnetic nanoparticle colloids such as magnetite. Furthermore, the
magnetization and demagnetization kinetics are instantaneous within the
sub-picosecond time resolution of our study, supporting a mechanism of coherent
transfer of angular momentum from the circularly polarized excitation to the
orbital angular momentum of the electron gas.
|
http://arxiv.org/abs/1904.11425v1
|
4D Gaussian Splatting (4DGS) has recently gained considerable attention as a method for reconstructing dynamic scenes. Despite achieving superior quality, 4DGS typically requires substantial storage and suffers from slow rendering speed. In this work, we delve into these issues and identify two key sources of temporal redundancy. (Q1) \textbf{Short-Lifespan Gaussians}: 4DGS uses a large portion of Gaussians with short temporal span to represent scene dynamics, leading to an excessive number of Gaussians. (Q2) \textbf{Inactive Gaussians}: When rendering, only a small subset of Gaussians contributes to each frame. Despite this, all Gaussians are processed during rasterization, resulting in redundant computation overhead. To address these redundancies, we present \textbf{4DGS-1K}, which runs at over 1000 FPS on modern GPUs. For Q1, we introduce the Spatial-Temporal Variation Score, a new pruning criterion that effectively removes short-lifespan Gaussians while encouraging 4DGS to capture scene dynamics using Gaussians with longer temporal spans. For Q2, we store a mask for active Gaussians across consecutive frames, significantly reducing redundant computations in rendering. Compared to vanilla 4DGS, our method achieves a $41\times$ reduction in storage and $9\times$ faster rasterization speed on complex dynamic scenes, while maintaining comparable visual quality. Please see our project page at https://4DGS-1K.github.io.
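As a hedged sketch of the Q2 fix described above (the Spatial-Temporal Variation Score of Q1 is not defined in the abstract and is not implemented here), the snippet below precomputes per-frame boolean masks of active Gaussians from illustrative temporal spans; the data layout and names are not 4DGS-1K's actual ones.

```python
# Hedged sketch of active-Gaussian masking: precompute which Gaussians
# contribute to each frame and rasterize only those.
import numpy as np

def build_active_masks(t_start, t_end, frame_times):
    """Boolean mask per frame: Gaussian i is 'active' at time t if t lies in
    its temporal span [t_start[i], t_end[i]]."""
    return np.stack([(t_start <= t) & (t <= t_end) for t in frame_times])

t_start = np.array([0.0, 0.2, 0.5])
t_end = np.array([1.0, 0.3, 0.9])
masks = build_active_masks(t_start, t_end, frame_times=np.linspace(0, 1, 5))
# At render time only Gaussians with masks[f] == True are passed to the
# rasterizer, skipping the redundant work spent on inactive Gaussians.
```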
|
https://arxiv.org/abs/2503.16422v1
|
Capturing high frame rate and high dynamic range (HFR&HDR) color videos of high-speed scenes with conventional frame-based cameras is very challenging. A higher frame rate is usually achieved by using a shorter exposure time, so the captured video is severely corrupted by noise. Alternating exposures can alleviate the noise issue but sacrifice frame rate by involving long-exposure frames. The neuromorphic spiking camera records high-speed scenes of high dynamic range, without colors, using a completely different sensing mechanism and visual representation. We introduce a hybrid camera system composed of a spiking camera and an alternating-exposure RGB camera to capture HFR&HDR scenes with high fidelity. Our insight is to bring each camera's strengths into full play. The spike frames, with accurate fast motion information encoded, are first reconstructed for motion representation, from which the spike-based optical flows guide the recovery of missing temporal information for middle- and long-exposure RGB images while retaining their reliable color appearances. With the strong temporal constraint estimated from spike trains, both missing and distorted colors across RGB frames are recovered to generate time-consistent and HFR color frames. We collect a new Spike-RGB dataset that contains 300 sequences of synthetic data and 20 groups of real-world data to demonstrate 1000 FPS HDR videos outperforming HDR video reconstruction methods and commercial high-speed cameras.
|
http://openaccess.thecvf.com//content/CVPR2023/html/Chang_1000_FPS_HDR_Video_With_a_Spike-RGB_Hybrid_Camera_CVPR_2023_paper.html
|
Scaling up self-supervised learning has driven breakthroughs in language and vision, yet comparable progress has remained elusive in reinforcement learning (RL). In this paper, we study building blocks for self-supervised RL that unlock substantial improvements in scalability, with network depth serving as a critical factor. Whereas most RL papers in recent years have relied on shallow architectures (around 2 - 5 layers), we demonstrate that increasing the depth up to 1024 layers can significantly boost performance. Our experiments are conducted in an unsupervised goal-conditioned setting, where no demonstrations or rewards are provided, so an agent must explore (from scratch) and learn how to maximize the likelihood of reaching commanded goals. Evaluated on simulated locomotion and manipulation tasks, our approach increases performance by $2\times$ - $50\times$. Increasing the model depth not only increases success rates but also qualitatively changes the behaviors learned.
|
https://arxiv.org/abs/2503.14858v1
|
In this paper we present a new approach for pupil segmentation. It can be computed and trained very efficiently, making it ideal for online use for high speed eye trackers as well as for energy saving pupil detection in mobile eye tracking. The approach is inspired by the BORE and CBF algorithms and generalizes the binary comparison by Haar features. Since these features are intrinsically very susceptible to noise and fluctuating light conditions, we combine them with conditional pupil shape probabilities. In addition, we also rank each feature according to its importance in determining the pupil shape. Another advantage of our method is the use of statistical learning, which is very efficient and can even be used online. https://atreus.informatik.uni-tuebingen.de/seafile/d/8e2ab8c3fdd444e1a135/?p=%2FStatsPupil&mode=list
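As a hedged illustration of the Haar-style binary comparisons the approach builds on (the paper's actual feature set, ranking, and shape priors are not reproduced), the sketch below emits one bit by comparing two rectangle sums computed from an integral image; the rectangle layout is illustrative.

```python
# Minimal sketch of a Haar-like binary comparison feature: compare mean-free
# rectangle sums via an integral image and emit one bit.
import numpy as np

def rect_sum(ii, r0, c0, r1, c1):
    """Sum of pixels in [r0:r1, c0:c1) from an integral image with zero border."""
    return ii[r1, c1] - ii[r0, c1] - ii[r1, c0] + ii[r0, c0]

def haar_bit(img, rect_a, rect_b):
    ii = np.pad(img, ((1, 0), (1, 0))).cumsum(0).cumsum(1)  # integral image
    return int(rect_sum(ii, *rect_a) > rect_sum(ii, *rect_b))

img = np.random.default_rng(0).integers(0, 256, (64, 64)).astype(float)
bit = haar_bit(img, rect_a=(10, 10, 20, 20), rect_b=(30, 30, 40, 40))
```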
|
https://arxiv.org/abs/2102.01921v1
|
In digital cameras, we find a major limitation: the image and video form inherited from a film camera obstructs it from capturing the rapidly changing photonic world. Here, we present vidar, a bit sequence array where each bit represents whether the accumulation of photons has reached a threshold, to record and reconstruct the scene radiance at any moment. By employing only consumer-level CMOS sensors and integrated circuits, we have developed a vidar camera that is 1,000x faster than conventional cameras. By treating vidar as spike trains in biological vision, we have further developed a spiking neural network-based machine vision system that combines the speed of the machine and the mechanism of biological vision, achieving high-speed object detection and tracking 1,000x faster than human vision. We demonstrate the utility of the vidar camera and the super vision system in an assistant referee and target pointing system. Our study is expected to fundamentally revolutionize the image and video concepts and related industries, including photography, movies, and visual media, and to unseal a new spiking neural network-enabled speed-free machine vision era.
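A minimal sketch of the bit-generation principle described above: each pixel accumulates photons and emits a 1 whenever the accumulation crosses a threshold. All parameter values are illustrative, and the real sensor performs this in analog circuitry.

```python
# Hedged sketch of vidar bit generation: integrate photon flux per pixel, fire
# a bit at threshold crossings, subtract the threshold, and keep the remainder.
import numpy as np

def vidar_bits(photon_flux, n_steps, threshold=1.0, dt=1e-5):
    """photon_flux: per-pixel arrival rate; returns an (n_steps, H, W) bit array."""
    acc = np.zeros_like(photon_flux)
    bits = np.zeros((n_steps,) + photon_flux.shape, dtype=np.uint8)
    for t in range(n_steps):
        acc += photon_flux * dt            # photon accumulation this interval
        fired = acc >= threshold
        bits[t] = fired
        acc[fired] -= threshold            # reset by subtraction
    return bits

# Brighter pixels fire more often, so bit density encodes scene radiance and
# radiance can be re-estimated at any moment from recent inter-spike intervals.
flux = np.array([[1e4, 5e4], [2e4, 8e4]]) # photons per second (toy values)
bits = vidar_bits(flux, n_steps=1000)
```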
|
https://arxiv.org/abs/2201.09302v1
|
Scenario generation is one of the essential steps in scenario-based testing and, therefore, a significant part of the verification and validation of driver assistance functions and autonomous driving systems. However, the term scenario generation is used for many different methods, e.g., extraction of scenarios from naturalistic driving data or variation of scenario parameters. This survey aims to give a systematic overview of different approaches, establish different categories of scenario acquisition and generation, and show that each group of methods has typical input and output types. It shows that although the term is often used throughout literature, the evaluated methods use different inputs and the resulting scenarios differ in abstraction level and from a systematical point of view. Additionally, recent research and literature examples are given to underline this categorization.
|
https://arxiv.org/abs/2304.10850v1
|
A deep learning model capable of solving any Sudoku grid (so far).
The model exploits the symmetry of the Sudoku grid and solves it iteratively, filling in digits step by step. A trial-and-error algorithm is also used if the model gets stuck on a step.
|
https://www.linkedin.com/posts/sebastien-guissart_you-didnt-expect-it-but-here-it-is-after-activity-7281942649626877952-27Pe?utm_source=share&utm_medium=member_desktop
|
The recent development of reasoning language models (RLMs) represents a novel evolution in large language models. In particular, the recent release of DeepSeek-R1 has generated widespread social impact and sparked enthusiasm in the research community for exploring the explicit reasoning paradigm of language models. However, the implementation details of the released models have not been fully open-sourced by DeepSeek, including DeepSeek-R1-Zero, DeepSeek-R1, and the distilled small models. As a result, many replication studies have emerged that aim to reproduce the strong performance achieved by DeepSeek-R1, reaching comparable performance through similar training procedures and fully open-source data resources. These works have investigated feasible strategies for supervised fine-tuning (SFT) and reinforcement learning from verifiable rewards (RLVR), focusing on data preparation and method design, and have yielded various valuable insights. In this report, we provide a summary of recent replication studies to inspire future research. We primarily focus on SFT and RLVR as two main directions, introducing the details of data construction, method design and training procedures in current replication studies. Moreover, we summarize the key findings from the implementation details and experimental results reported by these studies. We also discuss additional techniques for enhancing RLMs, highlighting the potential of expanding the application scope of these models, and discuss the challenges in development. With this survey, we aim to help researchers and developers of RLMs stay updated with the latest advancements, and to inspire new ideas to further enhance RLMs.
|
https://arxiv.org/abs/2505.00551v3
|
Effective driving style analysis is critical to developing human-centered intelligent driving systems that consider drivers' preferences. However, the approaches and conclusions of most related studies are diverse and inconsistent because no unified datasets tagged with driving styles exist as a reliable benchmark. The absence of explicit driving style labels makes verifying different approaches and algorithms difficult. This paper provides a new benchmark by constructing a natural driving style dataset (100-DrivingStyle) tagged with the subjective evaluation of 100 drivers' driving styles. In this dataset, the subjective quantification of each driver's driving style comes from the drivers themselves and from an expert, according to a Likert-scale questionnaire. The testing routes are selected to cover various driving scenarios, including highways, urban roads, highway ramps, and signalized traffic. The collected driving data consist of lateral and longitudinal manipulation information, including steering angle, steering speed, lateral acceleration, throttle position, throttle rate, brake pressure, etc. This dataset is the first to provide detailed manipulation data with driving-style tags, and we demonstrate its benchmark function using six classifiers. The 100-DrivingStyle dataset is available via https://github.com/chaopengzhang/100-DrivingStyle-Dataset
|
https://arxiv.org/abs/2406.07894v1
|
The issue of hallucinations in large language models (LLMs) remains a critical barrier to the adoption of AI in enterprise and other high-stakes applications. Despite advancements in retrieval-augmented generation (RAG) systems, current state-of-the-art methods fail to achieve more than 80% accuracy in generating faithful and factually correct outputs, even when provided with relevant and accurate context. In this work, we introduce Acurai, a novel systematic approach that achieves 100% hallucination-free responses in LLMs by reformatting queries and context data prior to input. Leveraging a deep understanding of LLM internal representations, the importance of noun-phrase dominance, and the role of discrete functional units (DFUs), Acurai ensures alignment between input context and generated output. We validate this method using the RAGTruth corpus, demonstrating its ability to eliminate 100% hallucinations for both GPT-4 and GPT-3.5 Turbo. Acurai sets a new standard for achieving consistent, accurate, and faithful AI responses, marking a significant step forward in the development of trustworthy AI systems.
|
https://arxiv.org/abs/2412.05223v2
|
In this paper, we demonstrate the communication capabilities of light-fidelity (LiFi) systems based on high-brightness and high-bandwidth integrated laser-based sources in a surface mount device (SMD) packaging platform. The laser-based source is able to deliver 450 lumens of white-light illumination, and the resultant light brightness is over 1000 cd/mm^2. It is demonstrated that a wavelength division multiplexing (WDM) LiFi system with ten parallel channels is able to deliver over 100 Gbps data rate with the assistance of Volterra-filter-based nonlinear equalisers. In addition, an aggregated transmission data rate of 4.8 Gbps has been achieved over a link distance of 500 m with the same type of SMD light source. This work demonstrates the scalability of LiFi systems that employ laser-based light sources, particularly in their capacity to enable high-speed short-range as well as long-range data transmission.
|
https://arxiv.org/abs/2402.16144v1
|
Emerging communication and cryptography applications call for reliable, fast, unpredictable random number generators. Quantum random number generation (QRNG) allows for the creation of truly unpredictable numbers thanks to the inherent randomness available in quantum mechanics. A popular approach is using the quantum vacuum state to generate random numbers. While convenient, this approach was generally limited in speed compared to other schemes. Here, through custom co-design of opto-electronic integrated circuits and side-information reduction by digital filtering, we experimentally demonstrated an ultrafast generation rate of 100 Gbps, setting a new record for vacuum-based quantum random number generation by one order of magnitude. Furthermore, our experimental demonstrations are well supported by an upgraded device-dependent framework that is secure against both classical and quantum side-information and that also properly considers the non-linearity in the digitization process. This ultrafast secure random number generator in the chip-scale platform holds promise for next generation communication and cryptography applications.
|
https://arxiv.org/abs/2209.04339v2
|
We demonstrated for the first time quantum-safe high-speed 100 Gbps site-to-site IPsec tunnels secured using Quantum Key Distribution (QKD) technology. The demonstration was conducted between two JPMorgan Chase Data Centers (DCs) in an air-gapped environment over 46 km of deployed telecom fiber across Singapore, achieving 45 days of continuous operation. Two different Virtual Private Network (VPN) tunnel configurations were tested: (1) a QKD-secured VPN tunnel configuration with a maximum throughput of 80 Gbps, and (2) a multi-VPN tunnel configuration with 12 QKD-secured VPN tunnels at a throughput of 8.39 Gbps per tunnel, resulting in an aggregated throughput of 99.62 Gbps across all tunnels. For the QKD system performance, we achieved an average Secret Key Rate (SKR) of 7.4 kbps (about 29 AES-256 keys per second), an average Quantum Bit Error Rate (QBER) of 0.8% and an average visibility of 98.6%. We utilized the ETSI-QKD-014 REST-based Application Programming Interface (API) to exchange the QKD-generated keys between the key management server in the QKD system and the next-generation firewalls in order to encrypt and decrypt the data. The data was encrypted by the quantum-safe keys using the AES-256-GCM cipher suite with a key refresh rate of 120 seconds, without affecting the VPN tunnel connectivity and performance.
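For readers unfamiliar with the key-delivery step, the sketch below shows what an exchange of the kind described above typically looks like, assuming the standard endpoint layout of ETSI GS QKD 014; the KME hostname, SAE identifiers, and TLS credential paths are placeholders, not the values of the deployed system.

```python
# Hedged sketch of an ETSI GS QKD 014 key-delivery exchange. All hostnames,
# SAE IDs, and credential paths are hypothetical placeholders.
import requests

KME = "https://kme.example.net"          # key management entity (placeholder)

# Master side: request one 256-bit key for AES-256-GCM (re-requested at each
# key refresh) from the enc_keys endpoint for the peer (slave) SAE.
resp = requests.get(
    f"{KME}/api/v1/keys/firewall-B/enc_keys",
    params={"number": 1, "size": 256},
    cert=("client.crt", "client.key"),   # mutual-TLS client credentials
    verify="kme-ca.crt",
)
key = resp.json()["keys"][0]             # {"key_ID": ..., "key": <base64>}

# Slave side: fetch the same key material by key_ID from the dec_keys endpoint
# published under the master SAE's identifier.
peer = requests.get(
    f"{KME}/api/v1/keys/firewall-A/dec_keys",
    params={"key_ID": key["key_ID"]},
    cert=("client.crt", "client.key"),
    verify="kme-ca.crt",
)
```

Both endpoints then load the shared key into the AES-256-GCM tunnel; the 120-second refresh in the abstract corresponds to repeating this exchange on a timer.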
|
https://arxiv.org/abs/2405.04415v1
|
Demands on Field-Programmable Gate Array (FPGA) data transport have been increasing over the years as frame sizes and refresh rates increase. As bandwidth requirements increase, the ability to implement data transport protocol layers using "soft" programmable logic becomes harder, and implementations start to require hardened IP blocks. To reduce the number of physical links and interconnects, it is common for data acquisition systems to require interleaving of streams on the same link (e.g. streaming data and streaming register access). This paper presents a way to leverage existing FPGA hardened IP blocks to achieve a robust, low latency 100 Gb/s point-to-point link with minimal programmable logic overhead, geared towards the needs of data acquisition systems with interleaved streaming requirements.
|
https://arxiv.org/abs/2203.15671v3
|
In this paper, we experimentally demonstrate that a silicon dual-drive
Mach-Zehnder modulator (DD-MZM) has great potential for next-generation data
center interconnections (DCIs). For intra-data center interconnections, 120
Gb/s Nyquist 4-ary pulse amplitude modulation (PAM-4) signal is successfully
generated with a silicon DD-MZM operating at C-band and transmitted over 2 km
standard single-mode fiber (SSMF) with a bit error rate (BER) of $5.55\times10^{-4}$. For
inter-data center interconnections, single sideband (SSB) modulation is chosen
to avoid power fading caused by fiber chromatic dispersion and square-law
detection. We report the generation and transmission of 112 Gb/s Nyquist SSB
PAM-4 signal by using the same silicon DD-MZM and Kramers-Kronig (KK) direct
detection. A two-tap digital post filter and maximum likelihood sequence
detection (MLSD) are applied to compensate for the limited system bandwidth.
After 80 km SSMF transmission, the BER is $2.46\times10^{-3}$, which is below the 7% HD-FEC
threshold of $3.8\times10^{-3}$. To the best of our knowledge, our work reports the
highest single-lane bitrate of 80 km SSB transmission based on a silicon
DD-MZM. Our study also shows the feasibility of silicon photonic modulator for
DCI applications in the future.
|
http://arxiv.org/abs/1811.11096v1
|
An integrated hybrid thin-film lithium niobate (TFLN) electro-optic Mach-Zehnder modulator (MZM) is shown at near-infrared wavelengths. The design uses TFLN bonded to planarized silicon nitride waveguide circuits, and does not require etching or patterning of TFLN. The push-pull MZM achieves a half-wave voltage length product ($V_\pi L$) of 0.8 V$.$cm at 784 nm. MZM devices with 0.4 cm and 0.8 cm modulation length show a broadband electro-optic response with a 3 dB bandwidth beyond 100 GHz, with the latter showing a bandwidth to half-wave voltage ratio of 100 GHz/V.
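The quoted 100 GHz/V figure of merit follows directly from the numbers in the abstract: for the 0.8 cm device,
$$V_\pi = \frac{V_\pi L}{L} = \frac{0.8~\mathrm{V\cdot cm}}{0.8~\mathrm{cm}} = 1.0~\mathrm{V}, \qquad \frac{f_{3\,\mathrm{dB}}}{V_\pi} \geq \frac{100~\mathrm{GHz}}{1.0~\mathrm{V}} = 100~\mathrm{GHz/V}.$$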
|
https://arxiv.org/abs/2211.13348v1
|
Electro-optic modulators provide a key function in optical transceivers and, increasingly, in photonic programmable Application Specific Integrated Circuits (ASICs) for machine learning and signal processing. However, both foundry-ready silicon-based modulators and conventional devices based on lithium niobate fall short of simultaneously providing high chip packaging density and fast speed. Current-driven ITO-based modulators have the potential to achieve both, enabled by efficient light-matter interactions. Here, we introduce micrometer-compact Mach-Zehnder interferometer (MZI) based modulators capable of exceeding 100 GHz switching rates. Integrating ITO thin films atop a photonic waveguide yields a spectrally broadband and compact MZI phase shifter. Remarkably, this allows integrating more than 3500 of these modulators within the same chip area as a single silicon MZI modulator. The modulator design introduced here features a holistic photonic, electronic, and RF-based optimization and includes an asymmetric MZI tuning step to optimize the Extinction Ratio (ER) to Insertion Loss (IL) ratio, as well as a dielectric thickness sweep to balance the tradeoffs between ER and speed. Driven by CMOS-compatible bias voltage levels, this device is the first to address next-generation modulator demands for processors of the machine intelligence revolution, in addition to edge and cloud computing demands as well as optical transceivers alike.
|
https://arxiv.org/abs/2112.10926v2
|
Predicting the performance of LLMs on individual task instances is essential to ensure their reliability in high-stakes applications. One possibility is to evaluate the considered LLM on a set of task instances and train an assessor to predict its performance based on features of the instances. However, this approach requires evaluating each new LLM on a sufficiently large set of task instances to train an assessor specific to it. In this work, we leverage the evaluation results of previously tested LLMs to reduce the number of evaluations required to predict the performance of a new LLM. In practice, we propose to test the new LLM on a small set of reference instances and to train a generic assessor which predicts the performance of the LLM on an instance based on the performance of the former on the reference set and features of the instance of interest. We conduct empirical studies on HELM-Lite and KindsOfReasoning, a collection of existing reasoning datasets that we introduce, where we evaluate all instruction-fine-tuned OpenAI models up to the January 2024 version of GPT-4. When predicting performance on instances with the same distribution as those used to train the generic assessor, we find that this achieves performance comparable to the LLM-specific assessors trained on the full set of instances. Additionally, we find that randomly selecting the reference instances performs as well as some advanced selection methods we tested. For out-of-distribution prediction, however, no clear winner emerges and the overall performance is worse, suggesting that the inherent predictability of LLMs is low.
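As a hedged sketch of the generic-assessor idea (the feature choices, the classifier, and all dimensions here are illustrative, not the paper's setup), the snippet below trains one model that maps an LLM's reference-set results plus instance features to a per-instance success prediction.

```python
# Hedged sketch of a generic assessor: predict whether an LLM succeeds on an
# instance from (a) that LLM's pass/fail pattern on a small reference set and
# (b) features of the instance itself. All data here is synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n_llms, n_ref, n_inst, d = 20, 30, 500, 16
ref_perf = rng.integers(0, 2, (n_llms, n_ref))   # old LLMs' reference signatures
inst_feat = rng.standard_normal((n_inst, d))     # instance features
labels = rng.integers(0, 2, (n_llms, n_inst))    # pass/fail on all instances (toy)

# Training rows pair an LLM's reference signature with one instance's features.
X = np.concatenate(
    [np.repeat(ref_perf, n_inst, axis=0), np.tile(inst_feat, (n_llms, 1))], axis=1)
y = labels.ravel()
assessor = LogisticRegression(max_iter=1000).fit(X, y)

# New LLM: evaluate only on the reference set, then predict per-instance success.
new_sig = rng.integers(0, 2, n_ref)
X_new = np.concatenate([np.tile(new_sig, (n_inst, 1)), inst_feat], axis=1)
pred = assessor.predict_proba(X_new)[:, 1]
```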
|
https://arxiv.org/abs/2409.03563v1
|
Pre-training is notoriously compute-intensive and academic researchers are notoriously under-resourced. It is, therefore, commonly assumed that academics can't pre-train models. In this paper, we seek to clarify this assumption. We first survey academic researchers to learn about their available compute and then empirically measure the time to replicate models on such resources. We introduce a benchmark to measure the time to pre-train models on given GPUs and also identify ideal settings for maximizing training speed. We run our benchmark on a range of models and academic GPUs, spending 2,000 GPU-hours on our experiments. Our results reveal a brighter picture for academic pre-training: for example, although Pythia-1B was originally trained on 64 GPUs for 3 days, we find it is also possible to replicate this model (with the same hyper-parameters) in 3x fewer GPU-days: i.e. on 4 GPUs in 18 days. We conclude with a cost-benefit analysis to help clarify the trade-offs between price and pre-training time. We believe our benchmark will help academic researchers conduct experiments that require training larger models on more data. We fully release our codebase at: https://github.com/apoorvkh/academic-pretraining.
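The quoted saving is a simple GPU-day comparison using only numbers from the abstract:
$$64~\text{GPUs} \times 3~\text{days} = 192~\text{GPU-days}, \qquad 4~\text{GPUs} \times 18~\text{days} = 72~\text{GPU-days},$$
and $192/72 \approx 2.7$, i.e. roughly the stated 3x fewer GPU-days.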
|
https://arxiv.org/abs/2410.23261v1
|
A target using a paisley pattern generates 100-kT-level magnetic fields. Laser irradiation induces local charge separation on the target, which creates surface currents along the concave surface, generating a magnetic field. For a laser intensity of $10^{21}$ W/cm$^2$, the target generates a 150-kT magnetic field. We developed a simple model to describe the magnetic field as a function of laser intensity and target radius. A double paisley configuration extends the lifetime of the magnetic field to the picosecond scale. The paisley design generates comparable results even if it is simplified. Thus, it is a robust and modular target suitable for magnetic field applications such as 100-kT magnetic field generation and magnetic reconnection.
|
https://arxiv.org/abs/2202.00193v1
|
Long-context capability is considered one of the most important abilities of LLMs, as a truly long context-capable LLM enables users to effortlessly process many originally exhausting tasks -- e.g., digesting a long-form document to find answers vs. directly asking an LLM about it. However, existing real-task-based long-context evaluation benchmarks have two major shortcomings. First, benchmarks like LongBench often do not provide proper metrics to separate long-context performance from the model's baseline ability, making cross-model comparison unclear. Second, such benchmarks are usually constructed with fixed input lengths, which limits their applicability across different models and fails to reveal when a model begins to break down. To address these issues, we introduce a length-controllable long-context benchmark and a novel metric that disentangles baseline knowledge from true long-context capabilities. Experiments demonstrate the superiority of our approach in effectively evaluating LLMs.
|
https://arxiv.org/abs/2505.19293v1
|
An efficient error reconciliation scheme is important for post-processing of
quantum key distribution (QKD). Recently, a reconciliation algorithm based on
multi-matrix low-density parity-check codes, which offers remarkable prospects
for high-efficiency information reconciliation, was proposed. This paper
concerns the improvement of reconciliation performance. The multi-matrix
algorithm is implemented and optimized on the graphics processing unit (GPU)
to obtain high reconciliation throughput. Experimental results indicate that
the GPU-based algorithm raises the reconciliation throughput to an average of
85.67 Mbps and a maximum of 102.084 Mbps with typical code rate and
efficiency. To our knowledge, this is the best reconciliation performance on a
GPU platform.
|
http://arxiv.org/abs/2001.07979v1
|
Metrics can be used by businesses to make more objective decisions based on data. Software startups in particular are characterized by the uncertain or even chaotic nature of the contexts in which they operate. Using data in the form of metrics can help software startups to make the right decisions amidst uncertainty and limited resources. However, whereas conventional business metrics and software metrics have been studied in the past, metrics in the specific context of software startups are not widely covered within academic literature. To promote research in this area and to create a starting point for it, we have conducted a multi-vocal literature review focusing on practitioner literature in order to compile a list of metrics used by software startups. Said list is intended to serve as a basis for further research in the area, as the metrics in it are based on suggestions made by practitioners and not empirically verified.
|
https://arxiv.org/abs/1901.04819v1
|
Based on the dual-chirped optical parametric amplification and type-I BiB$_3$O$_6$(BiBO) crystals, the generation of $>$100 mJ, 10.4 fs, 10 Hz, carrier-to-envelope phase (CEP)-stable laser pulses, which are centered at 1.7 $\mu$m, is demonstrated; it produces a peak power of 10 TW. CEP-dependent high harmonic generation is implemented to confirm the sub-two-cycle pulse duration and CEP stabilization of infrared (IR) laser pulses. As far as we know, the obtained pulse energy and peak power represent the highest values for sub-two-cycle CEP-stable IR optical parametric amplification. Additionally, the prospects of achieving high-energy water window isolated attosecond pulses via our developed laser source are discussed.
|
https://arxiv.org/abs/2202.03658v2
|
A scintillating bolometer technology based on $^{100}$Mo-enriched lithium
molybdate (Li$_2$$^{100}$MoO$_4$) crystals has been developed by LUMINEU to
search for neutrinoless double-beta ($0\nu 2\beta$) decay of $^{100}$Mo. The
results of several low temperature tests in underground environments have
proved the reproducibility of high detector performance and crystal
radiopurity: in particular $\sim$5--6~keV FWHM energy resolution and at least
9$\sigma$ rejection of $\alpha$'s in the vicinity of the $0\nu 2\beta$ decay of
$^{100}$Mo (3034 keV) and below 10~$\mu$Bq/kg bulk activity of $^{228}$Th and
$^{226}$Ra. A modest acquired exposure (0.1~kg$\times$yr) is a limiting factor
of the LUMINEU experiment sensitivity to the $0\nu 2\beta$ decay half-life of
$^{100}$Mo ($T_{1/2}$ $\geq$ 0.7$\times$10$^{23}$ yr at 90\% C.L.), however the
two-neutrino $2\beta$ decay has been measured with the best up-to-date
accuracy, $T_{1/2}$ = $\left[6.92 \pm 0.06(\mathrm{stat.}) \pm
0.36(\mathrm{syst.})\right] \times 10^{18}$ yr. The applicability of the
LUMINEU technology for a tonne-scale $0\nu 2\beta$ decay bolometric project
CUPID is going to be demonstrated by the CUPID-0/Mo experiment with $\sim$5~kg
of $^{100}$Mo embedded in forty 0.2~kg Li$_2$$^{100}$MoO$_4$ scintillating
bolometers. A first phase of the experiment with twenty Li$_2$$^{100}$MoO$_4$
detectors is in preparation at the Modane underground laboratory (France) to
start by the end of 2017.
|
http://arxiv.org/abs/1709.07846v1
|
As a consequence of their work on average Selmer ranks of elliptic curves with marked points, Bhargava and Ho proved that $100\%$ of elliptic curves over $\mathbb{Q}$ with an additional marked point have positive rank. In this note we provide an alternate proof which extends the result to global fields of characteristic not two or three.
|
https://arxiv.org/abs/2504.01965v1
|
We study the universal family of odd hyperelliptic curves of genus $g \geq 1$ over $\mathbb{Q}$. We relate the heights of $\mathbb{Q}$-points of Jacobians of curves in this family to the reduction theory of the representation of $\mathrm{SO}_{2g+1}$ on self-adjoint $(2g + 1) \times(2g + 1)$-matrices. Using this theory, we show that in a density 1 subset, the Jacobians of these curves have no nontrivial rational points of small height.
|
https://arxiv.org/abs/2405.10224v1
|
We consider a specific family of analytic functions $g_{\alpha,T}(s)$, satisfying certain functional equations and approximating to linear combinations of the Riemann zeta-function and its derivatives of the form $c_0\zeta(s)+c_1\frac{\zeta'(s)}{\log T}+c_2\frac{\zeta''(s)}{(\log T)^2}+\dots+c_{K}\frac{\zeta^{(K)}(s)}{(\log T)^{K}}$. We also consider specific mollifiers of the form $M(s)D(s)$ for these linear combinations, where $M(s)$ is the classical mollifier, that is, a short Dirichlet polynomial for $1/\zeta(s)$, and the Dirichlet polynomial $D(s)$ is also short but with large and irregular Dirichlet coefficients, and arises from substitution for $w$, in Runge's complex approximation polynomial for $f(w)=\frac1{c_0+w}$, of the Selberg approximation for $\frac{c_1}{\log T}\frac{\zeta'}{\zeta}(s)+\frac{c_2}{(\log T)^2}\frac{\zeta''}{\zeta}(s)+\dots+\frac{c_{K}}{(\log T)^{K}}\frac{\zeta^{(K)}}{\zeta}(s)$ (analogous to Selberg's classical approximation for $\frac{\zeta'}{\zeta}(s)$). Exploiting the functional equations previously mentioned (concerning translation of the variable $s$), together with the mean-square asymptotics of the Levinson-Conrey method and the Selberg approximation theory (with some additional results) we show that almost all of the zeros of the Riemann zeta-function are on the critical line.
|
https://arxiv.org/abs/1805.07741v6
|
Throughout this manuscript the zeros are counted with multiplicity. We denote by $N(T)$ the number of zeros $\rho$ of $\zeta(s)$ in the critical strip up to height $T$, where $T>3$ is not an ordinate of a zero of $\zeta(s)$. Denote by $N_0(T)$ the number of zeros $\rho$ of $\zeta(s)$ on the critical line up to height $T$. We first show that there exists $\epsilon_0>0$ such that $\xi(s)$ has no zeros on the boundary of a small rectangle $R_\epsilon$ defined as $R_\epsilon=\{\sigma+it\in\mathbb{C}\mid \frac{1}{2}-\epsilon\leq \sigma\leq \frac{1}{2}+\epsilon,\ 0\leq t\leq T\}$ whenever $0<\epsilon<\epsilon_0$. Secondly, if $N_\epsilon(T)$ is the number of zeros $\rho$ of $\zeta(s)$ inside the rectangle $R_\epsilon$, then we prove that $N_\epsilon (T)=N_0(T)$ for $\epsilon$ sufficiently small depending on the height $T$. We use Littlewood's lemma on the rectangle $R_\epsilon$, along with the Hadamard product of $\xi(s)$ and the asymptotic for the logarithmic derivative of $\zeta(s)$, to prove that as $T\to \infty$, $$N_0(T)=\frac{T}{2\pi}\log\left(\frac{T}{2\pi}\right)-\frac{T}{2\pi}+\mathcal{O}(\log T).$$ Also, if $\kappa$ is the proportion of zeros of $\zeta(s)$ on the critical line, $$\kappa:=\liminf_{T\to \infty} \frac{N_0(T)}{N(T)},$$ then we prove as a consequence that $\kappa=1$.
|
https://arxiv.org/abs/2205.00811v10
|
Quantum many-body systems present substantial technical challenges from both analytical and numerical perspectives. Despite these difficulties, some progress has been made, including studies of interacting atomic gases and interacting quantum spins. Furthermore, the potential for criticality to enhance engine performance has been demonstrated, suggesting a promising direction for future investigation. Here, we explore the performance of a quantum Otto cycle using a long-range Ising chain as the working substance. We consider an idealized cycle consisting of two adiabatic transformations and two perfect thermalizations, eliminating dissipation. Analyzing both engine and refrigerator modes, we investigate the influence of particle number, varied from $10$ to $100$, on efficiencies and behavior near the critical point of the phase transition, which we characterize using a scaling factor. We also examine how internal factors, specifically, the power-law exponent, the number of particles, and the hot and cold reservoir temperatures, affect the system's operation in different modes. Our results reveal that these factors have a different impact compared to their classical counterparts.
|
https://arxiv.org/abs/2502.01469v1
|
100 prisoners and a light bulb is a long-standing mathematical puzzle. The problem was studied mostly in 2002 [5], 2003 [1], and 2004 [3]. Solutions in published articles had an average number of visits above 3850, but the best solutions on forums had a (declared) average number of visits around 3500. I spent some time in 2007-2009 optimizing the communication strategy and pushed the average number of visits below 3390; no new ideas seem to have appeared since. Recently I have met several people familiar with the published papers from 2002-2003 but not aware of the newer results. Even after 2009, several papers on the topic were published in which the new results were not mentioned [4]. A whole book was written about the problem [2]. This is why I am writing this summary.
|
https://arxiv.org/abs/2208.00771v1
|
We describe the results of observations with the 100m Robert C. Byrd Green Bank Telescope (GBT) in the HI line of 105 nearby dwarf galaxies, 60 of which were discovered recently in the DESI Legacy Imaging Surveys. Of 105 objects observed, we detected 77 galaxies with the following median parameters: an HI-flux of 0.69 Jy km/s, a heliocentric velocity of 732 km/s, and a $W_{50}$ line width of 32 km/s. 70 are isolated late-type objects and 35 are new probable satellites of nearby spiral galaxies (NGC 628, NGC 2787, NGC 3556, NGC 4490, NGC 4594 and NGC 5055). The detected galaxies are predominantly gas-rich systems with a median gas-to-stellar-mass ratio of 1.87. In general, they follow the classic Tully-Fisher relation obtained for large disk-dominated spiral galaxies if their $M_{21}$ magnitudes are used instead of B-magnitudes.
|
https://arxiv.org/abs/2505.19248v1
|
A 100 μm thick silicon detector with 1 mm² pad readout optimized for
sub-nanosecond time resolution has been developed and tested. Coupled to a
purposely developed amplifier based on SiGe HBT technology, this detector was
characterized at the H8 beam line at the CERN SPS. An excellent time resolution
of (106 ± 1) ps for silicon detectors was measured with minimum ionizing
particles.
|
http://arxiv.org/abs/1511.04231v1
|
Magnetic sensing is present in our everyday interactions with consumer
electronics, and also demonstrates potential for measurement of extremely weak
biomagnetic fields, such as those of the heart and brain. In this work, we
leverage the many benefits of the micro-electromechanical systems (MEMS)
devices to fabricate a small, low power, inexpensive sensor whose resolution is
in the range of weak biomagnetic fields. The sensor works at room temperature,
and is suitable for consumer electronics integration. At present, such
biomagnetic fields can only be measured by expensive mechanisms such as optical
pumping and superconducting quantum interference devices (SQUIDs). Thus, our
sensor suggests the opening of a large phase space for medical and consumer
applications. The prototype fabrication is achieved by assembling
micro-objects, including a permanent micromagnet, onto a post-release
commercial MEMS accelerometer. With this system, we demonstrate a room
temperature MEMS magnetometer, whose design is only sensitive to gradient
magnetic fields and is generally insensitive to the Earth's uniform field. In
air, the sensor's response is linear with a resolution of 1.1 nT cm$^{-1}$ and
spans over 3 decades of dynamic range to 4.6 $\mu$T cm$^{-1}$. In 1 mTorr
vacuum with 20 dB magnetic shielding, the sensor achieved 100 pT cm$^{-1}$
resolution at resonance. The theoretical floor of this design is
110 fT cm$^{-1}$ Hz$^{-1/2}$ with a resolution of 13 fT cm$^{-1}$, thus these
devices hold promise for both magnetocardiography (MCG) and
magnetoencephalography (MEG) applications.
|
http://arxiv.org/abs/1911.10250v1
|
Frequency-resolved optical gating (FROG) is widely used to measure ultrashort
laser pulses, also providing an excellent indication of pulse-shape
instabilities by disagreement between measured and retrieved FROG traces. FROG,
however, requires -- but currently lacks -- an extremely reliable
pulse-retrieval algorithm. So, this work provides one. It uses a simple
procedure for directly retrieving the precise pulse spectrum from the measured
trace. Additionally, it implements a multi-grid scheme, quickly yielding a
vastly improved guess for the spectral phase before the entire measured trace
is used. As a result, it achieves 100% convergence for the three most
common variants of FROG for pulses with time-bandwidth products as high as 100,
even with traces contaminated with noise. Here we consider the
polarization-gate (PG) and transient-grating (TG) variants of FROG, which
measure amplified, UV, and broadly tunable pulses. Convergence occurs for all
of the >20,000 simulated noisy PG/TG FROG traces considered and is also faster.
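For context, the PG-FROG trace retrieved here is the spectrogram $I(\omega,\tau)=|\int E(t)\,|E(t-\tau)|^{2}e^{-i\omega t}\,dt|^{2}$. Below is a minimal sketch of how such a trace is generated numerically; the test pulse and grids are arbitrary choices, not the paper's.

```python
import numpy as np

def pg_frog_trace(E, delays):
    """Polarization-gate FROG trace: I(omega, tau) = |FT_t[E(t) |E(t - tau)|^2]|^2."""
    rows = []
    for d in delays:                                   # integer delays, in samples
        gate = np.abs(np.roll(E, d)) ** 2              # |E(t - d)|^2
        rows.append(np.abs(np.fft.fftshift(np.fft.fft(E * gate))) ** 2)
    return np.array(rows)                              # shape: (n_delays, n_freq)

t = np.linspace(-10, 10, 256)
E = np.exp(-t**2) * np.exp(0.5j * t**2)                # chirped Gaussian test pulse
trace = pg_frog_trace(E, range(-64, 65))
```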
|
http://arxiv.org/abs/1811.11100v2
|
Japan has committed to carbon neutrality by 2050. Emissions from the electricity sector amount to 42% of the total. Solar photovoltaics (PV) and wind comprise three quarters of global net capacity additions because of low and falling prices. This provides an opportunity for Japan to make large reductions in emissions while also reducing its dependence on energy imports. This study shows that Japan has 14 times more solar and offshore wind resources than needed to supply 100% renewable electricity. An hourly energy balance model of Japan's electricity system, spanning 40 years of historical data, is presented. Pumped hydro energy storage, high voltage interconnection and dispatchable capacity (hydro, biomass and hydrogen energy) are included to balance variable generation and demand. Differential evolution is used to find the least-cost solution under various constraints. The levelized cost of electricity is found to be USD 86 per MWh for a PV-dominated system, and USD 110 per MWh for a wind-dominated system. These costs can be compared with the average system prices on the spot market in Japan of USD 102 per MWh. In summary, Japan can be self-sufficient for electricity supply at competitive costs.
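As a toy illustration of the optimization approach described (hourly balance plus differential evolution), the sketch below sizes PV, wind, and storage capacities to minimize cost with a penalty on unserved energy. All profiles, bounds, and cost figures are invented placeholders, not the study's data.

```python
import numpy as np
from scipy.optimize import differential_evolution

rng = np.random.default_rng(0)
H = 24 * 7                                             # one toy week, hourly
demand = 100 + 20 * np.sin(2 * np.pi * np.arange(H) / 24)           # GW
cf_pv = np.clip(np.sin(2 * np.pi * (np.arange(H) % 24) / 24), 0, None)
cf_wind = rng.uniform(0.1, 0.9, H)
COST = {"pv": 0.8, "wind": 1.2, "storage": 0.3}        # arbitrary cost units per GW

def system_cost(caps):
    pv, wind, store = caps
    gen = pv * cf_pv + wind * cf_wind
    soc, unserved, e_cap = 0.0, 0.0, 8 * store         # 8 h of energy per GW of power
    for g, d in zip(gen, demand):
        surplus = g - d
        if surplus >= 0:
            soc = min(e_cap, soc + min(surplus, store))  # charge storage
        else:
            discharge = min(-surplus, store, soc)
            soc -= discharge
            unserved += -surplus - discharge
    capex = pv * COST["pv"] + wind * COST["wind"] + store * COST["storage"]
    return capex + 1e3 * unserved                      # heavy penalty for unmet demand

res = differential_evolution(system_cost, bounds=[(0, 500)] * 3, seed=1, maxiter=100)
print(res.x, res.fun)
```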
|
https://arxiv.org/abs/2109.08363v1
|
https://aclanthology.org/N12-4001
|
|
Meaning is a fundamental concept in Natural Language Processing (NLP), given its aim to build systems that mean what they say to you, and understand what you say to them. In order for NLP to scale beyond partial, task-specific solutions, it must be informed by what is known about how humans use language to express and understand communicative intents. The purpose of this tutorial is to present a selection of useful information about semantics and pragmatics, as understood in linguistics, in a way that's accessible to and useful for NLP practitioners with minimal (or even no) prior training in linguistics. The tutorial content is based on a manuscript in progress I am co-authoring with Prof. Alex Lascarides of the University of Edinburgh.
|
https://aclanthology.org/P18-5001
|
Weighted median, in the form of either solver or filter, has been employed in a wide range of computer vision solutions for its beneficial properties in sparsity representation. But it is hard to accelerate due to the spatially varying weights and the median property. We propose a few efficient schemes to reduce computation complexity from O(r^2) to O(r), where r is the kernel size. Our contribution is a new joint-histogram representation, median tracking, and a new data structure that enables fast data access. The effectiveness of these schemes is demonstrated on optical flow estimation, stereo matching, structure-texture separation, and image filtering, to name a few. The running time is greatly shortened from several minutes to less than 1 second. The source code is provided on the project website.
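To make the O(r^2) baseline concrete: the brute-force filter below recomputes a weighted median per pixel from scratch, which is exactly the cost the paper's joint-histogram and median-tracking schemes reduce to O(r). The bilateral-style weight is one illustrative choice.

```python
import numpy as np

def weighted_median_filter(img, r, sigma=25.0):
    """Brute-force weighted median: O(r^2 log r) work per pixel."""
    H, W = img.shape
    pad = np.pad(img, r, mode="edge")
    out = np.empty_like(img)
    for i in range(H):
        for j in range(W):
            win = pad[i:i + 2 * r + 1, j:j + 2 * r + 1].ravel()
            w = np.exp(-(win - img[i, j]) ** 2 / (2 * sigma ** 2))  # range weights
            order = np.argsort(win)
            cw = np.cumsum(w[order])
            k = np.searchsorted(cw, cw[-1] / 2.0)   # first index past half the mass
            out[i, j] = win[order][k]
    return out

out = weighted_median_filter(np.random.default_rng(0).uniform(0, 255, (64, 64)), r=3)
```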
|
http://openaccess.thecvf.com/content_cvpr_2014/html/Zhang_100_Times_Faster_2014_CVPR_paper.html
|
100 years after Smoluchowski introduced his approach to stochastic processes,
they are now at the basis of mathematical and physical modeling in cellular
biology: they are used, for example, to analyse and extract features from
large numbers (tens of thousands) of single-molecule trajectories, or to study
the diffusive motion of molecules, proteins, or receptors. Stochastic modeling
is a new step in large data analysis that serves to extract cell-biology
concepts. We review here Smoluchowski's approach to stochastic processes
and provide several applications for coarse-graining diffusion, studying
polymer models for understanding nuclear organization, and finally, we discuss
the stochastic jump dynamics of telomeres across cell division and stochastic
gene regulation.
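As a minimal example of the trajectory-analysis theme, the sketch below simulates free Brownian paths and recovers the diffusion coefficient from the one-step mean squared displacement (1D, arbitrary toy parameters):

```python
import numpy as np

rng = np.random.default_rng(1)
D, dt, n_steps, n_traj = 0.5, 1e-3, 5000, 200    # toy units

# overdamped Brownian motion: dx = sqrt(2 D dt) * xi, xi ~ N(0, 1)
steps = rng.normal(0.0, np.sqrt(2 * D * dt), size=(n_traj, n_steps))
x = np.cumsum(steps, axis=1)

# in 1D the one-step MSD is 2 D dt, so D can be read off directly
msd1 = np.mean(np.diff(x, axis=1) ** 2)
print("estimated D:", msd1 / (2 * dt))           # close to 0.5
```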
|
http://arxiv.org/abs/1612.08381v1
|
A report by Brillouin (from Perrin's laboratory) on the rate of adsorption of `granules' to a glass plate [\textit{Ann. Chim. Phys.} 27 (1912) 412--23] prompted Marian von Smoluchowski (MvS) to interpret the data in terms of his newly developed theory of restricted Brownian motion. Placing an adsorbing wall at $x=0$, he modelled the particle concentration $n(x,t)$ as that solution of the diffusion equation which vanished at the wall, a boundary condition (BC) hereafter called SBC. A gaping discrepancy between his theory and Brillouin's data elicited a suggestion from MvS (that a particle might not adhere to the wall on every impact), but no further action -- other than that of applying his theory to spherically symmetric systems. In a paper written before, but published shortly after MvS's untimely death [\textit{Proc. Roy. Acad. Amst.} 20 (1918) 642--58], H. C. Burger erected a new and sturdier framework, which led him to an alternative BC, $D(\partial n/\partial x)_{x=0}=\varkappa n(0,t)$, applicable to a surface with an arbitrary absorption probability ($0\leq\varepsilon\leq 1$); a fallacy (that subsequently claimed more victims, including the present author) prevented him from deducing the correct expression for $\varkappa$. Burger's approach became ``The Road Not Taken'', while the SBC became the cornerstone of colloidal coagulation and bimolecular reaction kinetics. Burger's approach (but not the ABC) was partly rediscovered by Kolmogorov, and used by Sveshnikov and Fuchs. The emended version of Burger's BC is shown here to coincide with that deduced from the Klein-Kramers equation [\textit{Phys. Rev. Lett.} \textbf{49} (1982) 304--07; \textit{J. Chem. Phys.} \textbf{78} (1983) 2710--12] and the Lorentz model of random flights [\textit{J. Phys. Chem.} \textbf{86} (1982) 4750--56].
|
https://arxiv.org/abs/2404.17021v1
|
An elementary survey of mathematical cosmology is presented. We cover certain key ideas and developments in a qualitative way, from the time of the Einstein static universe in 1917 until today. We divide our presentation into four main parts, the first part containing important cosmologies discovered until 1960. The second period (1960-80) contains discussions of geometric extensions of the standard cosmology, singularities, chaotic behaviour, and the initial input of particle physics ideas into cosmology. Our survey for the third period (1980-2000) continues with brief descriptions of the main ideas of inflation, the multiverse, quantum, Kaluza-Klein, and string cosmologies, wormholes and baby universes, cosmological stability, and modified gravity. The last period which ends today includes various more advanced topics such as M-theoretic cosmology, braneworlds, the landscape, topological issues, the measure problem, genericity, dynamical singularities, and dark energy. We emphasize certain threads that run throughout the whole period of development of theoretical cosmology and underline their importance in the overall structure of the field. We end this outline with an inclusion of the abstracts of all papers contributed to the Philosophical Transactions of the Royal Society A, Theme Issue `The Future of Mathematical Cosmology'.
|
https://arxiv.org/abs/2203.16443v1
|
Robust and credible material flow data are required to support the ongoing efforts to reconcile the economic and social benefits of plastics with their human and environmental health impacts. This study presents a global, but regionalized, life cycle material flow analysis (MFA) of all plastic polymers and applications for the period 1950-2020. It also illustrates how this dataset can be used to generate possible scenarios for the next 30 years. The historical account documents how the relentless growth of plastic production and use has consistently outpaced waste management systems worldwide and currently generates on the order of 60 Mt of mismanaged plastic waste annually. The scenarios show that robust interventions are needed to avoid annual plastic waste mismanagement from doubling by 2050.
|
https://arxiv.org/abs/2411.13618v1
|
We are experiencing a period of extreme intellectual effervescence in the
area of cosmology. A huge volume of observational data in unprecedented
quantity and quality and a more consistent theoretical framework propelled
cosmology to an era of precision, turning the discipline into a cutting-edge
area of contemporary science. Observations with type Ia Supernovae (SNe Ia)
showed that the expanding Universe is accelerating, an unexplained fact in the
traditional decelerated model. Identifying the cause of this acceleration is
the most fundamental problem in the area. As in the scientific renaissance, the
solution will guide the course of the discipline in the near future and the
possible answers (whether dark energy, some extension of general relativity or
a still unknown mechanism) should also leverage the development of physics. In
this context, without giving up a pedagogical approach, we present an overview
of both the main theoretical results and the most significant observational
discoveries of cosmology in the last 100 years. The saga of cosmology will be
presented in a trilogy. In this article (Part I), based on the articles by
Einstein, de Sitter, Friedmann, Lema\^itre and Hubble, we will describe the
period between the origins of cosmology and the discovery of Universal
expansion (1929). In Part II, we will see the period from 1930 to 1997, closing
with the old standard decelerated model. Part III will be entirely devoted
to the accelerated model of the universe, the cosmic paradigm of the 21st
century.
|
http://arxiv.org/abs/1709.03693v1
|
The Cosmological Constant $\Lambda$, in different incarnations, has been with
us for 100 years. Many surveys of dark energy are underway, indicating so far
that the data are consistent with a dark energy equation of state of $w=-1$,
i.e. a $\Lambda$ term in Einstein's equation, although time variation of $w$ is
not yet ruled out. The ball is now back in the theoreticians' court, to explain
the physical meaning of $\Lambda$. We discuss sociological aspects of this
field, in particular to what extent the agreement on the cold dark matter +
$\Lambda$ concordance model is a result of the globalization of research and
over-communication.
|
http://arxiv.org/abs/1704.00069v1
|
We take the occasion of this article to review one hundred years of the physical and mathematical study of the Ising model. The model, introduced by Lenz in 1920, has been a cornerstone of many major revolutions in statistical mechanics. We wish, through its history, to outline some of these amazing developments. We restrict our attention to the ferromagnetic nearest-neighbour model on the hypercubic lattice, and essentially focus on what happens at or near the so-called critical point.
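For readers meeting the model here for the first time, the 2D nearest-neighbour version can be sampled with textbook single-spin-flip Metropolis dynamics. This standard sketch (not code from the review) runs near the exact critical temperature $T_c = 2/\ln(1+\sqrt{2}) \approx 2.269$:

```python
import numpy as np

rng = np.random.default_rng(0)
N, T, J = 64, 2.269, 1.0                           # lattice size, temperature, coupling
spins = rng.choice([-1, 1], size=(N, N))

for _ in range(500_000):                           # short illustrative run
    i, j = rng.integers(N, size=2)
    nb = (spins[(i + 1) % N, j] + spins[(i - 1) % N, j]
          + spins[i, (j + 1) % N] + spins[i, (j - 1) % N])
    dE = 2 * J * spins[i, j] * nb                  # energy cost of flipping spin (i, j)
    if dE <= 0 or rng.random() < np.exp(-dE / T):  # Metropolis acceptance rule
        spins[i, j] *= -1

print("magnetisation per spin:", spins.mean())
```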
|
https://arxiv.org/abs/2208.00864v1
|
Einstein's general theory of relativity is one of the most important
accomplishments in the history of science. Its experimental verification a
century ago is therefore an essential milestone that is worth celebrating in
full. We reassess the importance of one of the two expeditions that made these
measurements possible, a story that involves a sense of adventure and
scientific ingenuity in equal measure.
|
http://arxiv.org/abs/1907.10687v1
|
We discuss the asymptotics of the eigenvalue counting function for partial
differential operators and related expressions, paying particular attention to
sharp asymptotics. We consider Weyl asymptotics, asymptotics with Weyl
principal parts and correction terms, and asymptotics with non-Weyl principal
parts. Semiclassical microlocal analysis, propagation of singularities and
related dynamics play a crucial role.
We start from the general theory, then consider Schr\"odinger and Dirac
operators with strong magnetic fields and, finally, applications to the
asymptotics of the ground state energy of heavy atoms and molecules with or
without a magnetic field.
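For orientation, the Weyl asymptotics referred to here take the following standard form for the Dirichlet Laplacian on a bounded domain $\Omega\subset\mathbb{R}^d$ (quoted for context, with $\omega_d$ the volume of the unit ball):

```latex
N(\lambda) \;=\; (2\pi)^{-d}\,\omega_d\,\operatorname{vol}(\Omega)\,\lambda^{d/2}
\;+\; o\bigl(\lambda^{d/2}\bigr), \qquad \lambda \to \infty .
```

Sharp two-term versions replace the remainder by a boundary correction of order $\lambda^{(d-1)/2}$, which is the kind of refinement the survey is concerned with.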
|
http://arxiv.org/abs/1608.03963v2
|
Current systems and formalisms for representing incomplete information generally suffer from at least one of two weaknesses. Either they are not strong enough for representing results of simple queries, or the handling and processing of the data, e.g. for query evaluation, is intractable. In this paper, we present a decomposition-based approach to addressing this problem. We introduce world-set decompositions (WSDs), a space-efficient formalism for representing any finite set of possible worlds over relational databases. WSDs are therefore a strong representation system for any relational query language. We study the problem of efficiently evaluating relational algebra queries on sets of worlds represented by WSDs. We also evaluate our technique experimentally in a large census data scenario and show that it is both scalable and efficient.
|
https://arxiv.org/abs/cs/0606075v2
|
We demonstrate a self-homodyne detection method to stabilize a continuous-wave 1550-nm laser to a 1-km optical fiber delay line, achieving a frequency instability of $6.3\times10^{-15}$ at a 16-ms averaging time. This result, limited by fiber thermal noise, is achieved without the need for a vacuum system, highlighting the potential of our approach for ultra-stable laser systems in non-laboratory environments. The system utilizes only a few passive fiber optic components and a single balanced photodetector, significantly simplifying the laser stabilization process while maintaining high performance. The entire optical setup is compactly packaged in a portable metal air-tight case.
|
https://arxiv.org/abs/2409.04681v2
|
In the $\{-1,0,1\}$-APSP problem the goal is to compute all-pairs shortest
paths (APSP) on a directed graph whose edge weights are all from $\{-1,0,1\}$.
In the (min,max)-product problem the input is two $n\times n$ matrices $A$ and
$B$, and the goal is to output the (min,max)-product of $A$ and $B$.
This paper provides a new algorithm for the $\{-1,0,1\}$-APSP problem via a
simple reduction to the target-(min,max)-product problem where the input is
three $n\times n$ matrices $A,B$, and $T$, and the goal is to output a Boolean
$n\times n$ matrix $C$ such that the $(i,j)$ entry of $C$ is 1 if and only if
the $(i,j)$ entry of the (min,max)-product of $A$ and $B$ is exactly the
$(i,j)$ entry of the target matrix $T$. If (min,max)-product can be solved in
$T_{MM}(n) = \Omega(n^2)$ time then it is straightforward to solve
target-(min,max)-product in $O(T_{MM}(n))$ time. Thus, given the recent result
of Bringmann, K\"unnemann, and Wegrzycki [STOC 2019], the $\{-1,0,1\}$-APSP
problem can be solved in the same time needed for solving approximate APSP on
graphs with positive weights.
Moreover, we design a simple algorithm for target-(min,max)-product when the
inputs are restricted to the family of inputs generated by our reduction. Using
fast rectangular matrix multiplication, the new algorithm is faster than the
current best known algorithm for (min,max)-product.
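To fix notation, here is a naive cubic-time reference implementation of the (min,max)-product and of the target variant defined above (useful for checking small instances; this is not the paper's fast algorithm):

```python
import numpy as np

def minmax_product(A, B):
    """C[i, j] = min_k max(A[i, k], B[k, j]); O(n^3) time, vectorised over k."""
    M = np.maximum(A[:, :, None], B[None, :, :])    # M[i, k, j] = max(A[i,k], B[k,j])
    return M.min(axis=1)

def target_minmax(A, B, T):
    """Boolean C with C[i, j] = 1 iff the (min,max)-product equals T[i, j]."""
    return minmax_product(A, B) == T

n = 4
rng = np.random.default_rng(0)
A, B = rng.integers(0, 10, (n, n)), rng.integers(0, 10, (n, n))
print(target_minmax(A, B, minmax_product(A, B)).all())   # True by construction
```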
|
http://arxiv.org/abs/1911.06132v1
|
In recent years, Large Language Models have revolutionized the field of natural language processing, showcasing an impressive rise predominantly in English-centric domains. These advancements have set a global benchmark, inspiring significant efforts toward developing Arabic LLMs capable of understanding and generating the Arabic language with remarkable accuracy. Despite these advancements, a critical challenge persists: the potential bias in Arabic LLMs, primarily attributed to their reliance on datasets comprising English data that has been translated into Arabic. This reliance not only compromises the authenticity of the generated content but also reflects a broader issue - the scarcity of original quality Arabic linguistic data. This study aims to address the data scarcity in the Arab world and to encourage the development of Arabic Language Models that are true to both the linguistic and cultural nuances of the region. We undertook a large-scale data mining project, extracting a substantial volume of text from the Common Crawl WET files, specifically targeting Arabic content. The extracted data underwent a rigorous cleaning and deduplication process, using innovative techniques to ensure the integrity and uniqueness of the dataset. The result is the 101 Billion Arabic Words Dataset, the largest Arabic dataset available to date, which can significantly contribute to the development of authentic Arabic LLMs. This study not only highlights the potential for creating linguistically and culturally accurate Arabic LLMs but also sets a precedent for future research in enhancing the authenticity of Arabic language models.
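A minimal sketch of the extraction step described, using the warcio reader for Common Crawl WET files and a simple Unicode-block heuristic for Arabic. The threshold and helper are invented for illustration; the authors' actual cleaning and deduplication pipeline is far more elaborate.

```python
import re
from warcio.archiveiterator import ArchiveIterator

ARABIC = re.compile(r'[\u0600-\u06FF]')             # basic Arabic Unicode block

def arabic_texts(wet_path, min_ratio=0.5):
    """Yield mostly-Arabic plain-text records from a WET file."""
    with open(wet_path, 'rb') as f:
        for record in ArchiveIterator(f):
            if record.rec_type != 'conversion':     # WET text records are 'conversion'
                continue
            text = record.content_stream().read().decode('utf-8', 'ignore')
            letters = [c for c in text if c.isalpha()]
            if letters and sum(bool(ARABIC.match(c)) for c in letters) / len(letters) >= min_ratio:
                yield text
```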
|
https://arxiv.org/abs/2405.01590v1
|
We present our second catalog of quadruple star candidates, containing 101 systems discovered in TESS Full-Frame Image data. The targets were initially detected as eclipsing binary stars with the help of supervised machine learning methods applied to Sectors 1 through 54. A dedicated team of citizen scientists subsequently identified through visual inspection two sets of eclipses following two different periods. All 101 systems presented here pass comprehensive photocenter motion tests confirming that both sets of eclipses originate from the target star. Some of the systems exhibit prominent eclipse time variations suggesting dynamical interactions between the two component binary stars. One target is an eclipsing quintuple candidate with a (2+1)+2 hierarchical configuration, such that the (2+1) subsystem produces eclipses on the triple orbit as well. Another has recently been confirmed as the second shortest period quadruple reported to date. This catalog provides ephemerides, eclipse depths and durations, sample statistics, and highlights potentially interesting targets for future studies.
|
https://arxiv.org/abs/2309.14200v1
|
We present explicit formulas - that are also computer code - for 101
real-life quantitative trading alphas. Their average holding period
ranges approximately from 0.6 to 6.4 days. The average pair-wise correlation of these
alphas is low, 15.9%. The returns are strongly correlated with volatility, but
have no significant dependence on turnover, directly confirming an earlier
result based on a more indirect empirical analysis. We further find empirically
that turnover has poor explanatory power for alpha correlations.
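As a flavour of what these formulas look like, Alpha#101 is among the simplest in the paper, an intraday momentum expression; here it is rendered with pandas (column names assumed):

```python
import pandas as pd

def alpha_101(df: pd.DataFrame) -> pd.Series:
    """Alpha#101: (close - open) / ((high - low) + 0.001)."""
    return (df["close"] - df["open"]) / ((df["high"] - df["low"]) + 0.001)
```

In the paper's setup, such expressions are evaluated cross-sectionally each day and traded with weights proportional to their values.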
|
http://arxiv.org/abs/1601.00991v3
|
Various properties of Jovian trojan asteroids such as composition, rotation periods, and photometric amplitudes, or the rate of binarity in the population can provide information and constraints on the evolution of the group and of the Solar System itself. Here we present new photometric properties of 45 Jovian trojans from the K2 mission of the Kepler space telescope, and present phase-folded light curves for 44 targets, including (11351) Leucus, one of the targets of the Lucy mission. We extend our sample to 101 asteroids with previous K2 Trojan measurements, then compare their combined amplitude- and frequency distributions to other ground-based and space data. We show that there is a dichotomy in the periods of Trojans with a separation at $\sim 100$ hr. We find that 25% of the sample are slow rotators (P$\geq$30 hr), an excess that can be attributed to binary objects. We also show that 32 systems can be classified as potential detached binary systems. Finally, we calculate density and rotation constraints for the asteroids. Both the spin barrier and fits to strengthless ellipsoid models indicate low densities and thus compositions similar to cometary and TNO populations throughout the sample. This supports the scenario of outer Solar System origin for Jovian trojans.
|
https://arxiv.org/abs/2102.09447v1
|
Social media is a great source of data from users reporting information regarding their health and how various things have had an effect on them. This paper presents various approaches using Transformers and Large Language Models and their ensembles, along with their performance, advantages, and drawbacks, for various tasks of SMM4H'24 - classifying texts on the impact of nature and outdoor spaces on the author's mental health (Task 3), binary classification of tweets reporting their children's health disorders like asthma, autism, ADHD and speech disorders (Task 5), and binary classification of users self-reporting their age (Task 6).
|
https://arxiv.org/abs/2410.15998v1
|
We present results on the world's first over-100-PFLOPS single-precision lattice QCD quark solver on the new Japanese supercomputer Fugaku. We achieve a factor of 38 speedup over the K computer on the same problem size, $192^4$, reaching 102 PFLOPS, a 10% floating-point operation efficiency relative to the single-precision floating-point peak. The evaluated region is the single-precision BiCGStab for a Clover-Wilson Dirac matrix with Schwarz Alternating Procedure domain decomposition preconditioning, using Jacobi iteration for the local domain matrix inversion.
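The benchmarked kernel is a BiCGStab solve. For reference, a textbook unpreconditioned version is sketched below; the Fugaku solver adds SAP preconditioning, single-precision arithmetic, and heavy communication optimization on top of this skeleton.

```python
import numpy as np

def bicgstab(A, b, tol=1e-8, max_iter=1000):
    """Textbook unpreconditioned BiCGStab for A x = b."""
    x = np.zeros_like(b, dtype=float)
    r = b - A @ x
    r_hat = r.copy()                   # fixed shadow residual
    rho = alpha = omega = 1.0
    v = np.zeros_like(x)
    p = np.zeros_like(x)
    for _ in range(max_iter):
        rho_new = r_hat @ r
        beta = (rho_new / rho) * (alpha / omega)
        rho = rho_new
        p = r + beta * (p - omega * v)
        v = A @ p
        alpha = rho / (r_hat @ v)
        s = r - alpha * v
        if np.linalg.norm(s) < tol:    # early exit on the half-step residual
            return x + alpha * p
        t = A @ s
        omega = (t @ s) / (t @ t)
        x = x + alpha * p + omega * s
        r = s - omega * t
        if np.linalg.norm(r) < tol:
            return x
    return x

n = 200
A = np.random.default_rng(0).standard_normal((n, n)) / np.sqrt(n) + 4 * np.eye(n)
b = np.ones(n)
print(np.linalg.norm(A @ bicgstab(A, b) - b))    # small residual
```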
|
https://arxiv.org/abs/2109.10687v1
|
We quantify the accuracy of different non-self-consistent and self-consistent
spin-orbit coupling (SOC) treatments in Kohn-Sham and hybrid density-functional
theory by providing a band structure benchmark set for the valence and
low-lying conduction energy bands of 103 inorganic compounds, covering chemical
elements up to Po. Reference energy band structures for the PBE density
functional are obtained using the full-potential (linearized) augmented plane
wave code Wien2k, employing its self-consistent treatment of SOC including
Dirac-like p$^{1/2}$ orbitals in the basis set. We then use this set to
benchmark a computationally simpler, non-self-consistent all-electron treatment
of SOC based on scalar-relativistic orbitals and numeric atom-centered orbital
basis functions. For elements up to Z$\approx$50, both treatments agree
virtually exactly. For the heaviest elements considered (Tl, Pb, Bi, Po), the
band structure changes due to SOC are captured with a relative deviation of 11%
or less. For different density functionals (PBE vs. the hybrid HSE06), we show
that the effect of spin-orbit coupling is usually similar but can be dissimilar
if the qualitative features of the predicted underlying scalar-relativistic
band structures do not agree. All band structures considered in this work are
available online via the NOMAD Repository to aid in future benchmark studies
and methods development.
|
http://arxiv.org/abs/1705.01804v2
|
Generating a powerful and quasistatic magnetic field within the confines of a tabletop laboratory experiment has proven to be a persistent challenge. The creation of magnetized high-energy-density plasma through such experiments presents significant opportunities for exploring several terrestrial as well as astrophysical phenomena, apart from controlling relativistic electron transport, directly relevant for fusion schemes. Here we demonstrate that the modest magnetic field (10$^{-3}$ megagauss) of a common, readily available neodymium magnet is amplified to tens of megagauss, lasting a few picoseconds, when excited by an ultraintense femtosecond laser pulse. The experimental findings are strongly supported by particle-in-cell simulations, which not only validate the observations but also unveil a potential dynamo mechanism responsible for the enhancement and amplification of the axial magnetic field. These outcomes are of utmost importance in comprehending the intricacies of relativistic electron transport and the realm of magnetized laboratory astrophysics.
|
https://arxiv.org/abs/2504.15094v2
|
An ultrafast laser delivering 10.4 kW average output power based on coherent combination of twelve step-index fiber amplifiers is presented. The system emits close-to-transform-limited 254 fs pulses at 80 MHz repetition rate, has a high beam quality (M$^2 \le 1.2$), and a low relative intensity noise of 0.56% in the frequency range from 1 Hz to 1 MHz. Automated spatiotemporal alignment allows for hands-off operation.
|
https://arxiv.org/abs/2101.08501v1
|
Aims. GRB 190829A (z = 0.0785), detected by Fermi and Swift with two emission episodes separated by a quiescent gap of ~40 s, was also observed by the H.E.S.S. telescopes at Very High Energy (VHE). We present the 10.4m GTC observations of the afterglow of GRB 190829A and the underlying supernova, compare it against the similar GRB 180728A, and discuss the implications for the physical mechanisms producing these two GRBs. Methods. We present multi-band photometric data along with spectroscopic follow-up observations taken with the 10.4m GTC telescope. Together with the data from the prompt emission, the 10.4m GTC data are used to understand the emission mechanisms and the possible progenitor. Results. A detailed analysis of the multi-band afterglow data requires the cooling frequency to pass between the optical and X-ray bands at early epochs, with the underlying SN 2019oyw dominating later on. Conclusions. The prompt-emission temporal properties of GRB 190829A and GRB 180728A are similar; however, the two pulses seem different in the spectral domain. We found that the supernova (SN) 2019oyw associated with GRB 190829A, powered by Ni decay, is of Type Ic-BL, and that the spectroscopic/photometric properties of this SN are consistent with those observed for SN 1998bw but evolved comparatively early.
|
https://arxiv.org/abs/2009.04021v1
|
We report $^{105}$Pd NMR and NQR measurements on a single crystal of
Ce$_3$Pd$_{20}$Si$_6$, where antiferroquadrupolar and antiferromagnetic orders
develop at low temperature. From the analysis of NQR and NMR spectra, we have
determined the electric field gradient (EFG) tensors and the anisotropic Knight
shift ($K$) components for both inequivalent Pd sites - Pd($32f$) and
Pd($48h$). The observed EFG values are in excellent agreement with our
state-of-the-art DFT calculations. The principal values of the quadrupolar
coupling are $(20.37 \pm 0.02)$ MHz and $(5.45 \pm 0.02)$ MHz, for the
Pd($32f$) and Pd($48h$) site, respectively, which is large compared to the
Larmor frequency defined by the gyromagnetic constant $\gamma = 1.94838$ MHz/T
for $^{105}$Pd. Therefore, the complete knowledge of $K$ and the EFG tensors is
crucial to establish the correspondence between NMR spectra and
crystallographic sites, which is needed for a complete analysis of the magnetic
structure, static spin susceptibility, and the spin-lattice relaxation rate
data and a better understanding of the ground state of Ce$_3$Pd$_{20}$Si$_6$.
|
http://arxiv.org/abs/1911.09952v2
|
Based on the technique of periodically poled lithium niobate (PPLN)
waveguide, up-conversion single-photon detection at 1.064-{\mu}m is
demonstrated. We have achieved a system photon detection efficiency (DE) of
32.5% with a very low noise count rate (NCR) of 45 counts per second (cps) by
pumping with a 1.55-{\mu}m-band single frequency laser using the
long-wavelength pumping technique and exploiting volume Bragg grating (VBG) as
a narrow band filter. Replacing the VBG with a combination of adequate
dielectric filters, a DE of up to 38% with a NCR of 700 cps is achieved, making
the overall system more stable and practical. The up-conversion single-photon
detector (SPD) operating at 1.064 {\mu}m can be a promising robust counter and
find usage in many fields.
|
http://arxiv.org/abs/1703.10838v1
|
To solve the Cd puzzle (the spherical nucleus puzzle), I have proposed the concept of the ``spherical-like nucleus''. Since shape coexistence often occurs in such nuclei, explicit spherical-like spectra are not easily identified. In this Letter, I finally find direct evidence for the existence of the spherical-like nucleus. $^{106}$Pd is in fact a typical spherical-like nucleus. The low-lying parts of the spherical-like spectra, up to the $10_{1}^{+}$ state, below 4000 keV, are verified. By comparison, the new theory outperforms the IBM-2. This result completely disproves the possibility of phonon excitations of the spherical nucleus in the Cd-Pd region.
|
https://arxiv.org/abs/2501.10925v3
|
I investigate $^{10}$B+$\alpha$ cluster states of $^{14}$N with a
$^{10}$B+$\alpha$ cluster model. Near the $\alpha$-decay threshold energy, I
obtain $K^\pi=3^+$ and $K^\pi=1^+$ rotational bands having
$^{10}$B($3^+$)+$\alpha$ and $^{10}$B($1^+$)+$\alpha$ components, respectively.
I assign the band-head state of the $K^\pi=3^+$ band to the experimental $3^+$
at $E_x$=13.19 MeV of $^{14}$N observed in $\alpha$ scattering reactions by
$^{10}$B and show that the calculated $\alpha$-decay width is consistent with
the experimental data. I discuss an $\alpha$-cluster motion around the $^{10}$B
cluster and show that $^{10}$B+$\alpha$ cluster states contain significant
components of a linear-chain 3$\alpha$ configuration, in which an $\alpha$
cluster is localized in the longitudinal direction around the deformed $^{10}$B
cluster.
|
http://arxiv.org/abs/1505.05591v1
|
The Cluster Shell Model (CSM) describes light nuclei in terms of $k$ $\alpha$-particles and $x$ extra nucleons, in which the extra nucleons move in the deformed field generated by the geometric configuration of the $\alpha$-particles. We present the first study of the case $x=2$ nucleons, applied to $^{10}$Be as a cluster of two $\alpha$-particles and two neutrons.
|
https://arxiv.org/abs/2309.14505v1
|
We present a determination of optical potentials for $^{10}$Be-nucleus collisions using the double-folding method to compute the real part and Kramers-Kronig dispersion relations to derive the imaginary part. As microscopic inputs we use chiral effective field theory nucleon-nucleon interactions at next-to-next-to-leading order combined with state-of-the-art nucleonic densities. With these potentials, we compute elastic scattering cross sections for the exotic nucleus $^{10}$Be off various targets, and compare them to experiment. Without any fitting parameter, we obtain good agreement with data. For collisions on light targets, we observe significant uncertainty related to the short-range physics, whereas for heavy targets that uncertainty remains small.
|
https://arxiv.org/abs/2205.13987v2
|
Significant suppression of radiation in 3D structured media with a small refractive index of 1.4-1.6, such as glass or polymers, is a desirable feature yet to be obtained. For periodical structures this is realised at frequencies of the complete photonic band gap (CPBG), which up to now has been demonstrated to open only for materials with a refractive index of at least 1.9. We present here a quasiperiodic 3D structure consisting of multiple overlapping gratings with a homogeneous distribution of Bragg peaks on a sphere in reciprocal space, which allows efficient suppression of emission. Recently we presented a theoretical model, considering interactions with the neighbouring gratings only, that estimates a finite CPBG for arbitrarily small refractive indices and thus complete emission suppression in infinite structures. However, numerical simulations demonstrate a finite leakage of power from the emitter not predicted by the model. Still, the simulations show -10 dB suppression in 3D structures with an optimised number of gratings. Astonishingly, as we show here, this limit is almost independent of the refractive index contrast. Also, the structures with a defined number of gratings show maximal suppression at certain refractive indices, losing the suppression even at higher refractive indices. The -10 dB suppression is demonstrated for refractive index contrast as low as 1.30.
|
https://arxiv.org/abs/2209.15463v1
|
We remark that the two 10D massive deformations of the $N=2$ maximal type IIA
supergravity (Romans and HLW supergravity) are associated with the low energy
limit of the uplift to 10D of M2-brane torus bundles with parabolic monodromy
linearly and non-linearly realized respectively. Romans supergravity
corresponds to M2-brane compactified on a twice-punctured torus bundle.
|
http://arxiv.org/abs/1511.04784v1
|
Using the pure spinor master action for 10D super-Yang-Mills in the gauge $b_{0}V = Q\Xi$, tree-level scattering amplitudes are calculated through the perturbiner method, and shown to match those obtained from pure spinor CFT techniques. We find kinematic numerators made of nested $b$-ghost operators, and show that the Siegel gauge condition $b_{0}V = 0$ gives rise to color-kinematics duality satisfying numerators whose Jacobi identity follows from the Jacobi identity of a kinematic algebra.
|
https://arxiv.org/abs/2108.11708v1
|
We dimensionally reduce the bosonic sector of 10D Euclidean type IIA
supergravity over a Calabi-Yau three-fold. The resulting theory describes the
bosonic sector of 4D, N = 2 Euclidean supergravity coupled to vector- and
hyper-multiplets.
We show that the scalar target manifold of the vector-multiplets is
projective special para-K\"ahler, and is therefore of split signature, whereas
the target manifold of the hyper-multiplets is (positive-definite) quaternionic
K\"ahler.
|
http://arxiv.org/abs/1503.05095v3
|
The realization of high-frequency unipolar quantum optoelectronic devices enables the demonstration of high bitrate free space data transmission in the second atmospheric window. Data-bits are written onto the laser emission using a large bandwidth amplitude modulator that operates by shifting the absorption of an optical transition in and out of the laser frequency.
|
https://arxiv.org/abs/2110.06572v1
|
A coherent XY machine (CXYM) is a physical spin simulator that can simulate the XY model by mapping XY spins onto the continuous phases of non-degenerate optical parametric oscillators (NOPOs). Here, we demonstrated a large-scale CXYM with >47,000 spins by generating 10-GHz-clock time-multiplexed NOPO pulses via four-wave mixing in a highly nonlinear fiber inside a fiber ring cavity. By implementing a unidirectional coupling from the i-th pulse to the (i+1)-th pulse with a variable 1-pulse delay planar lightwave circuit interferometer, we successfully controlled the effective temperature of a one-dimensional XY spin network within two orders of magnitude.
|
https://arxiv.org/abs/2307.03333v2
|
Coherent frequency division of high-stability optical sources permits the extraction of microwave signals with ultra-low phase noise, enabling their application to systems with stringent timing precision. To date, the highest performance systems have required tight phase stabilization of laboratory grade optical frequency combs to Fabry-Perot optical reference cavities for faithful optical-to-microwave frequency division. This requirement limits the technology to highly-controlled laboratory environments. Here, we employ a transfer oscillator technique, which uses digital and RF analog electronics to coherently suppress additive optical frequency comb noise. This relaxes the stabilization requirements and allows for the extraction of multiple independent microwave outputs from a single comb, while at the same time permitting low-noise microwave generation from combs with higher noise profiles. Using this method we transferred the phase stability of two high-finesse optical sources at 1157 nm and 1070 nm to two independent 10 GHz signals using a single frequency comb. We demonstrated absolute phase noise below -106 dBc/Hz at 1-Hz from carrier with corresponding 1 second fractional frequency instability below $2\times10^{-15}$. Finally, the latter phase noise levels were attainable for comb linewidths broadened up to 2 MHz, demonstrating the potential for out-of-lab use with low SWaP lasers.
|
https://arxiv.org/abs/2110.00593v1
|
In practical satellite-based quantum key distribution (QKD) systems, the preparation and transmission of polarization-encoded photons suffer from complex environmental effects and high channel loss. Consequently, the key to enhancing the secure key rate (SKR) lies in achieving robust, low-error and high-speed polarization modulation. Although schemes that realize self-compensation exhibit remarkable robustness, their modulation speed is constrained to approximately 2 GHz to avoid the interaction between the electrical signal and the reverse optical pulses. Here we utilize the non-reciprocity of lithium niobate modulators and eliminate the modulation of the reverse optical pulses. As this characteristic is widely available in the radio-frequency band, the modulation speed is no longer limited by the self-compensating optics and can be further increased. The measured average intrinsic QBER of the different polarization states at a 10 GHz system repetition frequency is as low as 0.53% over 10 min without any compensation. Simulations show that the proposed scheme extends the transmission distance to more than 350 km. Our work can be readily applied to high-speed, high-loss satellite-based quantum communication scenarios.
|
https://arxiv.org/abs/2411.08358v1
|
Generation of a quantum light source is a promising technique to overcome the standard quantum limit in precision measurement. Here, we demonstrate the experimental generation of quadrature squeezing resonant on the cesium D2 line down to 10 Hz for the first time. The maximum squeezing in the audio frequency band is 5.57 dB. Moreover, we have implemented single-photon modulation locking to control the squeezing angle, while effectively suppressing the influence of laser noise on low-frequency squeezing. The whole system operates steadily for hours. The generated low-frequency quantum light source can be applied in quantum metrology, light-matter interaction investigation, and quantum memory in the audio frequency band and even below.
|
https://arxiv.org/abs/2209.07920v3
|
We propose a novel procedure to generate pseudo Mandarin speech data, named CAMP (character audio mix up), which aims at generating audio at the character scale. We also propose a method for building a Mandarin character-scale audio database adapted to CAMP, named META-AUDIO, which makes full use of audio data and can greatly increase the data diversity of the database. Experiments show that our CAMP method is simple and quite effective. For example, we train models with 10 hours of audio data from AISHELL-1 plus pseudo audio data generated by CAMP, and achieve a competitive 11.07% character error rate (CER). Besides, we also perform training with only 10 hours of audio data from the AIDATATANG dataset plus pseudo audio data generated by CAMP, which again achieves a competitive 8.26% CER.
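The CER metric quoted above is character-level edit distance divided by reference length (times 100 for the percentage form); a compact implementation:

```python
def cer(ref: str, hyp: str) -> float:
    """Character error rate: Levenshtein(ref, hyp) / len(ref)."""
    m, n = len(ref), len(hyp)
    dp = list(range(n + 1))                  # distances for the empty reference prefix
    for i in range(1, m + 1):
        prev, dp[0] = dp[0], i               # prev holds the diagonal entry
        for j in range(1, n + 1):
            cur = dp[j]
            dp[j] = min(dp[j] + 1,           # deletion
                        dp[j - 1] + 1,       # insertion
                        prev + (ref[i - 1] != hyp[j - 1]))  # substitution / match
            prev = cur
    return dp[n] / max(m, 1)

print(100 * cer("abcdef", "abxdef"))         # one substitution in six chars -> ~16.7%
```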
|
https://arxiv.org/abs/2210.13067v1
|
The widespread deployment of Infrared Small-Target Detection (IRSTD) algorithms on edge devices necessitates the exploration of model compression techniques. Binary neural networks (BNNs) are distinguished by their exceptional efficiency in model compression. However, the small size of infrared targets imposes stringent precision requirements on the IRSTD task, while the inherent precision loss during binarization presents a significant challenge. To address this, we propose the Binarized Infrared Small-Target Detection Network (BiisNet), which preserves the core operations of binarized convolutions while integrating full-precision features into the network's information flow. Specifically, we propose the Dot-Binary Convolution, which retains fine-grained semantic information in feature maps while still leveraging binarized convolution operations. In addition, we introduce a smooth and adaptive Dynamic Softsign function, which provides more comprehensive and progressively finer gradients during back-propagation, enhancing model stability and promoting an optimal weight distribution. Experimental results demonstrate that BiisNet not only significantly outperforms other binary architectures but is also strongly competitive with state-of-the-art full-precision models.
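The paper's exact Dynamic Softsign is not reproduced here; the sketch below shows the generic pattern such functions refine: binarize in the forward pass and back-propagate through a softsign-shaped surrogate whose sharpness beta could be scheduled during training. The functional form is our assumption for illustration only.

```python
import torch

class BinarizeSoftsign(torch.autograd.Function):
    """Forward: sign(x). Backward: gradient of the surrogate softsign(beta * x)."""
    @staticmethod
    def forward(ctx, x, beta: float):
        ctx.save_for_backward(x)
        ctx.beta = beta
        return torch.sign(x)

    @staticmethod
    def backward(ctx, grad_out):
        (x,) = ctx.saved_tensors
        b = ctx.beta
        # d/dx [ b*x / (1 + |b*x|) ] = b / (1 + |b*x|)^2
        return grad_out * b / (1.0 + (b * x).abs()) ** 2, None

x = torch.randn(8, requires_grad=True)
BinarizeSoftsign.apply(x, 5.0).sum().backward()   # gradients flow via the surrogate
print(x.grad)
```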
|
https://arxiv.org/abs/2503.02662v1
|
We consider the integers having the property of reversing when multiplied by
a specific integer k. First, we prove that k must be either 1, 4 or 9.
Second, we classify these integers as (10, 1)-reverse multiples, (10, 4)-reverse
multiples and (10, 9)-reverse multiples. Then we derive their general form.
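The defining property is easy to check exhaustively; the short search below reproduces the k in {1, 4, 9} picture on small integers (k = 1 yields exactly the palindromes):

```python
def reverse_multiples(k, limit=100_000):
    """Integers n < limit whose digit reversal equals k * n."""
    return [n for n in range(1, limit) if k * n == int(str(n)[::-1])]

for k in (4, 9):
    print(k, reverse_multiples(k)[:4])   # e.g. 2178 * 4 = 8712 and 1089 * 9 = 9801
```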
|
http://arxiv.org/abs/1503.07848v2
|
Fix a planar graph $G$ and a list-assignment $L$ with $|L(v)|=10$ for all $v\in V(G)$. Let $\alpha$ and $\beta$ be $L$-colorings of $G$. A recoloring sequence from $\alpha$ to $\beta$ is a sequence of $L$-colorings, beginning with $\alpha$ and ending with $\beta$, such that each successive pair in the sequence differs in the color on a single vertex of $G$. We show that there exists a constant $C$ such that for all choices of $\alpha$ and $\beta$ there exists a recoloring sequence $\sigma$ from $\alpha$ to $\beta$ that recolors each vertex at most $C$ times. In particular, $\sigma$ has length at most $C|V(G)|$. This confirms a conjecture of Dvo\v{r}\'{a}k and Feghali. For our proof, we introduce a new technique for quickly showing that many configurations are reducible. We believe this method may be of independent interest and will have application to other problems in this area.
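To make the definitions concrete, the checker below verifies the two requirements on a recoloring sequence stated above: every step is a proper L-coloring, and consecutive colorings differ on exactly one vertex (the data-structure choices are ours):

```python
def is_valid_recoloring_sequence(G, L, seq):
    """G: dict vertex -> set of neighbours; L: dict vertex -> allowed colors;
    seq: list of colorings, each a dict vertex -> color."""
    for col in seq:
        if any(col[v] not in L[v] for v in G):                 # respects the lists
            return False
        if any(col[u] == col[v] for u in G for v in G[u]):     # proper coloring
            return False
    return all(sum(a[v] != b[v] for v in G) == 1               # one vertex per step
               for a, b in zip(seq, seq[1:]))

# toy example on a single edge u-v with 10-element lists
G = {"u": {"v"}, "v": {"u"}}
L = {x: set(range(10)) for x in G}
print(is_valid_recoloring_sequence(G, L, [{"u": 0, "v": 1}, {"u": 2, "v": 1}]))  # True
```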
|
https://arxiv.org/abs/2411.00679v2
|
We report the first quantum key distribution (QKD) systems capable of
delivering sustainable, real-time secure keys continuously at rates exceeding
10 Mb/s. To achieve such rates, we developed high speed post-processing
modules, achieving maximum data throughputs of 60 MC/s, 55 Mb/s, and 108 Mb/s
for standalone operation of sifting, error correction and privacy amplification
modules, respectively. The photonic layer of the QKD systems features
high-speed single photon detectors based on self-differencing InGaAs avalanche
photodiodes, phase encoding using asymmetric Mach-Zehnder interferometer, and
active stabilization of the interferometer phase and photon polarisation. An
efficient variant of the decoy-state BB84 protocol is implemented for security
analysis, with a large dataset size of $10^8$ bits selected to mitigate
finite-size effects. Over a 2 dB channel, a record secure key rate of 13.72
Mb/s has been achieved averaged over 4.4 days of operation. We confirm the
robustness and long-term stability on a second QKD system continuously running
for 1 month without any user intervention.
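Of the three post-processing stages, privacy amplification is typically realized as universal hashing; a common construction (shown for illustration, not necessarily the exact one used in these systems) is a binary Toeplitz-matrix hash:

```python
import numpy as np
from scipy.linalg import toeplitz

def privacy_amplification(key_bits, m, seed_bits):
    """Compress n corrected bits to m secret bits with an m x n binary Toeplitz hash.
    Requires m + n - 1 uniformly random seed bits."""
    n = len(key_bits)
    assert len(seed_bits) == m + n - 1
    T = toeplitz(seed_bits[:m], seed_bits[m - 1:])   # first column, first row
    return (T @ key_bits) % 2

rng = np.random.default_rng(7)
key = rng.integers(0, 2, 1024)                       # sifted, error-corrected bits
secret = privacy_amplification(key, 512, rng.integers(0, 2, 512 + 1024 - 1))
```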
|
http://arxiv.org/abs/1807.04484v1
|
High resolution images are widely used in our daily life, whereas high-speed video capture is challenging due to the low frame rate of cameras working in high-resolution mode. Digging deeper, the main bottleneck lies in the low throughput of existing imaging systems. Towards this end, snapshot compressive imaging (SCI) was proposed as a promising solution to improve the throughput of imaging systems by compressive sampling and computational reconstruction. During acquisition, multiple high-speed images are encoded and collapsed into a single measurement. After this, algorithms are employed to retrieve the video frames from the coded snapshot. Recently developed Plug-and-Play (PnP) algorithms make SCI reconstruction possible in large-scale problems. However, the lack of high-resolution encoding systems still precludes SCI's wide application. In this paper, we build a novel hybrid coded aperture snapshot compressive imaging (HCA-SCI) system by incorporating a dynamic liquid crystal on silicon and a high-resolution lithography mask. We further implement a PnP reconstruction algorithm with cascaded denoisers for high quality reconstruction. Based on the proposed HCA-SCI system and algorithm, we achieve a 10-megapixel SCI system to capture high-speed scenes, leading to a high throughput of 4.6G voxels per second. Both simulation and real data experiments verify the feasibility and performance of our proposed HCA-SCI scheme.
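For intuition about the computational reconstruction, a minimal GAP-style Plug-and-Play loop for the SCI forward model y = sum_t Phi_t * x_t is sketched below, with a Gaussian filter standing in for the strong learned denoisers used in practice; shapes and parameters are illustrative.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def gap_denoise(y, Phi, iters=50, sigma=1.0):
    """y: (H, W) snapshot; Phi: (T, H, W) binary masks; returns x: (T, H, W)."""
    Phi_sum = np.sum(Phi ** 2, axis=0)
    Phi_sum[Phi_sum == 0] = 1.0
    x = Phi * (y / Phi_sum)[None]                          # back-projection initialisation
    for _ in range(iters):
        yb = np.sum(Phi * x, axis=0)                       # re-apply the forward model
        x = x + Phi * ((y - yb) / Phi_sum)[None]           # projection onto the data
        x = np.stack([gaussian_filter(f, sigma) for f in x])   # plug-in denoiser
    return x

T, H, W = 8, 64, 64
rng = np.random.default_rng(0)
Phi = rng.integers(0, 2, (T, H, W)).astype(float)
truth = rng.random((T, H, W))
rec = gap_denoise(np.sum(Phi * truth, axis=0), Phi)
```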
|
https://arxiv.org/abs/2106.15765v2
|