| id | published | title | description | link | category | image |
|---|---|---|---|---|---|---|
d39cb985de21cceb2bcfeb220184e4283b42b4d92625a03970aac73c89c43e7a
|
2026-01-04T08:26:00+00:00
|
Covered calls: my new passive income strategy for 2026?
|
Aside from dividend shares, genuine passive income isn’t easy to come by. But there’s a strategy I’ve been considering recently involving options. Selling covered calls can be a way of generating extra income from a portfolio of stocks, even if they don’t pay dividends. So is it something I should start doing in 2026? Call options allow someone to buy a particular stock at a specified price (the strike price) before a certain time. And one way to make money is by selling these to someone else. Doing this without owning the stock in question is one of the fastest ways to blow up a portfolio. But the risk is much more limited for someone who owns the shares themselves. The share price can rise past the strike price, and in that situation the investor has to sell at below the new market value. And that results in missed returns. If this doesn’t happen, though, the investor gets to keep whatever premium the buyer pays for the option. And that looks like it can be a valuable source of passive income. As an example, I own Amazon (NASDAQ:AMZN) shares in my portfolio. Right now, I can sell a call option with a $250 strike price that expires at the end of January for $108. That means I get $108 right now. And if the share price stays below $250 (it’s currently $232) until the end of the month, nothing happens other than I keep the cash as passive income. If the stock goes above that level – suppose it reaches $260 – I’ll have to sell it at $250. So my potential profit until the end of January is limited to 7.75% plus the $108 I get for selling the option. A US option represents 100 shares, so $108 is another 0.45%. And that’s the equation I have to weigh up – a 0.45% guaranteed return in exchange for limiting my profits to 8.2% until the end of the month. I think the stock market underestimates Amazon in two ways. These are the strength of its advertising platform and its artificial intelligence (AI) potential going forward.
The firm’s advertising business has been growing strongly and its Trainium chips are more efficient than their Nvidia counterparts. These look like impressive growth avenues to me. The biggest risk is Amazon’s AI infrastructure spending. The stock market is just starting to question whether the company’s capital commitments on this front are going to pay off. My own view is that the share price adequately reflects these risks. That’s why it’s on my buy list and why I don’t plan on selling covered calls to try and generate extra income. The passive income generated by selling call options could help reduce my losses if Amazon shares fall in the next month. And I’m definitely not ruling this out as a possibility. Selling covered calls, though, isn’t a strategy I want to pursue right now. Being able to benefit from rising share prices is an important part of my investing plan. Forgoing this on a temporary basis by selling options might seem like an attractive idea. But if I’m right about the stocks I own, it’s something I could live to regret in the long term. The post Covered calls: my new passive income strategy for 2026? appeared first on The Motley Fool UK. When investing expert Mark Rogers has a stock tip, it can pay to listen. After all, the flagship Motley Fool Share Advisor newsletter he has run for nearly a decade has provided thousands of paying members with top stock recommendations from the UK and US markets. And right now, Mark thinks there are 6 standout stocks that investors should consider buying. Want to see if Amazon made the list? More reading Stephen Wright has positions in Amazon. The Motley Fool UK has recommended Amazon and Nvidia. Views expressed on the companies mentioned in this article are those of the writer and therefore may differ from the official recommendations we make in our subscription services such as Share Advisor, Hidden Winners and Pro. 
Here at The Motley Fool we believe that considering a diverse range of insights makes us better investors.
|
https://www.fool.co.uk/2026/01/04/covered-calls-my-new-passive-income-strategy-for-2026/
|
Business & Finance
|
svg
|
03fe4757e2bd78a47d1ced278ad16c40c84978e3143070ba258c850a48d7aaa4
|
2026-01-04T08:16:00+00:00
|
Why this FTSE 250 stock is my first buy in 2026
|
I’ve received a dividend in my Stocks and Shares ISA and I’m going to use it to buy shares in Vistry (LSE:VTY). That will make it my first investment of 2026. The stock did fairly well in 2025, but I think there’s a lot more to come. And I’m not convinced the market as a whole is fully appreciating this company’s potential. Vistry shares are still 50% below where they were at the end of 2024. The big reason for this is that the company is recovering from an accounting irregularity in one part of its business. This weighed on profits in 2024 and 2025. But while it’s still expected to have an impact on the firm’s financial performance in 2026, the impact is set to be much lower. The hit to profits was £91.5m in 2024 and £50m in 2025. In 2026, though, the anticipated impact is around £10m – well below previous years. In other words, the effects of the accounting issue are starting to wear off in quite a big way. But the stock is still trading at a major discount to where it was before the news. That’s a big reason I’m buying the stock, but it’s not the only one. Vistry is a housebuilder, but it doesn’t come with a lot of the structural challenges that other industry participants do. Unlike other builders, the company focuses on building through partnerships with local authorities, housing associations, and private landlords. This has two advantages. The first is that it means there are pre-arranged buyers, which offers protection in a weak housing market. The second is that the firm doesn’t have to finance all of its work itself. That means it’s in a stronger position to return cash to shareholders. And it has around £700m – or 35% of its current market value – left to return in 2026 under its existing plan. I’m very positive about Vistry for 2026 and beyond. But its unique structure brings some important risks that are worth focusing on.
Operating through partnerships requires co-operation with other organisations that might have different incentives. And that can lead to slower decision-making or disagreements. Another potential issue is that changes in government priorities can impact the viability of projects. Vistry can’t control these and it’s much more exposed to them than other builders. Neither of these has been a major issue recently, but they can’t be ignored. So while I see a buying opportunity, I’m mindful that investments always come with their own risks. I’m expecting Vistry to do well in 2026. Profits should increase as the impact of the accounting irregularities wears off and the partnership structure should facilitate strong capital returns. Despite this, the stock is well below where it was a couple of years ago. That’s why it’s the first stock I’m buying for my portfolio in 2026 and I’m planning to keep buying if the stock stays down. The post Why this FTSE 250 stock is my first buy in 2026 appeared first on The Motley Fool UK. When investing expert Mark Rogers has a stock tip, it can pay to listen. After all, the flagship Motley Fool Share Advisor newsletter he has run for nearly a decade has provided thousands of paying members with top stock recommendations from the UK and US markets. And right now, Mark thinks there are 6 standout stocks that investors should consider buying. Want to see if Vistry Group Plc made the list? More reading Stephen Wright has positions in Vistry Group Plc. The Motley Fool UK has recommended Vistry Group Plc. Views expressed on the companies mentioned in this article are those of the writer and therefore may differ from the official recommendations we make in our subscription services such as Share Advisor, Hidden Winners and Pro. Here at The Motley Fool we believe that considering a diverse range of insights makes us better investors.
|
https://www.fool.co.uk/2026/01/04/why-this-ftse-250-stock-is-my-first-buy-in-2026/
|
Business & Finance
|
svg
|
02b9a9cc9aecfcb86231c6c1914194c8b8e2eed41d82520a2212fd7220001667
|
2026-01-04T08:15:00+00:00
|
How much do you need in a Stocks and Shares ISA to target £10,000 of passive income in 2026?
|
It’s that time of year when I look back and see how my Stocks and Shares ISA has performed over the past 12 months. To be honest, it’s done okay but I think there’s room for improvement. To try and do better in 2026, I’m sticking with my strategy of holding mainly dividend shares. I like the idea of generating an income stream from doing very little. But I prefer to reinvest the payouts I receive. In my opinion, compounding is the secret to long-term investment success. Over the years, the UK stock market has established a reputation for being home to some of the best dividend payers around. For example, at the end of 2025, the S&P 500 was yielding around 1.3%. By contrast, based on amounts paid over the past year, the FTSE 100’s yield is 3.2%. This might not sound like much of a difference. But £10,000 invested over 25 years would grow by £8,167 more at the higher rate. This assumes all dividends are used to buy more shares. Of course, my example ignores any capital growth (or losses). But assuming the Footsie’s going to yield 3.2% in 2026, it means a Stocks and Shares ISA would need to be worth £312,500 to generate £10,000 of passive income. However, the 10 highest-yielding shares on the index currently average a 6.6% yield. Using this figure, an ISA would have to be valued at £151,515 to earn a five-figure second income this year. However, there can never be any guarantees when it comes to dividends. For those who don’t have this kind of money in an ISA, there’s no need to be disheartened. By taking a long-term view, I reckon it’s possible to get there over time. Land Securities Group (LSE:LAND) is one high-yielding share I believe is worth considering. It’s a real estate investment trust (REIT), which means it must pay dividends each year equivalent to 90% of its qualifying profit. Please note that tax treatment depends on the individual circumstances of each client and may be subject to change in future.
The content in this article is provided for information purposes only. It is not intended to be, nor does it constitute, any form of tax advice. In recent years, the group’s been focusing on campuses, retail parks and logistics facilities. It says it’s seeing “clear positive momentum” across all parts of its business. In November, it upgraded its profit guidance for the year ending 31 March 2026, saying it expects net rental income to grow by 4%-5%. Previously, it was predicting a rise of 3%-4%. This could be good news for those hoping for an increase in its already-generous dividend. Based on amounts declared over the past 12 months, the stock’s currently (29 December) yielding 6.7%. But like many in the sector, its balance sheet contains plenty of debt. This means its level of borrowing is high relative to its EBITDA (earnings before interest, tax, depreciation, and amortisation). Another potential issue is that the commercial property sector’s particularly sensitive to an economic downturn. Even so, the group’s shares trade at a significant discount to net asset value. Also, its portfolio contains some high-profile properties with most of its tenancy agreements providing for inflation-linked rent increases. When combined with its attractive dividend, I think the stock’s worth considering as part of a well-balanced diversified portfolio. However, for those who are wary of the property sector, I reckon there are plenty of other UK shares paying attractive dividends at the moment. The post How much do you need in a Stocks and Shares ISA to target £10,000 of passive income in 2026? appeared first on The Motley Fool UK. When investing expert Mark Rogers has a stock tip, it can pay to listen. After all, the flagship Motley Fool Share Advisor newsletter he has run for nearly a decade has provided thousands of paying members with top stock recommendations from the UK and US markets.
And right now, Mark thinks there are 6 standout stocks that investors should consider buying. Want to see if Land Securities Group Plc made the list? More reading James Beard has no position in any of the shares mentioned. The Motley Fool UK has recommended Land Securities Group Plc. Views expressed on the companies mentioned in this article are those of the writer and therefore may differ from the official recommendations we make in our subscription services such as Share Advisor, Hidden Winners and Pro. Here at The Motley Fool we believe that considering a diverse range of insights makes us better investors.
|
https://www.fool.co.uk/2026/01/04/how-much-do-you-need-in-a-stocks-and-shares-isa-to-target-10000-of-passive-income-in-2026/
|
Business & Finance
|
svg
|
c6668c3e157e72cefd46bda28a9a590dd880bcd51be1a5fa9b7dcf31ba02f100
|
2026-01-04T07:21:12+00:00
|
'I'll be PM this time next year,' Starmer tells BBC
|
The prime minister tells BBC's Sunday with Laura Kuenssberg that May's upcoming elections will not be a "referendum" on his government.
|
https://www.bbc.com/news/articles/clygv1ngynjo?at_medium=RSS&at_campaign=rss
|
World & Politics
| |
6796b4d7e322c9b91dc13119342bfe14b1a90768b326cb38d61ef64bde158925
|
2026-01-04T03:56:20+00:00
|
Hawaii’s ‘Green Fee’ is in effect, but not all of it: What does that mean?
|
The Green Fee increases the Transient Accommodations Tax by 0.75 percentage points, to an 11% rate statewide.
|
https://thehill.com/homenews/nexstar_media_wire/5670966-hawaii-green-fee-cruise-tax/
|
World & Politics
| |
8014f9d0e56204ea6b9764de23904cbeb6d9b930f37bd073200f52f314786e85
|
2026-01-04T03:25:59+00:00
|
Maduro arrives in US to face charges after ouster
|
Venezuelan President Nicolás Maduro is now on U.S. soil after the Trump administration captured and removed him from power. Late on Saturday, Maduro reportedly arrived at the Metropolitan Detention Center in Brooklyn, N.Y., after being flown into the city by helicopter, senior law enforcement officials told Jackie Koppell and Andrew Fischer Espitallier of NewsNation, The…
|
https://thehill.com/regulation/court-battles/5671654-venezuelan-president-maduro-detained/
|
World & Politics
| |
4050e0cff854e7770655cd10fe885f35d48c70f6bd341e4ed83754814ff51cc5
|
2026-01-04T02:12:55+00:00
|
Texas leaders react to Venezuela airstrikes and Maduro’s capture
|
United States elected officials reacted on Saturday to the early-morning airstrike and the capture of Venezuelan President Nicolás Maduro.
|
https://thehill.com/homenews/nexstar_media_wire/5671628-texas-leaders-react-venezuela-operation/
|
World & Politics
| |
627bdfd2f3ab9d1466fdce4eebc34ea5b7d9aba497f366f2024619a694d231c8
|
2026-01-04T10:00:00+00:00
|
Did any cat breeds develop naturally?
|
Humans have undoubtedly bred cats to create certain breeds, but did any of these feline breeds emerge naturally?
|
https://www.livescience.com/animals/cats/did-any-cat-breeds-develop-naturally
|
Science
| |
f4f42b270d0cd8ea2a79d8b678a4a660ddfcf608d95e9517b978cafef00b9c6e
|
2026-01-04T07:22:00+00:00
|
Womanizer Coupons: Save 15% in December
|
Enjoy discounts on Duo Premium and more sex toys with our Womanizer coupon codes.
|
https://www.wired.com/story/womanizer-coupon/
|
Technology
| |
67d98406c07e60277c5412701a3cc6aab3a2502bbe301e1dfaea922e4ff2a95d
|
2026-01-04T06:50:00+00:00
|
Hydrow Discount Code: Save Up to $150 | January 2026
|
Save on rowers and accessories with Hydrow coupons, including an exclusive discount of $50.
|
https://www.wired.com/story/hydrow-discount-code/
|
Technology
| |
f1f498d524b6d99ec98848eab20dcd684a1af31e796b40629310b0c180149d49
|
2026-01-04T06:12:00+00:00
|
eBay Coupon Codes and Deals: Up to 60% Off Select Items
|
Save up to 60% on a selection of items at eBay, including electronics, home products, card games, and more.
|
https://www.wired.com/story/ebay-coupon-code/
|
Technology
| |
9f88f830bb4c6f081abc3b472d8d9b23ac4cfad51c2a39ac90f8c26afa787ce6
|
2026-01-04T06:00:00+00:00
|
20% Off Sephora Promo Code | January 2026
|
Earn more points on skincare purchases when you use our Sephora coupon.
|
https://www.wired.com/story/sephora-promo-code/
|
Technology
| |
2d3dc855adae7715719c8803cf072c00f92e267190d32b0b3679dc951c42bd04
|
2026-01-04T05:30:38+00:00
|
What We’re Expecting at CES 2026
|
Take a wild guess what tech buzzword is gonna color every gadget at the year's largest technology show.
|
https://gizmodo.com/what-were-expecting-at-ces-2026-2000704751
|
Technology
| |
a879a5f0f025a40e434e01893d8e6e1a03dbacaed15b9dc3981f281508656ec1
|
2026-01-06T00:00:00
|
The poetic life and death of a glow-worm
|
Nature, Published online: 06 January 2026; doi:10.1038/d41586-025-03990-w
|
https://www.nature.com/articles/d41586-025-03990-w
|
Academic Papers
|
svg
|
89e38174ff0758a757f29212f137ad57cdfb60c741d2f3b5755a9ad8f42d5004
|
2026-01-06T00:00:00
|
Rethink how we build AI to enable effective climate-change mitigation
|
Nature, Published online: 06 January 2026; doi:10.1038/d41586-025-04123-z
|
https://www.nature.com/articles/d41586-025-04123-z
|
Academic Papers
|
svg
|
c78548f7c501f0f8a478e0da7598cfb5f560fb4ce7a187e6344ea759cf4fd5a0
|
2026-01-06T00:00:00
|
Help small-scale gold miners to transition away from mercury use
|
Nature, Published online: 06 January 2026; doi:10.1038/d41586-025-04121-1
|
https://www.nature.com/articles/d41586-025-04121-1
|
Academic Papers
|
svg
|
23cc7268e159bf19b25e79bab3511f4d8c9e525b9a1b0e23e20c5201116d2546
|
2026-01-06T00:00:00
|
Retire ‘seminal’ from the scientific vocabulary
|
Nature, Published online: 06 January 2026; doi:10.1038/d41586-025-04124-y
|
https://www.nature.com/articles/d41586-025-04124-y
|
Academic Papers
|
svg
|
491a067812571f35cf04184efb0f4c29c7038732ef4a18fc20620119819cd9f7
|
2026-01-06T00:00:00
|
To improve resilience to climate change, track what endures
|
Nature, Published online: 06 January 2026; doi:10.1038/d41586-025-04122-0
|
https://www.nature.com/articles/d41586-025-04122-0
|
Academic Papers
|
svg
|
8a3bfbce105300fb32d5b3b42ab9cc805eefaf918f5edbc262f704d0c01595fd
|
2026-01-06T00:00:00
|
Why cancer can come back years later — and how to stop it
|
Nature, Published online: 06 January 2026; doi:10.1038/d41586-025-04149-3
|
https://www.nature.com/articles/d41586-025-04149-3
|
Academic Papers
|
svg
|
0f6d4b81e549758cdf261b26392d1bd682a4ab66dcfe7a9bbcf9c8e11a796868
|
2026-01-06T00:00:00
|
Defossilize our chemical world
|
Nature, Published online: 06 January 2026; doi:10.1038/d41586-026-00005-0
|
https://www.nature.com/articles/d41586-026-00005-0
|
Academic Papers
|
svg
|
11e8bc2a24432e0077f0b92894aee9f0932986d9030bd33ba6a2c3f2bc4277b3
|
2026-01-06T00:00:00
|
Artificial skin mimics the octopus’s art of disguise
|
Nature, Published online: 06 January 2026; doi:10.1038/d41586-025-03984-8
|
https://www.nature.com/articles/d41586-025-03984-8
|
Academic Papers
|
svg
|
3444dd4ef526ab43c4ac5ac208d18138977b523b41536903de71532f1b261ad8
|
2026-01-06T00:00:00
|
Jellyfish sleep like humans — even though they don’t have brains
|
Nature, Published online: 06 January 2026; doi:10.1038/d41586-026-00044-7
|
https://www.nature.com/articles/d41586-026-00044-7
|
Academic Papers
|
svg
|
a663b7b9b897e8c901da56ea3decef9c34fba520caa4645e50d24a6eadb8a06d
|
2026-01-05T00:00:00
|
I see Mozambique’s baboons as windows into hominid evolution
|
Nature, Published online: 05 January 2026; doi:10.1038/d41586-025-04153-7
|
https://www.nature.com/articles/d41586-025-04153-7
|
Academic Papers
|
svg
|
b68a513671e74dc9a9541b0d8141a346fa010ab5f1d2e676a25f8b8efb2a78c5
|
2026-01-05T00:00:00
|
Sunyaev–Zeldovich detection of hot intracluster gas at redshift 4.3
|
Nature, Published online: 05 January 2026; doi:10.1038/s41586-025-09901-3
|
https://www.nature.com/articles/s41586-025-09901-3
|
Academic Papers
|
svg
|
2b6748e4824ae83a0fb70d211e4e6f73aa892f999f9ecc55a8d1f7fa0d7b1014
|
2026-01-05T00:00:00
|
These women helped to shape quantum mechanics — it’s time to recognize them
|
Nature, Published online: 05 January 2026; doi:10.1038/d41586-025-04151-9
|
https://www.nature.com/articles/d41586-025-04151-9
|
Academic Papers
|
svg
|
6dbcaaf5391fce66d69085e55c6ddc742a1ee317367d3fd136253135117c21df
|
2026-01-05T00:00:00
|
Daily briefing: The human cells in our bodies that aren’t genetically ours
|
Nature, Published online: 05 January 2026; doi:10.1038/d41586-026-00043-8
|
https://www.nature.com/articles/d41586-026-00043-8
|
Academic Papers
|
svg
|
7a1faf02f1c4665293eae82be6541eb464b3e0f0c184603bd66b754b01fc7836
|
2026-01-05T00:00:00
|
Pandemic PhDs: graduates anxious, but optimistic, about the future
|
Nature, Published online: 05 January 2026; doi:10.1038/d41586-025-04152-8
|
https://www.nature.com/articles/d41586-025-04152-8
|
Academic Papers
|
svg
|
0aef402efe925058e507a30b4af0c3fc1d89c4b9cb66a8b8a956d7bb305a99c7
|
2026-01-04T00:00:00
|
Author Correction: Repulsions instruct synaptic partner matching in an olfactory circuit
|
Nature, Published online: 04 January 2026; doi:10.1038/s41586-025-10089-9
|
https://www.nature.com/articles/s41586-025-10089-9
|
Academic Papers
|
svg
|
fb182efedcecaf5430416021fa3074476fc3ddbd6fe9f49fe501c5a214a33f83
|
2026-01-04T00:00:00
|
Author Correction: Rewiring an olfactory circuit by altering cell-surface combinatorial code
|
Nature, Published online: 04 January 2026; doi:10.1038/s41586-025-10090-2
|
https://www.nature.com/articles/s41586-025-10090-2
|
Academic Papers
|
svg
|
19c84cedcc2b1a36348fb49521dd0fba718e7deb8a784fc10be45e883e2501b1
|
2026-01-07T00:00:00-05:00
|
GCRank: A Generative Contextual Comprehension Paradigm for Takeout Ranking Model
|
arXiv:2601.02361v1 Announce Type: new Abstract: The ranking stage serves as the central optimization and allocation hub in advertising systems, governing economic value distribution through eCPM and orchestrating the user-centric blending of organic and advertising content. Prevailing ranking models often rely on fragmented modules and hand-crafted features, limiting their ability to interpret complex user intent. This challenge is further amplified in location-based services such as food delivery, where user decisions are shaped by dynamic spatial, temporal, and individual contexts. To address these limitations, we propose a novel generative framework that reframes ranking as a context comprehension task, modeling heterogeneous signals in a unified architecture. Our architecture consists of two core components: the Generative Contextual Encoder (GCE) and the Generative Contextual Fusion (GCF). The GCE comprises three specialized modules: a Personalized Context Enhancer (PCE) for user-specific modeling, a Collective Context Enhancer (CCE) for group-level patterns, and a Dynamic Context Enhancer (DCE) for real-time situational adaptation. The GCF module then seamlessly integrates these contextual representations through low-rank adaptation. Extensive experiments confirm that our method achieves significant gains in critical business metrics, including click-through rate and platform revenue. We have successfully deployed our method on a large-scale food delivery advertising platform, demonstrating its substantial practical impact. This work pioneers a new perspective on generative recommendation and highlights its practical potential in industrial advertising systems.
|
https://arxiv.org/abs/2601.02361
|
Academic Papers
|
svg
|
5480a5abfe202c5bf867bca1a86bab8df727431fd56fe1c3f5ca936e0e57244c
|
2026-01-07T00:00:00-05:00
|
The Impact of LLM-Generated Reviews on Recommender Systems: Textual Shifts, Performance Effects, and Strategic Platform Control
|
arXiv:2601.02362v1 Announce Type: new Abstract: The rise of generative AI technologies is reshaping content-based recommender systems (RSes), which increasingly encounter AI-generated content alongside human-authored content. This study examines how the introduction of AI-generated reviews influences RS performance and business outcomes. We analyze two distinct pathways through which AI content can enter RSes: user-centric, in which individuals use AI tools to refine their reviews, and platform-centric, in which platforms generate synthetic reviews directly from structured metadata. Using a large-scale dataset of hotel reviews from TripAdvisor, we generate synthetic reviews using LLMs and evaluate their impact across the training and deployment phases of RSes. We find that AI-generated reviews differ systematically from human-authored reviews across multiple textual dimensions. Although both user- and platform-centric AI reviews enhance RS performance relative to models without textual data, models trained on human reviews consistently achieve superior performance, underscoring the quality of authentic human data. Human-trained models generalize robustly to AI content, whereas AI-trained models underperform on both content types. Furthermore, tone-based framing strategies (encouraging, constructive, or critical) substantially enhance platform-generated review effectiveness. Our findings highlight the strategic importance of platform control in governing the generation and integration of AI-generated reviews, ensuring that synthetic content complements recommendation robustness and sustainable business value.
|
https://arxiv.org/abs/2601.02362
|
Academic Papers
|
svg
|
7a891da3af3bd566deb65d77ef6573ae227a03d6704dfa6c6a665faf6bc3fc5b
|
2026-01-07T00:00:00-05:00
|
Acceptance of cybernetic avatars for capability enhancement: a large-scale survey
|
arXiv:2601.02363v1 Announce Type: new Abstract: Avatar embodiment experiences have the potential to enhance human capabilities by extending human senses, body, and mind. This study investigates social acceptance of robotic and virtual avatars as enablers of capability enhancement in six domains: identity exploration, well-being and behavioral transformation, expanded travel capabilities, expanded bodily and sensory abilities, cognitive augmentation, and immortality. We conducted a large-scale survey (n = 1001) in Dubai to explore acceptance of sixteen capability enhancement scenarios within these domains. The highest levels of agreement were observed for multilingual communication (77.5%) and learning capabilities (68.7%), followed by assisting individuals with reduced mobility (64.5%) and behavioral transformation (59.5%). Scenarios involving immortality through consciousness transfer received the least support (34.9%). These findings contribute to a deeper understanding of public attitudes toward avatar-based human enhancement and offer practical guidance for the responsible design, development, and integration of cybernetic avatars in society, ensuring their societal acceptance and fostering a harmonious human-avatar coexistence.
|
https://arxiv.org/abs/2601.02363
|
Academic Papers
|
svg
|
aa374b4022f5f882d1a3f3ee6566d1533f164435420034c1c6dcec21d8133d48
|
2026-01-07T00:00:00-05:00
|
Towards Trustworthy LLM-Based Recommendation via Rationale Integration
|
arXiv:2601.02364v1 Announce Type: new Abstract: Traditional recommender systems (RS) have been primarily optimized for accuracy and short-term engagement, often overlooking transparency and trustworthiness. Recently, platforms such as Amazon and Instagram have begun providing recommendation rationales to users, acknowledging their critical role in fostering trust and enhancing engagement; however, most existing systems still treat them as post-hoc artifacts. We propose an LLM-based recommender (LLM-Rec) that not only predicts items but also generates logically grounded rationales. Our approach leverages a self-annotated rationale dataset and instruction tuning in a rationale-first format, where the model generates an explanation before outputting the recommended item. By adopting this strategy and representing rationales in a chain-of-thought (CoT) style, LLM-Rec strengthens both interpretability and recommendation performance. Experiments on the Fashion and Scientific domains of the Amazon Review dataset demonstrate significant improvements over well-established baselines. To encourage reproducibility and future research, we publicly release a rationale-augmented recommendation dataset containing user histories, rationales, and recommended items.
|
https://arxiv.org/abs/2601.02364
|
Academic Papers
|
svg
|
64ae5fa6ee658ec073e4f41628acf648590f979d1cd5e66b0b76a593931a9197
|
2026-01-07T00:00:00-05:00
|
FUSE: Failure-aware Usage of Subagent Evidence for MultiModal Search and Recommendation
|
arXiv:2601.02365v1 Announce Type: new Abstract: Multimodal creative assistants decompose user goals and route tasks to subagents for layout, styling, retrieval, and generation. Retrieval quality is pivotal, yet failures can arise at several stages: understanding user intent, choosing content types, finding candidates (recall), or ranking results. Meanwhile, sending and processing images is costly, making naive multimodal approaches impractical. We present FUSE: Failure-aware Usage of Subagent Evidence for MultiModal Search and Recommendation. FUSE replaces most raw-image prompting with a compact Grounded Design Representation (GDR): a selection-aware JSON of canvas elements (image, text, shape, icon, video, logo), structure, styles, salient colors, and user selection provided by the Planner team. FUSE implements seven context budgeting strategies: comprehensive baseline prompting, context compression, chain-of-thought reasoning, mini-shot optimization, retrieval-augmented context, two-stage processing, and zero-shot minimalism. Finally, a pipeline attribution layer monitors system performance by converting subagent signals into simple checks: intent alignment, content-type/routing sanity, recall health (e.g., zero-hit and top-match strength), and ranking displacement analysis. We evaluate the seven context budgeting variants across 788 evaluation queries from diverse users and design templates (see Figure 3). Our systematic evaluation reveals that Context Compression achieves optimal performance across all pipeline stages, with 93.3% intent accuracy, 86.8% routing success (with fallbacks), 99.4% recall, and 88.5% NDCG@5. This approach demonstrates that strategic context summarization outperforms both comprehensive and minimal contextualization strategies.
|
https://arxiv.org/abs/2601.02365
|
Academic Papers
|
svg
|
c1fba15505e01381c2dda5562d9312c1a4a8d33b1fc7aac4041f5c9f14e450c1
|
2026-01-07T00:00:00-05:00
|
TextBridgeGNN: Pre-training Graph Neural Network for Cross-Domain Recommendation via Text-Guided Transfer
|
arXiv:2601.02366v1 Announce Type: new Abstract: Graph-based recommendation has achieved great success in recent years. The classical graph recommendation model utilizes ID embedding to store essential collaborative information. However, this ID-based paradigm faces challenges in transferring to a new domain, making it hard to build a pre-trained graph recommendation model. This phenomenon primarily stems from two inherent challenges: (1) the non-transferability of ID embeddings due to isolated domain-specific ID spaces, and (2) structural incompatibility between heterogeneous interaction graphs across domains. To address these issues, we propose TextBridgeGNN, a pre-training and fine-tuning framework that can effectively transfer knowledge from a pre-trained GNN to downstream tasks. We believe the key lies in how to build the relationship between domains. Specifically, TextBridgeGNN uses text as a semantic bridge to connect domains through multi-level graph propagation. During the pre-training stage, textual information is utilized to break the data islands formed by multiple domains, and hierarchical GNNs are designed to learn both domain-specific and domain-global knowledge with text features, ensuring the retention of collaborative signals and the enhancement of semantics. During the fine-tuning stage, a similarity transfer mechanism is proposed. This mechanism initializes ID embeddings in the target domain by transferring from semantically related nodes, successfully transferring the ID embeddings and graph pattern. Experiments demonstrate that TextBridgeGNN outperforms existing methods in cross-domain, multi-domain, and training-free settings, highlighting its ability to integrate Pre-trained Language Model (PLM)-driven semantics with graph-based collaborative filtering without costly language model fine-tuning or real-time inference overhead.
|
https://arxiv.org/abs/2601.02366
|
Academic Papers
|
svg
|
15f4d50281b57e9c95cb93cd7f9ead264dde85f441df9b3ff3341f08e2d152c5
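The similarity-transfer mechanism described in the TextBridgeGNN abstract above — initializing target-domain ID embeddings from semantically related source nodes — can be sketched roughly as follows. This is a toy illustration with hypothetical names and plain-list vectors, not the paper's implementation: it picks the top-k source nodes by text-embedding cosine similarity and averages their ID embeddings, weighted by similarity.

```python
import math

def cosine(u, v):
    """Cosine similarity between two vectors given as lists of floats."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def transfer_init(target_text_emb, source_text_embs, source_id_embs, top_k=2):
    """Initialize a target-domain ID embedding from the ID embeddings of the
    top_k source nodes whose *text* embeddings are most similar to the target's."""
    sims = sorted(
        ((cosine(target_text_emb, t), i) for i, t in enumerate(source_text_embs)),
        reverse=True,
    )[:top_k]
    total = sum(s for s, _ in sims) or 1.0  # avoid division by zero
    dim = len(source_id_embs[0])
    out = [0.0] * dim
    for s, i in sims:
        for d in range(dim):
            out[d] += (s / total) * source_id_embs[i][d]
    return out
```

In the paper's setting the text embeddings would come from a pre-trained language model; here they are stand-in vectors.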
|
2026-01-07T00:00:00-05:00
|
Cross-Platform Digital Discourse Analysis of the Israel-Hamas Conflict: Sentiment, Topics, and Event Dynamics
|
arXiv:2601.02367v1 Announce Type: new Abstract: The Israeli-Palestinian conflict remains one of the most polarizing geopolitical issues, with the October 2023 escalation intensifying online debate. Social media platforms, particularly Telegram, have become central to real-time news sharing, advocacy, and propaganda. In this study, we analyze Telegram, Twitter/X, and Reddit to examine how conflict narratives are produced, amplified, and contested across different digital spheres. Building on our previous work on Telegram discourse during the 2023 escalation, we extend the analysis longitudinally and cross-platform using an updated dataset spanning October 2023 to mid-2025. The corpus includes more than 187,000 Telegram messages, 2.1 million Reddit comments, and curated Twitter/X posts. We combine Latent Dirichlet Allocation (LDA), BERTopic, and transformer-based sentiment and emotion models to identify dominant themes, emotional dynamics, and propaganda strategies. Telegram channels provide unfiltered, high-intensity documentation of events; Twitter/X amplifies frames to global audiences; and Reddit hosts more reflective and deliberative discussions. Our findings reveal persistent negative sentiment, strong coupling between humanitarian framing and solidarity expressions, and platform-specific pathways for the diffusion of pro-Palestinian and pro-Israeli narratives. This paper offers three contributions: (1) a multi-platform, FAIR-compliant dataset on the Israel-Hamas war, (2) an integrated pipeline combining topic modeling, sentiment and emotion analysis, and spam filtering for large-scale conflict discourse, and (3) empirical insights into how platform affordances and affective publics shape the evolution of digital conflict communication.
|
https://arxiv.org/abs/2601.02367
|
Academic Papers
|
svg
|
14cd00aa12acb4c004fe1ffc13d8ccf1b52c632095c2e895ef6309105b11fb34
|
2026-01-07T00:00:00-05:00
|
Distillation-based Scenario-Adaptive Mixture-of-Experts for the Matching Stage of Multi-scenario Recommendation
|
arXiv:2601.02368v1 Announce Type: new Abstract: Multi-scenario recommendation is pivotal for optimizing user experience across diverse contexts. While Multi-gate Mixture-of-Experts (MMOE) thrives in ranking, its transfer to the matching stage is hindered by the blind optimization inherent to independent two-tower architectures and the parameter dominance of head scenarios. To address these structural and distributional bottlenecks, we propose Distillation-based Scenario-Adaptive Mixture-of-Experts (DSMOE). Specifically, we devise a Scenario-Adaptive Projection (SAP) module to generate lightweight, context-specific parameters, effectively preventing expert collapse in long-tail scenarios. Concurrently, we introduce a cross-architecture knowledge distillation framework, where an interaction-aware teacher guides the two-tower student to capture complex matching patterns. Extensive experiments on real-world datasets demonstrate DSMOE's superiority, particularly in significantly improving retrieval quality for under-represented, data-sparse scenarios.
|
https://arxiv.org/abs/2601.02368
|
Academic Papers
|
svg
|
118918a43c915a5c473bbf6959bd90b5d6a82b823cd1b8ac039b7c88f955d4d4
|
2026-01-07T00:00:00-05:00
|
Fair Distribution of Digital Payments: Balancing Transaction Flows for Regulatory Compliance
|
arXiv:2601.02369v1 Announce Type: new Abstract: The concentration of digital payment transactions in just two UPI apps, PhonePe and Google Pay, has raised concerns of a duopoly in India's digital financial ecosystem. To address this, the National Payments Corporation of India (NPCI) has mandated that no single UPI app should exceed 30 percent of total transaction volume. Enforcing this cap, however, poses a significant computational challenge: how to redistribute user transactions across apps without causing widespread user inconvenience while maintaining capacity limits? In this paper, we formalize this problem as the Minimum Edge Activation Flow (MEAF) problem on a bipartite network of users and apps, where activating an edge corresponds to a new app installation. The objective is to ensure a feasible flow respecting app capacities while minimizing additional activations. We further prove that Minimum Edge Activation Flow is NP-Complete. To address the computational challenge, we propose a scalable heuristic, named the Decoupled Two-Stage Allocation Strategy (DTAS), that exploits flow structure and capacity reuse. Experiments on large semi-synthetic transaction network data show that DTAS finds solutions close to the optimal ILP within seconds, offering a fast and practical way to enforce transaction caps fairly and efficiently.
|
https://arxiv.org/abs/2601.02369
|
Academic Papers
|
svg
|
927bb9f107c16883e0edb81fdf37fcda0f00285bc86579758386d66f55631dea
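The two-stage flavor of the allocation problem in the abstract above can be illustrated with a small greedy sketch. This is not the paper's DTAS algorithm, just a hedged toy version of the idea: serve each user's volume through an app they already have installed when capacity allows, and only "activate" a new edge (a new install) when every existing option is saturated.

```python
def redistribute(users, capacities):
    """Greedy sketch of capacity-respecting redistribution.
    users: dict user -> (volume, list of already-installed apps).
    capacities: dict app -> maximum volume it may carry.
    Returns (assignment, activations), where activations counts new installs."""
    remaining = dict(capacities)
    assignment = {}
    activations = 0
    for user, (volume, installed) in users.items():
        placed = False
        # Stage 1: reuse an already-installed app with spare capacity.
        for app in installed:
            if remaining.get(app, 0) >= volume:
                remaining[app] -= volume
                assignment[user] = app
                placed = True
                break
        # Stage 2: otherwise activate a new edge on the app with most headroom.
        if not placed:
            app = max(remaining, key=remaining.get)
            remaining[app] -= volume
            assignment[user] = app
            activations += 1
    return assignment, activations
```

With a 60-unit cap on app "A", three 30-unit users of "A" force exactly one new install elsewhere; the paper's ILP and heuristics target the same trade-off at national scale.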
|
2026-01-07T00:00:00-05:00
|
LLM-as-evaluator in Strategy Research: A Normative, Variance-Aware Protocol
|
arXiv:2601.02370v1 Announce Type: new Abstract: Large language models (LLMs) are becoming essential tools for strategy scholars who need to evaluate text corpora at scale. This paper provides a systematic analysis of the reliability of LLM-as-evaluator in strategy research. After classifying the typical ways in which LLMs can be deployed for evaluation purposes in strategy research, we draw on the specialised AI literature to analyse their properties as measurement instruments. Our empirical analysis reveals substantial instability in LLMs' evaluation output, stemming from multiple factors: the specific phrasing of prompts, the context provided, sampling procedures, extraction methods, and disagreements across different models. We quantify these effects and demonstrate how this unreliability can compromise the validity of research inferences drawn from LLM-generated evaluations. To address these challenges, we develop a comprehensive protocol that is variance-aware, normative, and auditable. We provide practical guidance for flexible implementation of this protocol, including approaches to preregistration and transparent reporting. By establishing these methodological standards, we aim to elevate LLM-based evaluation of business text corpora from its current ad hoc status to a rigorous, actionable, and auditable measurement approach suitable for scholarly research.
|
https://arxiv.org/abs/2601.02370
|
Academic Papers
|
svg
|
ff8f4a545afb48f42e594dbd7fbf3638348a7cdd7fbecb758b0aa2c7afc9bd51
|
2026-01-07T00:00:00-05:00
|
Permission Manifests for Web Agents
|
arXiv:2601.02371v1 Announce Type: new Abstract: The rise of Large Language Model (LLM)-based web agents represents a significant shift in automated interactions with the web. Unlike traditional crawlers that follow simple conventions, such as robots.txt, modern agents engage with websites in sophisticated ways: navigating complex interfaces, extracting structured information, and completing end-to-end tasks. Existing governance mechanisms were not designed for these capabilities. Without a way to specify what interactions are and are not allowed, website owners increasingly rely on blanket blocking and CAPTCHAs, which undermine beneficial applications such as efficient automation, convenient use of e-commerce services, and accessibility tools. We introduce agent-permissions.json, a robots.txt-style lightweight manifest where websites specify allowed interactions, complemented by API references where available. This framework provides a low-friction coordination mechanism: website owners only need to write a simple JSON file, while agents can easily parse and automatically implement the manifest's provisions. Website owners can then focus on blocking non-compliant agents, rather than agents as a whole. By extending the spirit of robots.txt to the era of LLM-mediated interaction, and complementing data use initiatives such as AIPref, the manifest establishes a compliance framework that enables beneficial agent interactions while respecting site owners' preferences.
|
https://arxiv.org/abs/2601.02371
|
Academic Papers
|
svg
|
3baa297abfdc71839a9daf50e6e3b28094d8eef9a648959ef8b6a08a45bb7aa8
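Since the abstract above does not give the manifest's schema, the fields below are purely illustrative assumptions. The sketch shows why the proposal is "low-friction": a robots.txt-style JSON file that an agent can parse and enforce in a few lines, with deny rules taking precedence over allow rules.

```python
import json

# Hypothetical agent-permissions.json content; the real schema may differ.
MANIFEST = json.loads("""
{
  "version": "1.0",
  "agents": {
    "*": {
      "allow": ["navigate", "extract"],
      "deny": ["checkout"]
    }
  },
  "api_reference": "https://example.com/api/docs"
}
""")

def is_allowed(manifest, agent, action):
    """Look up the per-agent rules, falling back to the wildcard entry.
    An explicit deny wins; anything not allowed is treated as denied."""
    rules = manifest["agents"].get(agent, manifest["agents"].get("*", {}))
    if action in rules.get("deny", []):
        return False
    return action in rules.get("allow", [])
```

Under these assumed fields, a compliant agent would check `is_allowed` before each interaction type and fall back to the listed API reference where one exists.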
|
2026-01-07T00:00:00-05:00
|
Improving News Recommendations through Hybrid Sentiment Modelling and Reinforcement Learning
|
arXiv:2601.02372v1 Announce Type: new Abstract: News recommendation systems rely on automated sentiment analysis to personalise content and enhance user engagement. Conventional approaches often struggle with ambiguity, lexicon inconsistencies, and limited contextual understanding, particularly in multi-source news environments. Existing models typically treat sentiment as a secondary feature, reducing their ability to adapt to users' affective preferences. To address these limitations, this study develops an adaptive, sentiment-aware news recommendation framework by integrating hybrid sentiment analysis with reinforcement learning. Using the BBC News dataset, a hybrid sentiment model combines VADER, AFINN, TextBlob, and SentiWordNet scores to generate robust article-level sentiment estimates. Articles are categorised as positive, negative, or neutral, and these sentiment states are embedded within a Q-learning architecture to guide the agent in learning optimal recommendation policies. The proposed system effectively identifies and recommends articles with aligned emotional profiles while continuously improving personalisation through iterative Q-learning updates. The results demonstrate that coupling hybrid sentiment modelling with reinforcement learning provides a feasible, interpretable, and adaptive approach for user-centred news recommendation.
|
https://arxiv.org/abs/2601.02372
|
Academic Papers
|
svg
|
c45db4ab48206193bddb43bcd9c6dc2275d41ac4ae10cf0c56ed393d35897442
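The two building blocks named in the abstract above — averaging several lexicon scores into a discrete sentiment state, and a tabular Q-learning update over those states — can be sketched as follows. The per-lexicon scores are assumed precomputed (standing in for VADER/AFINN/TextBlob/SentiWordNet outputs); thresholds and action names are illustrative, not the paper's.

```python
def hybrid_sentiment(scores):
    """Average per-lexicon scores (each assumed normalised to [-1, 1])
    and bucket the result into a discrete sentiment state."""
    avg = sum(scores) / len(scores)
    if avg > 0.05:
        return "positive"
    if avg < -0.05:
        return "negative"
    return "neutral"

def q_update(q, state, action, reward, next_state, actions, alpha=0.1, gamma=0.9):
    """Standard tabular Q-learning update:
    Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))."""
    best_next = max(q.get((next_state, a), 0.0) for a in actions)
    old = q.get((state, action), 0.0)
    q[(state, action)] = old + alpha * (reward + gamma * best_next - old)
```

Each recommendation step would observe the user's sentiment state, pick an article category as the action, and feed engagement back as the reward.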
|
2026-01-07T00:00:00-05:00
|
A Deep-SIC Channel Estimator Scheme in NOMA Network
|
arXiv:2601.02373v1 Announce Type: new Abstract: In 5G and next-generation mobile ad-hoc networks, reliable handover is a key requirement, which guarantees continuity in connectivity, especially for mobile users and in high-density scenarios. However, conventional handover triggers based on instantaneous channel measurements are prone to failures and the ping-pong effect due to outdated or inaccurate channel state information. To address this, we introduce Deep-SIC, a knowledge-based channel prediction model that employs a Transformer-based approach to predict channel quality and optimise handover decisions. Deep-SIC is a unique model that utilises Partially Decoded Data (PDD), a byproduct of successive interference cancellation (SIC) in NOMA, as a feedback signal to continually improve its predictions. This feedback loop enables the model to learn quickly and to stabilise its training. Our model learns 68\% faster than existing state-of-the-art algorithms, such as Graph-NOMA, while offering verifiable guarantees of stability and resilience to user mobility (Theorem~2). System-level simulations show that our strategy substantially enhances network performance: the handover failure rate is reduced by up to 40\%, and the ping-pong effect is mitigated, especially at vehicular speeds (e.g., 60 km/h). Moreover, Deep-SIC has a 20\% smaller normalised root mean square error (NRMSE) in low-SNR situations than state-of-the-art algorithms, with linear computational complexity, $O(K)$. This work introduces a new paradigm for robust and predictive mobility management in dynamic wireless networks.
|
https://arxiv.org/abs/2601.02373
|
Academic Papers
|
svg
|
c621733efb2b0db6e330831150b598196cabe2aa9a294559a95a209d7d4d0482
|
2026-01-07T00:00:00-05:00
|
A Lay User Explainable Food Recommendation System Based on Hybrid Feature Importance Extraction and Large Language Models
|
arXiv:2601.02374v1 Announce Type: new Abstract: Large Language Models (LLMs) have experienced strong development in recent years, with varied applications. This paper uses LLMs to develop a post-hoc process that provides more elaborate explanations of the results of food recommendation systems. By combining an LLM with a hybrid extraction of key variables using SHAP, we obtain dynamic, convincing and more comprehensive explanations for lay users, compared to those in the literature. This approach enhances user trust and transparency by making complex recommendation outcomes easier to understand for a lay user.
|
https://arxiv.org/abs/2601.02374
|
Academic Papers
|
svg
|
fec8639775fe4a3c136dbff937155b587789a2f06905eaeff1d98b11f16e0cbc
|
2026-01-07T00:00:00-05:00
|
LeafTutor: An AI Agent for Programming Assignment Tutoring
|
arXiv:2601.02375v1 Announce Type: new Abstract: High enrollment in STEM-related degree programs has created increasing demand for scalable tutoring support, as universities experience a shortage of qualified instructors and teaching assistants (TAs). To address this challenge, LeafTutor, an AI tutoring agent powered by large language models (LLMs), was developed to provide step-by-step guidance for students. LeafTutor was evaluated through real programming assignments. The results indicate that the system can deliver step-by-step programming guidance comparable to human tutors. This work demonstrates the potential of LLM-driven tutoring solutions to enhance and personalize learning in STEM education. If any reader is interested in collaborating with our team to improve or test LeafTutor, please contact Pu Tian (pu.tian@stockton.edu) or Yalong Wu (wuy@uhcl.edu).
|
https://arxiv.org/abs/2601.02375
|
Academic Papers
|
svg
|
a9a2248eed8d16f14cf1fe532d113564a94c82e1a1f8f51d792508fe502c2eb8
|
2026-01-07T00:00:00-05:00
|
A Secure Edge Gateway Architecture for Wi-Fi-Enabled IoT
|
arXiv:2601.02376v1 Announce Type: new Abstract: This paper presents a Secure Edge Gateway Architecture for Wi-Fi-Enabled IoT designed to strengthen local network protection without altering existing infrastructure. The proposed gateway acts as an intermediate control point between Wi-Fi access points and the core network, monitoring traffic, isolating untrusted devices, and preventing common wireless attacks such as spoofing, deauthentication, and unauthorized access. The design focuses on adaptive traffic filtering and lightweight policy enforcement instead of complex analytical models, making it suitable for medium-sized network environments. The prototype gateway was deployed in a real office with around 70 total devices, including 28 IoT units such as sensors, cameras, and smart controllers. Over ten days of continuous operation, the system reduced successful spoofing incidents by 87% and improved recovery time after deauthentication by 42%, while increasing network latency by only 3.1% and reducing throughput by less than 4% compared to a baseline WPA3 configuration. These results confirm that implementing security functions at the edge layer can significantly improve the resilience of Wi-Fi-enabled IoT environments without introducing noticeable overhead or requiring specialized hardware.
|
https://arxiv.org/abs/2601.02376
|
Academic Papers
|
svg
|
8aca228688b374eabb6b56f6b36d09403c3a1c1ad63714c639ed78967191411d
|
2026-01-07T00:00:00-05:00
|
Trust in LLM-controlled Robotics: a Survey of Security Threats, Defenses and Challenges
|
arXiv:2601.02377v1 Announce Type: new Abstract: The integration of Large Language Models (LLMs) into robotics has revolutionized their ability to interpret complex human commands and execute sophisticated tasks. However, such a paradigm shift introduces critical security vulnerabilities stemming from the ''embodiment gap'', a disconnect between the LLM's abstract reasoning and the physical, context-dependent nature of robotics. While security for text-based LLMs is an active area of research, existing solutions are often insufficient to address the unique threats to embodied robotic agents, where malicious outputs manifest not merely as harmful text but as dangerous physical actions. In this work, we present a systematic survey, summarizing the emerging threat landscape and corresponding defense strategies for LLM-controlled robotics. Specifically, we discuss a comprehensive taxonomy of attack vectors, covering topics such as jailbreaking, backdoor attacks, and multi-modal prompt injection. In response, we analyze and categorize a range of defense mechanisms, from formal safety specifications and runtime enforcement to multi-LLM oversight and prompt hardening. Furthermore, we review key datasets and benchmarks used to evaluate the robustness of these embodied systems. By synthesizing current research, this work highlights the urgent need for context-aware security solutions and provides a foundational roadmap for the development of safe, secure, and reliable LLM-controlled robotics.
|
https://arxiv.org/abs/2601.02377
|
Academic Papers
|
svg
|
f704eb9f89ed079455d7697c4d3d6117f17dc30725964bb2631b47ae9da9424b
|
2026-01-07T00:00:00-05:00
|
Modeling the Mental World for Embodied AI: A Comprehensive Review
|
arXiv:2601.02378v1 Announce Type: new Abstract: As the application of Embodied AI Agents in avatars, wearable devices, and robotic systems continues to deepen, their core research challenges have gradually shifted from physical environment interaction to the accurate understanding of social interactions. Traditional physical world models (PWM) focus on quantifiable physical attributes such as space and motion, failing to meet the needs of social intelligence modeling. In contrast, the Mental World Model (MWM), as a structured representation of humans' internal mental states, has become the critical cognitive foundation for embodied agents to achieve natural human-machine collaboration and dynamic social adaptation. However, current MWM research faces significant bottlenecks: a fragmented conceptual framework with vague boundaries between MWM and PWM, reasoning mechanisms that are disjointed across the technical pathways and applicable scenarios of different Theory of Mind (ToM) reasoning paradigms, and a detachment between evaluation and practice. To address these issues, this review systematically synthesizes over 100 authoritative studies to provide a comprehensive overview of MWM research for embodied AI. Its core contributions are threefold: First, it constructs a complete theoretical framework for MWM for the first time. Specifically, it distinguishes the essential differences between MWM and PWMs. Second, it systematically defines the key components of MWM through two paradigms for mental element representation. Third, it comprehensively analyzes two core ToM reasoning paradigms with 19 ToM methods. Finally, it also clarifies the integration trend of neuro-symbolic hybrid architectures, and synthesizes 26 ToM evaluation benchmarks. This work aims to promote the integration of embodied agents into human society and advance the in-depth development of human-machine collaborative interaction.
|
https://arxiv.org/abs/2601.02378
|
Academic Papers
|
svg
|
82cb1b342447a88a61b535a18705cab823cc605c8db585937e27165b90f2ee69
|
2026-01-07T00:00:00-05:00
|
Movement Primitives in Robotics: A Comprehensive Survey
|
arXiv:2601.02379v1 Announce Type: new Abstract: Biological systems exhibit a continuous stream of movements, consisting of sequential segments, that allow them to perform complex tasks in a creative and versatile fashion. This observation has led researchers towards identifying elementary building blocks of motion known as movement primitives, which are well-suited for generating motor commands in autonomous systems, such as robots. In this survey, we provide an encyclopedic overview of movement primitive approaches and applications in chronological order. Concretely, we present movement primitive frameworks as a way of representing robotic control trajectories acquired through human demonstrations. Within the area of robotics, movement primitives can encode basic motions at the trajectory level, such as how a robot would grasp a cup or the sequence of motions necessary to toss a ball. Furthermore, movement primitives have been developed with the desirable analytical properties of a spring-damper system, probabilistic coupling of multiple demonstrations, using neural networks in high-dimensional systems, and more, to address difficult challenges in robotics. Although movement primitives have widespread application to a variety of fields, the goal of this survey is to inform practitioners on the use of these frameworks in the context of robotics. Specifically, we aim to (i) present a systematic review of major movement primitive frameworks and examine their strengths and weaknesses; (ii) highlight applications that have successfully made use of movement primitives; and (iii) examine open questions and discuss practical challenges when applying movement primitives in robotics.
|
https://arxiv.org/abs/2601.02379
|
Academic Papers
|
svg
|
d554686bef0de7cae45e9f212bc4491bd7e545f57720f44cf384f0abbee663f6
|
2026-01-07T00:00:00-05:00
|
The Refutability Gap: Challenges in Validating Reasoning by Large Language Models
|
arXiv:2601.02380v1 Announce Type: new Abstract: Recent reports claim that Large Language Models (LLMs) have achieved the ability to derive new science and exhibit human-level general intelligence. We argue that such claims are not rigorous scientific claims, as they do not satisfy Popper's refutability principle (often termed falsifiability), which requires that scientific statements be capable of being disproven. We identify several methodological pitfalls in current AI research on reasoning, including the inability to verify the novelty of findings due to opaque and non-searchable training data, the lack of reproducibility caused by continuous model updates, and the omission of human-interaction transcripts, which obscures the true source of scientific discovery. Additionally, the absence of counterfactuals and data on failed attempts creates a selection bias that may exaggerate LLM capabilities. To address these challenges, we propose guidelines for scientific transparency and reproducibility for research on reasoning by LLMs. Establishing such guidelines is crucial for both scientific integrity and the ongoing societal debates regarding fair data usage.
|
https://arxiv.org/abs/2601.02380
|
Academic Papers
|
svg
|
d9d96f09e8d747e81ce07490aac012eb2fff149ad8d76e1fd886b8bf98f952f5
|
2026-01-07T00:00:00-05:00
|
TAG-HGT: A Scalable and Cost-Effective Framework for Inductive Cold-Start Academic Recommendation
|
arXiv:2601.02381v1 Announce Type: new Abstract: Inductive cold-start recommendation remains the "Achilles' Heel" of industrial academic platforms, where thousands of new scholars join daily without historical interaction records. While recent Generative Graph Models (e.g., HiGPT, OFA) demonstrate promising semantic capabilities, their prohibitive inference latency (often exceeding 13 minutes per 1,000 requests) and massive computational costs render them practically undeployable for real-time, million-scale applications. To bridge this gap between generative quality and industrial scalability, we propose TAG-HGT, a cost-effective neuro-symbolic framework. Adopting a decoupled "Semantics-First, Structure-Refined" paradigm, TAG-HGT utilizes a frozen Large Language Model (DeepSeek-V3) as an offline semantic factory and distills its knowledge into a lightweight Heterogeneous Graph Transformer (HGT) via Cross-View Contrastive Learning (CVCL). We present a key insight: while LLM semantics provide necessary global recall, structural signals offer the critical local discrimination needed to distinguish valid collaborators from semantically similar but socially unreachable strangers in dense embedding spaces. Validated under a strict Time-Machine Protocol on the massive OpenAlex dataset, TAG-HGT achieves a SOTA System Recall@10 of 91.97%, outperforming structure-only baselines by 20.7%. Most significantly, from an industrial perspective, TAG-HGT reduces inference latency by five orders of magnitude ($4.5 \times 10^{5}\times$) compared to generative baselines (from 780s down to 1.73 ms), and slashes inference costs from roughly $1.50 to under $0.001 per 1k queries. This 99.9% cost reduction democratizes high-precision academic recommendation.
|
https://arxiv.org/abs/2601.02381
|
Academic Papers
|
svg
|
877085d6f665800e79a2a43587d3d24bbf640a82631fffa8aca6f85bcefc2380
|
2026-01-07T00:00:00-05:00
|
How to Discover Knowledge for FutureG: Contextual RAG and LLM Prompting for O-RAN
|
arXiv:2601.02382v1 Announce Type: new Abstract: We present a retrieval-augmented question answering framework for 5G/6G networks, where the Open Radio Access Network (O-RAN) has become central to disaggregated, virtualized, and AI-driven wireless systems. While O-RAN enables multi-vendor interoperability and cloud-native deployments, its fast-changing specifications and interfaces pose major challenges for researchers and practitioners. Manual navigation of these complex documents is labor-intensive and error-prone, slowing system design, integration, and deployment. To address this challenge, we adopt Contextual Retrieval-Augmented Generation (Contextual RAG), a strategy in which candidate answer choices guide document retrieval and chunk-specific context to improve large language model (LLM) performance. This improvement over traditional RAG achieves more targeted and context-aware retrieval, which improves the relevance of documents passed to the LLM, particularly when the query alone lacks sufficient context for accurate grounding. Our framework is designed for dynamic domains where data evolves rapidly and models must be continuously updated or redeployed, all without requiring LLM fine-tuning. We evaluate this framework using the ORANBenchmark-13K dataset, and compare three LLMs, namely, Llama3.2, Qwen2.5-7B, and Qwen3.0-4B, across both Direct Question Answering (Direct Q&A) and Chain-of-Thought (CoT) prompting strategies. We show that Contextual RAG consistently improves accuracy over standard RAG and base prompting, while maintaining competitive runtime and CO2 emissions. These results highlight the potential of Contextual RAG to serve as a scalable and effective solution for domain-specific Q&A in ORAN and broader 5G/6G environments, enabling more accurate interpretation of evolving standards while preserving efficiency and sustainability.
|
https://arxiv.org/abs/2601.02382
|
Academic Papers
|
svg
|
4148534e637eb32ea154b9d8d608409df406a91a9180f2967f8bc5ab07c1f5c2
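The core move in the Contextual RAG abstract above — letting candidate answer choices guide retrieval rather than the query alone — can be sketched with a deliberately crude retriever. Token overlap stands in for a real dense or sparse retriever here; the function names and scoring are illustrative assumptions, not the paper's pipeline.

```python
def token_overlap(a, b):
    """Crude relevance score: number of shared lowercase tokens
    (a stand-in for a real retriever's similarity function)."""
    return len(set(a.lower().split()) & set(b.lower().split()))

def contextual_retrieve(query, options, chunks, top_k=1):
    """Contextual retrieval sketch: expand the query with every candidate
    answer option, score each document chunk against the expanded query,
    and return the top_k chunks to ground the LLM prompt."""
    expanded = query + " " + " ".join(options)
    ranked = sorted(chunks, key=lambda c: token_overlap(expanded, c), reverse=True)
    return ranked[:top_k]
```

The point of the expansion is visible even in this toy: a chunk mentioning option "E2" outranks an irrelevant chunk although "E2" never appears in the bare question.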
|
2026-01-07T00:00:00-05:00
|
The Future of the AI Summit Series
|
arXiv:2601.02383v1 Announce Type: new Abstract: This policy memo examines the evolution of the international AI Summit series, initiated at Bletchley Park in 2023 and continued through Seoul in 2024 and Paris in 2025, as a forum for cooperation on the governance of advanced artificial intelligence. It analyzes the factors underpinning the series' early successes and assesses challenges related to scope, participation, continuity, and institutional design. Drawing on comparisons with existing international governance models, the memo evaluates options for hosting arrangements, secretariat formats, participant selection, agenda setting, and meeting frequency. It proposes a set of design recommendations aimed at preserving the series' focus on advanced AI governance while balancing inclusivity, effectiveness, and long-term sustainability.
|
https://arxiv.org/abs/2601.02383
|
Academic Papers
|
svg
|
34a57291e6129dff55b56d5bdae0bd69996fb6130f3e5cff353cbdb141d8b482
|
2026-01-07T00:00:00-05:00
|
E-commerce Transactions in Islam: Fiqh Muamalah on The Validity of Buying and Selling on Digital Platforms
|
arXiv:2601.02384v1 Announce Type: new Abstract: The development of the digital economy has established e-commerce platforms as the primary space for commercial transactions for the Muslim community. However, innovations in features and business models on these platforms have given rise to Sharia issues that cannot be fully explained through conventional Fiqh Muamalah contract frameworks. This research aims to examine the compliance of transaction practices in e-commerce with Sharia principles, particularly in the six most frequently used transaction forms, namely information arbitrage-based dropshipping, Buy Now Pay Later financing schemes, digital representations, algorithmic marketing that encourages consumptive behavior, halal verification, and Pre-Order systems. The research method used is a Critical Literature Review with a normative juridical approach, through the study of arguments from the Qur'an, Hadith, DSN-MUI Fatwas, as well as classical and contemporary fiqh literature. The results show that dropshipping and PO practices are considered invalid if conducted with a direct sale contract (bai') due to the nonfulfillment of the element of possession (qabd) and the presence of high uncertainty (gharar). Both practices can be justified through the restructuring of contracts into wakalah bil ujrah, salam, or istishna'. Conventional BNPL is declared non-compliant with Sharia because it contains riba nasiah and riba qardh. Misleading digital representations and halal claims without valid verification fall into the category of tadlis, while dark-pattern-based algorithmic marketing contradicts maqashid al-syariah, especially the protection of wealth (hifz al-mal) and intellect (hifz al-'aql). This research emphasizes the need for a comprehensive Sharia audit covering contract legality, algorithmic ethics, and interface design to realize a digital economic ecosystem that is fair, transparent, and in accordance with Islamic Sharia.
|
https://arxiv.org/abs/2601.02384
|
Academic Papers
|
svg
|
6c946d27d55f892707847a93e80494a5b635a2d730d091ef789662e3815d978d
|
2026-01-07T00:00:00-05:00
|
Base Station Deployment under EMF Constraints by Deep Reinforcement Learning
|
arXiv:2601.02385v1 Announce Type: new Abstract: As 5G networks rapidly expand and 6G technologies emerge, characterized by dense deployments, millimeter-wave communications, and dynamic beamforming, the need for scalable simulation tools becomes increasingly critical. These tools must support efficient evaluation of key performance metrics such as coverage and radio-frequency electromagnetic field (RF-EMF) exposure, inform network design decisions, and ensure compliance with safety regulations. Moreover, base station (BS) placement is a crucial task in network design, where satisfying coverage requirements is essential. To address these challenges, building on our previous work, we first propose a conditional generative adversarial network (cGAN) that simultaneously predicts location-specific received signal strength (RSS) and EMF exposure from the network topology, represented as images. As a network design application, we propose a Deep Q-Network (DQN) framework, using the trained cGAN, for optimal BS deployment in the network. Compared to conventional ray tracing simulations, the proposed cGAN reduces inference and deployment time from several hours to seconds. Unlike a standalone cGAN, which provides static performance maps, the proposed GAN-DQN framework enables sequential decision making under coverage and exposure constraints, learning effective deployment strategies that directly solve the BS placement problem. This makes it well suited for real-time design and adaptation in dynamic scenarios in order to satisfy pre-defined, network-specific heterogeneous performance goals.
|
https://arxiv.org/abs/2601.02385
|
Academic Papers
|
svg
|
0fde6f66fdab02325e016673f910ce53cc4ca56736ca2a6708053145546acc61
|
2026-01-07T00:00:00-05:00
|
Tree of Preferences for Diversified Recommendation
|
arXiv:2601.02386v1 Announce Type: new Abstract: Diversified recommendation has attracted increasing attention from both researchers and practitioners, which can effectively address the homogeneity of recommended items. Existing approaches predominantly aim to infer the diversity of user preferences from observed user feedback. Nonetheless, due to inherent data biases, the observed data may not fully reflect user interests, where underexplored preferences can be overwhelmed or remain unmanifested. Failing to capture these preferences can lead to suboptimal diversity in recommendations. To fill this gap, this work aims to study diversified recommendation from a data-bias perspective. Inspired by the outstanding performance of large language models (LLMs) in zero-shot inference leveraging world knowledge, we propose a novel approach that utilizes LLMs' expertise to uncover underexplored user preferences from observed behavior, ultimately providing diverse and relevant recommendations. To achieve this, we first introduce Tree of Preferences (ToP), an innovative structure constructed to model user preferences from coarse to fine. ToP enables LLMs to systematically reason over the user's rationale behind their behavior, thereby uncovering their underexplored preferences. To guide diversified recommendations using uncovered preferences, we adopt a data-centric approach, identifying candidate items that match user preferences and generating synthetic interactions that reflect underexplored preferences. These interactions are integrated to train a general recommender for diversification. Moreover, we improve overall efficiency by dynamically selecting influential users during optimization. Extensive evaluations of both diversity and relevance show that our approach outperforms existing methods in most cases and achieves near-optimal performance in others, with reasonable inference latency.
|
https://arxiv.org/abs/2601.02386
|
Academic Papers
|
svg
|
6aef7162057fc353159823c3ffd9ed4f71cf6f3a0db051090c2d681b2f680f14
|
2026-01-07T00:00:00-05:00
|
Regional Resource Management for Service Provisioning in LEO Satellite Networks: A Topology Feature-Based DRL Approach
|
arXiv:2601.02387v1 Announce Type: new Abstract: Satellite networks with wide coverage are considered natural extensions to terrestrial networks for their long-distance end-to-end (E2E) service provisioning. However, the inherent topology dynamics of low earth orbit satellite networks and the uncertain network scales bring an inevitable requirement that resource chains for E2E service provisioning must be efficiently re-planned. Therefore, achieving highly adaptive resource management is of great significance in practical deployment applications. This paper first designs a regional resource management (RRM) mode and further formulates the RRM problem that can provide a unified decision space independent of the network scale. Subsequently, leveraging the RRM mode and deep reinforcement learning framework, we develop a topology feature-based dynamic and adaptive resource management algorithm to combat the varying network scales. The proposed algorithm successfully takes into account the fixed output dimension of the neural network and the changing resource chains for E2E service provisioning. The matched design of the service orientation information and phased reward function effectively improves the service performance of the algorithm under the RRM mode. The numerical results demonstrate that the proposed algorithm with the best convergence performance and fastest convergence rate significantly improves service performance for varying network scales, with gains over compared algorithms of more than 2.7%, 11.9%, and 10.2%, respectively.
|
https://arxiv.org/abs/2601.02387
|
Academic Papers
|
svg
|
9b88ce8f80d4738c376bcbd0aa338c234ac1f554c745e9b2616b936a58eab9cd
|
2026-01-07T00:00:00-05:00
|
Generative AI for Networking
|
arXiv:2601.02389v1 Announce Type: new Abstract: Generative Artificial Intelligence (GenAI) and Large Language Models (LLMs) are revolutionizing network management systems, paving the way towards fully autonomous and self-optimizing communication systems. These models enable networks to address complex decision-making tasks across both short-term operational scenarios and long-term strategic planning. Through natural language understanding, LLMs can analyze customer inquiries, predict network congestion patterns, and automate troubleshooting processes, leading to more efficient customer support and network maintenance. GenAI can optimize content delivery by generating personalized recommendations, improving user engagement, and dynamically adjusting network resources based on real-time demands, ultimately enhancing overall performance and user experience in telecommunication services. In this paper, we discuss the pivotal role of GenAI in advancing network performance and achieving the ultimate objective of self-adaptive networks. Moreover, we present a use case that leverages the self-attention mechanism of transformers to perform long-term traffic prediction. Harnessing these cutting-edge technologies demonstrates the transformative power of LLM and GenAI in revolutionizing telecommunication networks, elevating resilience and adaptability to unprecedented levels.
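The traffic-prediction use case rests on the transformer's self-attention. A minimal pure-Python sketch of scaled dot-product self-attention over a toy traffic series (values, dimensions, and the choice to use raw inputs as queries, keys, and values are all illustrative assumptions, not from the paper):

```python
import math

def softmax(xs):
    m = max(xs)  # subtract the max for numerical stability
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def self_attention(seq, d):
    # Toy scaled dot-product self-attention: queries, keys, and values
    # are all the raw input vectors (no learned projections).
    out = []
    for q in seq:
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d) for k in seq]
        w = softmax(scores)
        out.append([sum(wj * v[i] for wj, v in zip(w, seq)) for i in range(d)])
    return out

# Hypothetical hourly traffic features (load, occupancy), normalized.
series = [[0.1, 0.0], [0.9, 0.2], [0.8, 0.1], [0.2, 0.0]]
context = self_attention(series, 2)
```

Each output is a convex combination of the inputs, i.e. a context vector that blends every past observation by learned (here, raw-similarity) weights.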
|
https://arxiv.org/abs/2601.02389
|
Academic Papers
|
svg
|
92353c39ef7bb424ec305d66ab5eaeb457dbf5d202e3f46e63acb38c4fe90f51
|
2026-01-07T00:00:00-05:00
|
Breaking Rank - A Novel Unscented Kalman Filter for Parameter Estimations of a Lumped-Parameter Cardiovascular Model
|
arXiv:2601.02390v1 Announce Type: new Abstract: We make modifications to the unscented Kalman filter (UKF) which bestow almost complete practical identifiability upon a lumped-parameter cardiovascular model with 10 parameters and 4 output observables - a highly non-linear, stiff problem of clinical significance. The modifications overcome the challenging problems of rank deficiency when applying the UKF to parameter estimation. Rank deficiency usually means only a small subset of parameters can be estimated. Traditionally, pragmatic compromises are made, such as selecting an optimal subset of parameters for estimation and fixing non-influential parameters. Kalman filters are typically used for dynamical state tracking, to facilitate computing the control input u at every time step. However, for the purpose of parameter estimation, this constraint no longer applies. Our modification has transformed the utility of the UKF for parameter estimation, including minimally influential parameters, with excellent robustness (i.e., under severe noise corruption, challenging patho-physiology, and no prior knowledge of parameter distributions). The modified UKF algorithm is robust in recovering almost all parameters to over 98% accuracy, over 90% of the time, with a challenging target data set of 50 ten-parameter samples. We compare this to the original implementation of the UKF algorithm for parameter estimation and demonstrate a significant improvement.
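The paper's modifications are specific to its cardiovascular model, but the unscented machinery underneath can be illustrated with a one-dimensional unscented transform: propagate deterministic sigma points through a nonlinearity and recover the output statistics as weighted sample moments. A minimal sketch (the scaling parameter kappa and the quadratic test function are illustrative choices, not the authors'):

```python
import math

def unscented_transform(f, mean, var, kappa=2.0):
    # 1-D unscented transform: three sigma points through f, then
    # weighted sample mean and variance of the transformed points.
    spread = math.sqrt((1.0 + kappa) * var)
    pts = [mean, mean + spread, mean - spread]
    w = [kappa / (1.0 + kappa)] + [1.0 / (2.0 * (1.0 + kappa))] * 2
    ys = [f(p) for p in pts]
    y_mean = sum(wi * y for wi, y in zip(w, ys))
    y_var = sum(wi * (y - y_mean) ** 2 for wi, y in zip(w, ys))
    return y_mean, y_var

# Push N(1.0, 0.04) through a quadratic observation model; for a
# quadratic, the exact transformed mean is E[x^2] = 1 + 0.04 = 1.04.
m, v = unscented_transform(lambda x: x * x, 1.0, 0.04)
```

For polynomials up to second order the transform recovers the mean exactly, which is why sigma-point filters handle mild nonlinearity well without Jacobians.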
|
https://arxiv.org/abs/2601.02390
|
Academic Papers
|
svg
|
a61672a998bbed6b134368565718852d6e51787d892745fdf2a9ed394aca9ee9
|
2026-01-07T00:00:00-05:00
|
WearVox: An Egocentric Multichannel Voice Assistant Benchmark for Wearables
|
arXiv:2601.02391v1 Announce Type: new Abstract: Wearable devices such as AI glasses are transforming voice assistants into always-available, hands-free collaborators that integrate seamlessly with daily life, but they also introduce challenges like egocentric audio affected by motion and noise, rapid micro-interactions, and the need to distinguish device-directed speech from background conversations. Existing benchmarks largely overlook these complexities, focusing instead on clean or generic conversational audio. To bridge this gap, we present WearVox, the first benchmark designed to rigorously evaluate voice assistants in realistic wearable scenarios. WearVox comprises 3,842 multi-channel, egocentric audio recordings collected via AI glasses across five diverse tasks including Search-Grounded QA, Closed-Book QA, Side-Talk Rejection, Tool Calling, and Speech Translation, spanning a wide range of indoor and outdoor environments and acoustic conditions. Each recording is accompanied by rich metadata, enabling nuanced analysis of model performance under real-world constraints. We benchmark leading proprietary and open-source speech Large Language Models (SLLMs) and find that most real-time SLLMs achieve accuracies on WearVox ranging from 29% to 59%, with substantial performance degradation on noisy outdoor audio, underscoring the difficulty and realism of the benchmark. Additionally, we conduct a case study with two new SLLMs that perform inference with single-channel and multi-channel audio, demonstrating that multi-channel audio inputs significantly enhance model robustness to environmental noise and improve discrimination between device-directed and background speech. Our results highlight the critical importance of spatial audio cues for context-aware voice assistants and establish WearVox as a comprehensive testbed for advancing wearable voice AI research.
|
https://arxiv.org/abs/2601.02391
|
Academic Papers
|
svg
|
3324d1c2fde8da6df33acc4bb4e50eb781abab584e021601a1365834dfc8558f
|
2026-01-07T00:00:00-05:00
|
Self-Supervised Masked Autoencoders with Dense-Unet for Coronary Calcium Removal in limited CT Data
|
arXiv:2601.02392v1 Announce Type: new Abstract: Coronary calcification creates blooming artifacts in Computed Tomography Angiography (CTA), severely hampering the diagnosis of lumen stenosis. While Deep Convolutional Neural Networks (DCNNs) like Dense-Unet have shown promise in removing these artifacts via inpainting, they often require large labeled datasets which are scarce in the medical domain. Inspired by recent advancements in Masked Autoencoders (MAE) for 3D point clouds, we propose \textbf{Dense-MAE}, a novel self-supervised learning framework for volumetric medical data. We introduce a pre-training strategy that randomly masks 3D patches of the vessel lumen and trains the Dense-Unet to reconstruct the missing geometry. This forces the encoder to learn high-level latent features of arterial topology without human annotation. Experimental results on clinical CTA datasets demonstrate that initializing the Calcium Removal network with our MAE-based weights significantly improves inpainting accuracy and stenosis estimation compared to training from scratch, specifically in few-shot scenarios.
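The core MAE pre-training idea (mask random patches, reconstruct, and score only the hidden regions) can be sketched in miniature on a flat array standing in for a flattened volume. Patch size, mask ratio, and data below are hypothetical:

```python
import random

def mask_patches(volume, patch, ratio, rng):
    # Split the flat array into fixed-size patches and zero out a
    # random fraction of them (a stand-in for the mask token).
    n = len(volume) // patch
    hidden = set(rng.sample(range(n), int(n * ratio)))
    masked = list(volume)
    for p in hidden:
        for i in range(p * patch, (p + 1) * patch):
            masked[i] = 0.0
    return masked, hidden

def masked_loss(pred, target, hidden, patch):
    # MAE-style objective: reconstruction error on hidden patches only,
    # so visible regions contribute nothing to the gradient signal.
    err, cnt = 0.0, 0
    for p in hidden:
        for i in range(p * patch, (p + 1) * patch):
            err += (pred[i] - target[i]) ** 2
            cnt += 1
    return err / cnt

rng = random.Random(0)
vol = [float(i) for i in range(16)]          # stand-in for a flattened volume
masked, hidden = mask_patches(vol, patch=4, ratio=0.5, rng=rng)
perfect = masked_loss(vol, vol, hidden, patch=4)
```

A perfect reconstruction scores zero; the encoder only ever sees the masked copy, which is what forces it to learn the geometry of the hidden regions.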
|
https://arxiv.org/abs/2601.02392
|
Academic Papers
|
svg
|
61196479bbc45fb8c8d4c0ec754b228bda7d0b3479003fec7f6deb1ccd9692f0
|
2026-01-07T00:00:00-05:00
|
SLASh: Simulation of LISLs Aboard LEO Satellite Shells
|
arXiv:2601.02396v1 Announce Type: new Abstract: Recent advances in satellite technology have introduced a new frontier of wireless networking by establishing Low Earth Orbit (LEO) satellite networks that connect difficult-to-reach areas and improve global connectivity. These advances lack robust open-source simulation models that can highlight potential bottlenecks or wasted resources, costing both terrestrial users and the companies that provide these networks time and money. To that end, we propose SLASh, a highly customizable satellite network simulation that allows users to design a simulated network with specific characteristics and constructs it analogously to real-world conditions. Additionally, SLASh can generate abstract telemetry that can be simulated moving throughout the network, allowing users to compare network capabilities across a variety of frameworks.
|
https://arxiv.org/abs/2601.02396
|
Academic Papers
|
svg
|
8800934718681827545db8e902f9ac7d09471bb9cf879a299968644123d0fee6
|
2026-01-07T00:00:00-05:00
|
Evolutionary Algorithms for Computing Nash Equilibria in Dynamic Games
|
arXiv:2601.02397v1 Announce Type: new Abstract: Dynamic nonzero-sum games are widely used to model multi-agent decision making in control, economics, and related fields. Classical methods for computing Nash equilibria, especially in linear-quadratic settings, rely on strong structural assumptions and become impractical for nonlinear dynamics, many players, or long horizons, where multiple local equilibria may exist. We show through examples that such methods can fail to reach the true global Nash equilibrium even in relatively small games. To address this, we propose two population-based evolutionary algorithms for general dynamic games with linear or nonlinear dynamics and arbitrary objective functions: a co-evolutionary genetic algorithm and a hybrid genetic algorithm-particle swarm optimization scheme. Both approaches search directly over joint strategy spaces without restrictive assumptions and are less prone to getting trapped in local Nash equilibria, providing more reliable approximations to global Nash solutions.
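As an illustration of the co-evolutionary idea (not the authors' algorithm), two populations of mixed strategies for a one-shot Prisoner's Dilemma can be evolved against each other. Since defection strictly dominates, both populations should drift toward the known Nash equilibrium of mutual defection. A minimal pure-Python sketch with hypothetical payoffs and GA settings:

```python
import random

# One-shot Prisoner's Dilemma payoffs (T > R > P > S); defection
# strictly dominates, so the unique Nash equilibrium is mutual defection.
R, S, T, P = 3.0, 0.0, 5.0, 1.0

def payoff(p, q):
    # Expected row payoff when row cooperates w.p. p and column w.p. q.
    return p * q * R + p * (1 - q) * S + (1 - p) * q * T + (1 - p) * (1 - q) * P

def evolve(pop_a, pop_b, rng):
    # Fitness of each strategy = mean payoff against the other population;
    # keep the fitter half, refill with mutated copies (clipped to [0, 1]).
    fit = lambda p: sum(payoff(p, q) for q in pop_b) / len(pop_b)
    elite = sorted(pop_a, key=fit, reverse=True)[: len(pop_a) // 2]
    kids = [min(1.0, max(0.0, e + rng.gauss(0.0, 0.05))) for e in elite]
    return elite + kids

rng = random.Random(1)
a = [rng.random() for _ in range(10)]   # cooperation probabilities, player A
b = [rng.random() for _ in range(10)]   # cooperation probabilities, player B
for _ in range(80):
    a, b = evolve(a, b, rng), evolve(b, a, rng)
best_a, best_b = min(a), min(b)  # lowest cooperation prob = best response
```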
|
https://arxiv.org/abs/2601.02397
|
Academic Papers
|
svg
|
0a8b2447505cdbd9e5d2c1ed9ab23b5917b84d68d2f0288a650659db88581008
|
2026-01-07T00:00:00-05:00
|
AI-Native Integrated Sensing and Communications for Self-Organizing Wireless Networks: Architectures, Learning Paradigms, and System-Level Design
|
arXiv:2601.02398v1 Announce Type: new Abstract: Integrated Sensing and Communications (ISAC) is emerging as a foundational paradigm for next-generation wireless networks, enabling communication infrastructures to simultaneously support data transmission and environment sensing. By tightly coupling radio sensing with communication functions, ISAC unlocks new capabilities for situational awareness, localization, tracking, and network adaptation. At the same time, the increasing scale, heterogeneity, and dynamics of future wireless systems demand self-organizing network intelligence capable of autonomously managing resources, topology, and services. Artificial intelligence (AI), particularly learning-driven and data-centric methods, has become a key enabler for realizing this vision. This survey provides a comprehensive and system-level review of AI-native ISAC-enabled self-organizing wireless networks. We develop a unified taxonomy that spans: (i) ISAC signal models and sensing modalities, (ii) network state abstraction and perception from sensing-aware radio data, (iii) learning-driven self-organization mechanisms for resource allocation, topology control, and mobility management, and (iv) cross-layer architectures integrating sensing, communication, and network intelligence. We further examine emerging learning paradigms, including deep reinforcement learning, graph-based learning, multi-agent coordination, and federated intelligence that enable autonomous adaptation under uncertainty, mobility, and partial observability. Practical considerations such as sensing-communication trade-offs, scalability, latency, reliability, and security are discussed alongside representative evaluation methodologies and performance metrics. Finally, we identify key open challenges and future research directions toward deployable, trustworthy, and scalable AI-native ISAC systems for 6G and beyond.
|
https://arxiv.org/abs/2601.02398
|
Academic Papers
|
svg
|
5edb7e4e7d486274c68ffc12ab049a5dd082cf2f65e180bac234ef3396780367
|
2026-01-07T00:00:00-05:00
|
ProSoftArena: Benchmarking Hierarchical Capabilities of Multimodal Agents in Professional Software Environments
|
arXiv:2601.02399v1 Announce Type: new Abstract: Multimodal agents are making rapid progress on general computer-use tasks, yet existing benchmarks remain largely confined to browsers and basic desktop applications, falling short in professional software workflows that dominate real-world scientific and industrial practice. To close this gap, we introduce ProSoftArena, a benchmark and platform specifically for evaluating multimodal agents in professional software environments. We establish the first capability hierarchy tailored to agent use of professional software and construct a benchmark of 436 realistic work and research tasks spanning 6 disciplines and 13 core professional applications. To ensure reliable and reproducible assessment, we build an executable real-computer environment with an execution-based evaluation framework and uniquely incorporate a human-in-the-loop evaluation paradigm. Extensive experiments show that even the best-performing agent attains only a 24.4\% success rate on L2 tasks and completely fails on L3 multi-software workflow. In-depth analysis further provides valuable insights for addressing current agent limitations and more effective design principles, paving the way to build more capable agents in professional software settings. This project is available at: https://prosoftarena.github.io.
|
https://arxiv.org/abs/2601.02399
|
Academic Papers
|
svg
|
e9f99d7cd8779b5dbd1eda65ff25687baa7cd6203782ca1938f8b40ba9fa53dc
|
2026-01-07T00:00:00-05:00
|
Spiking Heterogeneous Graph Attention Networks
|
arXiv:2601.02401v1 Announce Type: new Abstract: Real-world graphs or networks are usually heterogeneous, involving multiple types of nodes and relationships. Heterogeneous graph neural networks (HGNNs) can effectively handle these diverse nodes and edges, capturing heterogeneous information within the graph, thus exhibiting outstanding performance. However, most methods of HGNNs usually involve complex structural designs, leading to problems such as high memory usage, long inference time, and extensive consumption of computing resources. These limitations pose certain challenges for the practical application of HGNNs, especially for resource-constrained devices. To mitigate this issue, we propose the Spiking Heterogeneous Graph Attention Networks (SpikingHAN), which incorporates the brain-inspired and energy-saving properties of Spiking Neural Networks (SNNs) into heterogeneous graph learning to reduce the computing cost without compromising the performance. Specifically, SpikingHAN aggregates metapath-based neighbor information using a single-layer graph convolution with shared parameters. It then employs a semantic-level attention mechanism to capture the importance of different meta-paths and performs semantic aggregation. Finally, it encodes the heterogeneous information into a spike sequence through SNNs, simulating bioinformatic processing to derive a binarized 1-bit representation of the heterogeneous graph. Comprehensive experimental results from three real-world heterogeneous graph datasets show that SpikingHAN delivers competitive node classification performance. It achieves this with fewer parameters, quicker inference, reduced memory usage, and lower energy consumption. Code is available at https://github.com/QianPeng369/SpikingHAN.
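The 1-bit spike representation at the heart of such models can be sketched with a simple integrate-and-fire rate encoder (threshold, time steps, and features below are hypothetical): every transmitted value is binary, yet the firing rates recover the real-valued inputs.

```python
def rate_encode(values, steps, theta=1.0):
    # Integrate-and-fire style rate coding: accumulate each real-valued
    # feature over time and emit a binary spike when it crosses theta.
    trains = []
    for v in values:
        acc, train = 0.0, []
        for _ in range(steps):
            acc += v
            if acc >= theta:
                train.append(1)
                acc -= theta
            else:
                train.append(0)
        trains.append(train)
    return trains

features = [0.25, 0.5, 1.0]   # hypothetical aggregated node features
trains = rate_encode(features, steps=8)
rates = [sum(t) / len(t) for t in trains]   # recovered firing rates
```

Downstream layers then operate on sparse 0/1 trains, which is what enables the multiplication-free, low-energy computation the abstract describes.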
|
https://arxiv.org/abs/2601.02401
|
Academic Papers
|
svg
|
2bae4e4890b7794cdc8ad75b52dbba4f1b4c5ef011eb9c44d1f21b0054b134de
|
2026-01-07T00:00:00-05:00
|
Auction-Driven Spectrum Allocation With AutoEncoder-Based Compression in Rural Wireless Networks: A Novel Framework for Reliable Telemedicine
|
arXiv:2601.02402v1 Announce Type: new Abstract: Rural healthcare faces numerous challenges, including limited access to specialized medical services and diagnostic equipment, which delays patient care. Enhancing the ability to transmit medical images and data from rural areas to urban hospitals via wireless networks is critical. However, bandwidth limitations, unreliable networks, and concerns over data security and privacy hinder efficient transmission. Additionally, the high data volume of medical content and the limited battery life of IoT devices pose further challenges. To address these challenges, data compression techniques such as Autoencoders (AEs) offer promising solutions by significantly reducing the communication overhead without sacrificing essential image quality or details. Additionally, spectrum allocation mechanisms in rural areas are often inefficient, leading to poor resource utilization. Auction theory presents a dynamic and adaptive approach to optimize spectrum allocation. This paper proposes a novel hybrid framework that integrates AE-based data compression with auction-based spectrum allocation, addressing both communication efficiency and spectrum utilization in rural wireless networks. Extensive simulations validate the framework's ability to improve spectrum utilization, transmission efficiency, and overall connectivity, offering a practical solution for enhancing rural telemedicine infrastructure.
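The compression step can be illustrated with the simplest possible autoencoder: a linear projection to a 1-D latent code and back, halving the values sent per measurement over the constrained rural link. The weights below are a hand-picked stand-in for learned parameters, and the data is hypothetical:

```python
import math

def encode(pt, w):
    # Project a 2-D measurement onto a 1-D latent code:
    # half as many values to transmit.
    return pt[0] * w[0] + pt[1] * w[1]

def decode(z, w):
    # Reconstruct the 2-D measurement from the latent code.
    return [z * w[0], z * w[1]]

# Stand-in for learned AE weights: unit vector along the data's main axis.
w = [1.0 / math.sqrt(2.0), 1.0 / math.sqrt(2.0)]

points = [[1.0, 1.0], [2.0, 2.0], [-0.5, -0.5]]   # hypothetical readings
recon = [decode(encode(p, w), w) for p in points]
err = max(abs(x - y) for p, r in zip(points, recon) for x, y in zip(p, r))
```

Because these points lie exactly on the latent axis, reconstruction is near-lossless here; real medical images need nonlinear encoders, but the bandwidth arithmetic is the same.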
|
https://arxiv.org/abs/2601.02402
|
Academic Papers
|
svg
|
ba80ab53d64f9739f1fb03cce209c500afa8866353138f05c8a86572047263ce
|
2026-01-07T00:00:00-05:00
|
PCEval: A Benchmark for Evaluating Physical Computing Capabilities of Large Language Models
|
arXiv:2601.02404v1 Announce Type: new Abstract: Large Language Models (LLMs) have demonstrated remarkable capabilities across various domains, including software development, education, and technical assistance. Among these, software development is one of the key areas where LLMs are increasingly adopted. However, when hardware constraints are considered (for instance, in physical computing, where software must interact with and control physical hardware), their effectiveness has not been fully explored. To address this gap, we introduce \textsc{PCEval} (Physical Computing Evaluation), the first benchmark in physical computing that enables a fully automatic evaluation of the capabilities of LLMs in both the logical and physical aspects of the projects, without requiring human assessment. Our evaluation framework assesses LLMs in generating circuits and producing compatible code across varying levels of project complexity. Through comprehensive testing of 13 leading models, \textsc{PCEval} provides the first reproducible and automatically validated empirical assessment of LLMs' ability to reason about fundamental hardware implementation constraints within a simulation environment. Our findings reveal that while LLMs perform well in code generation and logical circuit design, they struggle significantly with physical breadboard layout creation, particularly in managing proper pin connections and avoiding circuit errors. \textsc{PCEval} advances our understanding of AI assistance in hardware-dependent computing environments and establishes a foundation for developing more effective tools to support physical computing education.
|
https://arxiv.org/abs/2601.02404
|
Academic Papers
|
svg
|
9cb014904ef7ea63b65499e06c3dcf6393ad6984a94f5745b711a1dacb93b94a
|
2026-01-07T00:00:00-05:00
|
Evolving Personalities in Chaos: An LLM-Augmented Framework for Character Discovery in the Iterated Prisoner's Dilemma under Environmental Stress
|
arXiv:2601.02407v1 Announce Type: new Abstract: Standard simulations of the Iterated Prisoner's Dilemma (IPD) operate in deterministic, noise-free environments, producing strategies that may be theoretically optimal but fragile when confronted with real-world uncertainty. This paper addresses two critical gaps in evolutionary game theory research: (1) the absence of realistic environmental stressors during strategy evolution, and (2) the Interpretability Gap, where evolved genetic strategies remain opaque binary sequences devoid of semantic meaning. We introduce a novel framework combining stochastic environmental perturbations (God Mode) with Large Language Model (LLM)-based behavioral profiling to transform evolved genotypes into interpretable character archetypes. Our experiments demonstrate that strategies evolved under chaotic conditions exhibit superior resilience and present distinct behavioral phenotypes, ranging from Ruthless Capitalists to Diplomatic Enforcers. These phenotypes are readily classified by LLMs but remain nearly impossible to interpret through manual genome inspection alone. This work bridges evolutionary computation with explainable AI and provides a template for automated agent characterization in multi-agent systems.
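The effect of environmental stress on IPD strategies can be sketched by injecting action noise into an otherwise deterministic match. The noise model, payoffs, and strategy below are illustrative, not the paper's God Mode implementation: under noise, even two tit-for-tat players fall out of the mutual-cooperation payoff.

```python
import random

PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def play(strat_a, strat_b, rounds, noise, seed):
    # Each strategy maps the opponent's previous move to an action;
    # with probability `noise` the executed action is flipped.
    rng = random.Random(seed)
    prev_a = prev_b = "C"
    score_a = score_b = 0
    for _ in range(rounds):
        a, b = strat_a(prev_b), strat_b(prev_a)
        if rng.random() < noise:
            a = "D" if a == "C" else "C"
        if rng.random() < noise:
            b = "D" if b == "C" else "C"
        pa, pb = PAYOFF[(a, b)]
        score_a, score_b = score_a + pa, score_b + pb
        prev_a, prev_b = a, b
    return score_a, score_b

tit_for_tat = lambda opp_prev: opp_prev   # copy the opponent's last move

clean = play(tit_for_tat, tit_for_tat, 200, 0.0, seed=7)
noisy = play(tit_for_tat, tit_for_tat, 200, 0.1, seed=7)
```

A single flipped action triggers a retaliation echo, so the joint score under noise drops below the cooperative maximum of 6 points per round.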
|
https://arxiv.org/abs/2601.02407
|
Academic Papers
|
svg
|
6892ebc42f7264b269c19b1a4950577e472222047bf34663423f564e9df55d5a
|
2026-01-07T00:00:00-05:00
|
Expert-Guided Explainable Few-Shot Learning with Active Sample Selection for Medical Image Analysis
|
arXiv:2601.02409v1 Announce Type: new Abstract: Medical image analysis faces two critical challenges: scarcity of labeled data and lack of model interpretability, both hindering clinical AI deployment. Few-shot learning (FSL) addresses data limitations but lacks transparency in predictions. Active learning (AL) methods optimize data acquisition but overlook interpretability of acquired samples. We propose a dual-framework solution: Expert-Guided Explainable Few-Shot Learning (EGxFSL) and Explainability-Guided AL (xGAL). EGxFSL integrates radiologist-defined regions-of-interest as spatial supervision via Grad-CAM-based Dice loss, jointly optimized with prototypical classification for interpretable few-shot learning. xGAL introduces iterative sample acquisition prioritizing both predictive uncertainty and attention misalignment, creating a closed-loop framework where explainability guides training and sample selection synergistically. On the BraTS (MRI), VinDr-CXR (chest X-ray), and SIIM-COVID-19 (chest X-ray) datasets, we achieve accuracies of 92\%, 76\%, and 62\%, respectively, consistently outperforming non-guided baselines across all datasets. Under severe data constraints, xGAL achieves 76\% accuracy with only 680 samples versus 57\% for random sampling. Grad-CAM visualizations demonstrate guided models focus on diagnostically relevant regions, with generalization validated on breast ultrasound confirming cross-modality applicability.
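The spatial-supervision term can be sketched as a soft Dice loss between a flattened attention map and a binary expert ROI mask (the maps below are hypothetical; the paper applies this to Grad-CAM outputs):

```python
def dice_loss(attn, roi, eps=1e-6):
    # Soft Dice between a flattened attention map (values in [0, 1])
    # and a binary expert ROI mask: 1 - 2*|A.R| / (|A| + |R|).
    inter = sum(a * r for a, r in zip(attn, roi))
    total = sum(attn) + sum(roi)
    return 1.0 - (2.0 * inter + eps) / (total + eps)

roi     = [0, 0, 1, 1, 0, 0]                 # expert-marked lesion pixels
on_roi  = [0.0, 0.0, 1.0, 1.0, 0.0, 0.0]     # attention on the lesion
off_roi = [1.0, 1.0, 0.0, 0.0, 1.0, 1.0]     # attention elsewhere
good, bad = dice_loss(on_roi, roi), dice_loss(off_roi, roi)
```

Minimizing this term alongside the classification loss penalizes models whose attention drifts off the diagnostically relevant region, which is the interpretability mechanism the abstract describes.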
|
https://arxiv.org/abs/2601.02409
|
Academic Papers
|
svg
|
91ed45660ea1f33069dc5bf901609508732f8f7b07e45ebf7891d391b97d1769
|
2026-01-07T00:00:00-05:00
|
The Vibe-Check Protocol: Quantifying Cognitive Offloading in AI Programming
|
arXiv:2601.02410v1 Announce Type: new Abstract: The integration of Large Language Models (LLMs) into software engineering education has driven the emergence of ``Vibe Coding,'' a paradigm where developers articulate high-level intent through natural language and delegate implementation to AI agents. While proponents argue this approach modernizes pedagogy by emphasizing conceptual design over syntactic memorization, accumulating empirical evidence raises concerns regarding skill retention and deep conceptual understanding. This paper proposes a theoretical framework to investigate the research question: \textit{Is Vibe Coding a better way to learn software engineering?} We posit a divergence in student outcomes between those leveraging AI for acceleration versus those using it for cognitive offloading. To evaluate these educational trade-offs, we propose the \textbf{Vibe-Check Protocol (VCP)}, a systematic benchmarking framework incorporating three quantitative metrics: the \textit{Cold Start Refactor} ($M_{CSR}$) for modeling skill decay; \textit{Hallucination Trap Detection} ($M_{HT}$) based on signal detection theory to evaluate error identification; and the \textit{Explainability Gap} ($E_{gap}$) for quantifying the divergence between code complexity and conceptual comprehension. Through controlled comparisons, VCP aims to provide a quantitative basis for educators to determine the optimal pedagogical boundary: identifying contexts where Vibe Coding fosters genuine mastery and contexts where it introduces hidden technical debt and superficial competence.
|
https://arxiv.org/abs/2601.02410
|
Academic Papers
|
svg
|
a7b13ac4ed15b4849dd6dd6cec908f463000a9f09c9587dc14c05e2a43e1f883
|
2026-01-07T00:00:00-05:00
|
SpikySpace: A Spiking State Space Model for Energy-Efficient Time Series Forecasting
|
arXiv:2601.02411v1 Announce Type: new Abstract: Time-series forecasting often operates under tight power and latency budgets in fields like traffic management, industrial condition monitoring, and on-device sensing. These applications frequently require near real-time responses and low energy consumption on edge devices. Spiking neural networks (SNNs) offer event-driven computation and ultra-low power by exploiting temporal sparsity and multiplication-free computation. Yet existing SNN-based time-series forecasters often inherit complex transformer blocks, thereby losing much of the efficiency benefit. To solve the problem, we propose SpikySpace, a spiking state-space model (SSM) that reduces the quadratic cost in the attention block to linear time via selective scanning. Further, we replace dense SSM updates with sparse spike trains and execute selective scans only on spike events, thereby avoiding dense multiplications while preserving the SSM's structured memory. Because complex operations such as exponentials and divisions are costly on neuromorphic chips, we introduce simplified approximations of SiLU and Softplus to enable a neuromorphic-friendly model architecture. In matched settings, SpikySpace reduces estimated energy consumption by 98.73% and 96.24% compared to two state-of-the-art transformer based approaches, namely iTransformer and iSpikformer, respectively. In standard time series forecasting datasets, SpikySpace delivers competitive accuracy while substantially reducing energy cost and memory traffic. As the first full spiking state-space model, SpikySpace bridges neuromorphic efficiency with modern sequence modeling, marking a practical and scalable path toward efficient time series forecasting systems.
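The event-driven scan idea (run the recurrence only at spike events, with silent steps collapsed into a closed-form decay) can be checked on a toy diagonal linear recurrence h_t = a*h_{t-1} + s_t with binary spikes. All parameters here are hypothetical:

```python
def dense_scan(a, spikes):
    # Dense linear recurrence: update the state at every step.
    h, out = 0.0, []
    for s in spikes:
        h = a * h + s
        out.append(h)
    return out

def event_scan(a, spikes):
    # Event-driven version: silent steps collapse into one closed-form
    # decay a**(gap), so work is done only where a spike arrives.
    h, last, states = 0.0, -1, {}
    for t, s in enumerate(spikes):
        if s:
            h = h * a ** (t - last) + 1.0
            states[t] = h
            last = t
    return states

spikes = [1, 0, 0, 1, 0, 1, 0, 0]
dense = dense_scan(0.5, spikes)
sparse = event_scan(0.5, spikes)
```

The sparse scan touches only 3 of 8 steps yet reproduces the dense state exactly at every spike time, which is where the energy saving over dense SSM updates comes from.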
|
https://arxiv.org/abs/2601.02411
|
Academic Papers
|
svg
|
fd1d295bd42cf1d67e2f4d7ae282cd97452574d114e9aeeb3d3129a25ad41007
|
2026-01-07T00:00:00-05:00
|
Socially-Aware Recommender Systems Mitigate Opinion Clusterization
|
arXiv:2601.02412v1 Announce Type: new Abstract: Recommender systems shape online interactions by matching users with creators' content to maximize engagement. Creators, in turn, adapt their content to align with users' preferences and enhance their popularity. At the same time, users' preferences evolve under the influence of both suggested content from the recommender system and content shared within their social circles. This feedback loop generates a complex interplay between users, creators, and recommender algorithms, which is the key cause of filter bubbles and opinion polarization. We develop a social network-aware recommender system that explicitly accounts for this user-creator feedback interaction and strategically exploits the topology of the user's own social network to promote diversification. Our approach highlights how accounting for and exploiting the user's social network in recommender system design is crucial to mediate filter bubble effects while balancing content diversity with personalization. Provably, opinion clusterization is positively correlated with the influence of recommended content on user opinions. Ultimately, the proposed approach shows the power of socially-aware recommender systems in combating opinion polarization and clusterization phenomena.
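A toy opinion-dynamics sketch of this claim (illustrative, not the authors' model): users pulled only toward content matching their current opinion stay dispersed, while a recommender that mixes in the network-average opinion contracts the spread of opinions.

```python
def step(opinions, shown, alpha=0.3):
    # Each user's opinion moves a fraction alpha toward the content
    # the recommender shows them; the rest is inertia.
    return [(1 - alpha) * x + alpha * r for x, r in zip(opinions, shown)]

def spread(xs):
    return max(xs) - min(xs)

users = [-1.0, -0.5, 0.5, 1.0]   # hypothetical opinions on one topic

# Engagement-only recommender: show each user content at their own opinion.
echo = list(users)
for _ in range(20):
    echo = step(echo, echo)

# Socially-aware recommender: mix in the network-average opinion.
mixed = list(users)
for _ in range(20):
    avg = sum(mixed) / len(mixed)
    mixed = step(mixed, [avg] * len(mixed))
```

The echo-chamber policy leaves opinions exactly where they started, while the socially-aware policy shrinks the spread geometrically, in line with the stated correlation between recommended-content influence and clusterization.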
|
https://arxiv.org/abs/2601.02412
|
Academic Papers
|
svg
|
724d7c4101066d4741701aeb749cc60ecd7dc3c5692af07e726b310057dee17b
|
2026-01-07T00:00:00-05:00
|
MIAR: Modality Interaction and Alignment Representation Fusion for Multimodal Emotion Recognition
|
arXiv:2601.02414v1 Announce Type: new Abstract: Multimodal Emotion Recognition (MER) aims to perceive human emotions through three modes: language, vision, and audio. Previous methods primarily focused on modal fusion without adequately addressing significant distributional differences among modalities or considering their varying contributions to the task. They also lacked robust generalization capabilities across diverse textual model features, thus limiting performance in multimodal scenarios. Therefore, we propose a novel approach called Modality Interaction and Alignment Representation (MIAR). This network integrates contextual features across different modalities through a feature-interaction module that generates feature tokens; these four tokens serve as global representations of how each modality extracts information from the others. MIAR aligns different modalities using contrastive learning and normalization strategies. We conduct experiments on two benchmark datasets, CMU-MOSI and CMU-MOSEI; experimental results demonstrate that MIAR outperforms state-of-the-art MER methods.
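The contrastive-alignment component can be sketched with an InfoNCE-style loss over cosine similarities: a matched cross-modal pair should score a lower loss than a shuffled one. The embeddings and temperature below are hypothetical:

```python
import math

def cos(u, v):
    # Cosine similarity between two embedding vectors.
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.sqrt(sum(a * a for a in u)) *
                  math.sqrt(sum(b * b for b in v)))

def info_nce(anchor, positive, negatives, tau=0.5):
    # Contrastive alignment: the matched cross-modal pair should win
    # the softmax over all candidate pairs (log-sum-exp for stability).
    logits = [cos(anchor, positive) / tau] + [cos(anchor, n) / tau for n in negatives]
    m = max(logits)
    return -(logits[0] - m) + math.log(sum(math.exp(l - m) for l in logits))

text = [1.0, 0.0]                     # hypothetical text embedding
audio_match = [0.9, 0.1]              # same utterance, audio modality
audio_other = [[0.0, 1.0], [-1.0, 0.2]]

aligned = info_nce(text, audio_match, audio_other)
shuffled = info_nce(text, audio_other[0], [audio_match, audio_other[1]])
```

Minimizing this loss pulls embeddings of the same utterance together across modalities while pushing mismatched pairs apart, which is the alignment behavior the abstract attributes to contrastive learning.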
|
https://arxiv.org/abs/2601.02414
|
Academic Papers
|
svg
|
a6eb25394db947769e1eb22e6a94c977176385f7f775bdca136f7d0b6c5a4a92
|
2026-01-07T00:00:00-05:00
|
Multimodal Sentiment Analysis based on Multi-channel and Symmetric Mutual Promotion Feature Fusion
|
arXiv:2601.02415v1 Announce Type: new Abstract: Multimodal sentiment analysis is a key technology in the fields of human-computer interaction and affective computing. Accurately recognizing human emotional states is crucial for facilitating smooth communication between humans and machines. Despite some progress in multimodal sentiment analysis research, numerous challenges remain. The first challenge is the limited and insufficiently rich features extracted from single modality data. Secondly, most studies focus only on the consistency of inter-modal feature information, neglecting the differences between features, resulting in inadequate feature information fusion. In this paper, we first extract multi-channel features to obtain more comprehensive feature information. We employ dual-channel features in both the visual and auditory modalities to enhance intra-modal feature representation. Secondly, we propose a symmetric mutual promotion (SMP) inter-modal feature fusion method. This method combines symmetric cross-modal attention mechanisms and self-attention mechanisms, where the cross-modal attention mechanism captures useful information from other modalities, and the self-attention mechanism models contextual information. This approach promotes the exchange of useful information between modalities, thereby strengthening inter-modal interactions. Furthermore, we integrate intra-modal features and inter-modal fused features, fully leveraging the complementarity of inter-modal feature information while considering feature information differences. Experiments conducted on two benchmark datasets demonstrate the effectiveness and superiority of our proposed method.
|
https://arxiv.org/abs/2601.02415
|
Academic Papers
|
svg
|
2ab66cdd3815047a6b263bbd088650ac7c786c5cc60fb872464d028eccee49ef
|
2026-01-07T00:00:00-05:00
|
Talks that Builds: Exploring Communication factors for the Success of Emerging Professional in Product Teams
|
arXiv:2601.02421v1 Announce Type: new Abstract: This paper recognizes that most organizational communication research focuses on established professionals aged above 27 with more than five years of experience. In contrast, this study examines product teams with younger emerging professionals aged 18-27 and explores which factors influence their success. While some established factors still apply, others become less relevant, and new ones, such as curiosity, locational proximity, documentation, and access to resources, were identified in the study. Overall, this study fills a gap in the literature on how these newer factors shape team productivity and project outcomes, based on the success rate of the products the teams developed.
|
https://arxiv.org/abs/2601.02421
|
Academic Papers
|
svg
|
70bc98759a774bc1bd0badb5a167e67358cf20deddf7ae95c23fe75425a7fc3a
|
2026-01-07T00:00:00-05:00
|
Watch Wider and Think Deeper: Collaborative Cross-modal Chain-of-Thought for Complex Visual Reasoning
|
arXiv:2601.02422v1 Announce Type: new Abstract: Multi-modal reasoning requires the seamless integration of visual and linguistic cues, yet existing Chain-of-Thought methods suffer from two critical limitations in cross-modal scenarios: (1) over-reliance on single coarse-grained image regions, and (2) semantic fragmentation between successive reasoning steps. To address these issues, we propose the CoCoT (Collaborative Cross-modal Thought) framework, built upon two key innovations: a) Dynamic Multi-Region Grounding to adaptively detect the most relevant image regions based on the question, and b) Relation-Aware Reasoning to enable multi-region collaboration by iteratively aligning visual cues to form a coherent and logical chain of thought. Through this approach, we construct the CoCoT-70K dataset, comprising 74,691 high-quality samples with multi-region annotations and structured reasoning chains. Extensive experiments demonstrate that CoCoT significantly enhances complex visual reasoning, achieving an average accuracy improvement of 15.4% on LLaVA-1.5 and 4.0% on Qwen2-VL across six challenging benchmarks. The data and code are available at: https://github.com/deer-echo/CoCoT.
|
https://arxiv.org/abs/2601.02422
|
Academic Papers
|
svg
|
36bad935c29dbe18e4e760cb80c1c86ede57a4387090f6119766e21eb17ef8d2
|
2026-01-07T00:00:00-05:00
|
NitroGen: An Open Foundation Model for Generalist Gaming Agents
|
arXiv:2601.02427v1 Announce Type: new Abstract: We introduce NitroGen, a vision-action foundation model for generalist gaming agents that is trained on 40,000 hours of gameplay videos across more than 1,000 games. We incorporate three key ingredients: 1) an internet-scale video-action dataset constructed by automatically extracting player actions from publicly available gameplay videos, 2) a multi-game benchmark environment that can measure cross-game generalization, and 3) a unified vision-action model trained with large-scale behavior cloning. NitroGen exhibits strong competence across diverse domains, including combat encounters in 3D action games, high-precision control in 2D platformers, and exploration in procedurally generated worlds. It transfers effectively to unseen games, achieving up to 52% relative improvement in task success rates over models trained from scratch. We release the dataset, evaluation suite, and model weights to advance research on generalist embodied agents.
|
https://arxiv.org/abs/2601.02427
|
Academic Papers
|
svg
|
ba29f890ab959c3a0f95ed75c9cb8ef253f58c0052c0c7f26884e49d0d5c9e4b
|
2026-01-07T00:00:00-05:00
|
A Dynamic Retrieval-Augmented Generation System with Selective Memory and Remembrance
|
arXiv:2601.02428v1 Announce Type: new Abstract: We introduce \emph{Adaptive RAG Memory} (ARM), a retrieval-augmented generation (RAG) framework that replaces a static vector index with a \emph{dynamic} memory substrate governed by selective remembrance and decay. Frequently retrieved items are consolidated and protected from forgetting, while rarely used items gradually decay, inspired by cognitive consolidation and forgetting principles. On a lightweight retrieval benchmark, ARM reaches near state-of-the-art performance (e.g., NDCG@5 $\approx$ 0.940, Recall@5 $=1.000$) with only $\sim$22M parameters in the embedding layer, achieving the best efficiency among ultra-efficient models ($<$25M parameters). In addition, we compare static vs. dynamic RAG combinations across Llama 3.1 and GPT-4o. Llama 3.1 with static RAG achieves the highest key-term coverage (67.2\%) at moderate latency, while GPT-4o with a dynamic selective retrieval policy attains the fastest responses (8.2s on average) with competitive coverage (58.7\%). We further present an engineering optimization of the DynamicRAG implementation, making embedding weights configurable, adjustable at runtime, and robust to invalid settings. ARM yields competitive accuracy, self-regularizing memory growth, and interpretable retention dynamics without retraining the generator, and provides a practical trade-off between quality, latency, and memory efficiency for production and research RAG systems.
|
https://arxiv.org/abs/2601.02428
|
Academic Papers
|
svg
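ARM's selective remembrance and decay are described only at a high level in the abstract above. A toy sketch of the general pattern, where frequent retrieval consolidates an item while rarely used items decay and are evicted (class name, half-life, and thresholds are all hypothetical, not the paper's actual mechanism):

```python
import math

class DecayingMemory:
    """Toy memory: items decay over time unless retrieved often enough."""

    def __init__(self, half_life=10.0, protect_after=3):
        self.items = {}  # key -> {"strength": float, "hits": int}
        self.decay = math.log(2) / half_life   # exponential decay rate
        self.protect_after = protect_after     # hits needed for consolidation

    def add(self, key):
        self.items[key] = {"strength": 1.0, "hits": 0}

    def retrieve(self, key):
        item = self.items.get(key)
        if item is None:
            return False
        item["hits"] += 1
        item["strength"] = 1.0                 # refresh on use
        return True

    def tick(self, dt=1.0):
        for key in list(self.items):
            item = self.items[key]
            if item["hits"] >= self.protect_after:
                continue                       # consolidated: no decay
            item["strength"] *= math.exp(-self.decay * dt)
            if item["strength"] < 0.1:
                del self.items[key]            # forget weak items

mem = DecayingMemory()
mem.add("useful")
mem.add("rare")
for _ in range(3):
    mem.retrieve("useful")   # consolidates "useful"
for _ in range(40):
    mem.tick()               # "rare" decays below threshold and is evicted
```

After the simulated ticks, the frequently retrieved item survives while the unused one has been forgotten.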
|
d0be67bc67edeb05e2b9773dcb3b6b341f4e6e2bd58ed67a48bda512ff5856a5
|
2026-01-07T00:00:00-05:00
|
WebCoderBench: Benchmarking Web Application Generation with Comprehensive and Interpretable Evaluation Metrics
|
arXiv:2601.02430v1 Announce Type: new Abstract: Web applications (web apps) have become a key arena for large language models (LLMs) to demonstrate their code generation capabilities and commercial potential. However, building a benchmark for LLM-generated web apps remains challenging due to the need for real-world user requirements, generalizable evaluation metrics without relying on ground-truth implementations or test cases, and interpretable evaluation results. To address these challenges, we introduce WebCoderBench, the first real-world-collected, generalizable, and interpretable benchmark for web app generation. WebCoderBench comprises 1,572 real user requirements, covering diverse modalities and expression styles that reflect realistic user intentions. WebCoderBench provides 24 fine-grained evaluation metrics across 9 perspectives, combining rule-based and LLM-as-a-judge paradigms for fully automated, objective, and general evaluation. Moreover, WebCoderBench adopts human-preference-aligned weights over metrics to yield interpretable overall scores. Experiments across 12 representative LLMs and 2 LLM-based agents show that no model dominates across all evaluation metrics, offering an opportunity for LLM developers to optimize their models in a targeted manner for a more powerful version.
|
https://arxiv.org/abs/2601.02430
|
Academic Papers
|
svg
|
4a0916e1a303656e256aa681771453c0d6575826ce21876308cfd609edd162a4
|
2026-01-07T00:00:00-05:00
|
Quantifying Quanvolutional Neural Networks Robustness for Speech in Healthcare Applications
|
arXiv:2601.02432v1 Announce Type: new Abstract: Speech-based machine learning systems are sensitive to noise, complicating reliable deployment in emotion recognition and voice pathology detection. We evaluate the robustness of a hybrid quantum machine learning model, quanvolutional neural networks (QNNs), against classical convolutional neural networks (CNNs) under four acoustic corruptions (Gaussian noise, pitch shift, temporal shift, and speed variation) in a clean-train/corrupted-test regime. Using AVFAD (voice pathology) and TESS (speech emotion), we compare three QNN models (Random, Basic, Strongly) to a simple CNN baseline (CNN-Base), ResNet-18, and VGG-16 using accuracy and corruption metrics (CE, mCE, RCE, RmCE), and analyze architectural factors (circuit complexity or depth, convergence) alongside per-emotion robustness. QNNs generally outperform the CNN-Base under pitch shift, temporal shift, and speed variation (up to 22% lower CE/RCE at severe temporal shift), while the CNN-Base remains more resilient to Gaussian noise. Among quantum circuits, QNN-Basic achieves the best overall robustness on AVFAD, and QNN-Random performs strongest on TESS. Emotion-wise, fear is most robust (80-90% accuracy under severe corruptions), neutral can collapse under strong Gaussian noise (5.5% accuracy), and happy is most vulnerable to pitch, temporal, and speed distortions. QNNs also converge up to six times faster than the CNN-Base. To our knowledge, this is the first systematic study of QNN robustness for speech under common non-adversarial acoustic corruptions, indicating that shallow entangling quantum front-ends can improve noise resilience while sensitivity to additive noise remains a challenge.
|
https://arxiv.org/abs/2601.02432
|
Academic Papers
|
svg
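The corruption metrics (CE, mCE) named above are not defined in the abstract. Assuming the common ImageNet-C convention (a model's error summed over corruption severities, normalized by a baseline model's errors, then averaged over corruption types; the data values below are hypothetical), they might be computed as:

```python
def corruption_error(err_model, err_baseline):
    """CE for one corruption type: summed error across severities,
    normalized by a baseline model's summed error."""
    return sum(err_model) / sum(err_baseline)

def mean_ce(per_corruption_model, per_corruption_baseline):
    """mCE: average CE over all corruption types."""
    ces = [corruption_error(m, b)
           for m, b in zip(per_corruption_model, per_corruption_baseline)]
    return sum(ces) / len(ces)

# hypothetical error rates at 3 severities for 2 corruption types
model = [[0.10, 0.20, 0.30], [0.05, 0.10, 0.15]]
base = [[0.20, 0.30, 0.40], [0.10, 0.20, 0.30]]
mce = mean_ce(model, base)  # (2/3 + 1/2) / 2 = 7/12: below 1, so more robust
```

A value below 1.0 means the evaluated model degrades less under corruption than the baseline.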
|
e097717ae7af3803d8f6179b97fc751d7dd0e22fb10083ded8165176585ebadd
|
2026-01-07T00:00:00-05:00
|
Physical Transformer
|
arXiv:2601.02433v1 Announce Type: new Abstract: Digital AI systems, spanning large language models, vision models, and generative architectures, operate primarily in symbolic, linguistic, or pixel domains. They have achieved striking progress, but almost all of this progress lives in virtual spaces. These systems transform embeddings and tokens, yet do not themselves touch the world and rarely admit a physical interpretation. In this work we propose a physical transformer that couples modern transformer-style computation with geometric representation and physical dynamics. At the micro level, attention heads and feed-forward blocks are modeled as interacting spins governed by effective Hamiltonians plus non-Hamiltonian bath terms. At the meso level, their aggregated state evolves on a learned Neural Differential Manifold (NDM) under Hamiltonian flows and Hamilton-Jacobi-Bellman (HJB) optimal control, discretized by symplectic layers that approximately preserve geometric and energetic invariants. At the macro level, the model maintains a generative semantic workspace and a two-dimensional information-phase portrait that tracks uncertainty and information gain over a reasoning trajectory. Within this hierarchy, reasoning tasks are formulated as controlled information flows on the manifold, with solutions corresponding to low-cost trajectories that satisfy geometric, energetic, and workspace-consistency constraints. On simple toy problems involving numerical integration and dynamical systems, the physical transformer outperforms naive baselines in stability and long-horizon accuracy, highlighting the benefits of respecting underlying geometric and Hamiltonian structure. More broadly, the framework suggests a path toward physical AI that unifies digital reasoning with physically grounded manifolds, opening a route to more interpretable and potentially unified models of reasoning, control, and interaction with the real world.
|
https://arxiv.org/abs/2601.02433
|
Academic Papers
|
svg
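The symplectic layers mentioned above are not specified in the abstract. As an illustration of why symplectic discretization matters, a leapfrog (velocity Verlet) integrator, which is a standard symplectic scheme and not the paper's actual layer, approximately conserves energy on a harmonic oscillator where explicit Euler visibly drifts:

```python
def grad_V(q):
    return q  # harmonic oscillator, V(q) = q^2 / 2

def leapfrog(q, p, dt, steps):
    # symplectic kick-drift-kick integration of dq/dt = p, dp/dt = -V'(q)
    p -= 0.5 * dt * grad_V(q)
    for _ in range(steps - 1):
        q += dt * p
        p -= dt * grad_V(q)
    q += dt * p
    p -= 0.5 * dt * grad_V(q)
    return q, p

def euler(q, p, dt, steps):
    # non-symplectic explicit Euler, for contrast
    for _ in range(steps):
        q, p = q + dt * p, p - dt * grad_V(q)
    return q, p

energy = lambda q, p: 0.5 * (p * p + q * q)
q0, p0 = 1.0, 0.0                        # initial energy 0.5
ql, pl = leapfrog(q0, p0, dt=0.1, steps=1000)  # energy stays near 0.5
qe, pe = euler(q0, p0, dt=0.1, steps=1000)     # energy blows up
```

The bounded energy error of the symplectic scheme over long horizons is the property the abstract's "symplectic layers" appeal to.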
|
458cade82e2cb9b238ee6e4e72cbce6f95e44baab9bac3cbdbf5a28d131314bc
|
2026-01-07T00:00:00-05:00
|
TAP-ViTs: Task-Adaptive Pruning for On-Device Deployment of Vision Transformers
|
arXiv:2601.02437v1 Announce Type: new Abstract: Vision Transformers (ViTs) have demonstrated strong performance across a wide range of vision tasks, yet their substantial computational and memory demands hinder efficient deployment on resource-constrained mobile and edge devices. Pruning has emerged as a promising direction for reducing ViT complexity. However, existing approaches either (i) produce a single pruned model shared across all devices, ignoring device heterogeneity, or (ii) rely on fine-tuning with device-local data, which is often infeasible due to limited on-device resources and strict privacy constraints. As a result, current methods fall short of enabling task-customized ViT pruning in privacy-preserving mobile computing settings. This paper introduces TAP-ViTs, a novel task-adaptive pruning framework that generates device-specific pruned ViT models without requiring access to any raw local data. Specifically, to infer device-level task characteristics under privacy constraints, we propose a Gaussian Mixture Model (GMM)-based metric dataset construction mechanism. Each device fits a lightweight GMM to approximate its private data distribution and uploads only the GMM parameters. Using these parameters, the cloud selects distribution-consistent samples from public data to construct a task-representative metric dataset for each device. Based on this proxy dataset, we further develop a dual-granularity importance evaluation-based pruning strategy that jointly measures composite neuron importance and adaptive layer importance, enabling fine-grained, task-aware pruning tailored to each device's computational budget. Extensive experiments across multiple ViT backbones and datasets demonstrate that TAP-ViTs consistently outperforms state-of-the-art pruning methods under comparable compression ratios.
|
https://arxiv.org/abs/2601.02437
|
Academic Papers
|
svg
|
35785787a7b68ae1839d8c813d89767cb1756b114e131110da92bd9fc247daa3
|
2026-01-07T00:00:00-05:00
|
Focus on What Matters: Fisher-Guided Adaptive Multimodal Fusion for Vulnerability Detection
|
arXiv:2601.02438v1 Announce Type: new Abstract: Software vulnerability detection is a critical task for securing software systems and can be formulated as a binary classification problem: given a code snippet, determine whether it contains a vulnerability. Existing multimodal approaches typically fuse Natural Code Sequence (NCS) representations from pretrained language models with Code Property Graph (CPG) representations from graph neural networks, often under the implicit assumption that adding a modality necessarily yields extra information. In practice, sequence and graph representations can be redundant, and fluctuations in the quality of the graph modality can dilute the discriminative signal of the dominant modality. To address this, we propose TaCCS-DFA, a framework that introduces Fisher information as a geometric measure of how sensitive feature directions are to the classification decision, enabling task-oriented complementary fusion. TaCCS-DFA online estimates a low-rank principal Fisher subspace and restricts cross-modal attention to task-sensitive directions, thereby retrieving structural features from CPG that complement the sequence modality; meanwhile, an adaptive gating mechanism dynamically adjusts the contribution of the graph modality for each sample to suppress noise propagation. Our analysis shows that, under an isotropic perturbation assumption, the proposed mechanism admits a tighter risk bound than conventional full-spectrum attention. Experiments on BigVul, Devign, and ReVeal show that TaCCS-DFA achieves strong performance across multiple backbones. With CodeT5 as the backbone, TaCCS-DFA reaches an F1 score of 87.80\% on the highly imbalanced BigVul dataset, improving over a strong baseline Vul-LMGNNs by 6.3 percentage points while maintaining low calibration error and computational overhead.
|
https://arxiv.org/abs/2601.02438
|
Academic Papers
|
svg
|
3c6c68997d756ca144df949927b347fac77ddd4f9cc2255040d6c7dcb05d4be6
|
2026-01-07T00:00:00-05:00
|
WebGym: Scaling Training Environments for Visual Web Agents with Realistic Tasks
|
arXiv:2601.02439v1 Announce Type: new Abstract: We present WebGym, the largest-to-date open-source environment for training realistic visual web agents. Real websites are non-stationary and diverse, making artificial or small-scale task sets insufficient for robust policy learning. WebGym contains nearly 300,000 tasks with rubric-based evaluations across diverse, real-world websites and difficulty levels. We train agents with a simple reinforcement learning (RL) recipe, which trains on the agent's own interaction traces (rollouts), using task rewards as feedback to guide learning. To enable scaling RL, we first speed up sampling of trajectories in WebGym by developing a high-throughput asynchronous rollout system designed specifically for web agents. Our system achieves a 4-5x rollout speedup compared to naive implementations. Second, we scale the task set in breadth, depth, and size, which results in continued performance improvement. Fine-tuning a strong base vision-language model, Qwen-3-VL-8B-Instruct, on WebGym results in an improvement in success rate on an out-of-distribution test set from 26.2% to 42.9%, significantly outperforming agents based on proprietary models such as GPT-4o and GPT-5-Thinking, which achieve 27.1% and 29.8%, respectively. This improvement is substantial because our test set consists only of tasks on websites never seen during training, unlike many other prior works on training visual web agents.
|
https://arxiv.org/abs/2601.02439
|
Academic Papers
|
svg
|
4079501d9012cb84e1db6c3e45aee41d0eb63328d7da44d44c32c6bb2846e2ba
|
2026-01-07T00:00:00-05:00
|
Understanding Pure Textual Reasoning for Blind Image Quality Assessment
|
arXiv:2601.02441v1 Announce Type: new Abstract: Textual reasoning has recently been widely adopted in Blind Image Quality Assessment (BIQA). However, it remains unclear how textual information contributes to quality prediction and to what extent text can represent the score-related image contents. This work addresses these questions from an information-flow perspective by comparing existing BIQA models with three paradigms designed to learn the image-text-score relationship: Chain-of-Thought, Self-Consistency, and Autoencoder. Our experiments show that the score prediction performance of the existing model significantly drops when only textual information is used for prediction. Whereas the Chain-of-Thought paradigm introduces little improvement in BIQA performance, the Self-Consistency paradigm significantly reduces the gap between image- and text-conditioned predictions, narrowing the PLCC/SRCC difference to 0.02/0.03. The Autoencoder-like paradigm is less effective in closing the image-text gap, yet it reveals a direction for further optimization. These findings provide insights into how to improve the textual reasoning for BIQA and high-level vision tasks.
|
https://arxiv.org/abs/2601.02441
|
Academic Papers
|
svg
|
7f1290d0943e2a6b7c4d90c1720b515c68fb2128aefa5e61c11750320adca1ba
|
2026-01-07T00:00:00-05:00
|
Evaluating the Diagnostic Classification Ability of Multimodal Large Language Models: Insights from the Osteoarthritis Initiative
|
arXiv:2601.02443v1 Announce Type: new Abstract: Multimodal large language models (MLLMs) show promising performance on medical visual question answering (VQA) and report generation, but these generation and explanation abilities do not reliably transfer to disease-specific classification. We evaluated MLLM architectures on knee osteoarthritis (OA) radiograph classification, which remains underrepresented in existing medical MLLM benchmarks, even though knee OA affects an estimated 300 to 400 million people worldwide. Through systematic ablation studies manipulating the vision encoder, the connector, and the large language model (LLM) across diverse training strategies, we measured each component's contribution to diagnostic accuracy. In our classification task, a trained vision encoder alone could outperform full MLLM pipelines in classification accuracy, and fine-tuning the LLM provided no meaningful improvement over prompt-based guidance. LoRA fine-tuning on a small, class-balanced dataset (500 images) gave better results than training on a much larger but class-imbalanced set (5,778 images), indicating that data balance and quality can matter more than raw scale for this task. These findings suggest that for domain-specific medical classification, LLMs are more effective as interpreters and report generators than as primary classifiers. Therefore, the MLLM architecture appears less suitable for medical image diagnostic classification tasks that demand high certainty. We recommend prioritizing vision encoder optimization and careful dataset curation when developing clinically applicable systems.
|
https://arxiv.org/abs/2601.02443
|
Academic Papers
|
svg
|
ba7bba1822d66173fb5d51e9452c0b26ebc88b6844dfe2f24cdaac963e9e6d5a
|
2026-01-07T00:00:00-05:00
|
VocalBridge: Latent Diffusion-Bridge Purification for Defeating Perturbation-Based Voiceprint Defenses
|
arXiv:2601.02444v1 Announce Type: new Abstract: The rapid advancement of speech synthesis technologies, including text-to-speech (TTS) and voice conversion (VC), has intensified security and privacy concerns related to voice cloning. Recent defenses attempt to prevent unauthorized cloning by embedding protective perturbations into speech to obscure speaker identity while maintaining intelligibility. However, adversaries can apply advanced purification techniques to remove these perturbations, recover authentic acoustic characteristics, and regenerate cloneable voices. Despite the growing realism of such attacks, the robustness of existing defenses under adaptive purification remains insufficiently studied. Most existing purification methods are designed to counter adversarial noise in automatic speech recognition (ASR) systems rather than speaker verification or voice cloning pipelines. As a result, they fail to suppress the fine-grained acoustic cues that define speaker identity and are often ineffective against speaker verification attacks (SVA). To address these limitations, we propose VocalBridge, a diffusion-bridge purification framework that learns a latent mapping from perturbed to clean speech in the EnCodec latent space. Using a time-conditioned 1D U-Net with a cosine noise schedule, the model enables efficient, transcript-free purification while preserving speaker-discriminative structure. We further introduce a Whisper-guided phoneme variant that incorporates lightweight temporal guidance without requiring ground-truth transcripts. Experimental results show that our approach consistently outperforms existing purification methods in recovering cloneable voices from protected speech. Our findings demonstrate the fragility of current perturbation-based defenses and highlight the need for more robust protection mechanisms against evolving voice-cloning and speaker verification threats.
|
https://arxiv.org/abs/2601.02444
|
Academic Papers
|
svg
|
cc803cb0943f0f2aa6085381484e3246b8412b72f60fadd5edfb3a0ab0ef87a8
|
2026-01-07T00:00:00-05:00
|
A Spatio-Temporal Deep Learning Approach For High-Resolution Gridded Monsoon Prediction
|
arXiv:2601.02445v1 Announce Type: new Abstract: The Indian Summer Monsoon (ISM) is a critical climate phenomenon, fundamentally impacting the agriculture, economy, and water security of over a billion people. Traditional long-range forecasting, whether statistical or dynamical, has predominantly focused on predicting a single, spatially-averaged seasonal value, lacking the spatial detail essential for regional-level resource management. To address this gap, we introduce a novel deep learning framework that reframes gridded monsoon prediction as a spatio-temporal computer vision task. We treat multi-variable, pre-monsoon atmospheric and oceanic fields as a sequence of multi-channel images, effectively creating a video-like input tensor. Using 85 years of ERA5 reanalysis data for predictors and IMD rainfall data for targets, we employ a Convolutional Neural Network (CNN)-based architecture to learn the complex mapping from the five-month pre-monsoon period (January-May) to a high-resolution gridded rainfall pattern for the subsequent monsoon season. Our framework successfully produces distinct forecasts for each of the four monsoon months (June-September) as well as the total seasonal average, demonstrating its utility for both intra-seasonal and seasonal outlooks.
|
https://arxiv.org/abs/2601.02445
|
Academic Papers
|
svg
|
09de2b97dc786e43c09c7c33e39e8282a673c381f3027da7067649975d66208b
|
2026-01-07T00:00:00-05:00
|
Don't Mind the Gaps: Implicit Neural Representations for Resolution-Agnostic Retinal OCT Analysis
|
arXiv:2601.02447v1 Announce Type: new Abstract: Routine clinical imaging of the retina using optical coherence tomography (OCT) is performed with large slice spacing, resulting in highly anisotropic images and a sparsely scanned retina. Most learning-based methods circumvent the problems arising from the anisotropy by using 2D approaches rather than performing volumetric analyses. These approaches inherently bear the risk of generating inconsistent results for neighboring B-scans. For example, 2D retinal layer segmentations can have irregular surfaces in 3D. Furthermore, the typically used convolutional neural networks are bound to the resolution of the training data, which prevents their usage for images acquired with a different imaging protocol. Implicit neural representations (INRs) have recently emerged as a tool to store voxelized data as a continuous representation. Using coordinates as input, INRs are resolution-agnostic, which allows them to be applied to anisotropic data. In this paper, we propose two frameworks that make use of this characteristic of INRs for dense 3D analyses of retinal OCT volumes. 1) We perform inter-B-scan interpolation by incorporating additional information from en-face modalities that help retain relevant structures between B-scans. 2) We create a resolution-agnostic retinal atlas that enables general analysis without strict requirements for the data. Both methods leverage generalizable INRs, improving retinal shape representation through population-based training and allowing predictions for unseen cases. Our resolution-independent frameworks facilitate the analysis of OCT images with large B-scan distances, opening up possibilities for the volumetric evaluation of retinal structures and pathologies.
|
https://arxiv.org/abs/2601.02447
|
Academic Papers
|
svg
|
b446e6222a0062c7e3a5c3852f035ec3891cda273ea20d9759f23b4c35e9a861
|
2026-01-07T00:00:00-05:00
|
Stigmergic Swarming Agents for Fast Subgraph Isomorphism
|
arXiv:2601.02449v1 Announce Type: new Abstract: Maximum partial subgraph isomorphism compares two graphs (nodes joined by edges) to find a largest common subgraph. A common use case, for graphs with labeled nodes, seeks to find instances of a \textit{query} graph with $q$ nodes in a (typically larger) \textit{data} graph with $d$ nodes. The problem is NP-complete, and na\"ive solutions are exponential in $q + d$. The fastest current heuristic has complexity $O(d^2)$. This paper outlines ASSIST (Approximate Swarming Subgraph Isomorphism through Stigmergy), inspired by the ant colony optimization approach to the traveling salesperson. After peering (identifying matching individual nodes in query and data) in time $O(q\cdot log(d))$, the time required for ASSIST's iterative subgraph search, the combinatorially complex part of the problem, is linear in query size and constant in data size. ASSIST can be extended to support matching problems (such as temporally ordered edges, inexact matches, and missing nodes or edges in the data graph) that frustrate other heuristics.
|
https://arxiv.org/abs/2601.02449
|
Academic Papers
|
svg
|
56e9076f404abd84c91a3618b1c40f6672b7c97f348f76c63c81e1f97afd100a
|
2026-01-07T00:00:00-05:00
|
mHC-GNN: Manifold-Constrained Hyper-Connections for Graph Neural Networks
|
arXiv:2601.02451v1 Announce Type: new Abstract: Graph Neural Networks (GNNs) suffer from over-smoothing in deep architectures and expressiveness bounded by the 1-Weisfeiler-Leman (1-WL) test. We adapt Manifold-Constrained Hyper-Connections (mHC)~\citep{xie2025mhc}, recently proposed for Transformers, to graph neural networks. Our method, mHC-GNN, expands node representations across $n$ parallel streams and constrains stream-mixing matrices to the Birkhoff polytope via Sinkhorn-Knopp normalization. We prove that mHC-GNN exhibits exponentially slower over-smoothing (rate $(1-\gamma)^{L/n}$ vs.\ $(1-\gamma)^L$) and can distinguish graphs beyond 1-WL. Experiments on 10 datasets with 4 GNN architectures show consistent improvements. Depth experiments from 2 to 128 layers reveal that standard GNNs collapse to near-random performance beyond 16 layers, while mHC-GNN maintains over 74\% accuracy even at 128 layers, with improvements exceeding 50 percentage points at extreme depths. Ablations confirm that the manifold constraint is essential: removing it causes up to 82\% performance degradation. Code is available at \href{https://github.com/smlab-niser/mhc-gnn}{https://github.com/smlab-niser/mhc-gnn}
|
https://arxiv.org/abs/2601.02451
|
Academic Papers
|
svg
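The Sinkhorn-Knopp normalization that the mHC-GNN abstract uses to constrain stream-mixing matrices to the Birkhoff polytope can be sketched directly. This is the standard alternating row/column normalization of a positive matrix, not the paper's exact implementation:

```python
import numpy as np

def sinkhorn_knopp(M, iters=300):
    """Alternately normalize rows and columns of a strictly positive matrix,
    driving it toward the Birkhoff polytope (doubly stochastic matrices)."""
    M = np.asarray(M, dtype=float)
    for _ in range(iters):
        M = M / M.sum(axis=1, keepdims=True)  # rows sum to 1
        M = M / M.sum(axis=0, keepdims=True)  # columns sum to 1
    return M

A = np.exp(np.random.default_rng(0).normal(size=(4, 4)))  # strictly positive
S = sinkhorn_knopp(A)
row_err = np.abs(S.sum(axis=1) - 1.0).max()  # rows converge to 1
col_err = np.abs(S.sum(axis=0) - 1.0).max()  # columns exact after last step
```

For strictly positive matrices the iteration converges geometrically, so a few hundred iterations suffice in practice; in a differentiable layer the same normalization is typically unrolled for a small fixed number of steps.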
|
60484d85ed3646a37f34e2438fb6744e28e3a2460f624d0db7a3df8b2ecd8772
|
2026-01-07T00:00:00-05:00
|
The Rise of Agentic Testing: Multi-Agent Systems for Robust Software Quality Assurance
|
arXiv:2601.02454v1 Announce Type: new Abstract: Software testing has progressed toward intelligent automation, yet current AI-based test generators still suffer from static, single-shot outputs that frequently produce invalid, redundant, or non-executable tests due to the lack of execution-aware feedback. This paper introduces an agentic multi-model testing framework: a closed-loop, self-correcting system in which a Test Generation Agent, an Execution and Analysis Agent, and a Review and Optimization Agent collaboratively generate, execute, analyze, and refine tests until convergence. By using sandboxed execution, detailed failure reporting, and iterative regeneration or patching of failing tests, the framework autonomously improves test quality and expands coverage. Integrated into a CI/CD-compatible pipeline, it leverages reinforcement signals from coverage metrics and execution outcomes to guide refinement. Empirical evaluations on microservice-based applications show up to a 60% reduction in invalid tests, 30% coverage improvement, and significantly reduced human effort compared to single-model baselines, demonstrating that multi-agent, feedback-driven loops can evolve software testing into an autonomous, continuously learning quality assurance ecosystem for self-healing, high-reliability codebases.
|
https://arxiv.org/abs/2601.02454
|
Academic Papers
|
svg
|
2744f6d968efd68ec39aa5121ace93217c0b1ae51604f6348dc0aa85c3d9866a
|
2026-01-07T00:00:00-05:00
|
Dynamic Quantization Error Propagation in Encoder-Decoder ASR Quantization
|
arXiv:2601.02455v1 Announce Type: new Abstract: Running Automatic Speech Recognition (ASR) models on memory-constrained edge devices requires efficient compression. While layer-wise post-training quantization is effective, it suffers from error accumulation, especially in encoder-decoder architectures. Existing solutions like Quantization Error Propagation (QEP) are suboptimal for ASR due to the model's heterogeneity, processing acoustic features in the encoder while generating text in the decoder. To address this, we propose Fine-grained Alpha for Dynamic Quantization Error Propagation (FADE), which adaptively controls the trade-off between cross-layer error correction and local quantization. Experiments show that FADE significantly improves stability by reducing performance variance across runs, while simultaneously surpassing baselines in mean WER.
|
https://arxiv.org/abs/2601.02455
|
Academic Papers
|
svg
|
5fedc0a7d6c7bb7d633de39abbfd6e8bd2ab6a410bb0dd87cd6cbbe8d60715a5
|
2026-01-07T00:00:00-05:00
|
InternVLA-A1: Unifying Understanding, Generation and Action for Robotic Manipulation
|
arXiv:2601.02456v1 Announce Type: new Abstract: Prevalent Vision-Language-Action (VLA) models are typically built upon Multimodal Large Language Models (MLLMs) and demonstrate exceptional proficiency in semantic understanding, but they inherently lack the capability to deduce physical world dynamics. Consequently, recent approaches have shifted toward World Models, typically formulated via video prediction; however, these methods often suffer from a lack of semantic grounding and exhibit brittleness when handling prediction errors. To synergize semantic understanding with dynamic predictive capabilities, we present InternVLA-A1. This model employs a unified Mixture-of-Transformers architecture, coordinating three experts for scene understanding, visual foresight generation, and action execution. These components interact seamlessly through a unified masked self-attention mechanism. Building upon InternVL3 and Qwen3-VL, we instantiate InternVLA-A1 at 2B and 3B parameter scales. We pre-train these models on hybrid synthetic-real datasets spanning InternData-A1 and Agibot-World, covering over 533M frames. This hybrid training strategy effectively harnesses the diversity of synthetic simulation data while minimizing the sim-to-real gap. We evaluated InternVLA-A1 across 12 real-world robotic tasks and a simulation benchmark. It significantly outperforms leading models such as pi0 and GR00T N1.5, achieving a 14.5\% improvement in daily tasks and a 40\%-73.3\% boost in dynamic settings, such as conveyor belt sorting.
|
https://arxiv.org/abs/2601.02456
|
Academic Papers
|
svg
|
30f2474cd50e62a35e1e483ea49d12c3eee817f58b1f88bb2d1ac053ef277ebd
|
2026-01-07T00:00:00-05:00
|
PatchAlign3D: Local Feature Alignment for Dense 3D Shape Understanding
|
arXiv:2601.02457v1 Announce Type: new Abstract: Current foundation models for 3D shapes excel at global tasks (retrieval, classification) but transfer poorly to local part-level reasoning. Recent approaches leverage vision and language foundation models to directly solve dense tasks through multi-view renderings and text queries. While promising, these pipelines require expensive inference over multiple renderings, depend heavily on large language-model (LLM) prompt engineering for captions, and fail to exploit the inherent 3D geometry of shapes. We address this gap by introducing an encoder-only 3D model that produces language-aligned patch-level features directly from point clouds. Our pre-training approach builds on existing data engines that generate part-annotated 3D shapes by pairing multi-view SAM regions with VLM captioning. Using this data, we train a point cloud transformer encoder in two stages: (1) distillation of dense 2D features from visual encoders such as DINOv2 into 3D patches, and (2) alignment of these patch embeddings with part-level text embeddings through a multi-positive contrastive objective. Our 3D encoder achieves zero-shot 3D part segmentation with fast single-pass inference without any test-time multi-view rendering, while significantly outperforming previous rendering-based and feed-forward approaches across several 3D part segmentation benchmarks. Project website: https://souhail-hadgi.github.io/patchalign3dsite/
|
https://arxiv.org/abs/2601.02457
|
Academic Papers
|
svg
|
b779a2123c96dec199e5f34bdfb4ed23d76840fcb9de930d829e598def3df269
|
2026-01-07T00:00:00-05:00
|
Variational (Energy-Based) Spectral Learning: A Machine Learning Framework for Solving Partial Differential Equations
|
arXiv:2601.02492v1 Announce Type: new Abstract: We introduce variational spectral learning (VSL), a machine learning framework for solving partial differential equations (PDEs) that operates directly in the coefficient space of spectral expansions. VSL offers a principled bridge between variational PDE theory, spectral discretization, and contemporary machine learning practice. The core idea is to recast a given PDE \[ \mathcal{L}u = f \quad \text{in} \quad Q=\Omega\times(0,T), \] together with boundary and initial conditions, into differentiable space--time energies built from strong-form least-squares residuals and weak (Galerkin) formulations. The solution is represented as a finite spectral expansion \[ u_N(x,t)=\sum_{n=1}^{N} c_n\,\phi_n(x,t), \] where $\phi_n$ are tensor-product Chebyshev bases in space and time, with Dirichlet-satisfying spatial modes enforcing homogeneous boundary conditions analytically. This yields a compact linear parameterization in the coefficient vector $\mathbf{c}$, while all PDE complexity is absorbed into the variational energy. We show how to construct strong-form and weak-form space-time functionals, augment them with initial-condition and Tikhonov regularization terms, and minimize the resulting objective with gradient-based optimization. In practice, VSL is implemented in TensorFlow using automatic differentiation and Keras cosine-decay-with-restarts learning-rate schedules, enabling robust optimization of moderately sized coefficient vectors. Numerical experiments on benchmark elliptic and parabolic problems, including one- and two-dimensional Poisson, diffusion, and Burgers-type equations, demonstrate that VSL attains accuracy comparable to classical spectral collocation with Crank-Nicolson time stepping, while providing a differentiable objective suitable for modern optimization tooling.
|
https://arxiv.org/abs/2601.02492
|
Academic Papers
|
svg
|
cadf3bff93ab9859e2888c75043087f799374beba10ca07d5bc363ff57202d00
|
2026-01-07T00:00:00-05:00
|
APoW: Auditable Proof-of-Work Against Block Withholding Attacks
|
arXiv:2601.02496v1 Announce Type: new Abstract: We introduce APoW, a novel proof-of-work (PoW) construction inspired by Hashcash-style nonce searching, which enables the auditing of other miners' work through accountable re-scanning of the nonce space. The proposed scheme allows a miner to probabilistically attest to having searched specified regions of the nonce space in earlier mining rounds, while concurrently earning rewards for performing productive work for a new block or pool share. This capability enables miners belonging to a mining pool to audit another miner's claimed effort retroactively, thereby allowing the probabilistic detection of block withholding attacks (BWAs) without requiring trusted hardware or trusted third parties. As a consequence, the construction supports the design of decentralized mining pools in which work attribution is verifiable and withholding incentives are substantially reduced. The scheme preserves the fundamental properties of conventional PoW, including public verifiability and difficulty adjustment, while adding an orthogonal auditability layer tailored to pool-based mining. Finally, while a full deployment of APoW in Bitcoin would require a consensus rule change and minor modifications to mining ASICs, the construction remains practically useful even without consensus changes, for instance, as a pool-level auditing mechanism that enables verifiable pay-for-auditing using existing pool reserves.
|
https://arxiv.org/abs/2601.02496
|
Academic Papers
|
svg
|
729bfe3b838d1148389070a7ecf5ba4ec91bab2947f0f80588020f1bbcb228a3
|
2026-01-07T00:00:00-05:00
|
Polynomial Convergence of Riemannian Diffusion Models
|
arXiv:2601.02499v1 Announce Type: new Abstract: Diffusion models have demonstrated remarkable empirical success in recent years and are considered one of the state-of-the-art generative models in modern AI. These models consist of a forward process, which gradually diffuses the data distribution to a noise distribution spanning the whole space, and a backward process, which inverts this transformation to recover the data distribution from noise. Most of the existing literature assumes that the underlying space is Euclidean. However, in many practical applications, the data are constrained to lie on a submanifold of Euclidean space. Addressing this setting, De Bortoli et al. (2022) introduced Riemannian diffusion models and proved that using an exponentially small step size yields a small sampling error in the Wasserstein distance, provided the data distribution is smooth and strictly positive, and the score estimate is $L_\infty$-accurate. In this paper, we greatly strengthen this theory by establishing that, under an $L_2$-accurate score estimate, a {\em polynomially small stepsize} suffices to guarantee small sampling error in the total variation distance, without requiring smoothness or positivity of the data distribution. Our analysis only requires mild and standard curvature assumptions on the underlying manifold. The main ingredients in our analysis are the Li-Yau estimate for the log-gradient of the heat kernel, and the Minakshisundaram-Pleijel parametrix expansion of the perturbed heat equation. Our approach opens the door to a sharper analysis of diffusion models on non-Euclidean spaces.
|
https://arxiv.org/abs/2601.02499
|
Academic Papers
|
svg
|
06e989cdc314a522cdd19969e83a902a03e914bb6b3e83942b7a22b3a154d71e
|
2026-01-07T00:00:00-05:00
|
GEM-Style Constraints for PEFT with Dual Gradient Projection in LoRA
|
arXiv:2601.02500v1 Announce Type: new Abstract: Full fine-tuning of Large Language Models (LLMs) is computationally costly, motivating Continual Learning (CL) approaches that utilize parameter-efficient adapters. We revisit Gradient Episodic Memory (GEM) within the Low-Rank Adapter (LoRA) subspace and introduce I-GEM: a fixed-budget, GPU-resident dual projected-gradient approximation to GEM's quadratic projection. By constraining non-interference solely within the adapter parameters, I-GEM preserves GEM-like stability with orders-of-magnitude lower mean projection overhead. On a 3-task AG News split with induced domain drift, using GPT-2 (355M) and LoRA ($r=8$), I-GEM matches GEM's average accuracy (within $\sim\!0.04$ pts) and outperforms A-GEM by $\sim\!1.4$ pts. Crucially, it reduces projection time vs.\ GEM by a factor of $\sim\!10^3$. These results suggest that applying GEM constraints in the LoRA subspace is a practical pathway for continual learning at the LLM scale.
|
https://arxiv.org/abs/2601.02500
|
Academic Papers
|
svg
|
5f970c80b690aa9edff345ffbde49a7152aea955f2ae4817c968d4560083124d
|
2026-01-07T00:00:00-05:00
|
Enhancing Debugging Skills with AI-Powered Assistance: A Real-Time Tool for Debugging Support
|
arXiv:2601.02504v1 Announce Type: new Abstract: Debugging is a crucial skill in programming education and software development, yet it is often overlooked in CS curricula. To address this, we introduce an AI-powered debugging assistant integrated into an IDE. It offers real-time support by analyzing code, suggesting breakpoints, and providing contextual hints. Using RAG with LLMs, program slicing, and custom heuristics, it enhances efficiency by minimizing LLM calls and improving accuracy. A three-level evaluation - technical analysis, UX study, and classroom tests - highlights its potential for teaching debugging.
|
https://arxiv.org/abs/2601.02504
|
Academic Papers
|
svg
|