MOOSE-Star-IR-R1D-7B Model Card
Overview
MOOSE-Star-IR-R1D-7B (referred to as MS-IR-7B in the paper) is a 7B-parameter model fine-tuned to select the correct cross-paper inspiration from 15 candidates given a research background. It is designed for scientific hypothesis generation in the MOOSE-Star framework.
- Paper: MOOSE-Star: Unlocking Tractable Training for Scientific Discovery by Breaking the Complexity Barrier (arXiv:2603.03756)
- Base Model: DeepSeek-R1-Distill-Qwen-7B
- License: Apache 2.0
- Code: ZonglinY/MOOSE-Star
- Multi-task variant: MOOSE-Star-R1D-7B (IR + HC in one model)
Model Description
| Parameter | Value |
|---|---|
| Base Model | DeepSeek-R1-Distill-Qwen-7B |
| Training Method | Full-parameter SFT (ZeRO-3) |
| Training Data | TOMATO-Star-SFT-Data-R1D-32B IR split (150,218 train + 2,377 eval) |
| Teacher Model | DeepSeek-R1-Distill-Qwen-32B |
| Learning Rate | 1e-5 |
| Epochs | 1 |
| Batch Size | 128 |
| Chat Template | deepseekr1 |
| Cutoff Length | 16384 |
Task Description
The model selects the most relevant cross-paper inspiration from 15 candidates (labeled A-O), which comprise:
- 1 correct inspiration (ground truth)
- 14 hard negatives (keyword-similar, embedding-similar, and random papers)
The model outputs chain-of-thought reasoning and is designed for a hierarchical search pipeline with O(log N) complexity.
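The O(log N) complexity follows from repeatedly narrowing the candidate pool through 15-way selections until one paper remains. A minimal sketch of that tournament idea (the `select_best` callback and toy corpus are hypothetical stand-ins, with one model call per group):

```python
def hierarchical_search(corpus, select_best, branching=15):
    """Narrow a corpus to a single paper via repeated 15-way selections.

    `select_best` stands in for one model inference: it receives up to
    `branching` candidates and returns the index of the chosen one.
    The number of rounds grows as ceil(log_15(N)).
    """
    candidates = list(corpus)
    rounds = 0
    while len(candidates) > 1:
        winners = []
        for i in range(0, len(candidates), branching):
            group = candidates[i:i + branching]
            winners.append(group[select_best(group)])
        candidates = winners
        rounds += 1
    return candidates[0], rounds

# With a toy scoring oracle, 1000 papers resolve in ceil(log_15(1000)) = 3 rounds.
papers = list(range(1000))
best, rounds = hierarchical_search(papers, lambda g: g.index(max(g)))
```

This is only an illustration of the complexity argument; the actual pipeline lives in the MOOSE-Star repo.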
Prompt Format (Simplified Overview)
The full prompt template is constructed via `instruction_prompts()` in the code examples below. The general structure is:

```text
[Task instruction preamble]

## Context
**Research Question:**
{research_question}

**Background Survey (existing methods for THIS task):**
{background_survey}

**Previous Hypothesis (if any):**
{previous_hypothesis_or_none}

## Candidate Inspiration Papers
### Candidate [A]
**Title:** {title_A}
**Abstract:** {abstract_A}

... (15 candidates total, A through O)

## Output Format
<think>
[reasoning process]
</think>
**Selected ID starts:** [X] **Selected ID ends**
**Selection Reason starts:** [reason] **Selection Reason ends**
```
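Both marker pairs are designed for regex extraction. A minimal parsing sketch (the `parse_ir_output` helper and the sample string are illustrative, not part of the repo):

```python
import re

def parse_ir_output(response: str):
    """Extract the selected candidate letter and the selection reason."""
    id_match = re.search(
        r"\*\*Selected ID starts:\*\*\s*\[([A-O])\]\s*\*\*Selected ID ends\*\*",
        response,
    )
    reason_match = re.search(
        r"\*\*Selection Reason starts:\*\*\s*(.*?)\s*\*\*Selection Reason ends\*\*",
        response,
        re.DOTALL,
    )
    selected = id_match.group(1) if id_match else None
    reason = reason_match.group(1) if reason_match else None
    return selected, reason

sample = (
    "reasoning...</think>\n"
    "**Selected ID starts:** [F] **Selected ID ends**\n"
    "**Selection Reason starts:** Network tools explain heterogeneity. "
    "**Selection Reason ends**"
)
selected, reason = parse_ir_output(sample)  # -> ("F", "Network tools explain heterogeneity.")
```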
Usage
Prerequisites: Clone the MOOSE-Star repo for prompt templates and inference utilities:
```bash
git clone https://github.com/ZonglinY/MOOSE-Star.git && cd MOOSE-Star
# See requirements.txt for full dependencies; at minimum: pip install transformers torch
```
Option A: SGLang Deployment (Recommended)
```bash
# SGLang requires a separate environment; see https://github.com/sgl-project/sglang for installation
# Start the server
python -m sglang.launch_server --model-path ZonglinY/MOOSE-Star-IR-R1D-7B --port 1235
```

```python
import sys
sys.path.insert(0, "./Inference")
from ir_probability_extractor import IRProbabilityExtractor

extractor = IRProbabilityExtractor(base_urls=["http://localhost:1235/v1"])
result = extractor.get_selection_probabilities(
    research_question="Your research question",
    background_survey="Your background survey",
    candidates=[
        {"title": "Candidate A title", "abstract": "Candidate A abstract"},
        {"title": "Candidate B title", "abstract": "Candidate B abstract"},
        # ... up to 15 candidates (labeled A-O)
    ],
)
print(f"Selected: [{result.selected_label}]")
print(f"Probabilities: {result.probabilities}")
```
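The per-candidate probabilities can also be used to build a ranked shortlist rather than a single pick. A small sketch, assuming `result.probabilities` maps candidate labels to floats (as the field name suggests; not a documented guarantee):

```python
def rank_candidates(probabilities):
    """Sort candidate labels by selection probability, highest first.

    Assumes `probabilities` maps a candidate label ("A"-"O") to a float,
    as the extractor's field name suggests.
    """
    return sorted(probabilities.items(), key=lambda kv: kv[1], reverse=True)

# Toy values for illustration only.
ranked = rank_candidates({"A": 0.05, "F": 0.62, "M": 0.21, "B": 0.12})
top_label, top_prob = ranked[0]  # -> ("F", 0.62)
```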
Option B: Direct HuggingFace Inference
```python
import sys
sys.path.insert(0, "./utils")
from prompt_store import instruction_prompts
from transformers import AutoModelForCausalLM, AutoTokenizer
import re

model_name = "ZonglinY/MOOSE-Star-IR-R1D-7B"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, dtype="auto", device_map="auto")

p = instruction_prompts("inspiration_retrieval_with_reasoning_with_alphabetical_candidates")

candidates = [{"title": "...", "abstract": "..."}]  # fill with all 15 candidate papers
candidates_text = "".join(
    f"### Candidate [{chr(ord('A') + i)}]\n**Title:** {c['title']}\n**Abstract:** {c['abstract']}\n\n"
    for i, c in enumerate(candidates)
)
research_question = "Your research question"
background_survey = "Your background survey"
prompt = (p[0] + research_question
          + p[1] + background_survey
          + p[2] + "No previous hypothesis."
          + p[3] + candidates_text
          + p[4])

messages = [{"role": "user", "content": prompt}]
formatted = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=False)
formatted += "<|Assistant|>"  # open the assistant turn manually
inputs = tokenizer(formatted, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=8192, temperature=0.6, top_p=0.9, do_sample=True)
response = tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True)

# Parse the selected candidate from the output markers
match = re.search(r"\*\*Selected ID starts:\*\*\s*\[(\w)\]\s*\*\*Selected ID ends\*\*", response)
if match:
    selected = match.group(1)
    print(f"Selected: [{selected}]")
```
Evaluation Results
| Model | Accuracy |
|---|---|
| Random Selection | 6.70% |
| DeepSeek-R1-Distill-Qwen-7B (base) | 28.42% |
| MS-IR-7B (this model) | 54.37% |
Citation
```bibtex
@article{yang2025moosestar,
  title={MOOSE-Star: Unlocking Tractable Training for Scientific Discovery by Breaking the Complexity Barrier},
  author={Yang, Zonglin and Bing, Lidong},
  journal={arXiv preprint arXiv:2603.03756},
  year={2026}
}
```
Try It on the Inference Provider (Copy-Paste Examples)
The HF Inference Provider playground for this model currently has known
limitations (default max_tokens is set to the full context window, and the
parameter is not adjustable in the UI). Until that is fixed upstream, the
recommended way to try the model is via the API.
Quickstart (Python)
```python
from huggingface_hub import InferenceClient

client = InferenceClient(provider="featherless-ai", token="<HF_TOKEN>")
prompt = """<paste one of the prompts below>"""
r = client.chat_completion(
    messages=[{"role": "user", "content": prompt}],
    model="ZonglinY/MOOSE-Star-IR-R1D-7B",
    max_tokens=4096,  # IMPORTANT: must be < 32768 - len(prompt_tokens)
    temperature=0.6,
    top_p=0.9,
)
print(r.choices[0].message.content)
```
Note on chat template: under the DeepSeek-R1 chat template the opening `<think>` tag is injected by the template itself, so the raw output begins directly with reasoning text and contains only the closing `</think>`. This is expected.
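For downstream tooling that expects a balanced `<think>...</think>` pair, the opening tag can be restored before parsing. A small sketch based on the behavior described above (`normalize_think_tags` is a hypothetical helper, not part of the repo):

```python
def normalize_think_tags(output: str) -> str:
    """Re-prepend the opening <think> tag consumed by the chat template.

    The raw completion starts directly with reasoning text and contains
    only the closing </think>; this restores a balanced pair.
    """
    if "</think>" in output and not output.lstrip().startswith("<think>"):
        return "<think>\n" + output
    return output

raw = "The candidates span several fields...</think>\n**Selected ID starts:** [F] **Selected ID ends**"
normalized = normalize_think_tags(raw)
```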
Example Case (real TOMATO-Star test paper, October 2025)
- Research question: How to comprehensively explain and predict the heterogeneity of neurodegeneration in Alzheimer's disease?
- Ground-truth inspiration: "Network neuroscience" (Bassett & Sporns, 2017) — the kind of cross-disciplinary leap MOOSE-Star is trained to surface (using graph-theoretic brain-network tools to explain AD heterogeneity that single-pathology models miss).
IR prompt — select 1 inspiration from 15 candidates A–O (ground-truth = [F] "Network neuroscience.")
You are helping with scientific hypothesis generation by selecting an inspiration that solves a fundamental problem in the current approach.
## Core Task: Problem Identification and Solution
**Your Primary Goal**: Identify which candidate paper can best help solve a fundamental problem in the existing methods/hypothesis - either directly or by inspiring a solution.
**Key Principle**: Good inspirations help solve real problems. They might directly provide a solution, or they might spark an idea, remind you of related concepts, or inspire a creative adaptation. The best breakthroughs often come from unexpected connections.
**What Makes a Good Inspiration**:
1. **Problem-Solution Fit**: Either addresses a known limitation OR reveals new improvement opportunities
2. **Enables Progress**: The paper provides concepts, sparks ideas, or inspires solutions that advance the research
3. **Creative Connection**: The link might be indirect, non-obvious, or emerge during exploration
4. **Clear Impact**: You can explain how this paper contributes to progress, even if the path is unexpected
**The Research Process**:
1. **Background**: Research question + existing methods (with their limitations)
2. **Problem Identification**: What fundamental issue prevents progress?
3. **Inspiration Selection**: Which concept best solves this problem?
4. **Hypothesis Formation**: Adapt the solution to create a better method
**Classic Example - Backpropagation**:
- **Research Question**: How to use data to automatically improve parameters of multi-layer logistic regression?
- **Existing Methods**: Could only do inference, not learning
- **FUNDAMENTAL PROBLEM**: No way to compute gradients through multiple layers
- **Solution Found**: Chain rule from calculus
- **Why It Solves the Problem**: Chain rule computes derivatives of composite functions; neural networks ARE composite functions
- **Result**: Backpropagation algorithm
Note: The focus was on SOLVING THE GRADIENT PROBLEM. The breakthrough came from recognizing neural networks as composite functions.
## Your Current Task
**Flexible Reasoning Process** (these steps can happen in any order or iteratively):
- **Problem Recognition**: Identify limitations in current methods/hypothesis (can happen before OR after seeing candidates)
- **Opportunity Discovery**: For each candidate, explore how it might advance the research:
- It might solve a problem you already identified
- It might reveal a problem you hadn't noticed and simultaneously offer a solution
- It might spark ideas for improvements you hadn't considered
- **Selection**: Choose the candidate that enables the most meaningful progress
**Note**: The reasoning is often bidirectional - seeing a candidate can make you realize "oh, this could address limitation X that I hadn't fully articulated" or "this suggests a way to improve aspect Y"
**Remember**:
- The best inspiration might not seem immediately relevant
- Focus on problem-solving potential, not keyword matching
- Creative connections often lead to breakthroughs
- Consider how concepts could be adapted or repurposed
**Avoid**:
- Choosing based on surface-level similarity
- Dismissing candidates that seem unrelated at first glance
## Context
**Research Question:**
How to comprehensively explain and predict the heterogeneity of neurodegeneration in Alzheimer's disease?
**Background Survey (existing methods for THIS task):**
Current Alzheimer's disease (AD) research is dominated by unifactorial approaches, primarily the amyloid-beta (Aβ) hypothesis, which posits Aβ accumulation as the central cause of neurodegeneration. Established methods include:
- Biomarker-focused frameworks (e.g., ATN framework), which categorize AD pathology into amyloid/tau/neurodegeneration but oversimplify heterogeneity.
- Reductionist therapeutics like Aβ antibodies (lecanemab [17], donanemab [18]) that remove plaques but yield marginal cognitive improvements (±25% delay) and cause adverse effects (brain swelling/microbleeds).
- Siloed scale investigations: Molecular studies (Aβ/tau at nanoscale [4]), cellular analyses (neuroinflammation at microscale [29]), and epidemiological risk tracking (exposcale [33]) lack integration.
Key Limitations:
- Aβ-centric models fail to explain:
- Aβ plaques in cognitively healthy seniors [8–10]
- Poor correlation between Aβ burden and cognitive decline [11–13]
- Heterogeneity in clinical presentations (e.g., visual vs. memory-predominant AD [56])
- Clinical trials targeting single entities ignore cross-scale interactions (e.g., exercise improves cognition via vascular/metabolic pathways [58]).
**Previous Hypothesis (if any - current progress built from earlier inspirations):**
None (starting from background knowledge)
## Candidate Inspiration Papers
### Candidate [A]
**Title:** Social cognitive network neuroscience.
**Abstract:** Over the past three decades, research from the field of social neuroscience has identified a constellation of brain regions that relate to social cognition. Although these studies have provided important insights into the specific neural regions underlying social behavior, they may overlook the broader neural context in which those regions and the interactions between them are embedded. Network neuroscience is an emerging discipline that focuses on modeling and analyzing brain networks-collections of interacting neural elements. Because human cognition requires integrating information across multiple brain regions and systems, we argue that a novel social cognitive network neuroscience approach-which leverages methods from the field of network neuroscience and graph theory-can advance our understanding of how brain systems give rise to social behavior. This review provides an overview of the field of network neuroscience, discusses studies that have leveraged this approach to advance social neuroscience research, highlights the potential contributions of social cognitive network neuroscience to understanding social behavior and provides suggested tools and resources for conducting network neuroscience research.
### Candidate [B]
**Title:** Editorial: Topological Neuroscience.
**Abstract:** Topology, in its many forms, describes relations. It has thus long been a central concept in neuroscience, capturing structural and functional aspects of the organization of the nervous system and their links to cognition. Recent advances in computational topology have extended the breadth and depth of topological descriptions. This Focus Feature offers a unified overview of the emerging field of topological neuroscience and of its applications across the many scales of the nervous system from macro-, over meso-, to microscales.
### Candidate [C]
**Title:** Exogenous neuritin treatment improves survivability and functions of Schwann cells with improved outgrowth of neurons in rat diabetic neuropathy.
**Abstract:** Pathogenesis and treatment for diabetic neuropathy are still complex. A deficit of neurotrophic factors affecting Schwann cells is a very important cause of diabetic neuropathy. Neuritin is a newly discovered potential neurotrophic factor. In this study, we explored the effect of exogenous neuritin on survivability and functions of diabetic Schwann cells of rats with experimental diabetic neuropathy. Diabetic neuropathy was induced in rats. 12-week diabetic rats contrasted with non-diabetic normal rats had decreased levels of serum neuritin and slowed nerve conduction velocities (NCVs). Schwann cells isolated from these diabetic rats and cultured in high glucose showed reduced cell neuritin mRNA and protein and supernatant neuritin protein, increased apoptosis rates, increased caspase-3 activities and progressively reduced viability. In contrast, exogenous neuritin treatment reduced apoptosis and improved viability, with elevated Bcl-2 levels (not Bax) and decreased caspase-3 activities. Co-cultured with diabetic Schwann cells pre-treated with exogenous neuritin in high glucose media, and diabetic DRG neurons showed lessened decreased neurite outgrowth and supernatant NGF concentration occurring in co-culture of diabetic cells. Exogenous neuritin treatment ameliorated survivability and functions of diabetic Schwann cells of rats with diabetic neuropathy. Our study may provide a new mechanism and potential treatment for diabetic neuropathy.
### Candidate [D]
**Title:** The prone position in COVID-19 impacts the thickness of peripapillary retinal nerve fiber layers and macular ganglion cell layers.
**Abstract:** The prone position reduces mortality in severe cases of COVID-19 with acute respiratory distress syndrome. However, visual loss and changes to the peripapillary retinal nerve fiber layer (p-RNFL) and the macular ganglion cell layer and inner plexiform layer (m-GCIPL) have occurred in patients undergoing surgery in the prone position. Moreover, COVID-19-related eye problems have been reported. This study compared the p-RNFL and m-GCIPL thicknesses of COVID-19 patients who were placed in the prone position with patients who were not. This prospective longitudinal and case-control study investigated 15 COVID-19 patients placed in the prone position (the "Prone Group"), 23 COVID-19 patients not in the prone position (the "Non-Prone Group"), and 23 healthy, non-COVID individuals without ocular disease or systemic conditions (the "Control Group"). The p-RNFL and m-GCIPL thicknesses of the COVID-19 patients were measured at 1, 3, and 6 months and compared within and between groups. The result showed that the Prone and Non-Prone Groups had no significant differences in their p-RNFL thicknesses at the 3 follow-ups. However, the m-GCIPL analysis revealed significant differences in the inferior sector of the Non-Prone Group between months 1 and 3 (mean difference, 0.74 μm; P = 0.009). The p-RNFL analysis showed a significantly greater thickness at 6 months for the superior sector of the Non-Prone Group (131.61 ± 12.08 μm) than for the Prone Group (118.87 ± 18.21 μm; P = 0.039). The m-GCIPL analysis revealed that the inferior sector was significantly thinner in the Non-Prone Group than in the Control Group (at 1 month 80.57 ± 4.60 versus 83.87 ± 5.43 μm; P = 0.031 and at 6 months 80.48 ± 3.96 versus 83.87 ± 5.43 μm; P = 0.044). In conclusion, the prone position in COVID-19 patients can lead to early loss of p-RNFL thickness due to rising intraocular pressure, which is independent of the timing of prone positioning. 
Consequently, there is no increase in COVID-19 patients' morbidity burden.
### Candidate [E]
**Title:** TCMSP: a database of systems pharmacology for drug discovery from herbal medicines
**Abstract:** BackgroundModern medicine often clashes with traditional medicine such as Chinese herbal medicine because of the little understanding of the underlying mechanisms of action of the herbs. In an effort to promote integration of both sides and to accelerate the drug discovery from herbal medicines, an efficient systems pharmacology platform that represents ideal information convergence of pharmacochemistry, ADME properties, drug-likeness, drug targets, associated diseases and interaction networks, are urgently needed.DescriptionThe traditional Chinese medicine systems pharmacology database and analysis platform (TCMSP) was built based on the framework of systems pharmacology for herbal medicines. It consists of all the 499 Chinese herbs registered in the Chinese pharmacopoeia with 29,384 ingredients, 3,311 targets and 837 associated diseases. Twelve important ADME-related properties like human oral bioavailability, half-life, drug-likeness, Caco-2 permeability, blood-brain barrier and Lipinski’s rule of five are provided for drug screening and evaluation. TCMSP also provides drug targets and diseases of each active compound, which can automatically establish the compound-target and target-disease networks that let users view and analyze the drug action mechanisms. It is designed to fuel the development of herbal medicines and to promote integration of modern medicine and traditional medicine for drug discovery and development.ConclusionsThe particular strengths of TCMSP are the composition of the large number of herbal entries, and the ability to identify drug-target networks and drug-disease networks, which will help revealing the mechanisms of action of Chinese herbs, uncovering the nature of TCM theory and developing new herb-oriented drugs. TCMSP is freely available at http://sm.nwsuaf.edu.cn/lsp/tcmsp.php.
### Candidate [F]
**Title:** Network neuroscience.
**Abstract:** Despite substantial recent progress, our understanding of the principles and mechanisms underlying complex brain function and cognition remains incomplete. Network neuroscience proposes to tackle these enduring challenges. Approaching brain structure and function from an explicitly integrative perspective, network neuroscience pursues new ways to map, record, analyze and model the elements and interactions of neurobiological systems. Two parallel trends drive the approach: the availability of new empirical tools to create comprehensive maps and record dynamic patterns among molecules, neurons, brain areas and social systems; and the theoretical framework and computational tools of modern network science. The convergence of empirical and computational advances opens new frontiers of scientific inquiry, including network dynamics, manipulation and control of brain networks, and integration of network processes across spatiotemporal domains. We review emerging trends in network neuroscience and attempt to chart a path toward a better understanding of the brain as a multiscale networked system.
### Candidate [G]
**Title:** Sequential and cooperative action of Fgfs and Shh in the zebrafish retina.
**Abstract:** The signaling molecule Sonic hedgehog (Shh) is required for differentiation of the vertebrate retina. In the developing zebrafish retina, shh expression is initiated at the ventronasal region, from where it spreads as a wave through the retina. To investigate the molecular mechanism underlying this coordinated expression of shh, we mapped the cis-regulatory region and identified a novel regulatory sequence in the first intron of the shh locus. This sequence contains binding sites for the transcription factors Erm and Pea3 that are known transducers of Fgf signaling. Mutation of the binding sites or knockdown of Pea3 and Erm abolishes transgene expression, indicating that Fgf signaling regulates shh expression in the retina. We provide evidence that Fgf3 and -8 control initiation of expression, while Fgf19 is crucial for the propagation of transgene expression through the retina. Inhibitor experiments indicate a continued requirement of FGF and Hedgehog (Hh) signaling for transgene expression after initiation at the ventronasal aspect of the retina. We propose a model, in which Fgf3 and -8 initiate expression and Fgf19 and Shh signals cooperate subsequently to promote establishment of expression throughout the retina.
### Candidate [H]
**Title:** Memory function and the hippocampus.
**Abstract:** There has been a long tradition in memory research of adopting the view of a vital role of the medial temporal lobe and especially the hippocampus in declarative memory. Despite the broad support for this notion, there is an ongoing debate about what computations are performed by the different substructures. The present chapter summarizes several accounts of hippocampal functions in terms of the cognitive processes subserved by these structures, the information processed, and the underlying neural operations. Firstly, the value of the distinction between recollection and familiarity for the understanding of the role the hippocampus plays in memory is discussed. Then multiple lines of evidence for the role of the hippocampus in memory are considered. Cumulating evidence suggests that the hippocampus fosters the binding of disparate cortical representations of items and their spatiotemporal context into a coherent representation by means of a sparse conjunctive neural coding. This association of item and context will then lead to the phenomenological experience of recollection. In contrast, surrounding cortical areas have broader neural coding that provide a scalar signal of the similarity between two inputs (e.g. between the encoding and the retrieval). By this they form the basis of a feeling of familiarity, but also might encode the commonalities between these different inputs. However, a more complete picture of the importance of the hippocampus for declarative memories can only be drawn when the interactions of the medial temporal lobe with other brain areas are also taken into account.
### Candidate [I]
**Title:** Association between response inhibition and working memory in adult ADHD: a link to right frontal cortex pathology?
**Abstract:** We sought to assess the relationship between response inhibition and working memory in adult patients with attention-deficit/hyperactivity disorder (ADHD) and neurosurgical patients with frontal lobe damage. The stop-signal reaction time (SSRT) test and a spatial working memory (SWM) task were administered to 20 adult patients with ADHD and a group of matched controls. The same tasks were administered to 21 patients with lesions to right frontal cortex and 19 patients with left frontal lesions. The SSRT test, but not choice reaction time, was significantly associated with search errors on the SWM task in both the adult ADHD and right frontal patients. In the right frontal patients, impaired performance on both variables was correlated with the volume of damage to the inferior frontal gyrus. Response inhibition and working memory impairments in ADHD may stem from a common pathologic process rather than being distinct deficits. Such pathology could relate to right frontal-cortex abnormalities in ADHD, consistent with prior reports, as well as with the demonstration here of a significant association between SSRT and SWM in right frontal patients.
### Candidate [J]
**Title:** The effects of common peroneal nerve electrical stimulation on lower extremity deep venous hemodynamics: A randomized, crossover and controlled study.
**Abstract:** Intermittent pneumatic compression (IPC) and neuromuscular electrical stimulation can improve deep vein hemodynamics in the lower limbs. We developed a new, small and convenient, and easy to wear common peroneal nerve electrical stimulator (CPNES) and to investigate the effectiveness and safety of CPNES intervention on deep venous hemodynamics. Thirty healthy volunteers were recruited and randomly divided into group A and B. In group A, the hemodynamics of the left superficial femoral artery and the superficial femoral vein were measured after IPC compression, and then the CPNES was activated and the hemodynamics was measured again. In group B, the order of intervention was reversed. In group A, the peak velocity, time average blood flow velocity (TAMV), and flow velocity of femoral vein after IPC and CPNES intervention were higher than these of the baseline (P < .05, respectively). No significant differences of these blood flow parameters were found between IPC and CPNES intervention (P > .05, respectively). In group B, these blood flow parameters of femoral vein after IPC and CPNES intervention were higher than these of the baseline (P < .05, respectively). No significant difference of these blood flow parameters (P > .05, respectively) were noted between IPC and CPNES intervention as well. No differential change of these flow velocity of femoral artery after IPC and CPNES intervention in group A or group B. The hemodynamics of superficial femoral arteries and veins after intervention in group A and B were similar (P > .05, respectively). The effectiveness of CPNES intervention on the hemodynamics of the lower extremity is similar with that of IPC, increasing blood flow and may prevent venous thrombosis without adverse reaction.
### Candidate [K]
**Title:** Region of interest correction factors improve reliability of diffusion imaging measures within and across scanners and field strengths
**Abstract:** Diffusion tensor imaging (DTI) measures are commonly used as imaging markers to investigate individual differences in relation to behavioral and health-related characteristics. However, the ability to detect reliable associations in cross-sectional or longitudinal studies is limited by the reliability of the diffusion measures. Several studies have examined the reliability of diffusion measures within (i.e. intra-site) and across (i.e. inter-site) scanners with mixed results. Our study compares the test-retest reliability of diffusion measures within and across scanners and field strengths in cognitively normal older adults with a follow-up interval less than 2.25 years. Intra-class correlation (ICC) and coefficient of variation (CoV) of fractional anisotropy (FA) and mean diffusivity (MD) were evaluated in sixteen white matter and twenty-six gray matter bilateral regions. The ICC for intra-site reliability (0.32 to 0.96 for FA and 0.18 to 0.95 for MD in white matter regions; 0.27 to 0.89 for MD and 0.03 to 0.79 for FA in gray matter regions) and inter-site reliability (0.28 to 0.95 for FA in white matter regions, 0.02 to 0.86 for MD in gray matter regions) with longer follow-up intervals were similar to earlier studies using shorter follow-up intervals. The reliability of across field strengths comparisons was lower than intra- and inter-site reliabilities. Within and across scanner comparisons showed that diffusion measures were more stable in larger white matter regions (>1500 mm(3)). For gray matter regions, the MD measure showed stability in specific regions and was not dependent on region size. Linear correction factor estimated from cross-sectional or longitudinal data improved the reliability across field strengths. 
Our findings indicate that investigations relating diffusion measures to external variables must consider variable reliability across the distinct regions of interest and that correction factors can be used to improve consistency of measurement across field strengths. An important result of this work is that inter-scanner and field strength effects can be partially mitigated with linear correction factors specific to regions of interest. These data-driven linear correction techniques can be applied in cross-sectional or longitudinal studies. Published by Elsevier Inc.
### Candidate [L]
**Title:** Role of adenosine A2a receptor in cancers and autoimmune diseases
**Abstract:** Adenosine receptors are P1 class of purinergic receptors that belong to G protein‐coupled receptors. There are 4 subtypes of adenosine receptors, namely A1, A2A, A2B, and A3. A2AR has a high affinity for the ligand adenosine. Under pathological conditions or external stimuli, ATP is sequentially hydrolyzed to adenosine by CD39 and CD73. The combination of adenosine and A2AR can increase the concentration of cAMP and activate a series of downstream signaling pathways, and further playing the role of immunosuppression and promotion of tumor invasion. A2AR is expressed to some extent on various immune cells, where it is abnormally expressed on immune cells in cancers and autoimmune diseases. A2AR expression also correlates with disease progression. Inhibitors and agonists of A2AR may be potential new strategies for treatment of cancers and autoimmune diseases. We herein briefly reviewed the expression and distribution of A2AR, adenosine/A2AR signaling pathway, expression, and potential as a therapeutic target.
### Candidate [M]
**Title:** Cognitive network neuroscience.
**Abstract:** Network science provides theoretical, computational, and empirical tools that can be used to understand the structure and function of the human brain in novel ways using simple concepts and mathematical representations. Network neuroscience is a rapidly growing field that is providing considerable insight into human structural connectivity, functional connectivity while at rest, changes in functional networks over time (dynamics), and how these properties differ in clinical populations. In addition, a number of studies have begun to quantify network characteristics in a variety of cognitive processes and provide a context for understanding cognition from a network perspective. In this review, we outline the contributions of network science to cognitive neuroscience. We describe the methodology of network science as applied to the particular case of neuroimaging data and review its uses in investigating a range of cognitive functions including sensory processing, language, emotion, attention, cognitive control, learning, and memory. In conclusion, we discuss current frontiers and the specific challenges that must be overcome to integrate these complementary disciplines of network science and cognitive neuroscience. Increased communication between cognitive neuroscientists and network scientists could lead to significant discoveries under an emerging scientific intersection known as cognitive network neuroscience.
### Candidate [N]
**Title:** Graph Neural Networks in Network Neuroscience.
**Abstract:** Noninvasive medical neuroimaging has yielded many discoveries about the brain connectivity. Several substantial techniques mapping morphological, structural and functional brain connectivities were developed to create a comprehensive road map of neuronal activities in the human brain -namely brain graph. Relying on its non-euclidean data type, graph neural network (GNN) provides a clever way of learning the deep graph structure and it is rapidly becoming the state-of-the-art leading to enhanced performance in various network neuroscience tasks. Here we review current GNN-based methods, highlighting the ways that they have been used in several applications related to brain graphs such as missing brain graph synthesis and disease classification. We conclude by charting a path toward a better application of GNN models in network neuroscience field for neurological disorder diagnosis and population graph integration. The list of papers cited in our work is available at https://github.com/basiralab/GNNs-in-Network-Neuroscience.
### Candidate [O]
**Title:** Null models in network neuroscience.
**Abstract:** Recent advances in imaging and tracing technology provide increasingly detailed reconstructions of brain connectomes. Concomitant analytic advances enable rigorous identification and quantification of functionally important features of brain network architecture. Null models are a flexible tool to statistically benchmark the presence or magnitude of features of interest, by selectively preserving specific architectural properties of brain networks while systematically randomizing others. Here we describe the logic, implementation and interpretation of null models of connectomes. We introduce randomization and generative approaches to constructing null networks, and outline a taxonomy of network methods for statistical inference. We highlight the spectrum of null models - from liberal models that control few network properties, to conservative models that recapitulate multiple properties of empirical networks - that allow us to operationalize and test detailed hypotheses about the structure and function of brain networks. We review emerging scenarios for the application of null models in network neuroscience, including for spatially embedded networks, annotated networks and correlation-derived networks. Finally, we consider the limits of null models, as well as outstanding questions for the field.
## Output Format
**CRITICAL**: You MUST structure your response EXACTLY as follows (the markers are used for automatic parsing).
<think>
[Your flexible reasoning process - explore problems and opportunities as they emerge, evaluate how candidates relate to potential improvements, select the most promising one. Refer to candidates using their labels like Candidate [A], Candidate [B], etc.]
</think>
**Selected ID starts:** [X] **Selected ID ends**
(Replace [X] with the letter of your chosen candidate, e.g., [A], [B], [C], etc. Output ONLY the letter in brackets, nothing else between the markers.)
**Selection Reason starts:** [summary of why this inspiration was selected - what problem it addresses, how it enables progress] **Selection Reason ends**
Expected output marker: **Selected ID starts:** [F] **Selected ID ends**