BioPacific MIP Research Assistant

BioPacific MIP Research Assistant is an agentic AI system designed to help researchers explore and understand scientific literature more efficiently. It combines a local LLM service, an embedding model, a Qdrant vector database, and an automated paper-processing pipeline to support paper-grounded question answering over the BioPacific literature collection.

The system can ingest papers, update the searchable database, retrieve relevant documents, and serve a session-based REST API for interactive research workflows.

New Features

The current version introduces several improvements for a more practical research assistant experience.

  • Multi-turn conversation support. The agent now supports session-based conversations, so users can ask follow-up questions while preserving the context of previous turns.
  • Automatic intention recognition. The agent can decide whether a user message actually needs retrieval. For example, it can skip retrieval for simple follow-up requests such as rephrasing, but perform retrieval when the question requires scientific evidence.
  • Automatic natural-language filters. The agent can infer structured retrieval constraints directly from user language, including journal, time range, and author hints. For example, if a user asks for synthetic biology papers published in PNAS in the last 4 years, the system can translate that request into targeted retrieval filters automatically (a sketch of such filters follows this list).
  • More concise and precise answers. Before generating the final response, the agent first cleans and compresses retrieved paper content into focused summaries, then answers based on those refined inputs. This makes the final output shorter, clearer, and more relevant to the user's request.
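
For illustration, the constraints inferred from the PNAS example above might conceptually look like the following Python dictionary. The field names are purely illustrative and do not reflect the agent's internal filter schema:

# Illustrative only: hypothetical field names, not the agent's internal schema.
inferred_filters = {
    "topic": "synthetic biology",
    "journal": "PNAS",
    "date_from": "2021-01-01",   # "last 4 years", resolved relative to the query date
    "authors": [],               # no author hint in this request
}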

Environment Setup

This project requires an Anaconda/Conda environment.

First, unzip src.zip. Then, from the repository root, run:

bash install_env.sh

This script will automatically create a Conda environment named biopacific and install the required packages for the project.

After the environment is created, go to the models/ directory. Each model folder contains its own download_model.sh script. Run those scripts to download the required local models:

cd models/Qwen3-Embedding-8B
bash download_model.sh

cd ../Qwen3.5-9B
bash download_model.sh

After that, activate the environment when you want to use the system:

conda activate biopacific

Configuration

After installing the environment, configure the project by editing config.yaml.

Most settings in config.yaml do not need to be changed for normal use. In practice, the most important items are:

  • GPU allocation
  • Port allocation

GPU Allocation

The two vLLM services are configured independently:

  • embedding.gpu: GPU used by the embedding model
  • llm.gpu: GPU used by the chat/completions model

By default, the config assigns:

  • embedding.gpu: "0"
  • llm.gpu: "1"

If your machine has at least two GPUs, this is a good default because the embedding model and the chat model can run separately. If you only have one GPU, you will need to adjust the configuration carefully based on available memory and your deployment strategy.

You may also need to tune:

  • embedding.gpu_memory_utilization
  • llm.gpu_memory_utilization
  • embedding.max_model_len
  • llm.max_model_len

If you encounter out-of-memory issues, reducing the memory utilization value is usually the first thing to try.
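
As a rough illustration, the GPU-related part of config.yaml might look like the sketch below. The nesting follows the dotted key names above, and the memory and length values are placeholders rather than the shipped defaults. If both models must share a single GPU, pointing both gpu settings at the same device and lowering both gpu_memory_utilization values is a common starting point:

embedding:
  gpu: "0"
  gpu_memory_utilization: 0.8   # placeholder value; lower it if you hit out-of-memory errors
  max_model_len: 8192           # placeholder value
llm:
  gpu: "1"
  gpu_memory_utilization: 0.8   # placeholder value
  max_model_len: 8192           # placeholder value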

Port Allocation

The default service layout is:

  • embedding.port: 7770
  • llm.port: 7771
  • qdrant.port: 7772
  • pipeline.s5_agent.service_port: 7773

Make sure these ports are free on your machine before starting the system. If any of them conflict with another service, update the values in config.yaml.
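
For example, on Linux you can quickly check whether any of the default ports are already in use:

ss -ltn | grep -E ':777[0-3]'

If the command prints nothing, all four ports are free.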

Running the System

Once the biopacific environment is ready and config.yaml has been configured, start the system from the repository root:

conda activate biopacific
cd /path/to/BioPacific-MIP-Rearch-Assistant
./start_service.sh

This command automatically starts the following long-running services:

  • embedding: vLLM embedding server
  • llm: vLLM chat/completions server
  • qdrant: vector database
  • agent: paper agent REST service

The startup workflow also runs the data pipeline automatically:

  1. It checks and starts the prerequisite services.
  2. It runs the paper pipeline stages.
  3. It updates the paper database if new content is available.
  4. It finally starts the agent REST service.

After the agent service is running, the system is ready to be accessed through the REST API.
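
As a quick smoke test, you can confirm that the agent REST service responds by starting and then ending a throwaway session. The commands below assume the agent listens on localhost at the default pipeline.s5_agent.service_port (7773); adjust the URL if you changed the port:

curl -X POST http://localhost:7773/v1/session/start -H 'Content-Type: application/json' -d '{"sessionId": "smoke-test"}'
curl -X POST http://localhost:7773/v1/session/end -H 'Content-Type: application/json' -d '{"sessionId": "smoke-test"}'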

Common Commands

Start everything:

./start_service.sh

Stop everything:

./start_service.sh stop

Restart everything:

./start_service.sh restart

Check service status:

./start_service.sh status

If you want the paper database to refresh regularly, you can schedule a monthly restart:

./start_service.sh restart

Running this periodically is useful because each restart triggers the automated pipeline, which updates the database when new papers are available.
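
One way to schedule this is a crontab entry that restarts the services on the first day of each month. This is only a sketch: depending on how start_service.sh resolves its environment, you may also need to activate the biopacific Conda environment inside the cron job:

0 3 1 * * cd /path/to/BioPacific-MIP-Rearch-Assistant && ./start_service.sh restart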

REST API Usage

An example client is provided in rest-api/example_chat_session.py.

You can run it directly after the services are up:

python rest-api/example_chat_session.py

Session ID and Multi-turn Conversation

The REST API is session-based. A sessionId identifies one conversation on the server side.

If you keep using the same sessionId, the agent will preserve recent conversation history and support multi-turn follow-up questions. This is what enables interactions such as:

  • asking an initial literature question
  • following up with "tell me more about paper [2]"
  • requesting a simpler explanation of the previous answer

If you start a new sessionId, the server treats it as a new conversation.

The Three Core Methods

The API provides three main endpoints:

1. Start a Session

POST /v1/session/start

Example request body:

{
  "sessionId": "optional-session-id"
}

Example response:

{
  "sessionId": "your-session-id"
}

If you do not want to manage session IDs yourself, the example client can generate one automatically.

2. Chat

POST /v1/chat

Example request body:

{
  "sessionId": "your-session-id",
  "message": "Find synthetic biology papers published in PNAS in the last 4 years."
}

Example response shape:

{
  "sessionId": "your-session-id",
  "response": "Final grounded answer...",
  "docs": [
    {
      "chat_doc_id": 1,
      "rank": 1,
      "score": 0.91,
      "paper": {
        "title": "...",
        "authors": ["..."],
        "journal": "...",
        "pub_date": "...",
        "doi": "...",
        "keywords": ["..."],
        "abstract": "...",
        "sections": [
          {
            "section_title": "...",
            "subsections": [
              {
                "title": "...",
                "paragraphs": ["..."]
              }
            ]
          }
        ],
        "link": "https://pubmed.ncbi.nlm.nih.gov/..."
      }
    }
  ]
}

The response field contains the assistant's answer, and docs contains the retrieved paper results for that turn. Each document entry currently includes:

  • chat_doc_id: the stable paper identifier used by the assistant for inline citations during the chat session
  • rank: the retrieval ranking for the current turn
  • score: the retrieval similarity score returned by the agent
  • paper: the paper payload, including title, abstract, sections, authors, journal, pub_date, doi, keywords, and a PubMed link

Important: citations in the agent's response refer to chat_doc_id, not to rank. For example, if the response cites paper [3], that means the cited paper is the one whose chat_doc_id is 3, not necessarily the paper whose retrieval rank is 3 in the current turn.
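
For example, given a parsed /v1/chat response stored in a Python dictionary named data (a hypothetical variable name), the paper cited as [3] can be resolved like this:

# Resolve citation [3] by chat_doc_id, not by retrieval rank.
cited = next(doc for doc in data["docs"] if doc["chat_doc_id"] == 3)
print(cited["paper"]["title"])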

3. End a Session

POST /v1/session/end

Example request body:

{
  "sessionId": "your-session-id"
}

Example response:

{
  "ok": true
}

Ending a session clears the conversation state associated with that session on the server.

Example Python Workflow

The example client exposes three corresponding methods:

  • start_session()
  • chat()
  • end_session()

A typical workflow looks like this:

from example_chat_session import ExampleChatSession

session = ExampleChatSession()
session.start_session()

session.chat("Find synthetic biology papers published in PNAS in the last 4 years.")
session.chat("Now summarize the key trends from those papers.")
session.chat("Please explain the previous answer in simpler language.")

session.end_session()

As long as the same session remains active, the assistant can use the conversation history to interpret follow-up questions more effectively.
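
If you prefer to call the REST endpoints directly instead of using the example client, a minimal sketch with the requests library could look like the following. The base URL assumes the agent runs locally on the default pipeline.s5_agent.service_port (7773); adjust it to match your config.yaml:

import uuid

import requests

BASE_URL = "http://localhost:7773"  # assumption: default agent host and port

# Generate a session ID on the client side, as the example client does.
session_id = str(uuid.uuid4())
requests.post(f"{BASE_URL}/v1/session/start", json={"sessionId": session_id})

# Ask a question within the session.
resp = requests.post(
    f"{BASE_URL}/v1/chat",
    json={
        "sessionId": session_id,
        "message": "Find synthetic biology papers published in PNAS in the last 4 years.",
    },
)
data = resp.json()
print(data["response"])   # the grounded answer for this turn
for doc in data["docs"]:  # retrieved papers for this turn
    print(doc["chat_doc_id"], doc["paper"]["title"])

# End the session to clear its conversation state on the server.
requests.post(f"{BASE_URL}/v1/session/end", json={"sessionId": session_id})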

Summary

BioPacific MIP Research Assistant provides an agentic, paper-grounded research workflow with automated paper ingestion, retrieval, summarization, and session-based conversation. With the latest updates, it now supports better multi-turn interaction, smarter retrieval decisions, more precise natural-language filtering, and cleaner final answers for literature exploration.
