---
license: mit
---

# BioHopR

Paper

## Description

We introduce BioHopR, a novel benchmark designed to evaluate multi-hop, multi-answer reasoning over structured biomedical knowledge graphs.
Built from the comprehensive PrimeKG knowledge graph, BioHopR includes 1-hop and 2-hop reasoning tasks that reflect real-world biomedical complexities.

## Prompt

We used the prompts below to obtain responses from the open-source LLMs.

```python
def generate_single(model, tokenizer, question):
    # Single-answer prompt: ask for the answer only, with no explanation.
    q = ("You are an expert biomedical researcher.\n" + question
         + "\nJust give me the answer without any explanations.\nAnswer:\n")
    # DEVICE is the torch device the model lives on (e.g. "cuda").
    inputs = tokenizer(q, return_tensors="pt", return_attention_mask=False).to(DEVICE)
    response = model.generate(**inputs,
                              do_sample=False,  # greedy decoding
                              temperature=0.0,
                              top_p=None,
                              num_beams=1,
                              no_repeat_ngram_size=3,
                              eos_token_id=tokenizer.eos_token_id,  # end-of-sequence token
                              pad_token_id=tokenizer.eos_token_id,  # pad with EOS
                              max_new_tokens=32,
                              )
    # Drop the prompt tokens and decode only the newly generated answer.
    output = tokenizer.decode(response.squeeze()[len(inputs['input_ids'][0]):],
                              skip_special_tokens=True)
    return output
```
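For illustration, here is the exact string the single-answer wrapper builds before tokenization; the question text is a made-up placeholder, not a real BioHopR item:

```python
# Reconstruct the prompt exactly as generate_single builds it,
# using a hypothetical placeholder question.
question = "Which disease is treated by the drug metformin?"
q = ("You are an expert biomedical researcher.\n" + question
     + "\nJust give me the answer without any explanations.\nAnswer:\n")

print(q)
```

The system-style first line, the bare question, and the trailing `Answer:` cue are the only scaffolding; the model's completion is read directly after `Answer:`.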

```python
def generate_multi(model, tokenizer, question):
    # Multi-answer prompt: ask for a bullet-pointed list, with no explanation.
    q = ("You are an expert biomedical researcher.\n" + question
         + "\nJust give me the answers without any explanations in a bullet-pointed list.\nAnswer:\n")
    inputs = tokenizer(q, return_tensors="pt", return_attention_mask=False).to(DEVICE)
    response = model.generate(**inputs,
                              do_sample=False,  # greedy decoding
                              temperature=0.0,
                              top_p=None,
                              num_beams=1,
                              no_repeat_ngram_size=3,
                              eos_token_id=tokenizer.eos_token_id,  # end-of-sequence token
                              pad_token_id=tokenizer.eos_token_id,  # pad with EOS
                              max_new_tokens=256,  # longer budget for multiple answers
                              )
    # Drop the prompt tokens and decode only the newly generated answers.
    output = tokenizer.decode(response.squeeze()[len(inputs['input_ids'][0]):],
                              skip_special_tokens=True)
    return output
```
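Since the multi-answer prompt asks for a bullet-pointed list, the raw completion still has to be split into individual answers before scoring. A minimal sketch, where `parse_bullets` is our own hypothetical helper and not part of the BioHopR code:

```python
def parse_bullets(text):
    """Split a bullet-pointed model completion into a list of answers."""
    answers = []
    for line in text.splitlines():
        # Strip common bullet markers (-, *, •) and surrounding whitespace.
        item = line.strip().lstrip("-*•").strip()
        if item:
            answers.append(item)
    return answers

print(parse_bullets("- aspirin\n- ibuprofen\n* naproxen"))
# → ['aspirin', 'ibuprofen', 'naproxen']
```

Real completions can be messier (numbered lists, trailing commentary), so a production parser would likely need more normalization than this.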