Vol. 06, No. 02, pp. 342–357 (2025)
ISSN: 2708-0757
JOURNAL OF APPLIED SCIENCE AND TECHNOLOGY TRENDS
www.jastt.org
doi: 10.38094/jastt62404
A Hybrid LLM–Knowledge Graph Framework for Accurate
Biomedical Question Answering
Havraz Y. Omar¹,²*, Abdulhakeem O. Mohammed³
¹Department of Information Technology, Technical College of Duhok, Duhok Polytechnic University, Duhok, Kurdistan Region,
Iraq. havraz.omar@dpu.edu.krd
²Department of Information Technology, Technical College of Informatics – Akre, Akre University for Applied Sciences, Akre,
Kurdistan Region, Iraq.
³Department of Computer Science, College of Science, University of Zakho, Zakho, Kurdistan Region, Iraq.
a.mohammed@uoz.edu.krd
*Correspondence: havraz.omar@dpu.edu.krd
Abstract
Biomedical question answering requires accurate and interpretable systems; however, existing approaches often face challenges such as
language model hallucinations and limited reasoning when relying solely on standalone knowledge graphs. To address these limitations,
this study proposes a hybrid framework that integrates the LLaMA-3B language model with a Neo4j-based drug–disease–symptom
knowledge graph. The system translates natural language questions into executable Cypher queries, operates on an iBKH-derived graph
comprising over 65,000 entities and 3 million relationships, and returns answers with supporting evidence through a transparent interface.
Experiments conducted on 60 biomedical questions across three levels of difficulty demonstrate the robustness of the approach: 96%
exact match for simple queries, 95% for medium queries, and 86.7% for complex queries. Overall, the system achieves Precision@5 of
96.1%, Recall@5 of 89.0%, F1@5 of 91.0%, Hits@k of 96.1%, and an MRR of 94.4%, while maintaining an average response time of
only 6.07 seconds. These results indicate that the system retrieves nearly all relevant answers, ranks them correctly, and delivers them
with latency low enough for interactive use. Moreover, unlike cloud-based APIs such as ChatGPT, which require internet connectivity
and external data transmission, the proposed framework operates fully offline, ensuring privacy, reproducibility, and compliance with
biomedical data governance. Overall, this pipeline provides an accurate, efficient, and privacy-preserving solution for biomedical question
answering, making it a practical alternative to cloud-dependent approaches in sensitive healthcare contexts.
Keywords: Knowledge Graph, LLM, Question Answering, Neo4j, Biomedical Informatics, Healthcare AI, LLaMA 3.
Received: August 14th, 2025 / Revised: October 10th, 2025 / Accepted: October 16th, 2025 / Online: October 20th, 2025
I. INTRODUCTION
Answering questions in the biomedical field is a difficult task
due to the complexity of medical knowledge and the need for
precision. In recent years, large language models (LLMs) such as
LLaMA and GPT-4 have made progress in understanding and
generating human-like responses to medical questions [1, 2].
These models can process large amounts of information and
respond in natural language, which makes them helpful in
healthcare settings [3]. However, they often struggle to provide
accurate answers when dealing with specialized biomedical
content [4, 5].
One major issue with LLMs is a problem called hallucination,
where the model generates information that sounds right but is
actually incorrect or unsupported [6]. In medical applications,
this can be dangerous, as healthcare professionals rely on precise
and trustworthy information [7]. Therefore, researchers are
exploring ways to combine LLMs with structured sources of
knowledge to improve their reliability [8].
LLM-only systems in biomedicine still hallucinate and are
hard to verify, limiting safe use [9, 10]. Biomedical knowledge
graphs (BKGs) such as iBKH and SPOKE curate multi-source
facts and enable multi-hop reasoning, yet they neither interpret
free text nor generate answers [11, 12]. Recent hybrids (KG-
aware RAG) improve grounding but often lack explicit path-level
justifications and robust end-to-end answer evaluation [13, 14].
Recent studies have increasingly integrated Knowledge
Graphs (KGs) with Large Language Models (LLMs) to improve
factual accuracy, strengthen reasoning, and reduce hallucinations. Notable
examples include DR.KNOWS, which combines UMLS-based
KGs with LLMs for better diagnostic reasoning [15], KnowNet
for visualizing and validating LLM outputs [16], and MedKA for
KG-enhanced question answering [17].
To address these challenges, several recent works have
explored the integration of large language models with
biomedical knowledge graphs (KGs). A biomedical KG is a
structured network that connects entities such as diseases, drugs,
and symptoms using defined relationships [18, 19]. These graphs
store verified medical knowledge from trusted databases,
allowing for more accurate and explainable responses [12]. KGs
are especially useful in multi-step reasoning tasks, where finding
an answer requires connecting different pieces of information
[20]. These entities and relationships can be visually represented
in a biomedical knowledge graph, as shown in Fig. 1, where
nodes represent medical concepts such as drugs, diseases,
symptoms, and pathways, and edges denote their semantic
relationships.
Fig. 1. Overview of Biomedical Knowledge Graph Entities and Relationships
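The node-and-edge structure described above can be sketched in a few lines of code. The entity names and relation labels below are illustrative placeholders, not taken from the paper's actual graph:

```python
# A minimal sketch of a biomedical knowledge graph as (head, relation, tail)
# triples. Entities and relations are hypothetical examples.
TRIPLES = [
    ("Metformin", "TREATS", "Type 2 Diabetes"),
    ("Type 2 Diabetes", "HAS_SYMPTOM", "Fatigue"),
    ("Aspirin", "TREATS", "Headache"),
]

def neighbors(entity, relation=None):
    """Return tail entities linked from `entity`, optionally filtered by relation."""
    return [t for h, r, t in TRIPLES if h == entity and (relation is None or r == relation)]

print(neighbors("Metformin", "TREATS"))  # ['Type 2 Diabetes']
```

Multi-step reasoning then amounts to chaining such neighbor lookups, e.g. drug → disease → symptom.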
One example of a widely used biomedical KG is SPOKE,
which includes millions of nodes and relationships from over 40
biomedical databases [12]. Integrating an LLM with a KG allows
the strengths of both technologies to work together: the LLM
provides language understanding, and the KG provides
structured, factual knowledge [21, 22]. A common method is
retrieval-augmented generation (RAG), where the LLM retrieves
information from the KG and uses it to generate more accurate
responses [13, 23]. In more advanced setups, the LLM can even
generate queries like Cypher to fetch specific data from the graph
[24, 25]. Neo4j is a popular graph database that supports fast and
flexible storage and querying of knowledge graphs using Cypher
[21]. It is well-suited for biomedical applications because it
allows easy exploration of complex medical relationships. Recent
work has shown that combining Neo4j with LLMs can lead to
better accuracy, fewer hallucinations, and more explainable
results [24, 26].
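Because LLM-generated Cypher is executed against a live database, hybrid systems of this kind typically sanity-check queries before running them. The guardrail below is a minimal illustrative sketch, not the validation logic of any cited system:

```python
import re

# Hypothetical guardrail: accept only read-only MATCH...RETURN queries and
# reject any Cypher containing write clauses before it reaches Neo4j.
WRITE_CLAUSES = re.compile(r"\b(CREATE|MERGE|DELETE|SET|REMOVE|DROP)\b", re.IGNORECASE)

def is_safe_readonly(cypher: str) -> bool:
    """Return True only for queries that look like read-only MATCH...RETURN."""
    q = cypher.strip().upper()
    return q.startswith("MATCH") and "RETURN" in q and not WRITE_CLAUSES.search(q)

query = "MATCH (d:Drug)-[:TREATS]->(x:Disease {name: 'Asthma'}) RETURN d.name"
print(is_safe_readonly(query))  # True
```

A rejected query can then be regenerated or surfaced to the user instead of being executed blindly.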
Despite improvements, building a reliable hybrid system that
combines an LLM with a biomedical KG remains a technical
challenge. Some approaches require complex pipelines or large
training datasets, while others rely on fine-tuning specific to a
narrow set of questions [27, 28]. There is still a need for systems
that are both accurate and easy to scale, especially in domains like
medicine where the cost of errors is high [22].
Recent advances in KG-augmented LLMs have improved
performance, yet biomedical QA continues to face three practical
gaps:
1. Traceability: LLM-only or text-retrieval-only pipelines
rarely provide graph-grounded justifications; users lack the
ability to inspect the exact nodes and edges that support an
answer.
2. Evaluation: Prior work often judges quality via surface-
form checks (e.g., matching a Cypher template), which fails to
capture end-to-end answer correctness or ranking quality
across different difficulty levels.
3. Deployment: Many solutions assume cloud resources or
domain-specific fine-tuning, yet biomedical contexts typically
demand a local, privacy-preserving system with low latency
and reproducible behavior.
Timestamp-aware execution and periodic KG refresh help
avoid deprecated or contraindicated links, making the tool better
suited for safety-critical clinical contexts (e.g., drug–drug
interactions).
To address these limitations, our work introduces a locally
deployable pipeline that translates biomedical questions into
executable Cypher queries over a Neo4j knowledge graph. The
system returns answers with supporting nodes and edges, and is
evaluated using Exact Match, Precision, Recall, F1, Hits@k,
MRR, and latency across simple, medium, and complex question
sets. Unlike prior template-based methods, our approach enables
traceable, outcome-level validation. In summary, the main
contributions of this work are as follows:
• Hybrid LLM-to-Cypher QA: A system that translates
natural language questions into accurate, executable
Cypher over a Neo4j drug–disease–symptom KG.
• Prompt-driven query generation: Schema- and entity-
aware prompting that reliably maps diverse biomedical
questions to graph queries.
• Evidence transparency: Along with each answer, we
surface the generated Cypher and the supporting
justification subgraph (nodes, edges) plus a brief natural
language rationale.
• Answer-level evaluation: End-to-end assessment using
Exact Match, F1, Precision/Recall, Hits@k, MRR and
latency across simple, medium and complex tiers.
• Local, reproducible deployment: On-prem LLaMA 3
inference (no cloud dependency) suitable for biomedical
settings requiring low latency and strong data control.
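The ranking metrics named in the evaluation contribution have compact standard definitions. The sketch below states Hits@k and reciprocal rank over generic answer lists; it is not the paper's evaluation code:

```python
def hits_at_k(ranked, relevant, k=5):
    """1.0 if any relevant answer appears in the top-k ranked results, else 0.0."""
    return 1.0 if any(a in relevant for a in ranked[:k]) else 0.0

def reciprocal_rank(ranked, relevant):
    """1/rank of the first relevant answer; 0.0 if none is retrieved."""
    for rank, a in enumerate(ranked, start=1):
        if a in relevant:
            return 1.0 / rank
    return 0.0

# MRR is the mean of reciprocal_rank over the whole question set.
```

Exact Match, Precision@5, Recall@5, and F1@5 follow analogously by comparing the top-5 retrieved answers against the gold answer set.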
The remainder of this paper is organized as follows: Section
2 reviews related work on biomedical knowledge graphs and
LLM-based QA systems. Section 3 provides background on
knowledge graphs, large language models, and question
answering frameworks. Section 4 details the proposed
methodology, including system architecture, dataset
construction, and query translation. Section 5 presents the
experimental results through both quantitative metrics and
qualitative case studies. Section 6 discusses the findings, analyzes
limitations, and compares performance against baseline models.
Finally, Section 7 concludes this paper and outlines directions for
future work.
II. RELATED WORK
Recently, studies have concentrated on integrating large
language models (LLMs) with clinical and medical knowledge
graphs to improve the answering of medical questions.
Researchers have built several biomedical KGs in Neo4j and
applied LLMs such as LLaMA and GPT to convert natural
language questions into graph queries. These efforts aimed at
improving answer correctness, reducing hallucination errors,
handling one-to-many relationships, and supporting complex
reasoning. Some frameworks also adopted retrieval methods to
ground responses in trusted data.
Su et al.[11] developed an integrative Biomedical Knowledge
Hub (iBKH), a large biomedical knowledge graph that integrates
18 curated data sources. Deploying iBKH in Neo4j enables a
user-friendly web portal for fast, interactive knowledge
retrieval. The system applied advanced graph learning
techniques to support biomedical knowledge discovery,
illustrated by an in silico drug repurposing case study for
Alzheimer's disease. iBKH achieved promising predictive
performance for known drugs and proposed possible new drug
candidates.
Rajabi and Kafaie[19] proposed a disease knowledge graph
using a cross-referential disease database comprising diseases,
symptoms, and drugs interconnected with relationships. They
transferred the data into Neo4j to create a graph of 9,400 nodes
and 45,000 relationships representing the semantic links between
medical concepts. Applying Cypher queries enabled answering
complex medical questions regarding identifying drugs that may
cause certain diseases; it was demonstrated that the graph inferred
new relationships not explicitly existing in the original data. The
conclusion was that disease knowledge graphs sped up clinical
discovery and contributed to understanding complex medical
relationships.
Hou et al.[3] assessed and contrasted ChatGPT (both GPT-
3.5 and GPT-4) and the biomedical knowledge graphs (BKGs)
concerning their ability to answer biomedical questions, generate
new knowledge, and reason. Their datasets were focused on
dietary supplements and drugs, while evaluation criteria entailed
accuracy, novelty, and reasoning ability. The results indicate that
while GPT-4 surpassed GPT-3.5 and BKGs in knowledge
provision, it proved inconsistent with regard to citations and
reasoning. In contrast, BKGs scored higher in accuracy
and reliability, especially in discovering novel links and in
structured reasoning.
Soman et al.[13] presented a novel framework called KG-
RAG that integrates a large biomedical knowledge graph
(SPOKE) with LLaMA 2, GPT-3.5, and GPT-4 (LLMs) to
produce accurate biomedical text. They optimized the retrieval of
relevant graph context, cutting token usage by over 50% without
losing accuracy. This helped LLMs perform better on biomedical
question answering, with substantial accuracy gains, especially in
the case of LLaMA 2. They compared KG-RAG to other retrieval
methods and indicated its comparatively more robust and
efficient results. The framework produced reliable evidence-
based answers grounded in biomedical knowledge.
Luo et al.[23] created ChatKBQA, a framework for question
answering over knowledge bases that first generates logical
forms with fine-tuned LLMs and then retrieves the relevant
entities and relations. This generate-then-retrieve approach
addresses issues in earlier methods, namely inefficient retrieval
and error propagation. They fine-tuned open-source LLMs such
as LLaMA 2
to change natural-language questions into logical forms with high
accuracy. The retrieval phase uses unsupervised phrase-level
semantic matching in a way that enhances the alignment of
entities and relations. Experiments on benchmark datasets
indicate ChatKBQA to be superior to its predecessors, with the
highest accuracy to date.
Pusch and Conrad[6] proposed a hybrid approach
combining LLMs and biomedical Knowledge Graphs
(KGs) to suppress hallucination errors in question-answering.
They proposed query-checking algorithms for validating,
correcting, and executing the KG Cypher queries that LLMs
generated, thereby attaining accurate and understandable
answers. The system used retrieval-augmented generation (RAG)
to ground answers within KG data. The methodology was
validated on a biomedical KG called PrimeKG using 50
benchmark questions, assessing models like GPT-4 Turbo and
LLaMA 3. Commercially available GPT-4 Turbo obtained
record-high accuracy, while open-source models achieved
impressive gains through prompt optimization.
Feng et al.[22] developed the Knowledge Graph-based
Thought (KGT) framework that integrated LLMs with a pan-
cancer knowledge graph for biomedical question answering.
KGT was designed to reason on the knowledge graph schema and
identify optimal subgraphs to use for directing accurate answer
generation, all without fine-tuning the LLMs. The framework is
benchmarked against a new dataset (PcQA) designed specifically
for pan-cancer KGQA tasks and has outperformed all existing
state-of-the-art approaches by a large margin. KGT’s
practicality in biomedical issues was highlighted through case
studies for drug repositioning, drug resistance, and biomarker
discovery. Their approach exhibited robust adaptability among
various LLMs.
Rezaei et al.[26] developed AMG-RAG, a dynamic
framework that utilizes autonomous LLM agents with medical
search tools in the continuous construction and real-time updating
of Medical Knowledge Graphs (MKGs). Their system
incorporated confidence scoring and multi-hop reasoning to
improve accuracy and interpretability in medical question
answering. AMG-RAG outperformed comparable models on both
the challenging MEDQA benchmark and the more accessible
MedMCQA, showing that it could reason efficiently over
current structured medical knowledge. They also used Neo4j to
manage the knowledge graphs while adding external searches to
ensure the latest data.
Tiwari et al.[24] presented Auto-Cypher, a recent automated
pipeline for producing high-quality synthetic data for training
LLMs by mapping natural language to Cypher queries for graph
databases like Neo4j. The pipeline introduced an LLM-as-
database-filler technique to synthesize Neo4j databases on which
generated queries are executed to verify their correctness. A sizable dataset
called SynthCypher was created, spanning multiple domains and
complex queries, leading to a 40% improvement in LLM
performance on Cypher generation. The datasets were used to
fine-tune open-source models such as LLaMA, Mistral, and
Qwen, and the SPIDER benchmark was adapted for evaluation
purposes.
Mohammed et al.[29] proposed a hybridized GraphRAG
framework combining Neo4j-based UMLS knowledge graphs
with a vector store for medical textbooks to create an improved
USMLE-style clinical question-answering approach. The
project integrated symbolic reasoning from knowledge graphs
with semantic retrieval performed on text embeddings to enhance
relevance and accuracy via adaptive re-ranking and query
expansion. Answers were produced by GPT-4o-Mini, with
different prompting strategies encouraging evidence-based and
traceable responses grounded in verified medical
knowledge. Experiments showed that the hybrid approach
improved factual accuracy and citation fidelity as compared to
the LLM-only approach, enhancing transparency and
reliability. The results show that combining structured and
unstructured medical knowledge sources can help mitigate
hallucinations and hence improve clinical
trustworthiness in AI-driven medical QA.
Yang et al.[30] constructed a sepsis knowledge graph by
combining multicenter clinical data from over 10,000
patients with the help of GPT-4 for entity recognition and
relationship extraction. Real-world data were collected from
three hospitals and integrated with clinical guidelines and
databases from the public domain. The knowledge graph
contained 1,894 nodes and 2,021 relationships pertaining to
diseases, symptoms, biomarkers, treatments, and complications.
GPT-4 outperformed other models across sepsis-specific
extraction tasks, obtaining high F1-scores. The constructed
graph highlighted complex interactions in sepsis for assisting
clinical decision-making and was implemented on Neo4j.
Guan et al.[20] proposed a novel method for constructing a
local knowledge graph from retrieved biomedical documents by
extracting propositional claims. They carried out layer-wise
summarization on this graph to capture multi-document
relationships and provide comprehensive contextual information
to a language model for question-answering purposes. The
method resolved issues in multi-document biomedical QA, such
as noise reduction and efficient context usage. They then tested
their method on several benchmarks for biomedical question
answering, achieving performance at least comparable to, if not
better than, existing retrieval-augmented generation (RAG)
baselines. The study established enhanced reasoning and answer
accuracy of the model achieved through structured graph
summarization.
Previous studies have improved biomedical QA using KGs
and LLMs, but important gaps remain. Most systems lack
transparent, graph-based justifications, rely on limited evaluation
methods, or depend on cloud resources that reduce privacy and
reproducibility. Our framework addresses these gaps by
providing visible Cypher queries with evidence subgraphs,
applying comprehensive performance metrics across difficulty
levels, and ensuring fully local, privacy-preserving deployment.
Table I summarizes key previous studies on biomedical
knowledge graphs and question answering, outlining their
methods, datasets, and main limitations.
III. PRELIMINARIES
This section outlines the fundamental concepts required to
understand the proposed system. It introduces biomedical
knowledge graphs, explains how Neo4j stores data in graph
form, and describes the use of Cypher for querying. It also
provides a brief overview of large language models (LLMs) and
their role in interpreting natural language.
A. Biomedical Knowledge Graphs
Biomedical Knowledge Graphs (BKGs) provide a structured
representation of complex biomedical information by modeling
diverse medical entities, such as diseases, drugs, symptoms, and
biological pathways, as interconnected nodes within a graph
structure. The edges in these graphs represent the semantic
relationships between these entities, including 'treats', 'causes',
'interacts with', and many others, as illustrated in Fig. 1. This
form of representation enables the integration of heterogeneous
biomedical data from a wide range of sources, including
scientific literature, clinical records, genomic databases, and
experimental findings [19, 31].
Such integration creates a comprehensive biomedical
knowledge base that supports advanced analytics and discovery.
For example, biomedical knowledge graphs can reveal previously
unknown relationships (e.g., between drugs and diseases) and
help prioritize potential biomarkers for complex conditions. The
Integrative Biomedical Knowledge Hub (iBKH) is one such
large-scale graph that consolidates diverse biomedical resources
into a unified hub, enabling discovery at scale [11]. Beyond
iBKH, large biomedical knowledge graphs such as SPOKE
further illustrate how graph integration accelerates research and
supports precision-medicine use cases [12]. Overall, these graphs
serve as foundational resources for data-driven, personalized
medicine, where treatment can be tailored to the individual's
biological profile, improving outcomes and minimizing side
effects [19, 31].
TABLE I. SUMMARY OF RELATED RESEARCH ON BIOMEDICAL KGS AND QUESTION ANSWERING

| Ref. | Year | Data/Graph | Method | Baselines | Key Metric | Limitation |
|---|---|---|---|---|---|---|
| [11] | 2023 | iBKH (18 biomedical sources, Neo4j) | Integrative KG + graph learning; drug repurposing case | Known drugs, Alzheimer's study | Predictive performance (drug repurposing) | Limited to Alzheimer's case study; scalability and updates not detailed |
| [19] | 2023 | Disease KG (9,400 nodes, 45,000 relations in Neo4j) | Cypher queries for disease–drug–symptom reasoning | Cross-referential disease DB | New relation inference; complex query answering | Limited to single domain; lacks large-scale evaluation |
| [3] | 2023 | BKGs vs. GPT-3.5/4 | Comparative QA study: LLMs vs. KGs | GPT-3.5, GPT-4, KG reasoning | Accuracy, novelty, reasoning | GPT-4 inconsistent in reasoning/citations; KG less fluent but more reliable |
| [13] | 2024 | SPOKE KG + LLaMA 2, GPT-3.5, GPT-4 | KG-optimized retrieval for LLMs (RAG) | Other retrieval methods | Accuracy, token reduction >50% | Focus on retrieval optimization, not KG construction |
| [23] | 2024 | Benchmark KB datasets | Generate-then-retrieve (LLM → logical form → KB retrieval) | Prior KBQA methods | Accuracy (highest to date) | Risk of error in logical form generation |
| [6] | 2024 | PrimeKG | LLM + KG hybrid, Cypher query validation, RAG | GPT-4 Turbo, LLaMA 3 | Accuracy, explainability | Dependent on KG coverage; computationally intensive |
| [22] | 2025 | Pan-cancer KG (PcQA dataset) | KG-enhanced reasoning (subgraph selection) | SOTA KGQA methods | Outperformed SOTA on PcQA | Limited to pan-cancer focus; no fine-tuning explored |
| [26] | 2025 | Dynamic Medical KG + Neo4j | LLM agents + multi-hop reasoning | MEDQA, MedMCQA baselines | Accuracy, interpretability | High system complexity; requires continuous updating |
| [24] | 2025 | SynthCypher dataset (Neo4j + synthetic queries) | LLM-supervised Cypher generation and verification | SPIDER benchmark | Cypher accuracy improvement (40%) | Synthetic dataset may not capture all real-world cases |
| [29] | 2025 | UMLS KG + Neo4j | Hybrid GraphRAG | LLM-only QA | Accuracy, citation fidelity | More complex pipeline; relies on external vector store |
| [30] | 2025 | Clinical data (10k patients, 1,894 nodes, Neo4j) | KG construction using GPT-4 for entity/relation extraction | Other KG construction methods | High F1-scores | Focus only on sepsis; limited generalization |
| [20] | 2025 | Local KG from biomedical documents | Multi-level summarization over KG for QA | RAG baselines | QA accuracy, reasoning | Tested mainly on document QA; scalability not proven |
B. Neo4j Graph Database
To manage the complexity and large size of biomedical
knowledge graphs, specialized graph databases are needed.
Neo4j is one of the most popular graph databases designed to
store and query data structured as nodes (entities) and
relationships (edges), both of which can have descriptive
properties [32, 33]. It uses the property graph model, which
makes it easy to represent complex, connected biomedical data
such as drug-gene interactions or disease pathways. Neo4j’s
Cypher query language is especially advantageous because it
allows users to write expressive and efficient queries to explore
multi-step connections in the data [34].
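A multi-step Cypher query of the kind described above can be parameterized from application code. The node labels and relationship types below are assumptions for illustration, not the exact schema of the paper's iBKH-derived graph:

```python
def two_hop_symptoms_query(drug_name: str):
    """Build a parameterized two-hop Cypher query tracing
    Drug -TREATS-> Disease -HAS_SYMPTOM-> Symptom.
    Labels and relation names are illustrative; a real schema may differ."""
    cypher = (
        "MATCH (d:Drug {name: $drug})-[:TREATS]->(dis:Disease)"
        "-[:HAS_SYMPTOM]->(s:Symptom) "
        "RETURN DISTINCT dis.name AS disease, s.name AS symptom"
    )
    return cypher, {"drug": drug_name}
```

The resulting (query, params) pair can then be handed to a Neo4j driver session for execution; passing the drug name as a parameter rather than interpolating it into the string avoids injection issues.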
Neo4j works well for biomedical data because it can quickly
run complicated queries over highly interconnected datasets. This
is important in biology and medicine, where relationships
between entities like proteins, diseases, and drugs are often
complex and layered. Studies have shown that Neo4j handles
large biomedical graphs efficiently, making it a favorite among
researchers and industry users alike [33, 35, 36]. Its indexing and
caching mechanisms also help speed up query processing and
data retrieval [37].
Moreover, Neo4j integrates smoothly with many
programming languages and analytics tools, which makes it
easier to build interactive biomedical applications and clinical
decision support systems that can turn complex graph data into
useful insights [38, 39].
C. Large Language Models (LLMs) in Biomedical Question
Answering
Large Language Models (LLMs) are powerful AI systems
trained on vast amounts of text data. They learn the structure and
patterns of language, enabling them to understand questions,
generate responses, summarize information, and perform other
complex language tasks. Well-known models such as LLaMA
and GPT-3 have greatly advanced the field of natural language
processing by showing strong performance across many tasks
[40, 41].
In biomedical research and clinical settings, LLMs help
translate natural language questions from doctors, researchers, or
patients into precise, structured queries that can be executed on
biomedical knowledge graphs and databases. This makes it easier
to retrieve detailed biomedical information like drug interactions,
gene-disease associations, and symptom descriptions [42, 43].
Despite their power, LLMs can sometimes generate incorrect
or fabricated responses, a phenomenon known as hallucination,
which poses risks in sensitive biomedical contexts. These
hallucinations occur because the models generate plausible
answers based on patterns learned from data rather than verified
knowledge. To mitigate this, researchers integrate LLMs with
biomedical knowledge graphs to ground answers in factual data,
significantly improving accuracy and reducing misinformation
[4]. Further improvements come from fine-tuning LLMs on
biomedical corpora and carefully engineering prompts, which
enhance their reliability and relevance in medical question
answering.
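Careful prompt engineering of the kind mentioned above often amounts to embedding the graph schema in the instruction. The schema summary and template below are hypothetical; the paper's actual prompt wording and schema are not reproduced here:

```python
# Hypothetical schema summary and prompt template for schema-aware
# question-to-Cypher translation.
SCHEMA = (
    "Nodes: Drug(name), Disease(name), Symptom(name)\n"
    "Relationships: (Drug)-[:TREATS]->(Disease), "
    "(Disease)-[:HAS_SYMPTOM]->(Symptom)"
)

def build_prompt(question: str) -> str:
    """Assemble a schema-aware prompt asking the LLM for one read-only Cypher query."""
    return (
        "You are a biomedical assistant. Using ONLY the graph schema below, "
        "translate the question into a single read-only Cypher query.\n\n"
        f"Schema:\n{SCHEMA}\n\nQuestion: {question}\nCypher:"
    )
```

Constraining the model to the declared schema reduces the chance that it hallucinates labels or relationship types that do not exist in the graph.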
Additionally, combining LLMs with knowledge graphs and
reasoning techniques is an active area of research that promises
to increase the interpretability and trustworthiness of AI systems
in biomedicine. These advances are critical for developing tools
that assist clinical decision-making and accelerate biomedical
discoveries [43, 44].
IV. METHODS AND MATERIALS
This section describes the methodology used to build a
biomedical question-answering system. The proposed method
consists of three main stages. First, a biomedical knowledge
graph is constructed in the data ingestion phase, using structured
data sources (e.g., diseases, drugs, symptoms). Second, a
language model (LLaMA 3) interprets the user’s question
written in English in the user interaction phase and converts it
into a Cypher query. Third, a graphical user interface allows
users to type questions and view the results interactively.
A. System Architecture
The proposed framework is organized as a step-by-step
pipeline that integrates a local large language model (LLM) with
a biomedical knowledge graph stored in Neo4j. The overall
workflow is illustrated in Fig. 2. Each module performs a
specific function, and together they ensure that the system
delivers accurate, reliable, and explainable answers.
Fig. 2. Workflow of the LLaMA 3 and Neo4j-Based QA System
Step 1. User Input (Flask Web Interface): A user submits
a biomedical question in natural language through a Flask-based
web interface. The same interface will later display the answer,
the executed query, and a compact preview of the retrieved rows.
Step 2. LLM Processing and Initial Cypher: The text query
is forwarded to a local LLaMA 3 module, which interprets the
intent and drafts an initial Cypher pattern suitable for querying
the graph.
Step 3. Cypher Query Generation: The Cypher Query Gen
block receives the initial pattern from LLaMA 3, canonicalizes
and completes it (projection, DISTINCT, filters), and returns a
finalized Cypher query to the model.
Step 4. Query execution on Neo4j: LLaMA 3 passes the
finalized query to the Query execution component (inside the
’Answer modules’ box), which runs it against the Neo4j
Knowledge Graph. KG supplies knowledge graph data (e.g.,
drugs, diseases, symptoms) and execution returns structured data
(tuples) relevant to the question.
Step 5. Answer Synthesis: The structured tuples flow to
Answer Synthesis, which aggregates and formats them into a
concise raw answer. This raw answer is sent back to LLaMA 3
to optionally refine the phrasing while preserving the retrieved
facts.
Step 6. Result Presentation: LLaMA 3 produces the final
answer, which the interface displays together with the executed
Cypher query and an optional preview of the returned rows,
improving transparency and trust.
The pipeline couples LLM-based language understanding (LLaMA 3) with a schema-grounded Neo4j knowledge graph. Cypher Query Gen refines the query formulation, Query Execution retrieves evidence, and Answer Synthesis converts structured results into readable outputs, producing answers that are accurate, interpretable, and easy to audit directly from the displayed query and evidence.
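The canonicalization performed by the Cypher Query Gen block (Step 3) can be sketched as a small post-processing pass. The function below is an illustrative sketch, not the paper's implementation; the name canonicalize_cypher and the exact cleanup rules are our assumptions.

```python
import re

def canonicalize_cypher(draft: str) -> str:
    """Sketch of the Cypher Query Gen step: clean up a draft query
    from the LLM and enforce a DISTINCT projection."""
    # Strip markdown code fences the model sometimes emits, plus
    # surrounding whitespace and a trailing semicolon.
    query = re.sub(r"```(?:cypher)?", "", draft).strip().rstrip(";")
    # Ensure the projection de-duplicates rows.
    if re.search(r"\bRETURN\b(?!\s+DISTINCT)", query, flags=re.IGNORECASE):
        query = re.sub(r"\bRETURN\b", "RETURN DISTINCT", query,
                       count=1, flags=re.IGNORECASE)
    return query

query = canonicalize_cypher("MATCH (n:Drug) RETURN n.name")
# query == "MATCH (n:Drug) RETURN DISTINCT n.name"
```

A real canonicalizer would also validate labels and relationship types against the graph schema before execution.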
B. Dataset and Knowledge Graph Construction
1) Dataset
The integrative Biomedical Knowledge Hub (iBKH), a large biomedical knowledge base, forms the first level of the system and integrates information from multiple curated, high-quality biomedical databases. The dataset therefore includes various entity types, such as diseases, symptoms, drugs, and biological pathways. This study used a representative subset of the iBKH dataset containing 65,828 biomedical entities. These entities are semantically interconnected through a total of 3,004,166 relationships, creating a rich knowledge graph. The iBKH dataset was originally introduced in [11] and is freely available at https://github.com/wcm-wanglab/iBKH. This dataset is the core semantic foundation upon which this study is built. The knowledge graph is populated from multiple tabular sources (CSV files), each listing entities or relationships. The main input files and their contents are as follows:
• Disease vocabulary (disease_vocab.csv): Contains columns such as primary (a unique disease ID), name, do_id (Disease Ontology ID), kegg_id, and umls_cui (UMLS Concept ID). Each row represents a disease node with external identifiers.
• Drug vocabulary (drug_vocab.csv): Includes primary
(unique drug ID), name, drugbank_id, kegg_id,
pharmgkb_id, umls_cui, mesh_id, iDISK_id and CID
(PubChem ID). Each row defines a drug node with
standard database identifiers.
• Symptom vocabulary (symptom_vocab.csv): Contains
primary (unique symptom ID), name, mesh_id, umls_cui
and iDISK_id. Each row defines a symptom node.
• Side effect vocabulary (side_effect_vocab.csv): Includes primary (unique side-effect ID) and name. Each row represents a side-effect node (with UMLS ID when available).
• Pathway vocabulary (pathway_vocab.csv): Contains
primary (unique pathway ID), name, reactome_id, go_id,
and kegg_id. Each row defines a biological pathway
node.
Relationship files (each row typically contains two entity IDs
and one or more boolean flags or codes) include:
• Disease–Symptom links (Di_Sy_res.csv): Rows include
Disease and Symptom IDs, a presence flag (1 or 0) and a
data source. If Present = 1, a HAS_SYMPTOM edge is
created from the disease to the symptom, with properties
for presence and source.
• Disease–Disease links (di_di_res.csv): Rows include
Disease_1 and Disease_2 IDs with binary flags for is_a
and Resemble. If is_a = 1, an (IS_A) edge is created
(Disease_1 → Disease_2); if Resemble = 1, a
RESEMBLES edge is created. The source field is used
for provenance.
• Drug–Disease links (D_Di_res.csv): Includes Drug and Disease IDs with several binary flags. If a flag equals 1, a corresponding edge is created:
o TREATS (Treats = 1)
o PALLIATES (Palliates = 1)
o ASSOCIATED_WITH (Associate = 1)
o ALLEVIATES_REDUCES (alleviates = 1)
o TREATMENT_THERAPY (treatment/therapy = 1)
o INHIBITS_CELL_GROWTH (inhibits cell growth = 1)
o HAS_BIOMARKER (biomarkers = 1)
o PREVENTS_SUPPRESSES (prevents/suppresses = 1)
o ROLE_IN_PATHOGENESIS (role in disease pathogenesis = 1)
• Drug–SideEffect links (D_SE_res.csv): Contains Drug
and SideEffect IDs with a Source column. Each row
creates a CAUSES edge from the drug to the side effect,
with source as an edge property.
• Drug–Drug interactions (D_D_res.csv): Rows include
Drug_1 and Drug_2 IDs with flags for Interaction and
Resemble. If Interaction = 1, an INTERACTS_WITH
edge is created (bidirectional). If Resemble = 1, a
RESEMBLES edge is added.
• Drug–Pathway links (D_Pwy_res.csv): Includes Drug
ID and Pathway ID. Each row generates an
ASSOCIATED_WITH edge from the drug to the
pathway.
• Disease–Pathway links (Di_Pwy_res.csv): Contains
Disease ID and Pathway ID. Each row creates an
ASSOCIATED_WITH edge from the disease to the
pathway.
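The flag-to-edge-type decoding described above for D_Di_res.csv can be expressed as a lookup table. The sketch below is illustrative; the column names follow the descriptions above, but the helper edges_for_row and the sample IDs are our assumptions.

```python
# Flag column -> relationship type, as described for D_Di_res.csv.
DRUG_DISEASE_FLAGS = {
    "Treats": "TREATS",
    "Palliates": "PALLIATES",
    "Associate": "ASSOCIATED_WITH",
    "alleviates": "ALLEVIATES_REDUCES",
    "treatment/therapy": "TREATMENT_THERAPY",
    "inhibits cell growth": "INHIBITS_CELL_GROWTH",
    "biomarkers": "HAS_BIOMARKER",
    "prevents/suppresses": "PREVENTS_SUPPRESSES",
    "role in disease pathogenesis": "ROLE_IN_PATHOGENESIS",
}

def edges_for_row(row: dict) -> list:
    """Return (drug_id, relationship_type, disease_id) triples for
    every flag column that equals 1 in a Drug-Disease CSV row."""
    return [(row["Drug"], rel, row["Disease"])
            for flag, rel in DRUG_DISEASE_FLAGS.items()
            if str(row.get(flag, "0")) == "1"]

row = {"Drug": "DB00945", "Disease": "DOID:1234", "Treats": "1", "Palliates": "0"}
# edges_for_row(row) -> [("DB00945", "TREATS", "DOID:1234")]
```

The same pattern (ID pair plus one boolean flag per candidate edge type) covers the other relationship files.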
2) Data Upload Performance
Table II reports the time required to upload the different types of entities and relationships into the Neo4j biomedical knowledge graph, measured in seconds. These measurements reflect both the size and complexity of the data being processed.
As shown in Table II, the longest upload time is for Drug-
Drug Relationships, which takes approximately 190 seconds due
to the large number of edges (over 3 million). Following this,
Disease-Disease and Drug-Disease Relationships also require
considerable time for loading. On the other hand, individual
entities such as Diseases and Drugs are uploaded much faster,
generally under 2 seconds.
TABLE II. DATA UPLOAD TIMES FOR DIFFERENT ENTITY AND RELATIONSHIP TYPES IN NEO4J

Entity / Relationship Type | Upload Time (seconds)
Disease | 0.81
Drugs | 1.08
Symptoms | 0.06
Side Effects | 0.14
Pathways | 0.08
Disease-Disease Relationships | 30.97
Drug-Disease Relationships | 30.28
Drug-SideEffect Relationships | 5.24
Drug-Drug Relationships | 190.09
Drug-Pathway Relationships | 0.14
Disease-Pathway Relationships | 0.06
Disease-Symptom Relationships | 0.12
Fig. 3 presents a vertical bar chart that visually compares these upload times across the different entity and relationship types. The chart clearly shows the significant difference in upload duration between nodes and edges, emphasizing the higher cost of ingesting complex relationships in the graph.
3) Experimental Environment
In this study, the proposed biomedical question answering
system was evaluated using a locally hosted environment. All
experiments were conducted on a Windows 11 Pro (64-bit)
system equipped with an Intel Core i5-10500H processor running
at 2.50 GHz (12 logical CPUs), 24 GB of RAM, and an NVIDIA
GeForce GTX 1650 GPU with Max-Q Design. The Neo4j graph
database (v4.4.5) was managed through Neo4j Desktop (v1.6.2),
and the LLaMA 3B language model was executed locally using
optimized configurations suitable for this hardware setup.
Each Cypher query generated by the system was executed
multiple times to calculate an average response time, ensuring
consistency across varying levels of question difficulty. The
knowledge graph was constructed using the iBKH dataset, and
data loading and system performance were carefully monitored
to maintain stability during testing. This experimental setup
provides a reliable and reproducible environment for
benchmarking the effectiveness and responsiveness of the hybrid
QA system.
Fig. 3. Upload times for various biomedical entities and relationships in Neo4j.
We run LLaMA 3 locally (not via cloud APIs) to satisfy biomedical privacy and governance requirements (no data leaves the host) and to maximize reproducibility (fixed GGUF checkpoint, pinned llama.cpp commit, controlled seed settings, constant hardware). Local execution also yields predictable cost and availability and stable latency (no network jitter), and lets us enforce executable Cypher grounding with per-edge provenance and timestamp-aware execution.
4) Knowledge Graph (KG) Construction
The Neo4j graph database, which is designed for highly connected data, was used as the backend to store and query the KG. Before loading data, uniqueness constraints were created on the primary property for each node label (Disease, Drug, Symptom, SideEffect, Pathway). This enforces that each primary ID appears only once, preventing duplicate entities. For efficient lookups in queries, a search index was created based on
the name property of each node label. As noted in the Neo4j
documentation, indexes “enable quicker and more efficient
pattern matching” by allowing the query planner to rapidly locate
nodes by label and property.
With the schema in place, data was imported using Cypher’s
LOAD CSV commands. For each vocabulary file, rows with
nonnull primary IDs were processed: the code used MERGE to
create (or match existing) nodes with the given label and primary
property, and then SET the remaining attributes from the CSV
columns. For example, in importing disease_vocab.csv, each row produced a node (:Disease {primary: <id>}) with properties name, do_id, kegg_id, and umls_cui set from the row (empty strings were turned into null). Similarly, drug_vocab.csv rows produced drug nodes with properties for DrugBank, KEGG, PharmGKB IDs, etc. This approach follows best practice: using MERGE on the unique ID ensures that nodes are not duplicated during multiple passes.
After all nodes were created, the relationship CSVs were
loaded. Each row in those files was matched to the corresponding
source and target nodes by their primary IDs, and conditional
logic was used to create edges. For example, the disease-symptom file (Di_Sy_res.csv) was processed by matching a disease node and a symptom node for each row, then executing MERGE (d)-[r:HAS_SYMPTOM]->(s) if the present column is nonzero; the edge was given a present property and a source property from the CSV. The disease-disease file (di_di_res.csv) was processed by matching disease nodes d1 and d2: if is_a = 1, a (:Disease)-[:IS_A]->(:Disease) edge was merged; if Resemble = 1, a (:Disease)-[:RESEMBLES]->(:Disease) edge was merged.
Drug-disease relationships were handled similarly: the script
matched a Drug node and a Disease node for each row of
D_Di_res.csv, then for each flag that was 1, it merged the
corresponding edge label (such as TREATS, PALLIATES,
ASSOCIATED_WITH, etc.) from the drug to the disease. Drug–
side-effect rows produced (:Drug)-[:CAUSES]->(:SideEffect)
edges with the source noted, and drug–drug rows produced either
INTERACTS_WITH or RESEMBLES edges between matched
drug pairs. Finally, the drug-pathway and disease-pathway files each produced ASSOCIATED_WITH edges linking drugs or diseases to pathway nodes.
In this graph model, most relationships are directional (for
example, a drug TREATS a disease; a disease does not TREAT
a drug). This follows common practice in biomedical KGs. Some relations (such as RESEMBLES or INTERACTS_WITH) are inherently symmetric but were stored as two directed edges or one undirected edge depending on implementation. All
relationship creation steps used Cypher’s MERGE so that
repeated loads or out-of-order imports did not create duplicate
edges.
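The MERGE-based node import described above can be sketched as a statement builder. The helper node_import_statement below is illustrative (not from the paper); it emits the LOAD CSV pattern for one vocabulary file, merging on the unique primary ID and turning empty strings into null.

```python
def node_import_statement(csv_file: str, label: str, props: list) -> str:
    """Sketch of the LOAD CSV + MERGE pattern: MERGE on the unique
    `primary` ID, then SET the remaining attributes (empty -> null)."""
    set_clauses = ", ".join(
        f"n.{p} = CASE WHEN row.{p} = '' THEN null ELSE row.{p} END"
        for p in props
    )
    return (
        f"LOAD CSV WITH HEADERS FROM 'file:///{csv_file}' AS row\n"
        f"WITH row WHERE row.primary IS NOT NULL\n"
        f"MERGE (n:{label} {{primary: row.primary}})\n"
        f"SET {set_clauses}"
    )

stmt = node_import_statement("disease_vocab.csv", "Disease",
                             ["name", "do_id", "kegg_id", "umls_cui"])
```

Because MERGE is idempotent on the primary key, re-running the generated statement leaves the node set unchanged, which is what makes repeated or incremental loads safe.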
This study used a static KG snapshot for reproducibility, but
the system also supports incremental updates through the Neo4j
MERGE and batch import functions. A summary of the different
types of nodes and relationships is provided, together with their
counts, descriptions, and examples in Table III.
TABLE III. DETAILED SUMMARY OF KNOWLEDGE GRAPH COMPONENTS IN IBKH SUBSET

Entity/Relationship | Count | Description | Examples

Nodes:
Disease | 19,236 | Medical conditions and disorders | Diabetes, Hypertension, Cancer
Drug | 37,997 | Pharmaceutical compounds | Aspirin, Insulin, Amoxicillin
Symptom | 1,361 | Clinical signs and symptoms | Headache, Fever, Nausea
SideEffect | 4,251 | Negative outcomes of drugs | Nausea, Drowsiness
Pathway | 2,983 | Biological processes and pathways | Apoptosis, Glycolysis

Relationships:
ASSOCIATED_WITH | 101,534 | General associations (disease–pathway, drug–pathway, etc.) | Diabetes ASSOCIATED_WITH Pathway
CAUSES | 145,321 | Drug–SideEffect relationships | Aspirin CAUSES Gastric_Bleeding
HAS_SYMPTOM | 3,357 | Disease–symptom links | COVID-19 HAS_SYMPTOM Fever
HAS_BIOMARKER | 412 | Biomarkers linked to disease | PSA HAS_BIOMARKER Prostate_Cancer
INHIBITS_CELL_GROWTH | 1,508 | Drugs inhibiting cell growth | Chemo INHIBITS_CELL_GROWTH Tumor
INTERACTS_WITH | 2,682,142 | Drug–drug interactions | Aspirin INTERACTS_WITH Warfarin
IS_A | 10,529 | Subtype hierarchy | Flu IS_A Viral_Infection
PALLIATES | 388 | Drug palliates disease | Morphine PALLIATES Cancer
PREVENTS_SUPPRESSES | 859 | Preventive links | Vaccine PREVENTS_SUPPRESSES Measles
RESEMBLES | 7,000 | Similarity relationships | DrugA RESEMBLES DrugB
TREATMENT_THERAPY | 44,852 | Therapy relationships | Radiotherapy TREATMENT_THERAPY Tumor
TREATS | 5,491 | Drug–disease links | Insulin TREATS Diabetes
ALLEVIATES_REDUCES | ~180,000 | Symptom relief | Paracetamol ALLEVIATES_REDUCES Fever

Totals:
Nodes | 65,828 | Total biomedical entities | —
Relationships | 3,004,166 | Total knowledge graph links | —
C. Natural Language to Cypher Query Translation
A key feature of the system is its ability to accept questions
written in plain English and automatically generate the
corresponding Cypher queries. This is accomplished using
Meta’s LLaMA 3 large language model, which runs entirely on
a local machine through the open-source llama.cpp framework.
Running the model locally ensures low-latency execution and
keeps sensitive queries within the user’s environment.
To generate a Cypher query, LLaMA 3 is prompted with
examples of natural language questions along with their correct
Cypher translations. The prompt also includes instructions on
how to navigate the structure of the graph schema. When a user
enters a question (e.g., ’What are the symptoms of Alzheimer’s
disease?’), the system inserts it into the prompt and asks LLaMA
3 to produce a corresponding query. For example, the model may
return:
MATCH (d:Disease)-[:HAS_SYMPTOM]->(s:Symptom)
WHERE toLower(d.name) CONTAINS "alzheimer"
RETURN s.name
This query searches for a disease node whose name contains 'alzheimer' and follows HAS_SYMPTOM edges to list related symptom names. The system then executes this Cypher query to retrieve answers. The prompts (such as few-shot examples and schema hints) were carefully designed to help LLaMA 3 generate correct Cypher queries. The model learns how to use the graph's labels
and relationships properly. For example, if a user asks, ’Which
drugs treat diabetes?’, LLaMA might respond with a suitable
Cypher query:
MATCH (d:Drug)-[:TREATS]->(di:Disease)
WHERE toLower(di.name) CONTAINS "diabetes"
RETURN d.name
This queries for drug nodes that have a TREATS edge to a
diabetes disease node. By leveraging LLaMA 3 in this way, our
system can flexibly handle many phrasing variations without
manual mapping rules.
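A minimal sketch of this few-shot prompting scheme is shown below. The prompt wording and the helper names (build_prompt, extract_cypher) are our own illustrative assumptions; the two example translations are the ones given in the text above.

```python
import re

# Few-shot pairs taken from the examples in this section.
FEW_SHOT = [
    ("What are the symptoms of Alzheimer's disease?",
     'MATCH (d:Disease)-[:HAS_SYMPTOM]->(s:Symptom)\n'
     'WHERE toLower(d.name) CONTAINS "alzheimer"\nRETURN s.name'),
    ("Which drugs treat diabetes?",
     'MATCH (d:Drug)-[:TREATS]->(di:Disease)\n'
     'WHERE toLower(di.name) CONTAINS "diabetes"\nRETURN d.name'),
]

def build_prompt(question: str) -> str:
    """Assemble a few-shot prompt with schema hints for LLaMA 3."""
    header = ("You translate biomedical questions into Cypher for a graph "
              "with (:Drug), (:Disease), (:Symptom), (:SideEffect), (:Pathway) "
              "nodes and relationships such as TREATS, HAS_SYMPTOM, CAUSES.\n\n")
    shots = "\n\n".join(f"Question: {q}\nCypher:\n{c}" for q, c in FEW_SHOT)
    return f"{header}{shots}\n\nQuestion: {question}\nCypher:\n"

def extract_cypher(completion: str) -> str:
    """Keep only the first Cypher statement from the model's completion."""
    match = re.search(r"(MATCH|WITH)\b.*", completion, flags=re.DOTALL)
    return match.group(0).strip() if match else completion.strip()

prompt = build_prompt("What drugs are used to treat breast cancer?")
```

In the deployed system the completion would come from the local LLaMA 3 model; here the extraction step simply trims any conversational preamble before the first MATCH or WITH clause.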
D. Model Configuration & Decoding
We run a local LLaMA 3.2-3B model in GGUF format (llama-3.2-3b-instruct-q4_k_m.gguf) via llama.cpp, as shown in Table IV.
TABLE IV. MODEL RUNTIME AND DECODING SETTINGS

Runtime settings | Decoding settings
n_ctx = 1024 | temperature = 0.2
n_threads = 12 | top_p = 0.95
n_gpu_layers = 33 | top_k = 40
n_batch = 512 | repeat_penalty = 1.1
max_tokens = 80 | seed = 42
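The settings in Table IV can be recorded as plain configuration dictionaries, sketched below. How they are split between model construction and per-call generation depends on the llama.cpp bindings used, so the grouping here is illustrative.

```python
# Configuration record mirroring Table IV (values from the paper).
RUNTIME = {"n_ctx": 1024, "n_threads": 12, "n_gpu_layers": 33,
           "n_batch": 512, "max_tokens": 80}
DECODING = {"temperature": 0.2, "top_p": 0.95, "top_k": 40,
            "repeat_penalty": 1.1, "seed": 42}

def generation_config() -> dict:
    """Combined view of all settings used when issuing a generation request."""
    return {**RUNTIME, **DECODING}
```

The low temperature (0.2) together with fixed seed and narrow top-k/top-p truncation keeps Cypher generation near-deterministic, which matters for exact-match evaluation.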
E. Graph Subset and Versioning
We use an iBKH-derived subgraph (≈65.8k nodes; ≈3.0M edges) spanning DRUG, DISEASE, SYMPTOM, and PATHWAY entities. IDs are normalized to CURIEs and duplicates are collapsed across UMLS/DrugBank/DisGeNET/SIDER/KEGG. Each edge stores provenance/licensing metadata (source, source_version, license, retrieved_at, evidence_pmids/urls) and, when available, temporal fields (valid_from, valid_to). We report coverage as the percentage of evaluated questions whose gold entities/relations are present.
F. Query Execution and Reliability
After a Cypher query is generated, it is executed on the Neo4j
database through the official Neo4j Python driver, which
manages the secure connection and returns the results. Instead of
restricting the output with a fixed LIMIT (e.g., LIMIT 5), the
system retrieves candidate results and evaluates them using
standardized retrieval metrics such as Hits@1, Hits@5, and
Hits@10. This approach ensures that the system remains
responsive while providing a fair assessment of ranking quality
across different cutoff levels, rather than depending solely on a
fixed number of returned items. Neo4j’s indexing on key node
properties, such as name and primary identifiers, also helps speed
up lookups as the knowledge graph grows. In cases where the
language model generates an incomplete or incorrect query, such
as referencing nodes or relationships that do not exist, the system
catches the error and either retries with a simpler prompt or
informs the user. Together, these steps make sure that queries run
quickly, return valid results, and keep the overall experience
smooth and reliable for biomedical question-answering.
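The catch-and-retry behavior described above can be sketched as a small wrapper. run_query_with_retry and its single-fallback strategy are illustrative assumptions, with execute standing in for any callable that runs a Cypher string through the Neo4j Python driver and returns rows.

```python
def run_query_with_retry(execute, query: str, fallback_query: str) -> dict:
    """Run the generated Cypher; on failure (e.g., a non-existent label),
    retry once with a simpler fallback query before reporting the error."""
    try:
        return {"rows": execute(query), "query": query, "error": None}
    except Exception as first_error:
        try:
            # Retry with a simpler query, e.g. one regenerated from a
            # reduced prompt, as described in the text.
            return {"rows": execute(fallback_query),
                    "query": fallback_query, "error": None}
        except Exception:
            return {"rows": [], "query": query,
                    "error": f"Query failed: {first_error}"}
```

Returning the executed query alongside the rows is what lets the interface display exactly which Cypher produced the answer, preserving the transparency goal.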
G. User Interface for Query Transparency
The system includes a lightweight, cross-platform graphical
user interface (GUI) implemented as a web application using the
Flask framework in Python, with HTML and Bootstrap for
interactive visualization. The interface is designed to make the
question-answering process transparent and accessible to users
without technical expertise. It consists of three main panels:
1. Input Panel: Where the user can enter a biomedical
question in natural language.
2. Query Panel: Which displays the Cypher query generated
by the language model, allowing users to verify how their
question was interpreted.
3. Results Panel: Which presents the retrieved answers in a
clear, readable format, accompanied by a brief natural
language explanation generated by the system.
By showing both the query and the answer, the GUI promotes
user trust and enables validation of the system’s reasoning
process. The interface is lightweight enough to run smoothly on
standard desktop machines without additional dependencies,
making it practical for local deployments in clinical or research
settings. Fig. 4 illustrates the overall layout.
V. EXPERIMENTAL RESULTS
In this section, the proposed system is evaluated based on its
ability to translate natural-language biomedical questions into
executable Cypher queries over the iBKH knowledge graph. The
assessment is conducted end-to-end and focuses on three main
aspects: (i) the accuracy of query generation, measured by the
proportion of correctly produced Cypher queries; (ii) system
efficiency, quantified through total response time from question
submission to final answer delivery, covering both query
generation and execution; and (iii) the quality of retrieved
answers, evaluated using standardized information retrieval
metrics including Precision@k, Recall@k, F1@k, Hits@k, and
Mean Reciprocal Rank (MRR). For clarity, all these metrics are
formally defined in the following subsections, with their
corresponding equations, and will be used consistently
throughout the results section. Together, these dimensions
provide a comprehensive view of both correctness and efficiency
across simple, medium, and complex biomedical queries.
A. Description of the Experimental Data
To evaluate the proposed system, this work used a carefully
selected set of biomedical questions designed to test how well the
system understands natural language and converts it into Cypher
queries for a biomedical knowledge graph.
To evaluate the system, a benchmark of 60 biomedical
questions was constructed by the authors, guided by the schema
and relations in iBKH. The questions were grouped into simple,
medium, and complex levels to assess performance across
different reasoning requirements. Gold-standard answers were
manually prepared for each question to enable quantitative
evaluation using Exact Match, Precision@k, Recall@k, F1@k,
Hits@k, and MRR. The complete set of 60 questions is available
at this link https://drive.google.com/drive/my-drive.
The dataset contains 60 questions divided into three difficulty
levels based on how complex the language is and how deep the
biomedical reasoning needs to be:
• Level 1: 25 simple questions focusing mostly on easy-to-
understand facts, such as symptoms of certain diseases or
drugs used to treat specific conditions.
• Level 2: 20 medium-level questions that involve more detailed relationships, such as drug interactions and side effects.
• Level 3: 15 hard questions requiring multi-step reasoning
across multiple biomedical concepts or biological
pathways, similar to the complexity found in real clinical
cases.
The set of evaluation questions was designed to span a wide
range of common biomedical topics and to reflect clinically
relevant query types reported in prior literature. Each question is
paired with its corresponding gold standard cypher query and
categorized by difficulty level, as summarized in Table V, where
three illustrative examples are shown. The questions were derived
from publicly available biomedical QA benchmarks and adapted
from established knowledge bases (e.g., iBKH schema relations),
ensuring both coverage and diversity across diseases, drugs,
symptoms, and pathways.
TABLE V. QUESTION DIFFICULTY LEVELS AND SAMPLE CYPHER QUERIES

Level 1 (Single-hop question using one relationship). Example: What are the symptoms of Alzheimer?
MATCH (d:Disease)-[:HAS_SYMPTOM]->(s:Symptom)
WHERE toLower(d.name) CONTAINS 'alzheimer'
RETURN s.name AS symptom

Level 2 (Questions requiring two relationships). Example: What are the side effects of drugs used to treat asthma?
WITH toLower('asthma') AS disease_name
MATCH (d:Disease)<-[:TREATS]-(dr:Drug)
WHERE toLower(d.name) CONTAINS disease_name
MATCH (dr:Drug)-[:CAUSES]->(se:SideEffect)
RETURN DISTINCT se.name AS side_effect, dr.name AS drug

Level 3 (Questions requiring two or more relationships). Example: What cholesterol medications cause side effects, and what are some of those effects?
WITH toLower('cholesterol') AS disease_name
MATCH (d:Disease)<-[:TREATS]-(dr:Drug)
WHERE toLower(d.name) CONTAINS disease_name
MATCH (dr:Drug)-[:CAUSES]->(se:SideEffect)
RETURN DISTINCT dr.name AS drug, se.name AS side_effect
Fig. 4. Graphical User Interface of the Biomedical Knowledge Explorer System
B. Quantitative Evaluation
The performance of the proposed system is evaluated on the
iBKH knowledge graph using Exact Match, Precision@k,
Recall@k, F1@k, Hits@k, MRR, and total response time across
simple, medium, and complex biomedical queries. These metrics
were measured for each difficulty level (simple, medium, and
complex) to see how performance changes as questions become
more challenging.
TABLE VI. QUERY GENERATION EXACT MATCH BY DIFFICULTY LEVEL
Difficulty Level | Total Questions | Correct Queries | Cypher Exact Match (EM) (%)
Simple | 25 | 24 | 96%
Medium | 20 | 19 | 95%
Complex | 15 | 13 | 86.7%
Average | — | — | 92.6%
To better understand the quality of the responses returned by
the system, this work examined real examples. One of the test
questions was "What drugs are used to treat breast cancer?" As
shown in Fig. 4, the system was able to understand the question,
generate the correct Cypher query using the TREATS
relationship, and retrieve the right information from the
biomedical knowledge graph. It returned a list of five drugs,
including Leuprolide, Goserelin, and Trastuzumab, which are
known treatments for breast cancer. This result shows that the
system is able to connect the question to the right part of the graph
and provide accurate and helpful answers, even for medical
questions that involve specific treatments.
1) Exact Match of Query Generation
Table VI shows how often the system generated the correct query for each difficulty level. Here, accuracy is defined as the percentage of questions for which the system's generated Cypher query matched the expected query. It is calculated using Eq. (1):

Exact Match (%) = (Correct Queries / Total Questions) × 100    (1)
These findings highlight the purpose of this experiment to test
whether the framework can reliably map natural language to
Cypher across varying levels of complexity. The graceful drop
from 96% on simple to 86.7% on complex queries indicates that
the system is robust for straightforward questions but still
challenged by multi-hop reasoning. This points to clear
opportunities for improvement, such as synonym expansion,
constrained decoding, or enhanced error handling for multi-step
queries.
2) Latency
Table VII reports the average latency per difficulty level and decomposes it into query generation and Neo4j execution. The total response time is computed as in Eq. (2). Execution time is small across all levels (0.01–0.4 s), so variation in total latency is dominated by query generation. As difficulty increases, the mean total time rises from 5.12 s (simple) to 5.75 s (medium) and 7.35 s (complex). Dispersion (Std. Dev.) is largest for complex questions: 0.72 s (simple), 0.32 s (medium), and 2.09 s (complex), reflecting the more variable planning and reasoning needed to assemble correct Cypher for harder questions. Pooled over all questions, the overall mean is 6.07 s with an overall SD of 1.38 s, keeping latencies within single-digit seconds and practical for interactive, real-world use.
$T_{\mathrm{total}} = T_{\mathrm{gen}} + T_{\mathrm{exec}}$    (2)
Standard Deviation (SD). Unless otherwise stated, SD is the unbiased sample standard deviation computed over per-question total times within each difficulty group $G$ with $n_G$ questions:

$\bar{T}_G = \frac{1}{n_G} \sum_{i=1}^{n_G} T_{\mathrm{total}}^{(i)}$    (3)

$s_G = \sqrt{\frac{1}{n_G - 1} \sum_{i=1}^{n_G} \left( T_{\mathrm{total}}^{(i)} - \bar{T}_G \right)^2}$    (4)

Overall values are computed by pooling all questions across levels, with $N = \sum_G n_G$:

$\bar{T}_{\mathrm{overall}} = \frac{1}{N} \sum_{i=1}^{N} T_{\mathrm{total}}^{(i)}$    (5)

$s_{\mathrm{overall}} = \sqrt{\frac{1}{N - 1} \sum_{i=1}^{N} \left( T_{\mathrm{total}}^{(i)} - \bar{T}_{\mathrm{overall}} \right)^2}$    (6)
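Eqs. (3)–(6) can be checked with a few lines of Python using the unbiased sample standard deviation from the standard library; the sample numbers below are illustrative, not the paper's raw measurements.

```python
from statistics import mean, stdev

def group_and_overall_sd(groups: dict) -> tuple:
    """Per-group mean/SD (Eqs. 3-4) and pooled overall mean/SD (Eqs. 5-6)
    from per-question total times; stdev() is the unbiased sample SD."""
    per_group = {g: (mean(ts), stdev(ts)) for g, ts in groups.items()}
    pooled = [t for ts in groups.values() for t in ts]
    return per_group, (mean(pooled), stdev(pooled))

# Illustrative per-question total times (seconds), not the paper's data.
times = {"simple": [5.0, 5.2, 5.3], "complex": [6.9, 7.8]}
per_group, overall = group_and_overall_sd(times)
```

Note that the pooled SD is computed over all individual questions, not by averaging the per-group SDs, matching Eq. (6).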
TABLE VII. AVERAGE LATENCY (S) BY DIFFICULTY LEVEL (SD = STANDARD DEVIATION)

Difficulty Level | Query Generation (s) | Query Execution (s) | Total Time (s) | Std. Dev. (s)
Simple | 5.09 | 0.03 | 5.12 | 0.72
Medium | 5.69 | 0.01 | 5.75 | 0.32
Complex | 6.94 | 0.4 | 7.35 | 2.09
Overall | 5.9 | 0.15 | 6.07 | 1.38
3) Answer-Level Evaluation Metrics
As shown in Table VIII, the quality of the returned items is
evaluated using five standardized metrics: Precision@k,
Recall@k, F1@k, Hits@k, and MRR. Precision@k measures the
proportion of correct answers among the top-𝑘 retrieved items
Eq. (7), while Recall@k quantifies the fraction of gold-standard
answers covered within the top-𝑘 results Eq. (8). F1@k combines
both aspects through the harmonic mean Eq. (9). Hits@k reports
whether at least one correct answer appears in the top-k, Eqs. (10)–(11), and MRR captures how early the first correct answer is
retrieved in the ranking Eq. (12). Together, these metrics provide
a comprehensive view of both the correctness and completeness
of retrieval, as well as the ranking quality across simple, medium
and complex queries.
\[ \text{Precision@}k = \frac{|\{\text{relevant}\} \cap \{\text{retrieved@}k\}|}{k} \tag{7} \]
Omar & Mohammed / Journal of Applied Science and Technology Trends Vol. 06, No. 02, pp. 342–357 (2025)
Precision@k measures the fraction of the top-k retrieved items that are correct (i.e., appear in the gold set). For example, P@1 refers to the accuracy of the very first retrieved item, P@5 evaluates correctness within the top five results, and P@10 considers the top ten. Higher values indicate that relevant items tend to appear early in the ranked list.
TABLE VIII. EVALUATION METRICS ACROSS LEVELS (%)

Metric   Simple   Medium   Complex   Overall
P@1      100      95       86.67     93.80
P@5      100      95       93.33     96.11
P@10     100      95       88        94.33
R@1      88       90       86.67     88.22
R@5      90.65    89.67    86.67     88.90
R@10     89.60    89       83.33     87.31
F1@1     88       90       86.67     88.22
F1@5     93.20    91.25    88.67     91.03
F1@10    93.28    91.39    85.24     89.97
Hits@k   100      95       93        96.10
MRR      100      95       88        94.40
\[ \text{Recall@}k = \frac{|\{\text{relevant}\} \cap \{\text{retrieved@}k\}|}{|\{\text{relevant}\}|} \tag{8} \]
Recall@k measures the proportion of all relevant items (in the
gold set) that are successfully retrieved within the top- 𝑘
positions. For instance, R@5 indicates how many of the expected
answers are covered by the top five results. This metric is
particularly important when the gold answer set is larger than 𝑘.
\[ \text{F1@}k = \frac{2 \cdot \text{Precision@}k \cdot \text{Recall@}k}{\text{Precision@}k + \text{Recall@}k} \tag{9} \]
F1@k is the harmonic mean of Precision@k and Recall@k.
It balances the trade-off between retrieving only relevant items
(precision) and covering as many relevant items as possible
(recall). For example, F1@10 reflects the combined quality of the
system when retrieving the top ten results.
\[ \text{Hits@}k = \begin{cases} 1 & \text{if } \{\text{relevant}\} \cap \{\text{retrieved@}k\} \neq \varnothing \\ 0 & \text{otherwise} \end{cases} \tag{10} \]
Hits@k is a binary measure at the query level: it equals 1 if at
least one correct answer is present among the top-𝑘 retrieved
items, and 0 otherwise. For example, Hits@5 reports the
percentage of queries where the system was able to “hit” at least
one correct answer in the top five results.
\[ \text{Hits@}k = \frac{1}{|Q|} \sum_{q \in Q} \mathbf{1}\!\left( \{\text{relevant}_q\} \cap \{\text{retrieved}_q@k\} \neq \varnothing \right) \tag{11} \]
When averaged across a set of queries 𝑄, Hits@k gives the
proportion of queries for which at least one relevant item is
returned in the top-𝑘 results. This measure is less sensitive to
ranking quality but emphasizes coverage across queries.
\[ \text{MRR} = \frac{1}{|Q|} \sum_{q \in Q} \frac{1}{\text{rank}_q} \tag{12} \]
Mean Reciprocal Rank (MRR) averages the reciprocal of the
rank of the first correct answer for each query 𝑞∈𝑄. For
example, if the first correct answer appears in the top position,
the reciprocal rank is 1/1 = 1.0; if it appears at rank 5, the
contribution is 1/5 = 0.2. MRR therefore rewards systems that
retrieve correct answers as early as possible.
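The metric definitions in Eqs. (7)–(12) translate directly into code. The sketch below is an illustrative implementation, not the authors' evaluation script; it assumes each query is represented as a pair of a ranked list of retrieved IDs and a gold-answer set.

```python
def precision_at_k(retrieved, relevant, k):
    # Eq. (7): fraction of the top-k retrieved items that are in the gold set.
    return len(set(retrieved[:k]) & relevant) / k

def recall_at_k(retrieved, relevant, k):
    # Eq. (8): fraction of gold-set items covered by the top-k results.
    return len(set(retrieved[:k]) & relevant) / len(relevant)

def f1_at_k(retrieved, relevant, k):
    # Eq. (9): harmonic mean of precision@k and recall@k.
    p = precision_at_k(retrieved, relevant, k)
    r = recall_at_k(retrieved, relevant, k)
    return 0.0 if p + r == 0 else 2 * p * r / (p + r)

def hits_at_k(queries, k):
    # Eq. (11): share of queries with at least one gold item in the top-k.
    return sum(
        1 for retrieved, relevant in queries
        if set(retrieved[:k]) & relevant
    ) / len(queries)

def mrr(queries):
    # Eq. (12): mean reciprocal rank of the first correct answer
    # (a query with no correct answer contributes 0).
    total = 0.0
    for retrieved, relevant in queries:
        for rank, item in enumerate(retrieved, start=1):
            if item in relevant:
                total += 1.0 / rank
                break
    return total / len(queries)
```

For instance, a two-query set whose first correct answers sit at ranks 2 and 1 yields an MRR of (1/2 + 1/1) / 2 = 0.75.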
According to the results in Table VIII, this work reports
standardized metrics, including Precision, Recall, F1, Hits@k,
and MRR at different cutoffs (k = 1, 5, 10). Precision@k captures
the proportion of retrieved items among the top-𝑘 that are correct,
while Recall@k measures the coverage of gold-standard answers
within the same cutoff. F1@k balances both aspects. Hits@k
reflects whether at least one correct answer appears in the top-𝑘,
and MRR evaluates how early the first correct answer is retrieved.
For simple questions (N = 25), performance is consistently near perfect, with P@10 = 100%, R@10 = 89.6%, and F1@10 = 93.3%, along with Hits@k = 100% and MRR = 100%. For medium questions (N = 20), P@10 = 95%, R@10 = 89%, and F1@10 = 91.4%, with strong ranking quality reflected in Hits@k = 95% and MRR = 95%. For complex queries (N = 15), performance remains robust but slightly lower, with P@10 = 88%, R@10 = 83.3%, and F1@10 = 85.2%, alongside Hits@k = 93% and MRR = 88%.
Pooled over all levels, the system achieves P@5 = 96.1%, R@5 = 88.9%, and F1@5 = 91.0%, with Hits@k = 96.1% and MRR = 94.4%. These results indicate that the system not only retrieves nearly all expected answers but also ranks them highly, ensuring both completeness and correctness. This level of reliability is particularly valuable in biomedical applications where precision and trustworthiness are critical.
The main purpose of this experiment was to assess not only
whether queries execute, but whether the returned results are both
correct and complete. Precision@5 of 96.1% confirms that almost all retrieved items are clinically valid, while Hits@k above 96% shows that the system almost always surfaces at least one correct answer in its top results. Together, these
metrics demonstrate that knowledge graph grounding effectively
minimizes hallucinations and ensures trustworthy biomedical
output.
We evaluated performance differences across difficulty levels
using a two-sample t-test, which revealed statistically significant
differences. Error analysis indicates that failures in complex
queries are mainly due to missing relation hops, whereas errors
in medium-level queries are mostly caused by syntax
mismatches. These findings highlight the challenges associated
with query complexity and provide insights for targeted
improvements.
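The text above reports a two-sample t-test across difficulty levels without further detail. As a hedged sketch, Welch's unequal-variance form is one common choice and can be computed in pure Python; the latency samples below are invented placeholders, not the study's data.

```python
import math
import statistics

def welch_t(sample_a, sample_b):
    """Welch's two-sample t statistic and approximate degrees of freedom
    (unequal variances), a common way to compare two group means."""
    na, nb = len(sample_a), len(sample_b)
    va, vb = statistics.variance(sample_a), statistics.variance(sample_b)
    se2 = va / na + vb / nb
    t = (statistics.mean(sample_a) - statistics.mean(sample_b)) / math.sqrt(se2)
    # Welch-Satterthwaite approximation for the degrees of freedom.
    df = se2 ** 2 / ((va / na) ** 2 / (na - 1) + (vb / nb) ** 2 / (nb - 1))
    return t, df

# Placeholder per-question total times (s) for two difficulty groups.
simple_times = [4.9, 5.0, 5.1, 5.2, 5.4]
complex_times = [6.1, 6.9, 7.3, 8.0, 8.4]
t_stat, dof = welch_t(simple_times, complex_times)
```

A large |t| relative to the t-distribution with `dof` degrees of freedom indicates a significant difference between the group means.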
C. Qualitative Evaluation
In addition to quantitative metrics, the system’s outputs were
evaluated for contextual accuracy and alignment with the
structure of the knowledge graph. Two plain English questions
were selected and for each, the generated Cypher query, the
Neo4j output, and the visual graph representation were reviewed
to verify that the answers reflected valid relationships in the
graph.
For the question ’What are the symptoms of brain cancer?’
The system generated a Cypher query that correctly followed the
HAS_SYMPTOM relationship from disease nodes to symptoms
nodes, filtering by the specified disease name. The results
retrieved included terms such as aphasia (Broca, Wernicke),
anomia, agraphia, agnosia, amnesia (including retrograde
amnesia), olfaction disorders, and apnea, symptoms consistent
with established neurological manifestations of brain tumors. In
Neo4j, the data formed a clear center-and-spoke pattern, with
brain cancer at the center and its associated nodes of symptoms
radiating outward, as shown in Fig. 5.
Cypher:
// Follow HAS_SYMPTOM from the matched disease to its symptom nodes.
WITH toLower("brain cancer") AS disease_name
MATCH (d:Disease)-[:HAS_SYMPTOM]->(s:Symptom)
WHERE toLower(d.name) CONTAINS disease_name
RETURN DISTINCT s, d
LIMIT 10;
Fig. 5. Graphical user interface displaying the Cypher query and results for brain cancer symptoms
A second query, What are the side effects of insulin?,
produced a Cypher statement starting from the drug node for
insulin and traversing the CAUSES relationship to the connected
SideEffect nodes. The resulting list included dizziness, diarrhea,
cough, back pain, weakness, rash/dermatitis, and hypersensitivity, side effects well documented in insulin safety profiles. In the
visual representation in Fig. 6, insulin appeared centrally
connected to these side effect nodes, further strengthening the
correctness of the relationship mapping of the system.
Cypher:
// Traverse CAUSES from the matched drug to its side-effect nodes.
WITH toLower("insulin") AS drug_name
MATCH (d:Drug)-[:CAUSES]->(se:SideEffect)
WHERE toLower(d.name) CONTAINS drug_name
RETURN se, d
LIMIT 20;
Fig. 6. Knowledge Graph Visualization of Insulin and Its Related Side Effects
These examples demonstrate the system’s ability to interpret
plain English biomedical questions, generate correct Cypher
queries, and return results that are clinically plausible, easy to
interpret, and directly traceable to specific graph relationships.
This supports both the accuracy and the transparency of the
proposed approach.
To illustrate how biomedical questions can be translated into
graph queries, consider the natural language question: ’What are
the side effects of drugs that treat epilepsy?’. The following
Cypher query demonstrates how this question is mapped in the
knowledge graph, where diseases are connected to drugs through
the TREATS relation and drugs are connected to side effects
through the CAUSES relation.
Cypher:
// Two-hop query: disease <-[:TREATS]- drug -[:CAUSES]-> side effect.
MATCH (d:Disease)<-[:TREATS]-(dr:Drug)
WHERE toLower(d.name) CONTAINS "epilepsy"
MATCH (dr)-[:CAUSES]->(se:SideEffect)
RETURN DISTINCT se, d, dr
LIMIT 10;
Fig. 7. Side effects of drugs that treat epilepsy
Executing this query returns drugs associated with epilepsy
and their corresponding side effects. For example, the query
identifies Pregabalin as a treatment for epilepsy and retrieves
multiple side effects such as anxiety, arrhythmia, gastrointestinal
pain, and others. The visualization of the graph in Fig. 7,
highlights this pattern, where the drug node is linked to epilepsy
via TREATS and to several side effect nodes via CAUSES,
providing an interpretable biomedical knowledge structure.
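To make the mapping concrete, the two-hop pattern above can be rendered from a simple string template. This is a hypothetical illustration only: in the paper the Cypher is generated by the LLaMA-3B model, not by fixed templates, and the function name and template here are assumptions.

```python
# Hypothetical template for the two-hop Disease -> Drug -> SideEffect pattern;
# the actual system generates Cypher with an LLM rather than templates.
TWO_HOP_TEMPLATE = (
    'MATCH (d:Disease)<-[:TREATS]-(dr:Drug) '
    'WHERE toLower(d.name) CONTAINS "{disease}" '
    'MATCH (dr)-[:CAUSES]->(se:SideEffect) '
    'RETURN DISTINCT se, d, dr LIMIT {limit}'
)

def side_effects_of_treatments(disease: str, limit: int = 10) -> str:
    """Render the two-hop query for a given disease name."""
    # Lowercase the literal to match the case-insensitive CONTAINS filter.
    return TWO_HOP_TEMPLATE.format(disease=disease.lower(), limit=limit)

query = side_effects_of_treatments("Epilepsy")
```

The rendered string could then be submitted to Neo4j through any Cypher-capable client.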
VI. DISCUSSION
Under a unified evaluation protocol reporting P@k/R@k/F1@k, Hits@k, and MRR at $k \in \{1, 5, 10\}$, the
pipeline exhibits consistent end-to-end behavior across the three
difficulty tiers. The 𝑘-ablation shows strong early precision (high
P@1 and elevated MRR), while recall increases with larger 𝑘,
indicating that correct answers typically surface near the top yet
persist deeper in the ranked slate. At the query-generation level,
exact-match (EM) is high for simple questions (e.g., ∼96%) and
lower for complex, multi-hop questions (e.g., ∼86.7%), which
aligns with increased compositionality. End-to-end latency
(mean total ≈6.07 s from question to final answer) remains within
interactive bounds on our local setup. Practically, a fully offline
deployment avoids internet connectivity, API keys, and external
data transfer, strengthening privacy, compliance, and
reproducibility in biomedical contexts.
Several challenges qualify these findings. First, the
comparability challenge: to our knowledge, no prior study
evaluates iBKH using the same metric suite, making cross-paper
numeric comparisons not “apples to apples.” We therefore
interpret the results as controlled evidence about this pipeline
under a single, consistent protocol rather than as a cross-study
leaderboard. Beyond comparability, performance may vary with
other biomedical knowledge graphs; the current iBKH snapshot
is static, limiting real-time updates; and scaling to larger or
dynamically refreshed graphs can introduce latency and
consistency trade-offs.
Error analysis shows that residual failures concentrate in
complex, multi-hop queries where missing relation hops or brittle
name-based matching (synonyms, abbreviations, homonyms)
lead to partial answers. These observations motivate concept-
level normalization via biomedical identifiers (e.g., UMLS,
SNOMED, RxNorm) with synonym expansion, as well as
schema-constrained query generation and path-guided decoding
to better satisfy multi-hop constraints. Finally, correctness is
assessed primarily at the answer level (Hits@k, MRR,
precision/recall) and does not yet include full semantic-
equivalence checks across alternative Cypher queries, which may
overlook cases where different queries yield the same correct
results. Complementing template EM with result-set equivalence
checks, expanding the metric suite (e.g., nDCG/MAP) with per-
question 95% bootstrap confidence intervals, and supporting
incremental graph updates with distributed storage are promising
steps to enhance robustness, scalability, and external validity.
VII. CONCLUSION
This study introduced a hybrid biomedical question
answering framework that couples the LLaMA-3B language
model with a Neo4j-based iBKH knowledge graph to enable the
automatic generation of executable Cypher queries and to deliver
transparent, evidence-grounded answers through justification
subgraphs. Evaluation in simple, medium, and complex queries
demonstrated consistently high performance, with strong
precision, recall, F1, Hits@k, and MRR values, while
maintaining low latency suitable for interactive biomedical
applications. Beyond quantitative performance, the proposed
system provides a reproducible and privacy-preserving solution
by operating fully offline, a property of particular importance in
sensitive clinical and research contexts. However, certain
limitations remain. The reliance on a static iBKH snapshot
constrains coverage and adaptability, recall is lower for complex
multi-hop reasoning, and the absence of canonical entity
normalization (e.g., UMLS, SNOMED, RxNorm) may reduce
semantic robustness. Future research will therefore focus on
integrating standardized biomedical entity normalization,
enabling dynamic and incremental knowledge graph updates, and
leveraging domain-adapted or fine-tuned biomedical LLMs.
These directions are expected to further strengthen the
robustness, scalability, and applicability of the framework in real-
world biomedical and clinical environments.
DALK: Dynamic Co-Augmentation of LLMs and KG to answer
Alzheimer’s Disease Questions with Scientific Literature
Dawei Li1*, Shu Yang2∗, Zhen Tan1, Jae Young Baik2, Sunkwon Yun3, Joseph Lee2, Aaron Chacko2,
Bojian Hou2, Duy Duong-Tran2,4, Ying Ding5, Huan Liu1†, Li Shen2†, Tianlong Chen3†
1School of Computing and Augmented Intelligence, Arizona State University
2Department of Biostatistics, Epidemiology and Informatics, University of Pennsylvania Perelman School of Medicine
3Department of Computer Science, The University of North Carolina at Chapel Hill
4Department of Mathematics, United States Naval Academy
5School of Information, The University of Texas at Austin
Abstract
Recent advancements in large language models (LLMs) have achieved promising performances across various applications. Nonetheless, the ongoing challenge of integrating long-tail knowledge continues to impede the seamless adoption of LLMs in specialized domains. In this work, we introduce DALK, a.k.a. Dynamic Co-Augmentation of LLMs and KG, to address this limitation and demonstrate its ability on studying Alzheimer’s Disease (AD), a specialized sub-field in biomedicine and a global health priority. With a synergized framework of LLM and KG mutually enhancing each other, we first leverage LLM to construct an evolving AD-specific knowledge graph (KG) sourced from AD-related scientific literature, and then we utilize a coarse-to-fine sampling method with a novel self-aware knowledge retrieval approach to select appropriate knowledge from the KG to augment LLM inference capabilities. The experimental results, conducted on our constructed AD question answering (ADQA) benchmark, underscore the efficacy of DALK. Additionally, we perform a series of detailed analyses that can offer valuable insights and guidelines for the emerging topic of mutually enhancing KG and LLM. We will release the code and data at https://github.com/David-Li0406/DALK.
1 Introduction
Alzheimer’s Disease (AD) is a neurodegenerative
disorder characterized by progressive declines in
cognitive and functional status over a span of
decades (Report, 2023). However, current AD ther-
apy developments are facing critical challenges
due to the lack of knowledge and understanding of
the underlying etiological mechanisms of the dis-
ease. Although scientific literature and dedicated
biomedical databases could supply rich sources of
* Equal Contributions
† Corresponding authors
AD knowledge, manual review of relevant informa-
tion is impossible due to the large volume.
As large language models (LLMs) (Brown et al.,
2020; Zhang et al., 2022; Anil et al., 2023; Touvron
et al., 2023) with chain-of-thought (CoT)-based
prompting (Wei et al., 2022; Wang et al., 2022;
Tong et al., 2023; Yao et al., 2023; Besta et al.,
2023) demonstrate strong language capabilities
across various tasks, there have been attempts to
leverage LLMs-based systems in general biomedi-
cal and AD-related applications (Mao et al., 2023;
Li et al., 2023c; Yan et al., 2024; Feng et al., 2023).
However, while the LLMs have shown promising
performances in many general tasks, recent studies
revealed LLMs’ limitations in long-tail (Kandpal
et al., 2023) and domain-specific (Li et al., 2023b,
2024) knowledge, thereby significantly impeding
their adaptations in vertical fields such as AD. To
deal with this issue, the most common strategies are
retrieval augmented generation (RAG) and domain-
specific LLMs training.
Nevertheless, directly applying these strategies
in the context like AD would still suffer from sev-
eral issues.
First, Data Quality: As in many
biomedical fields, scientific literature composes
the largest publicly available corpus source in AD.
Yet, the dense and information-overloaded nature
of scientific literature, when combined with auto-
matic retrieval methods, can lead to the retrieval
of irrelevant and noisy information. Previous re-
search has shown that noisy and irrelevant corpora
can significantly undermine the performance of
LLMs (Yu et al., 2023; Chen et al., 2024; Wu et al.,
2024). Second, Efficiency & Scale Issues: Being
a critical field of research, the knowledge of AD
is rapidly evolving with scientific advancements at
a remarkable pace and scale. However, retraining
a domain-specific LLM or updating certain knowl-
edge in it demands substantial computational re-
sources (Hu et al., 2021; Ovadia et al., 2023; Zhang
et al., 2024). This efficiency issue would also limit
arXiv:2405.04819v1 [cs.CL] 8 May 2024
the sizes of domain-specific LLMs, consequently
affecting their performances.
To tackle these limitations, here we propose
a Dynamic Co-Augmentation of LLMs and KG
(DALK) framework that facilitates mutual benefits
between LLMs and knowledge graphs (KG) for
the AD domain. Initially, our framework addresses
the data quality challenge by extracting more struc-
tural and accurate knowledge from unstructured
and dense scientific literature and constructing a
domain-specific knowledge graph tailored to AD.
We employ two widely utilized knowledge graph
construction methods, namely pair-wise construc-
tion (Carta et al., 2023; Wadhwa et al., 2023) and
generative construction (Han et al., 2023; Bi et al.,
2024), to comprehensively assess their impact on
knowledge graph quality. Then, we adopt a coarse-
to-fine sampling method with a novel self-aware
knowledge retrieval approach to select appropri-
ate knowledge from the knowledge graph and thus
further address the data quality problem. Notably,
the tuning-free nature of our framework signifi-
cantly enhances efficiency and facilitates its ap-
plication in large-scale and API-based language
models (OpenAI, 2022). In the evaluation sec-
tion, we derive an Alzheimer’s Disease question
answering (ADQA) benchmark from existing gen-
eral medical QA datasets with millions of sam-
ples filtered by a curated keyword list and self-
sampling of LLMs. Our extensive experiment on
ADQA demonstrates the effectiveness of our frame-
work in domain-specific applications compared
with general biomedical LLMs and retrieval aug-
mented models. Further evaluation and analysis
provide valuable insights into constructing high-
quality knowledge graphs and sampling accurate
knowledge from them.
In summary, our contribution in this work can
be summarized as follows:
• We identify the constraints of the current meth-
ods for LLMs in domain-specific areas like
AD and introduce DALK, a co-augmentation
framework of the LLM and KG to address
these issues.
• We build an AD-specific KG and a QA benchmark.
Through extensive comparisons with other
methods, we showcase the effectiveness of
DALK.
• We delve into a comprehensive analysis of
our proposed method and provide valuable
insights and guidance on how to construct a
high-quality KG and sample accurate knowl-
edge from it.
2 Related Work
The interplay between LLMs and KGs
KGs (Miller, 1995; Speer et al., 2017; Vrandečić
and Krötzsch, 2014) serve as structured represen-
tations of factual knowledge, typically expressed
as (head, relation, tail) triples. Their structured,
factual, and interpretable nature renders them ex-
cellent complements to parametric language mod-
els (Pan et al., 2024). Recently, with the rise of
large language models (LLMs), numerous studies
have delved into exploring the synergy between
LLMs and KGs for various purposes (Pan et al.,
2024; Tan et al., 2024). There have been many efforts in knowledge graph construction (Carta et al., 2023; Wadhwa et al., 2023; Han et al., 2023; Bi et al., 2024; Datta et al., 2024) and completion (Wei et al., 2023; Zhang et al., 2023b; Li et al., 2024) with the aid of LLMs. Conversely, other works aim
to enhance LLMs by integrating knowledge sam-
pled from KGs during both training (Tang et al.,
2023; Luo et al., 2024; Dernbach et al., 2024;
Rangel et al., 2024) and inference (Kim et al., 2023;
Wen et al., 2023; Jiang et al., 2023; Sun et al.,
2023a) times. Our work distinguishes itself by
proposing a co-augmentation framework for LLMs
and KGs, facilitating their mutual enhancement,
and applying it to the domain of AD.
LLMs and KGs for AD research
LLMs and KGs have both been applied to Alzheimer's Disease research in previous studies. Pre-trained language models are utilized to work on AD detection and many other related tasks based on
speech recordings and transcripts (Balagopalan
et al., 2020; Agbavor and Liang, 2022), electronic
health records (EHRs) (Mao et al., 2023; Li et al.,
2023c; Yan et al., 2024), and tabular data (Feng
et al., 2023).
KGs have been widely used in
biomedical research, yet only a few are specifically
for AD research (Romano et al., 2023; Pu et al.,
2023; Hsieh et al., 2023; Nian et al., 2022; Daluwa-
tumulle et al., 2023). These KGs were generally
constructed from a variety of information derived
from heterogeneous biomedical databases (e.g. for
genes, drugs, pathways, etc.) or scientific literature
related to AD. Despite the aforementioned efforts
for LLMs and KGs in AD research, no prior study
has explored using LLM to augment AD-KG, or
vice versa, let alone the potential for mutual en-
hancement between the two as we propose here.
3 Our Methodology
This section elaborates on our dynamic co-
augmentation framework of LLMs and KG. Sec-
tion 3.1 presents the details of augmenting an AD-
specific evolving KG with LLMs and literature
corpus in a time-slicing fashion (i.e. year by year).
Following it, Section 3.2 describes the process of
sampling appropriate knowledge from the evolving
KG to enhance LLMs’ reasoning. Figure 1 illus-
trates an overall pipeline of our method DALK.
            KGpair    KGgen
#Corpus      9,764    9,764
#Nodes      13,509   20,545
#Relations   3,952    3,651
#Triples   171,431   53,585
Table 1: Detailed statistics about our augmented knowledge graph.
3.1 LLMs for KG
Corpus Collection
To create an AD-specific
knowledge graph, we follow (Pu et al., 2023) and
use the AD corpus collected by a domain expert, Professor Colin Masters at the University of Melbourne, who identified amyloid proteins as a potential cause of AD (Masters et al., 1985). The
corpus is based on his extensive bibliography of
representative AD-related papers and consists of more than 16K PMID (PubMed ID)-indexed articles from 1977 to 2021. For our study, we focus on the papers published since 2011, which reflect the most recent knowledge in the field, yielding 9,764 articles.
Entity Recognition
In order to identify knowl-
edge at the proper granularity level for AD, we
extract relevant entities from the corpus by utiliz-
ing the PubTator Central (PTC) (Wei et al., 2013)
developed and continuously maintained by NCBI.
PTC is a widely-used tool to provide state-of-the-
art annotations of biomedical concepts for PubMed
abstracts and full-text articles, and it supports six
bioconcept types including genes, diseases, chem-
icals, mutations, species and cell lines. We apply
PTC to the abstracts of all our AD papers and ob-
tain the relevant named entities which will serve as
nodes in the knowledge graph.
Relation Extraction
To build an accurate and
high-quality knowledge graph on AD, we aim to as-
sign a specific relation type between the two related
entities. Through a comprehensive survey of rela-
tion extraction methods for knowledge graph con-
struction, we categorize current approaches with
LLMs into two main groups: (a) Pair-wise Relation Extraction (Carta et al., 2023; Wadhwa et al.,
2023) aims to prompt the LLMs to describe the
relationship between any two entities in a segment
of text. (b) Generative Relation Extraction (Han
et al., 2023; Bi et al., 2024; Datta et al., 2024),
where LLMs directly output all related entity pairs
and their corresponding relationships. As shown
in Figure 2, we incorporate both of these rela-
tion extraction methods into our knowledge graph
augmentation process to provide a comprehensive
comparison between them. We denote the result-
ing knowledge graphs from these approaches as
KGpair and KGgen respectively.
Table 1 presents the detailed statistics about our
augmented knowledge graph, including the num-
ber of corpora we used, and the number of nodes,
relations and triples in KGpair and KGgen.
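The two prompting styles above can be contrasted in a short sketch. The prompt wording and function names below are illustrative stand-ins, not the paper's exact templates: pair-wise extraction issues one prompt per entity pair, while generative extraction issues a single prompt over all entities.

```python
# Sketch of the two relation-extraction prompting styles (illustrative templates).
from itertools import combinations

def pairwise_prompts(abstract, entities):
    """Pair-wise style (KGpair): one prompt per unordered entity pair."""
    return [
        f"Reading the following abstract: {abstract}\n"
        f"Question: What is the relationship between {a} and {b}?"
        for a, b in combinations(entities, 2)
    ]

def generative_prompt(abstract, entities):
    """Generative style (KGgen): a single prompt asking for all valid triples."""
    return (
        f"Reading the following abstract: {abstract}\n"
        f"Here are all the biomedicine-related entities: {', '.join(entities)}\n"
        "Question: Please extract all the valid relationships between the provided entities."
    )

# Three entities yield C(3,2) = 3 pair-wise prompts but only one generative prompt,
# which is one reason the two styles produce KGs of very different sizes (Table 1).
prompts = pairwise_prompts("[Abstract]", ["AD", "diabetes", "head trauma"])
```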
3.2 KG for LLMs
In this section, we begin by outlining our process
for sampling coarse-grained augmented knowledge
from our evolving knowledge graph (Section 3.2.1).
Subsequently, we delve into detail regarding our
self-aware knowledge retrieval method, which aims
to filter out noise and retrieve the most pertinent
knowledge to provide to the LLM (Section 3.2.2).
3.2.1 Coarse-grained Knowledge Sample
Given a question query Q, we first construct a
prompt and ask LLMs to extract all the domain-
specific entities E = {e1, e2, ...} from it.
Afterward, we adhere to the methodology proposed
by Wen et al. (2023) and execute a similarity-based
entity linking process to connect all entities within
E to the entity structure in our knowledge graph
G. Specifically, we employ a semantic similarity
model (Reimers and Gurevych, 2019) to encode all
entities in G and E into dense embeddings, denoted
as HG and HE, respectively. Subsequently, utiliz-
ing cosine similarity, we establish links between
each entity in E and its nearest neighbor entity in
G. This procedure yields an initial entity set EG
for the subsequent knowledge sampling step.
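The entity-linking step can be sketched as follows. The random vectors below are stand-ins for embeddings from a semantic similarity model such as Sentence-BERT (Reimers and Gurevych, 2019), and `link_entities` is a hypothetical helper name.

```python
# Minimal sketch of similarity-based entity linking: each query entity is
# mapped to its nearest-neighbor KG entity under cosine similarity.
import numpy as np

def link_entities(H_E, H_G, kg_entities):
    """Map each row of H_E (query-entity embeddings) to the closest KG entity."""
    # Normalize rows so that the dot product equals cosine similarity.
    H_E = H_E / np.linalg.norm(H_E, axis=1, keepdims=True)
    H_G = H_G / np.linalg.norm(H_G, axis=1, keepdims=True)
    sims = H_E @ H_G.T                      # (|E|, |G|) similarity matrix
    return [kg_entities[i] for i in sims.argmax(axis=1)]

kg_entities = ["Alzheimer's disease", "diabetes mellitus", "head trauma"]
rng = np.random.default_rng(0)
H_G = rng.normal(size=(3, 8))               # stand-in KG entity embeddings
H_E = H_G[[1]] + 0.01 * rng.normal(size=(1, 8))  # a near-duplicate of entity 1
linked = link_entities(H_E, H_G, kg_entities)
```

In practice |G| is large, so an approximate nearest-neighbor index would replace the dense matrix product.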
To build an evidence sub-graph to boost
LLMs’ reasoning process, we follow the previous
[Figure 1 graphic: a literature corpus (e.g. an abstract on AD comorbidities such as cardiovascular disease, diabetes, and head trauma) → LLM-extracted triples (Diabetes –risk factor→ AD; Head trauma –risk factor→ AD) → evolving KG → subgraph sampling and reranking → answer to an example multiple-choice question on predispositions to Alzheimer's disease]
Figure 1: The overview pipeline of DALK. We first extract structural knowledge from unstructured corpora and
construct a domain-specific knowledge graph tailored to AD (Section 3.1). Then, we utilize a coarse-to-fine sampling
method with a novel self-aware knowledge retrieval approach to select appropriate knowledge from the knowledge
graph (Section 3.2).
[Figure 2 graphic: Step 1, entity recognition on abstracts; Step 2, relation extraction, either pair-wise ("What is the relationship between AD and cardiovascular disease/diabetes?") or generative ("Please extract all the valid relationships between the provided entities.")]
Figure 2: The detailed process of AD-specific KG construction.
study (Wen et al., 2023) and consider the following
two kinds of explorations in our AD-KG:
Path-based Exploration
entails the extraction of a sub-graph from G that encompasses all entities within EG. The process unfolds as follows: (a) Begin by selecting one node from EG as the initial node, denoted as e1, and place the remaining nodes into a candidate node set, Ecand. Explore at most k hops from e1 to identify a subsequent node, e2, where e2 ∈ Ecand. If e2 is successfully reached within k hops, update the start node to e2 and remove e2 from Ecand. If e2 cannot be found within k hops, concatenate the segment paths acquired thus far and store them in G_Q^path. Subsequently, choose another node e'1 from Ecand as the new start node, and eliminate both the original start node e1 and the current node e2 from Ecand. (b) Check whether Ecand is empty. If not, repeat step (a) to identify the next segment of the path. If Ecand is empty, combine all segments to construct a set of sub-graphs and place them into G_Q^path.
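The greedy path-stitching procedure above can be sketched as follows, assuming an adjacency-list graph. `bfs_path` and `path_exploration` are illustrative names, and the paper's pruning and post-processing details are omitted.

```python
# Simplified sketch of path-based exploration: from one linked entity, search
# for another linked entity within k hops, chain the found segments, and
# restart from a fresh candidate when nothing is reachable.
from collections import deque

def bfs_path(adj, src, dst, k):
    """Shortest path from src to dst using at most k hops, else None."""
    queue, seen = deque([(src, [src])]), {src}
    while queue:
        node, path = queue.popleft()
        if node == dst:
            return path
        if len(path) - 1 >= k:              # hop budget exhausted on this branch
            continue
        for nxt in adj.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, path + [nxt]))
    return None

def path_exploration(adj, linked_entities, k=2):
    """Greedily chain k-hop path segments through all linked entities."""
    segments = []
    start, cand = linked_entities[0], set(linked_entities[1:])
    while cand:
        found = None
        for target in sorted(cand):         # deterministic candidate order
            path = bfs_path(adj, start, target, k)
            if path:
                found = path
                break
        if found:                           # extend from the node just reached
            segments.append(found)
            start = found[-1]
            cand.discard(start)
        else:                               # dead end: restart elsewhere
            start = sorted(cand)[0]
            cand.discard(start)
    return segments

adj = {"AD": ["brain"], "brain": ["AD", "entorhinal cortex"],
       "entorhinal cortex": ["brain"]}
segments = path_exploration(adj, ["AD", "entorhinal cortex"], k=2)
```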
Neighbor-based Exploration
endeavors to augment the evidence relevant to the query within G_Q. This process consists of two steps: (a) Initially, expand each node e within EG by one hop to incorporate its neighbors e', appending triples (e, r, e') to G_Q^nei. (b) Then assess whether each e' exhibits semantic relevance to the query. If so, further expand the 1-hop neighbors of e', consequently adding triples (e_nei, r', e') to G_Q^nei.
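A minimal sketch of this two-step neighbor expansion, with `is_relevant` standing in for the semantic-relevance check (in the paper, a similarity model against the query):

```python
# Sketch of neighbor-based exploration over a list of (head, relation, tail)
# triples: 1-hop expansion of each linked entity, with a second hop only for
# neighbors judged relevant to the query.
def neighbor_exploration(triples, linked_entities, is_relevant):
    subgraph = []
    for h, r, t in triples:
        if h in linked_entities:
            subgraph.append((h, r, t))      # first hop: direct neighbor
            if is_relevant(t):
                # second hop: also pull in the relevant neighbor's own triples
                subgraph.extend(tr for tr in triples if tr[0] == t)
    return subgraph

triples = [("AD", "risk factor", "diabetes"),
           ("diabetes", "treated by", "metformin"),
           ("AD", "affects", "brain")]
sub = neighbor_exploration(triples, {"AD"}, lambda e: e == "diabetes")
```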
After obtaining the two sub-graphs G_Q^path and G_Q^nei, we perform post-processing to further prune redundant information in the sub-graphs and prompt LLMs to describe the structure of each sub-graph.
3.2.2 Self-aware Knowledge Retrieval
In our initial experiment, we noticed the coarse-
grained knowledge sampled with the above-
mentioned approaches still contained redundant
and irrelevant information. This issue of noise is a
common challenge encountered in automatically-
constructed knowledge graphs (Fang et al., 2021;
Zhang et al., 2020; Li et al., 2022; Bi et al., 2024).
Moreover, many recent works (Yu et al., 2023; Li
et al., 2023d; Chen et al., 2024; Wu et al., 2024)
have demonstrated LLMs can indeed be influenced
by such noisy information. To address this chal-
lenge, we borrow insights from the recent self-
powered LLMs (Wang et al., 2022; Pan et al., 2023;
Li et al., 2023a; Yuan et al., 2024; Tong et al.,
2024) and propose a self-aware knowledge retrieval
method to leverage LLMs’ ranking capability (Sun
et al., 2023b; Ma et al., 2023) to filter out noisy
information.
In particular, we directly prompt the LLM to
rerank the sampled knowledge and only retrieve
top k triples to provide for itself in the final-round
inference. Given the question Q and either the path-
based or neighbor-based sub-graph GQ, we create
prompt p_self by filling the pre-defined template:
p_self = Template_self(Q, G_Q, k).   (1)
Then, we use p_self as the input to prompt the LLM to obtain the self-retrieved knowledge:
G_Q^self = LLM(p_self).   (2)
Finally, we provide the question Q and the fine-grained knowledge G_Q^self to the LLM for reasoning and obtain the predicted answer a in two steps:
p_inference = Template_inference(Q, G_Q^self),   (3)
a = LLM(p_inference).   (4)
We provide detailed examples in Appendix A and B
to demonstrate the input and output in our DALK.
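Eqs. (1)–(4) amount to two chained prompts against the same model. The sketch below uses illustrative template wording and a toy stand-in for the LLM call (e.g. a GPT-3.5-turbo chat completion in the paper's setting):

```python
# Two-stage self-aware retrieval: Eqs. (1)-(2) rerank, Eqs. (3)-(4) answer.
def self_aware_answer(llm, question, triples, k=5):
    # Eq. (1)-(2): prompt the model to rerank the sampled triples and
    # keep only the top-k for itself.
    p_self = (
        f"Question: {question}\n"
        f"Candidate knowledge triples: {triples}\n"
        f"Rerank these triples by relevance to the question and return the top {k}."
    )
    g_self = llm(p_self)
    # Eq. (3)-(4): final-round inference using only the self-retrieved knowledge.
    p_inference = (
        f"Question: {question}\n"
        f"Relevant knowledge: {g_self}\n"
        "Answer the question using the knowledge above."
    )
    return llm(p_inference)

# Toy stand-in LLM: returns a fixed reranking, then a fixed answer.
fake_llm = lambda p: "(temporal lobe, affected by, AD)" if "Rerank" in p else "D"
answer = self_aware_answer(fake_llm, "Which area resists neurofibrillary tangles?",
                           ["triple 1", "triple 2"])
```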
4 Main Experiment
4.1 ADQA Benchmark
For performance evaluation, we consider four
widely-used medical QA datasets spanning diverse
biomedical topics (Jin et al., 2021; Pal et al., 2022;
Hendrycks et al., 2021; Peñas et al., 2013) and de-
rive an AD-specific QA dataset from them. The
four medical QA datasets are all multiple-choice
based and include: 1) MedQA (Jin et al., 2021)
consisting of US Medical Licensing Examina-
tion (USMLE)-style questions, 2) MedMCQA (Pal
et al., 2022) containing medical school entrance
exam questions from India, 3) MMLU (Hendrycks
et al., 2021) consisting of diverse biomedical
and clinical questions from various sources, 4)
QA4MRE (Peñas et al., 2013) containing a sub-
set of questions for AD derived from PubMed
and Medline. In order to extract from the med-
ical QA datasets a subset of samples related to
AD for our evaluation, we referred to NIH’s Com-
mon Alzheimer’s and Related Dementias Research
Ontology (CADRO) 1. Jointly developed by the
National Institute on Aging and the Alzheimer’s
Association, CADRO is a three-tiered classifica-
tion system with eight main categories and a dozen
sub-categories for AD and related dementia, and it
contains common terminologies or keywords used
in the field. We derived from the CADRO a list of
AD-related keywords most relevant to the medical
QA datasets: <Aging, Alzheimer, Amyloid beta,
APOE, Dementia, Lipoprotein, Microglia>. Then,
we searched against each medical QA dataset for
matches with these keywords to find putative QA
samples, then further asked GPT-3.5-turbo to judge
for each putative sample whether the question is
indeed related to AD or not. Finally, we filtered
out a subset of such samples that are considered
highly relevant to AD to conduct our evaluation
(number of samples in each dataset is shown in
Table 2). More details about ADQA can be found
in Appendix C.
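The filtering pipeline above (keyword match, then LLM judge) can be sketched as follows; the judge here is a trivial stand-in for the GPT-3.5-turbo relevance check, and the sample dicts are hypothetical:

```python
# Sketch of the ADQA filtering step: match questions against the
# CADRO-derived keyword list, then confirm AD relevance with an LLM judge.
AD_KEYWORDS = ["Aging", "Alzheimer", "Amyloid beta", "APOE",
               "Dementia", "Lipoprotein", "Microglia"]

def filter_adqa(samples, llm_judge):
    """Keep questions that mention an AD keyword AND pass the LLM judge."""
    putative = [s for s in samples
                if any(kw.lower() in s["question"].lower() for kw in AD_KEYWORDS)]
    return [s for s in putative if llm_judge(s["question"])]

samples = [{"question": "Which allele of APOE raises dementia risk?"},
           {"question": "What is the first-line treatment for gout?"}]
adqa = filter_adqa(samples, llm_judge=lambda q: "APOE" in q)  # stand-in judge
```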
4.2 Experiment Settings
We apply our framework with OpenAI GPT-3.5-
turbo models (OpenAI, 2022). We also include the
following baseline methods for comparison:
Biomedical LLMs
Both ChatDoctor (Yunxiang
et al., 2023) and Med-Alpaca (Shu et al., 2023)
are fine-tuned versions of LLaMA (Touvron et al.)
on biomedical corpora.
Compared with them,
Meditron (Chen et al., 2023) is built on LLaMA-
2 (Touvron et al., 2023) and extends its pretrain-
ing on a comprehensively curated medical cor-
pus. BiomedGPT (Zhang et al., 2023a) is also based on LLaMA-2 and pioneers as the first open-source, generalist visual-language AI for diverse biomedical tasks. Biomistral (Labrak et al.,
2024) is an open-source LLM crafted specifically
for the biomedical domain, optimized for effi-
ciency through quantization and model merging
techniques.
Retrieval-Augmented LLMs
Furthermore, we
also compare our method with several represen-
tative retrieval-augmented LLMs in the biomed-
ical domain. Almanac (Zakka et al., 2024) is a
novel approach utilizing OpenAI’s GPT model in-
1 https://iadrp.nia.nih.gov/about/cadro
                                       MedQA  MedMCQA   MMLU  QA4MRE   AVG
Biomedical LLMs
ChatDoctor (Yunxiang et al., 2023)      25.7     36.4   46.9    51.4  40.1
Med-Alpaca (Shu et al., 2023)           41.4     42.8   44.9    57.1  46.5
BiomedGPT (Zhang et al., 2023a)         38.8     41.9   48.9    42.6  43.1
Meditron (Chen et al., 2023)            27.6     31.4   36.7    25.7  30.4
Biomistral (Labrak et al., 2024)        44.7     49.5   53.1    68.6  54.0
Retrieval-augmented LLMs
GPT-3.5-turbo w/ Ada (OpenAI, 2024)     57.2     65.7   83.7    62.9  67.4
Almanac (Zakka et al., 2024)            48.0     69.5   71.4    60.0  62.2
Clinfo.ai (Lozano et al., 2023)         54.3     77.0   81.3    67.7  70.1
Clinfo.ai w/o PubMed API                49.3     68.6   79.6    74.3  67.9
GPT-3.5-turbo                           50.0     71.9   83.6    62.9  67.1
DALK                                    57.9     75.2   85.4    71.4  72.6
Table 2: Experiment results on our constructed ADQA benchmark. The sample size is indicated after each dataset name. The best results of each metric are in bold and the second-best results are underlined. The "AVG" column represents the average accuracy score on the four sub-datasets.
tegrated with a Qdrant vector database to hold ex-
ternal sources of knowledge retrieved from local
corpus, web search, and calculators, designed to
answer open-domain clinical questions. Like Al-
manac, Lozano et al. (2023) introduced Clinfo.ai,
which is an open-source, end-to-end retrieval-
augmented LLM (GPT) to answer medical queries
using scientific literature summarizations derived
from PubMed search engine. We adopt both Al-
manac and Clinfo.ai with the same prompt as ours
to answer multiple-choice questions to suit the
ADQA benchmark. Lastly, we implement a sim-
ple retrieval-augmented GPT baseline with CoT
prompting similar to our proposed DALK. All the
GPT models used are set to GPT-3.5-turbo as de-
tailed in the next paragraph, to be consistent.
Implementation Details
We use the knowledge
graph constructed with the generative approach
(KGgen) in our main experiment and conduct an
ablation study on the knowledge graph with RE
method (KGpair) in Section 4.5. We use GPT-3.5-
turbo with the version “gpt-3.5-turbo-0301” and
set the sampling temperature to 0.7. We utilize 7B
versions of all the biomedical LLMs baselines. For
RAG methods, we split each document with a max
length of 128 and retrieve the top 3 most relevant
documents as the support evidence for LLMs to
do inference. We set the parameter k in our self-
aware knowledge retrieval to 5 and conduct further
analysis on it in Section 5.2.
4.3 Main Result
Table 2 shows the experimental results on our
ADQA benchmark. We note that upon applying our
dynamic co-augmentation framework, DALK’s per-
formance surpasses that of other biomedical LLMs
and RAG methods overall. It consistently achieves
either the best or the second-best accuracy score
across all sub-datasets and attains the highest AVG
score. Furthermore, the substantial improvement
over vanilla GPT-3.5-turbo underscores the efficacy
of our approach in domain-specific ADQA.
Furthermore, we observe that the performance
of biomedical-specific LLMs generally lags be-
hind that of GPT-3.5-turbo. We attribute this dis-
crepancy to the smaller size of these biomedical
LLMs. While they may perform adequately in gen-
eral medical contexts, they fall short in the AD
scenario, which demands more domain-specific
knowledge. In the case of GPT-3.5-turbo combined
with various RAG methods, it is evident that most
RAG methods enhance the models’ performance.
Among them, GPT-3.5-turbo with Clinfo.ai yields
the most significant improvement, boosting the ac-
curacy score from 67.1 to 70.1 compared to vanilla
GPT-3.5-turbo. However, it is important to note
that the original Clinfo.ai necessitates access to
the PubMed API, constituting an external resource.
When we disable this access and solely utilize the
same corpora as in DALK within the Clinfo.ai re-
trieval system, the improvement it brings becomes
marginal and incomparable to our method. Due to
the space limitation, we put more RAG results with
different hyper-parameters in Appendix D.
4.4 Ablation Study on Self-aware Knowledge Retrieval
In this section, we evaluate the efficacy of our
proposed self-aware knowledge retrieval method
through an ablation study.
As depicted in Ta-
ble 3, we observe that while the dynamic co-
augmentation framework without the self-aware
                              MedQA  MedMCQA   MMLU  QA4MRE    AVG
AVG Length                    107.4     23.8  342.9    17.6  122.9
GPT-3.5-turbo                  50.0     71.9   83.6    62.9   67.1
DALK                           57.9     75.2   85.4    71.4   72.6
DALK w/o self-aware
knowledge retrieval            56.5     71.0   77.6    77.1   70.6
Table 3: Ablation study results with and without our proposed self-aware knowledge retrieval.
knowledge retrieval module still enhances the
model’s performance, the overall improvement is
less pronounced. Furthermore, we observe that the
efficacy of self-aware knowledge retrieval corre-
lates with the length of queries within a given con-
text. For instance, a notable enhancement in per-
formance is evident within the MMLU sub-dataset
upon the implementation of self-aware knowledge
retrieval. We attribute this to the fact that questions
in the MMLU dataset typically contain longer con-
texts compared to other medical QA datasets inte-
grated into ADQA. Consequently, irrelevant knowl-
edge sourced from the context may exacerbate the
issue of information noise thus underscoring the ne-
cessity for self-aware retrieval. Conversely, within
QA4MRE, characterized by shorter query lengths,
the application of self-aware knowledge retrieval
can even lead to a decline in performance.
4.5 Ablation Study on KG Construction
                        AVG  #Triples
GPT-3.5-turbo          67.1         -
DALK w/ Generative KG  72.6    53,585
DALK w/ RE KG          66.3   171,431
Table 4: Ablation study results with generative construction and RE construction.
Table 4 illustrates the results of the ablation study
conducted using generatively constructed KG and
RE-constructed KG. Surprisingly, despite the RE
method yielding a KG with a larger scale and more
triples, knowledge sampled from it has unexpect-
edly resulted in a non-trivial drop in performance
within ADQA. After a manual examination of the
two constructed knowledge graphs, we find LLMs
with the RE construction method have a strong in-
clination to wrongly assign a relationship to two
unrelated entities, which has been exposed by the
previous studies (Wan et al., 2023). In contrast, the
generative construction approach exclusively out-
puts triples that LLMs confidently endorse, yield-
ing a smaller yet more precise knowledge graph.
This trade-off between coverage and accuracy un-
derscores the critical importance of denoising in
the construction of KGs by LLMs.
5 Further Analysis
5.1 Co-augmentation Analysis
Figure 3: The size of the knowledge graph (triplet num-
ber) and the KG-augmented GPT-3.5-turbo’s perfor-
mance (accuracy) over time.
To comprehensively understand how the perfor-
mance of LLMs evolves in response to increasing
KG sizes, we undertake a detailed co-augmentation
analysis. Illustrated in Figure 3, our experiments
aim to discern the changing performance trends of
LLMs as the knowledge triples accumulate annu-
ally. Our findings reveal that our framework effec-
tively fosters the co-evolution of LLMs and KG,
with the performance of KG-augmented LLMs ex-
hibiting a generally upward trajectory as the KG
expands. Notably, when we remove the self-aware
knowledge retrieval module, this upward trend be-
comes less significant. This further implies the
importance of sampling and selecting appropriate
knowledge for LLMs when the KG’s size increases.
5.2 Hyper-parameter Analysis
In this section, we conduct a hyper-parameter analysis on the retrieval number k of our self-aware retrieval module. We select a group of values for k ([1, 3, 5, 10, 20, 30]) and present the experiment results in Figure 4. We show the accuracy score on
MedQA, MedMCQA, QA4MRE and AVG with
different k. We find that when k is small, increasing it leads to a performance enhancement. After the peak, further increasing k causes a smooth decrease in the model's accuracy score. This result
indicates the knowledge ranked in the top positions
Method                                   Path-based Sub-graph                                          Answer
Baseline                                 -                                                             C (wrong)
DALK w/o self-aware knowledge retrieval  neurofibrillary tangles ->FORM BY-> microtubule-associated
                                         protein tau ... entorhinal cortex ->is a part of-> brain
                                         ->ASSOCIATES-> mouse with Alzheimer's disease ->brain
                                         region-> temporal lobe                                        C (wrong)
DALK                                     Reranked Triple 1: entorhinal cortex ->is a part of-> brain
                                         Reranked Triple 2: entorhinal cortex ->associates-> mouse
                                         with Alzheimer's disease
                                         Reranked Triple 3: temporal lobe ->affected by-> Alzheimer's
                                         disease                                                      D (correct)
Table 5: A case to show the effectiveness of DALK. The question is: "The area of the brain resistant to Neurofibrillary tangles of Alzheimer's disease is: A. Visual association areas B. Entorhinal cortex C. Temporal lobe D. Lateral geniculate body"
Figure 4: Different self-aware knowledge retrieval set-
tings in MedQA, MedMCQA, QA4MRE and the aver-
age score (AVG).
is more helpful, while the knowledge ranked lower is of little use, thus validating the capability of LLMs to perform fine-grained knowledge reranking.
Moreover, we find the best k value is correlated
with the length of queries in each single dataset. For
example, the best performance in MedQA (average
query length is 107.4) shows up when k = 10 while
the best performance in MedMCQA and QA4MRE
shows up when k = 5 and 3 respectively. This is
consistent with our findings in Section 4.4 that a
longer query corresponds to a larger and noisier
sub-knowledge graph.
5.3 Sensitivity Analysis on ADQA Benchmark
In this section, we conduct a sensitivity analysis of our constructed ADQA benchmark via a leave-one-out evaluation on AD-related keywords. We
do it by removing the samples with each keyword
in our keyword list and calculating the AVG score
of the remaining samples. As the results in Table 6 show, not all of the keywords are represented in our ADQA benchmark. Notably,
Benchmark            DALK   DALK w/o self-aware
                            knowledge retrieval
ADQA                 72.6   70.6
w/o "Alzheimer"      72.1   70.4
w/o "Dementia"       72.4   71.3
w/o "APOE"           73.2   71.2
w/o "Amyloid beta"   73.5   70.7
w/o "Aging"          72.9   71.4
w/o "Lipoprotein"    73.1   71.0
w/o "Microglia"      72.8   70.9
Table 6: Sensitivity analysis for ADQA benchmark with a leave-one-out evaluation on AD-related keywords.
the keywords “CSF Biomarkers”, “Neurogenesis”,
“PET Amyloid”, “PET Tau”, “Tau Phosphorylation”
lack corresponding samples in ADQA. We believe
one critical work in the future for benchmarking
AD-related knowledge is to collect QA samples
to cover these missing keywords. Moreover, ana-
lyzing the performance variation upon removing
samples linked to each keyword offers insight into
determining the relevance of the keyword to AD.
5.4 Case Study
We present an example in Table 5 to showcase the efficacy of DALK. We notice that while the path-based sub-graph contains the relevant knowledge to exclude option C, it still involves other irrelevant information and ultimately fails to prompt the LLM to produce the correct answer. In contrast, our self-aware
knowledge retrieval method successfully chooses
the top 3 most relevant triples for the given problem
and results in the correct answer D.
6 Conclusion
In this research, we begin by analyzing the main
limitations of adopting the existing LLMs-based
methods in AD-specific areas. To address these
issues, we propose a novel approach to merging large language models and knowledge graphs
in the context of Alzheimer’s Disease. Our team
provides an innovative dynamic co-augmentation
framework for the refinement of large language
models and knowledge graphs. Initially, our ap-
proach extracts structural insights from the unstruc-
tured scientific literature, crafting a specialized
knowledge graph for AD. Subsequently, we employ
a coarse-to-fine sampling technique coupled with
a unique self-aware knowledge retrieval strategy
to pinpoint relevant information from the knowl-
edge graph. The extensive evaluation conducted
in our constructed ADQA benchmark showcases
the effectiveness of our method and provides fur-
ther hints into the synergy of LLMs and knowledge
graph in the context of AD. In the future, we will
do more exploration in adopting and benchmarking
LLMs in the AD areas.
7 Limitations

In the development of our AD-KG, our primary focus lies in exploring two distinct methods for extracting relationships between associated entities. For entity recognition, we directly employ the strong PubTator annotator, without exploring the use of LLMs for this step. However, as observed in Section 3.2.1, LLMs also exhibit promising entity extraction capabilities. We defer the refinement of LLM-based entity extraction for KG construction to future work. Furthermore, a significant contribution of our work is the establishment of the ADQA benchmark. Nonetheless, the datasets used to construct ADQA consist primarily of medical school exam questions, which may exhibit a domain gap from the scientific literature informing AD-KG. One potential remedy is leveraging PubMedQA (Jin et al., 2019); however, it is limited in size. In the future, we will continue gathering AD-related QA samples and expanding our ADQA benchmark.
8 Ethics Statement

We have familiarized ourselves with and honour the ethical code set out in the ACL Code of Ethics (https://www.aclweb.org/portal/content/acl-code-ethics). The knowledge graphs constructed in the paper are based on published scientific literature from PubMed. The ADQA dataset used in the study is also derived from publicly available medical QA datasets that are properly cited. We strive to ensure that our study upholds ethical principles and does not cause any safety or privacy concerns. Although not observed in our multiple-choice QA analysis, we recognize the possibility of factual errors and hallucinations when using pre-trained LLMs for medical QA tasks in general, and we do not recommend that these models be applied in practical settings at present.
References
Felix Agbavor and Hualou Liang. 2022. Predicting
dementia from spontaneous speech using large lan-
guage models. PLOS Digital Health, 1:1–14.
Rohan Anil, Andrew M Dai, Orhan Firat, Melvin John-
son, Dmitry Lepikhin, Alexandre Passos, Siamak
Shakeri, Emanuel Taropa, Paige Bailey, Zhifeng
Chen, et al. 2023. Palm 2 technical report. arXiv
preprint arXiv:2305.10403.
Aparna Balagopalan, Benjamin Eyre, Frank Rudzicz,
and Jekaterina Novikova. 2020. To BERT or not
to BERT: Comparing Speech and Language-Based
Approaches for Alzheimer’s Disease Detection. In
Proc. Interspeech 2020, pages 2167–2171.
Maciej Besta, Nils Blach, Ales Kubicek, Robert Ger-
stenberger, Lukas Gianinazzi, Joanna Gajda, Tomasz
Lehmann, Michal Podstawski, Hubert Niewiadomski,
Piotr Nyczyk, et al. 2023. Graph of thoughts: Solv-
ing elaborate problems with large language models.
arXiv preprint arXiv:2308.09687.
Zhen Bi, Jing Chen, Yinuo Jiang, Feiyu Xiong, Wei Guo, Huajun Chen, and Ningyu Zhang. 2024. Codekgc: Code language model for generative knowledge graph construction. ACM Transactions on Asian and Low-Resource Language Information Processing, 23(3):1–16.
Tom Brown, Benjamin Mann, Nick Ryder, Melanie
Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind
Neelakantan, Pranav Shyam, Girish Sastry, Amanda
Askell, et al. 2020. Language models are few-shot
learners. Advances in neural information processing
systems, 33:1877–1901.
Salvatore Carta, Alessandro Giuliani, Leonardo Piano,
Alessandro Sebastian Podda, Livio Pompianu, and
Sandro Gabriele Tiddia. 2023. Iterative zero-shot llm
prompting for knowledge graph construction. arXiv
preprint arXiv:2307.01128.
Jiawei Chen, Hongyu Lin, Xianpei Han, and Le Sun. 2024. Benchmarking large language models in retrieval-augmented generation. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 38, pages 17754–17762.
Zeming Chen, Alejandro Hernández Cano, Angelika Romanou, Antoine Bonnet, Kyle Matoba, Francesco Salvi, Matteo Pagliardini, Simin Fan, Andreas Köpf, Amirkeivan Mohtashami, et al. 2023. Meditron-70b: Scaling medical pretraining for large language models. arXiv preprint arXiv:2311.16079.
Geesa Daluwatumulle, Rupika Wijesinghe, and Ruvan
Weerasinghe. 2023. In silico drug repurposing us-
ing knowledge graph embeddings for alzheimer’s
disease. In Proceedings of the 9th International Con-
ference on Bioinformatics Research and Applications,
ICBRA ’22, page 61–66, New York, NY, USA. As-
sociation for Computing Machinery.
Preetha Datta, Fedor Vitiugin, Anastasiia Chizhikova,
and Nitin Sawhney. 2024. Construction of hyper-
relational knowledge graphs using pre-trained large
language models. arXiv preprint arXiv:2403.11786.
Stefan Dernbach, Khushbu Agarwal, Alejandro Zu-
niga, Michael Henry, and Sutanay Choudhury. 2024.
Glam: Fine-tuning large language models for domain
knowledge graph alignment via neighborhood par-
titioning and generative subgraph encoding. arXiv
preprint arXiv:2402.06764.
Tianqing Fang, Hongming Zhang, Weiqi Wang, Yangqiu Song, and Bin He. 2021. Discos: Bridging the gap between discourse knowledge and commonsense knowledge. In Proceedings of the Web Conference 2021, pages 2648–2659.
Yingjie Feng, Xiaoyin Xu, Yueting Zhuang, and Min Zhang. 2023. Large language models improve Alzheimer's disease diagnosis using multi-modality data. In 2023 IEEE International Conference on Medical Artificial Intelligence (MedAI), pages 61–66. IEEE.
Jiuzhou Han, Nigel Collier, Wray Buntine, and Ehsan
Shareghi. 2023. Pive: Prompting with iterative verifi-
cation improving graph-based generative capability
of llms. arXiv preprint arXiv:2305.12392.
Dan Hendrycks, Collin Burns, Steven Basart, Andy Zou,
Mantas Mazeika, Dawn Song, and Jacob Steinhardt.
2021. Measuring massive multitask language under-
standing. In International Conference on Learning
Representations.
Daniel Scott Himmelstein, Antoine Lizee, Christine
Hessler, Leo Brueggeman, Sabrina L Chen, Dexter
Hadley, Ari Green, Pouya Khankhanian, and Ser-
gio E Baranzini. 2017. Systematic integration of
biomedical knowledge prioritizes drugs for repurpos-
ing. Elife, 6:e26726.
Kang-Lin Hsieh, German Plascencia-Villa, Ko-Hong
Lin, George Perry, Xiaoqian Jiang, and Yejin Kim.
2023. Synthesize heterogeneous biological knowl-
edge via representation learning for alzheimer’s dis-
ease drug repurposing. iScience, 26(1):105678.
Edward J Hu, Phillip Wallis, Zeyuan Allen-Zhu,
Yuanzhi Li, Shean Wang, Lu Wang, Weizhu Chen,
et al. 2021. Lora: Low-rank adaptation of large lan-
guage models. In International Conference on Learn-
ing Representations.
Jinhao Jiang, Kun Zhou, Zican Dong, Keming Ye,
Wayne Xin Zhao, and Ji-Rong Wen. 2023. Structgpt:
A general framework for large language model to
reason over structured data. In Proceedings of the
2023 Conference on Empirical Methods in Natural
Language Processing, pages 9237–9251.
Di Jin, Eileen Pan, Nassim Oufattole, Wei-Hung Weng,
Hanyi Fang, and Peter Szolovits. 2021. What disease
does this patient have? a large-scale open domain
question answering dataset from medical exams. Ap-
plied Sciences, 11(14):6421.
Qiao Jin, Bhuwan Dhingra, Zhengping Liu, William W
Cohen, and Xinghua Lu. 2019. Pubmedqa: A dataset
for biomedical research question answering. arXiv
preprint arXiv:1909.06146.
Nikhil Kandpal, Haikang Deng, Adam Roberts, Eric
Wallace, and Colin Raffel. 2023. Large language
models struggle to learn long-tail knowledge. In In-
ternational Conference on Machine Learning, pages
15696–15707. PMLR.
Jiho Kim, Yeonsu Kwon, Yohan Jo, and Edward Choi.
2023. Kg-gpt: A general framework for reasoning
on knowledge graphs using large language models.
In Findings of the Association for Computational
Linguistics: EMNLP 2023, pages 9410–9421.
Yanis Labrak, Adrien Bazoge, Emmanuel Morin, Pierre-Antoine Gourraud, Mickael Rouvier, and Richard Dufour. 2024. Biomistral: A collection of open-source pretrained large language models for medical domains. arXiv preprint arXiv:2402.10373.
Dawei Li, Yanran Li, Jiayi Zhang, Ke Li, Chen Wei,
Jianwei Cui, and Bin Wang. 2022. C3kg: A chi-
nese commonsense conversation knowledge graph.
In Findings of the Association for Computational
Linguistics: ACL 2022, pages 1369–1383.
Dawei Li, Yaxuan Li, Dheeraj Mekala, Shuyao Li, Xueqi Wang, William Hogan, Jingbo Shang, et al. 2023a. Dail: Data augmentation for in-context learning via self-paraphrase. arXiv preprint arXiv:2311.03319.
Dawei Li, Zhen Tan, Tianlong Chen, and Huan Liu.
2024. Contextualization distillation from large lan-
guage model for knowledge graph completion. arXiv
preprint arXiv:2402.01729.
Dawei Li, Hengyuan Zhang, Yanran Li, and Shiping
Yang. 2023b. Multi-level contrastive learning for
script-based character understanding. In Proceedings
of the 2023 Conference on Empirical Methods in
Natural Language Processing, pages 5995–6013.
Rumeng Li, Xun Wang, and Hong Yu. 2023c. Two
directions for clinical data generation with large lan-
guage models: Data-to-label and label-to-data. In
Proceedings of the 2023 Conference on Empirical
Methods in Natural Language Processing, pages
7129–7143.
Yucheng Li, Bo Dong, Frank Guerin, and Chenghua Lin. 2023d. Compressing context to enhance inference efficiency of large language models. In Proceedings
of the 2023 Conference on Empirical Methods in
Natural Language Processing, pages 6342–6353.
Alejandro Lozano, Scott L Fleming, Chia-Chun Chiang,
and Nigam Shah. 2023. Clinfo. ai: An open-source
retrieval-augmented large language model system for
answering medical questions using scientific litera-
ture. In PACIFIC SYMPOSIUM ON BIOCOMPUT-
ING 2024, pages 8–23. World Scientific.
Xindi Luo, Zequn Sun, Jing Zhao, Zhe Zhao, and Wei
Hu. 2024. Knowla: Enhancing parameter-efficient
finetuning with knowledgeable adaptation. arXiv
preprint arXiv:2403.14950.
Yubo Ma, Yixin Cao, Yong Hong, and Aixin Sun. 2023.
Large language model is not a good few-shot informa-
tion extractor, but a good reranker for hard samples!
In Findings of the Association for Computational
Linguistics: EMNLP 2023, pages 10572–10601.
Chengsheng Mao, Jie Xu, Luke Rasmussen, Yikuan
Li, Prakash Adekkanattu, Jennifer Pacheco, Borna
Bonakdarpour, Robert Vassar, Li Shen, Guoqian
Jiang, Fei Wang, Jyotishman Pathak, and Yuan Luo.
2023. Ad-bert: Using pre-trained language model to
predict the progression from mild cognitive impair-
ment to alzheimer’s disease. Journal of Biomedical
Informatics, 144:104442.
C L Masters, G Simms, N A Weinman, G Multhaup,
B L McDonald, and K Beyreuther. 1985. Amyloid
plaque core protein in alzheimer disease and down
syndrome. Proceedings of the National Academy of
Sciences, 82(12):4245–4249.
George A Miller. 1995. Wordnet: a lexical database for
english. Communications of the ACM, 38(11):39–41.
Yi Nian, Xinyue Hu, Rui Zhang, Jingna Feng, Jingcheng
Du, Fang Li, Yong Chen, and Cui Tao. 2022. Mining
on alzheimer’s diseases related knowledge graph to
identity potential ad-related semantic triples for drug
repurposing. BMC Bioinformatics, 23.
OpenAI. 2022. Introducing chatgpt.
OpenAI. 2024. New embedding models and api up-
dates.
Oded Ovadia, Menachem Brief, Moshik Mishaeli, and
Oren Elisha. 2023. Fine-tuning or retrieval? com-
paring knowledge injection in llms. arXiv preprint
arXiv:2312.05934.
Ankit Pal, Logesh Kumar Umapathi, and Malaikannan
Sankarasubbu. 2022. Medmcqa: A large-scale multi-
subject multi-choice dataset for medical domain ques-
tion answering. In Proceedings of the Conference
on Health, Inference, and Learning, volume 174 of
Proceedings of Machine Learning Research, pages
248–260. PMLR.
Liangming Pan, Michael Saxon, Wenda Xu, Deepak
Nathani, Xinyi Wang, and William Yang Wang. 2023.
Automatically correcting large language models: Sur-
veying the landscape of diverse self-correction strate-
gies. arXiv preprint arXiv:2308.03188.
Shirui Pan, Linhao Luo, Yufei Wang, Chen Chen, Ji-
apu Wang, and Xindong Wu. 2024. Unifying large
language models and knowledge graphs: A roadmap.
IEEE Transactions on Knowledge and Data Engi-
neering.
Anselmo Peñas, Eduard H. Hovy, Pamela Forner, Ál-
varo Rodrigo, Richard F. E. Sutcliffe, and Roser
Morante. 2013. Qa4mre 2011-2013: Overview of
question answering for machine reading evaluation.
In International Conference of the Cross-Language
Evaluation Forum for European Languages, pages
303–320. Springer.
Yiyuan Pu, Daniel Beck, and Karin Verspoor. 2023.
Graph embedding-based link prediction for literature-
based discovery in alzheimer’s disease. Journal of
Biomedical Informatics, 145:104464.
Julio C Rangel, Tarcisio Mendes de Farias, Ana Claudia
Sima, and Norio Kobayashi. 2024. Sparql generation:
an analysis on fine-tuning openllama for question
answering over a life science knowledge graph. arXiv
preprint arXiv:2402.04627.
Nils Reimers and Iryna Gurevych. 2019. Sentence-bert:
Sentence embeddings using siamese bert-networks.
In Proceedings of the 2019 Conference on Empirical
Methods in Natural Language Processing and the 9th
International Joint Conference on Natural Language
Processing (EMNLP-IJCNLP), pages 3982–3992.
Alzheimer's Association Report. 2023. 2023 Alzheimer's disease facts and figures. Alzheimer's & Dementia, 19(4):1598–1695.
Joseph Romano, Van Truong, Rachit Kumar, Mythr-
eye Venkatesan, Britney Graham, Yun Hao, Nick
Matsumoto, Xi Li, Zhiping Wang, Marylyn Ritchie,
Li Shen, and Jason Moore. 2023. The alzheimer’s
knowledge base – a knowledge graph for therapeutic
discovery in alzheimer’s disease research (preprint).
Journal of Medical Internet Research.
Chang Shu, Baian Chen, Fangyu Liu, Zihao Fu, Ehsan Shareghi, and Nigel Collier. 2023. Visual med-alpaca: A parameter-efficient biomedical llm with visual capabilities.
Robyn Speer, Joshua Chin, and Catherine Havasi. 2017.
Conceptnet 5.5: An open multilingual graph of gen-
eral knowledge. In Proceedings of the AAAI confer-
ence on artificial intelligence, volume 31.
Jiashuo Sun, Chengjin Xu, Lumingyuan Tang, Saizhuo
Wang, Chen Lin, Yeyun Gong, Heung-Yeung Shum,
and Jian Guo. 2023a. Think-on-graph: Deep and
responsible reasoning of large language model with
knowledge graph. arXiv preprint arXiv:2307.07697.
Weiwei Sun, Lingyong Yan, Xinyu Ma, Shuaiqiang
Wang, Pengjie Ren, Zhumin Chen, Dawei Yin, and
Zhaochun Ren. 2023b. Is chatgpt good at search?
investigating large language models as re-ranking
agents. In Proceedings of the 2023 Conference on
Empirical Methods in Natural Language Processing,
pages 14918–14937.
Zhen Tan, Alimohammad Beigi, Song Wang, Ruocheng
Guo, Amrita Bhattacharjee, Bohan Jiang, Mansooreh
Karami, Jundong Li, Lu Cheng, and Huan Liu. 2024.
Large language models for data annotation: A survey.
arXiv preprint arXiv:2402.13446.
Jiabin Tang, Yuhao Yang, Wei Wei, Lei Shi, Lixin Su,
Suqi Cheng, Dawei Yin, and Chao Huang. 2023.
Graphgpt: Graph instruction tuning for large lan-
guage models. arXiv preprint arXiv:2310.13023.
Yongqi Tong, Dawei Li, Sizhe Wang, Yujia Wang, Fei
Teng, and Jingbo Shang. 2024. Can llms learn from
previous mistakes? investigating llms’ errors to boost
for reasoning. arXiv preprint arXiv:2403.20046.
Yongqi Tong, Yifan Wang, Dawei Li, Sizhe Wang,
Zi Lin, Simeng Han, and Jingbo Shang. 2023. Elimi-
nating reasoning via inferring with planning: A new
framework to guide llms’ non-linear thinking. arXiv
preprint arXiv:2310.12342.
Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier Martinet, Marie-Anne Lachaux, Timothee Lacroix, Baptiste Rozière, Naman Goyal, Eric Hambro, Faisal Azhar, et al. 2023. Llama: Open and efficient foundation language models. arXiv preprint arXiv:2302.13971.
Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Nikolay Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti Bhosale, et al. 2023. Llama 2: Open foundation and fine-tuned chat models. arXiv preprint arXiv:2307.09288.
Denny Vrandečić and Markus Krötzsch. 2014. Wikidata: A free collaborative knowledgebase. Communications of the ACM, 57(10):78–85.
Somin Wadhwa, Silvio Amir, and Byron C Wallace.
2023. Revisiting relation extraction in the era of
large language models. In Proceedings of the 61st
Annual Meeting of the Association for Computational
Linguistics (Volume 1: Long Papers), pages 15566–
15589.
Zhen Wan, Fei Cheng, Zhuoyuan Mao, Qianying Liu,
Haiyue Song, Jiwei Li, and Sadao Kurohashi. 2023.
Gpt-re: In-context learning for relation extraction
using large language models. In Proceedings of the
2023 Conference on Empirical Methods in Natural
Language Processing, pages 3534–3547.
Xuezhi Wang, Jason Wei, Dale Schuurmans, Quoc V Le,
Ed H Chi, Sharan Narang, Aakanksha Chowdhery,
and Denny Zhou. 2022. Self-consistency improves
chain of thought reasoning in language models. In
The Eleventh International Conference on Learning
Representations.
Chih-Hsuan Wei, Hung-Yu Kao, and Zhiyong Lu. 2013.
Pubtator: a web-based text mining tool for assisting
biocuration. Nucleic acids research, 41(W1):W518–
W522.
Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten
Bosma, Fei Xia, Ed Chi, Quoc V Le, Denny Zhou,
et al. 2022. Chain-of-thought prompting elicits rea-
soning in large language models. Advances in Neural
Information Processing Systems, 35:24824–24837.
Yanbin Wei, Qiushi Huang, Yu Zhang, and James Kwok.
2023. Kicgpt: Large language model with knowl-
edge in context for knowledge graph completion. In
Findings of the Association for Computational Lin-
guistics: EMNLP 2023, pages 8667–8683.
Yilin Wen, Zifeng Wang, and Jimeng Sun. 2023.
Mindmap: Knowledge graph prompting sparks graph
of thoughts in large language models. arXiv preprint
arXiv:2308.09729.
Siye Wu, Jian Xie, Jiangjie Chen, Tinghui Zhu, Kai
Zhang, and Yanghua Xiao. 2024. How easily do
irrelevant inputs skew the responses of large language
models? arXiv preprint arXiv:2404.03302.
Chao Yan, Monika Grabowska, Alyson Dickson, Bingshan Li, Zhexing Wen, Dan Roden, C. Stein, Peter Embí, Josh Peterson, Qiping Feng, Bradley Malin, and Wei-Qi Wei. 2024. Leveraging generative AI to prioritize drug repurposing candidates for Alzheimer's disease with real-world clinical validation. npj Digital Medicine, 7.
Shunyu Yao, Dian Yu, Jeffrey Zhao, Izhak Shafran, Thomas L Griffiths, Yuan Cao, and Karthik Narasimhan. 2023. Tree of thoughts: Deliberate problem solving with large language models. arXiv preprint arXiv:2305.10601.
Wenhao Yu, Hongming Zhang, Xiaoman Pan, Kaixin
Ma, Hongwei Wang, and Dong Yu. 2023. Chain-of-
note: Enhancing robustness in retrieval-augmented
language models. arXiv preprint arXiv:2311.09210.
Weizhe Yuan, Richard Yuanzhe Pang, Kyunghyun Cho, Sainbayar Sukhbaatar, Jing Xu, and Jason Weston. 2024. Self-rewarding language models. arXiv preprint arXiv:2401.10020.
Li Yunxiang, Li Zihan, Zhang Kai, Dan Ruilong, and
Zhang You. 2023. Chatdoctor: A medical chat model
fine-tuned on llama model using medical domain
knowledge. arXiv preprint arXiv:2303.14070.
Cyril Zakka, Rohan Shad, Akash Chaurasia, Alex R
Dalal, Jennifer L Kim, Michael Moor, Robyn Fong,
Curran Phillips, Kevin Alexander, Euan Ashley,
et al. 2024.
Almanac—retrieval-augmented lan-
guage models for clinical medicine.
NEJM AI,
1(2):AIoa2300068.
Hengyuan Zhang, Yanru Wu, Dawei Li, Zacc Yang, Rui
Zhao, Yong Jiang, and Fei Tan. 2024. Balancing spe-
ciality and versatility: a coarse to fine framework for
supervised fine-tuning large language model. arXiv
preprint arXiv:2404.10306.
Hongming Zhang, Xin Liu, Haojie Pan, Yangqiu Song,
and Cane Wing-Ki Leung. 2020. Aser: A large-scale
eventuality knowledge graph. In Proceedings of the
web conference 2020, pages 201–211.
Kai Zhang, Jun Yu, Zhiling Yan, Yixin Liu, Eashan Ad-
hikarla, Sunyang Fu, Xun Chen, Chen Chen, Yuyin
Zhou, Xiang Li, et al. 2023a. Biomedgpt: A uni-
fied and generalist biomedical generative pre-trained
transformer for vision, language, and multimodal
tasks. arXiv preprint arXiv:2305.17100.
Susan Zhang, Stephen Roller, Naman Goyal, Mikel
Artetxe, Moya Chen, Shuohui Chen, Christopher De-
wan, Mona Diab, Xian Li, Xi Victoria Lin, et al. 2022.
Opt: Open pre-trained transformer language models.
arXiv preprint arXiv:2205.01068.
Yichi Zhang, Zhuo Chen, Wen Zhang, and Huajun Chen.
2023b. Making large language models perform bet-
ter in knowledge graph completion. arXiv preprint
arXiv:2310.06671.
A Details of LLMs for KG

Tables 7 and 8 present examples of our two KG construction methods, respectively. For both methods, we adopt a select-or-generate prompt to instruct the LLM either to choose a relation from Hetionet (Himmelstein et al., 2017), a well-built general medical KG, or to generate a new one to describe the relationship between two entities. In the RE construction method, we also conduct a type matching (Table 9) for each entity, from the type names of PubTator to those of Hetionet, and ask the LLM to choose from the relation set that corresponds to the two entities' types (Table 10).
Input
Read the following abstract, extract the relationships between each entity. You can
choose the relation from: (covaries, interacts, regulates, resembles, downregulates,
upregulates, associates, binds, treats, palliates), or generate a new predicate to
describe the relationship between the two entities. Output all the extracted triples in
the format of "head | relation | tail". For example: "Alzheimer’s disease | associates
| memory deficits" Abstract: Thiamine pyrophosphate (TPP) and the activities of
thiamine-dependent enzymes are reduced in Alzheimer’s disease (AD) patients.
In this study, we analyzed the relationship between thiamine deficiency (TD) and
amyloid precursor protein (APP) processing in both cellular and animal models of
TD. In SH-SY5Y neuroblastoma cells overexpressing APP, TD promoted maturation
of beta-site APP cleaving enzyme 1 (BACE1) and increased beta-secretase activity
which resulted in elevated levels of beta-amyloid (Abeta) as well as beta-secretase
cleaved C-terminal fragment (beta-CTF). An inhibitor of beta-secretase efficiently
reduced TD-induced up-regulation of Abeta and beta-CTF. Importantly, thiamine
supplementation reversed the TD-induced alterations. Furthermore, TD treatment
caused a significant accumulation of reactive oxygen species (ROS); antioxidants
suppressed ROS production and maturation of BACE1, as well as TD-induced
Abeta accumulation. On the other hand, exogenous Abeta(1-40) enhanced TD-
induced production of ROS. A study on mice indicated that TD also caused Abeta
accumulation in the brain, which was reversed by thiamine supplementation. Taken
together, our study suggests that TD could enhance Abeta generation by promoting
beta-secretase activity, and the accumulation of Abeta subsequently exacerbated
TD-induced oxidative stress. Entity: Thiamine (thiamine), deficiency increases beta-secretase, TPP (Thiamine pyrophosphate), Alzheimer’s disease (AD), patients,
amyloid precursor protein (beta-amyloid (Abeta)), neuroblastoma, BACE1 (beta-site
APP cleaving enzyme 1), Abeta, ROS (reactive oxygen species), Abeta accumulation,
mice. Output:
Output
Thiamine deficiency | downregulates | TPP, Thiamine deficiency | regulates |
Alzheimer’s disease, Alzheimer’s disease | involves | beta-secretase, Amyloid precur-
sor protein | produces | beta-amyloid, Amyloid precursor protein | in | neuroblastoma,
BACE1 | is | beta-secretase, BACE1 | regulates | beta-amyloid, Abeta | is | beta-
amyloid, Abeta | in | mice, Abeta | causes | reactive oxygen species, Abeta | regulates
| ROS
Table 7: An example of extracting knowledge triples using the generative construction method.
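Because the prompt in Table 7 asks for comma-separated "head | relation | tail" items, the model's free-text output can be parsed back into triples with a few lines. A minimal sketch (ours, not the paper's pipeline; splitting on commas is fragile if entity names themselves contain commas):

```python
def parse_triples(llm_output: str):
    """Parse comma-separated 'head | relation | tail' items into (h, r, t) tuples."""
    triples = []
    for item in llm_output.split(","):
        parts = [p.strip() for p in item.split("|")]
        if len(parts) == 3 and all(parts):  # skip malformed fragments
            triples.append(tuple(parts))
    return triples

out = "Thiamine deficiency | downregulates | TPP, BACE1 | is | beta-secretase"
triples = parse_triples(out)
# -> [('Thiamine deficiency', 'downregulates', 'TPP'),
#     ('BACE1', 'is', 'beta-secretase')]
```

A production pipeline would also normalize entity mentions (e.g. merging "Abeta" and "beta-amyloid") before inserting the triples into the KG.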
Input
Read the following abstract and answer the following question. Abstract: Thiamine
pyrophosphate (TPP) and the activities of thiamine-dependent enzymes are reduced
in Alzheimer’s disease (AD) patients. In this study, we analyzed the relationship
between thiamine deficiency (TD) and amyloid precursor protein (APP) process-
ing in both cellular and animal models of TD. In SH-SY5Y neuroblastoma cells
overexpressing APP, TD promoted maturation of beta-site APP cleaving enzyme 1
(BACE1) and increased beta-secretase activity which resulted in elevated levels of
beta-amyloid (Abeta) as well as beta-secretase cleaved C-terminal fragment (beta-
CTF). An inhibitor of beta-secretase efficiently reduced TD-induced up-regulation
of Abeta and beta-CTF. Importantly, thiamine supplementation reversed the TD-
induced alterations. Furthermore, TD treatment caused a significant accumulation
of reactive oxygen species (ROS); antioxidants suppressed ROS production and
maturation of BACE1, as well as TD-induced Abeta accumulation. On the other
hand, exogenous Abeta(1-40) enhanced TD-induced production of ROS. A study
on mice indicated that TD also caused Abeta accumulation in the brain, which
was reversed by thiamine supplementation. Taken together, our study suggests that
TD could enhance Abeta generation by promoting beta-secretase activity, and the
accumulation of Abeta subsequently exacerbated TD-induced oxidative stress. Entity:
Thiamine (thiamine), deficiency increases beta-secretase, TPP (Thiamine pyrophos-
phate), Alzheimer’s disease (AD), patients, amyloid precursor protein (beta-amyloid
(Abeta)), neuroblastoma, BACE1 (beta-site APP cleaving enzyme 1), Abeta, ROS
(reactive oxygen species), Abeta accumulation, mice. Question: predict the relation-
ship between Disease entity "sclerosis" and Disease entity "multiple sclerosis", first
choose from the following options: A. resembles B. no-relation C. others, please
specify by generating a short predicate in 5 words. Answer: Let’s think step by step:
Output
1. Sclerosis is a disease of the central nervous system. 2. Multiple sclerosis is
the most common form of sclerosis. 3. So the relationship between sclerosis and
multiple sclerosis should be "sclerosis is a kind of multiple sclerosis". So the answer
is: A. resembles. So the answer is: A. resembles
Table 8: An example to extract the relationship between “sclerosis” and “multiple sclerosis” using RE construction
method.
PubTator Type
Hetionet Type
Gene
genes
Chemical
compounds
Disease
diseases
Table 9: Entity type match from PubTator to Hetionet.
Type-Type
Relations
genes-genes
["covaries", "interacts", "regulates"]
disease-disease
["resembles"]
compounds-compounds
["resembles"]
genes-diseases
["downregulates","associates","upregulates"]
genes-compounds
["binds", "upregulates", "downregulates"]
compounds-diseases
["treats", "palliates"]
Table 10: Type-type to relation match in Hetionet.
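The type matching in Tables 9 and 10 amounts to two small lookup tables: a PubTator-to-Hetionet type map, and a map from unordered type pairs to the candidate relation set offered to the LLM. A minimal sketch (function and variable names are ours; the data is copied from the tables):

```python
# Table 9: map PubTator entity types to Hetionet types.
PUBTATOR_TO_HETIONET = {"Gene": "genes", "Chemical": "compounds", "Disease": "diseases"}

# Table 10: restrict the relation set by the (unordered) pair of Hetionet types.
# frozenset collapses same-type pairs, e.g. genes-genes -> frozenset({"genes"}).
TYPE_PAIR_RELATIONS = {
    frozenset(["genes"]): ["covaries", "interacts", "regulates"],
    frozenset(["diseases"]): ["resembles"],
    frozenset(["compounds"]): ["resembles"],
    frozenset(["genes", "diseases"]): ["downregulates", "associates", "upregulates"],
    frozenset(["genes", "compounds"]): ["binds", "upregulates", "downregulates"],
    frozenset(["compounds", "diseases"]): ["treats", "palliates"],
}

def candidate_relations(pubtator_type_a, pubtator_type_b):
    """Hetionet relations the LLM may select from for two PubTator-typed entities."""
    pair = frozenset([PUBTATOR_TO_HETIONET[pubtator_type_a],
                      PUBTATOR_TO_HETIONET[pubtator_type_b]])
    return TYPE_PAIR_RELATIONS[pair]
```

The returned list is what gets interpolated into the select-or-generate prompt as the "choose from" options, with generation as the fallback when none apply.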
B Details of KG for LLMs

In this section, we provide detailed input and output for adopting a KG to augment LLMs, including path-based and neighbor-based sub-graph sampling results (Table 11), self-aware knowledge retrieval (Table 12), describing sub-graphs with LLMs (Table 13), and inference with sampled knowledge (Table 14). The question we showcase here is "The area of the brain resistant to Neurofibrillary tangles of Alzheimer's disease is: A. Visual association areas B. Entorhinal coex C. Temporal lobe D. Lateral geniculate body", which is the same as the one used in Section 5.4.
Path-based Sub-graph
neurofibrillary tangles->FORM BY->microtubule-associated protein tau->BINDS->
(18)F-THK-5117->ADMINISTERED TO->rats->has->Alzheimer’s disease ->
Alzheimer’s disease ->affects->human->has->AD->DISEASE OF->Brain
entorhinal cortex->is a part of->brain->ASSOCIATES->
mouse with Alzheimer’s disease->brain region->temporal lobe
Neighbor-based Sub-graph
neurofibrillary tangles->FORM BY->microtubule-associated protein tau
Alzheimer’s disease ->causes->neuronal death
Alzheimer’s disease ->associates->cognitive decline
Alzheimer’s disease ->affects->human
Alzheimer’s disease ->has subtype->neurodegenerative diseases
Table 11: An example of path-based and neighbor-based sub-graph for the question.
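Table 11’s two sampling strategies can be sketched over a plain triple store: neighbor-based sampling keeps edges incident to a question entity, while path-based sampling expands outward hop by hop. A minimal illustration under our own assumptions (the toy triples below echo the table but are not the paper’s KG, and the function names are ours):

```python
# KG as a list of (head, relation, tail) triples.
KG = [
    ("neurofibrillary tangles", "FORM BY", "tau"),
    ("Alzheimer's disease", "causes", "neuronal death"),
    ("Alzheimer's disease", "affects", "human"),
    ("entorhinal cortex", "is a part of", "brain"),
]

def neighbor_subgraph(entity, kg):
    """Neighbor-based sampling: all triples touching the entity."""
    return [t for t in kg if entity in (t[0], t[2])]

def path_subgraph(entity, kg, depth=2):
    """Path-based sampling: breadth-first expansion up to `depth` hops."""
    frontier, seen, out = {entity}, {entity}, []
    for _ in range(depth):
        nxt = set()
        for h, r, t in kg:
            if h in frontier and t not in seen:
                out.append((h, r, t))
                nxt.add(t)
        seen |= nxt
        frontier = nxt
    return out
```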
Input
There is a question and some knowledge graph. The knowledge graphs follow entity->relationship
->entity list format.
Graph:
neurofibrillary tangles->FORM BY->microtubule-associated protein tau->BINDS->
(18)F-THK-5117->ADMINISTERED TO->rats->has->Alzheimer’s disease ->Alzheimer’s disease ->
affects->human->has->AD->DISEASE OF->Brain
entorhinal cortex->is a part of->brain->ASSOCIATES->
mouse with Alzheimer’s disease->brain region->temporal lobe
Question:
The area of the brain resistant to Neurofibrillary tangles of Alzheimer’s disease is:
A. Visual association areas
B. Entorhinal cortex
C. Temporal lobe
D. Lateral geniculate body
Please rerank the knowledge graph and output at most 5 important and relevant triples for
solving the given question. Output the reranked knowledge in the following format:
Reranked Triple1: xxx ——>xxx
Reranked Triple2: xxx ——>xxx
Reranked Triple3: xxx ——>xxx
Reranked Triple4: xxx ——>xxx
Reranked Triple5: xxx ——>xxx
Answer:
Output
Reranked Triples1: entorhinal cortex ->is a part of ->brain
Reranked Triples2: entorhinal cortex ->associates ->mouse with Alzheimer’s disease
Reranked Triples3: temporal lobe ->affected by ->Alzheimer’s disease
Table 12: An example of our self-aware knowledge retrieval in path-based sub-graph.
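The self-aware retrieval prompt of Table 12 is plain string templating. A sketch (the template wording follows the table; the function name and argument shapes are our assumptions):

```python
def build_rerank_prompt(triples, question, k=5):
    """Assemble the triple-reranking prompt shown in Table 12."""
    graph = "\n".join("->".join(t) for t in triples)
    fmt = "\n".join(f"Reranked Triple{i}: xxx ——>xxx" for i in range(1, k + 1))
    return (
        "There is a question and some knowledge graph. The knowledge graphs "
        "follow entity->relationship->entity list format.\n"
        f"Graph:\n{graph}\n"
        f"Question:\n{question}\n"
        f"Please rerank the knowledge graph and output at most {k} important "
        "and relevant triples for solving the given question. "
        f"Output the reranked knowledge in the following format:\n{fmt}\nAnswer:"
    )
```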
Input
There are some knowledge graph paths. They follow entity->relationship->entity format.
Reranked Triples1: entorhinal cortex ->is a part of ->brain
Reranked Triples2: entorhinal cortex ->associates ->mouse with Alzheimer’s disease
Reranked Triples3: temporal lobe ->affected by ->Alzheimer’s disease
Use the knowledge graph information. Try to convert them to natural language, respectively.
Use single quotation marks for entity name and relation name.
And name them as Path-based Evidence 1, Path-based Evidence 2,...
Output:
Output
Path-based Evidence 1: ’Entorhinal cortex’ is a part of ’brain’.
Path-based Evidence 2: ’Entorhinal cortex’ associates ’mouse’ with ’Alzheimer’s disease’.
Path-based Evidence 3: ’Temporal lobe’ is affected by ’Alzheimer’s disease’.
Table 13: An example to describe the self-retrieved path-based sub-graph with the LLM.
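The paper delegates the triple-to-sentence conversion of Table 13 to the LLM, but the mapping itself is mechanical; a deterministic sketch of the same rendering (function name ours) could look like:

```python
def triple_to_evidence(i, head, relation, tail, kind="Path-based"):
    """Render one triple as a numbered evidence sentence, quoting the
    entity and relation names as in Table 13."""
    return f"{kind} Evidence {i}: '{head}' {relation} '{tail}'."
```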
Input
Question: The area of the brain resistant to Neurofibrillary tangles of Alzheimer’s disease is:
A.Visual association areas
B.Entorhinal coex
C.Temporal lobe
D.Lateral geniculate body
You have some medical knowledge information in the following:
###Path-based Evidence 1: ’Entorhinal cortex’ is a part of ’brain’.
Path-based Evidence 2: ’Entorhinal cortex’ associates ’mouse’ with ’Alzheimer’s disease’.
Path-based Evidence 3: ’Temporal lobe’ is affected by ’Alzheimer’s disease’.
Neighbor-based Evidence 1: ’neurofibrillary tangles’ are ’FORMED BY’ ’microtubule-associated protein tau’.
Neighbor-based Evidence 2: ’Alzheimer’s disease’ ’CAUSES’ ’neuronal death’.
Neighbor-based Evidence 3: ’Alzheimer’s disease’ ’ASSOCIATES’ with ’cognitive decline’.
Neighbor-based Evidence 4: ’Alzheimer’s disease’ ’AFFECTS’ ’human’.
Neighbor-based Evidence 5: ’Alzheimer’s disease’ ’HAS SUBTYPE’ ’neurodegenerative diseases’.
Answer: Let’s think step by step:
Output
In Alzheimer’s disease, neurofibrillary tangles are composed of
abnormal aggregations of tau protein within neurons, leading to
neuronal dysfunction and eventual cell death. The lateral geniculate
body is a relay center in the thalamus that processes visual
information from the retina to the visual cortex. Since the lateral
geniculate body is not directly involved in memory and cognitive
function, it is less likely to be affected by neurofibrillary tangles in
Alzheimer’s disease compared to other brain regions such as the
entorhinal cortex and temporal lobe which play crucial roles in memory
and cognitive processing. Therefore, the area of the brain resistant to
neurofibrillary tangles of Alzheimer’s disease is the lateral geniculate
body (option D).
Table 14: An example of inference with selected knowledge
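The inference step of Table 14 concatenates both evidence lists ahead of the question and appends a chain-of-thought cue. A sketch of the prompt assembly (function name ours, wording following the table):

```python
def build_inference_prompt(question, path_evidence, neighbor_evidence):
    """Combine path-based and neighbor-based evidence into the final
    QA prompt of Table 14, ending with a chain-of-thought cue."""
    return (
        f"Question: {question}\n"
        "You have some medical knowledge information in the following:\n"
        "###" + "\n".join(path_evidence) + "\n"
        "###" + "\n".join(neighbor_evidence) + "\n"
        "Answer: Let's think step by step:"
    )
```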
C
ADQA Benchmark
Table 15 shows the detailed prompt of our self-sample method to construct the ADQA benchmark. Table 16
gives the statistics of our ADQA benchmark.
Input
Judge whether the question below is related to Alzheimer’s Disease. Please answer yes or no.
Question: Treatable causes of dementia are ___.
a).AD b).Hypothyroidism c).Multi-infarct dementia d).SDH e).Hydrocephalus
Is the question related to Alzheimer’s Disease? Answer:
Output
Yes
Table 15: An example from MedMCQA to self-sample AD-related QA sample with LLMs.
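The self-sampling step of Table 15 is a yes/no relevance judgment applied to each candidate question. A sketch with a stubbed judge standing in for the LLM call (function names and the callable interface are our assumptions):

```python
def is_ad_related(question, judge):
    """Ask the judge (an LLM in the paper; any callable here) whether a
    question concerns Alzheimer's Disease, as in the Table 15 prompt."""
    prompt = (
        "Judge whether the question below is related to Alzheimer's Disease. "
        "Please answer yes or no.\n"
        f"Question: {question}\n"
        "Is the question related to Alzheimer's Disease? Answer:"
    )
    return judge(prompt).strip().lower().startswith("yes")

def self_sample(questions, judge):
    """Keep only the AD-related questions, as in the ADQA construction."""
    return [q for q in questions if is_ad_related(q, judge)]
```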
Dataset    MedQA   MedMCQA   MMLU   QA4MRE   Total
Number     152     210       49     35       446
Table 16: Statistics of our ADQA benchmark.
D
Further Experiment for RAG
Method                         MedQA   MedMCQA   MMLU   QA4MRE   AVG
Almanac w/ 256 chunk size      50.0    69.0      67.3   62.9     62.3
Almanac w/ top 10 documents    48.7    68.6      65.3   62.9     61.4
Almanac w/ CoT                 50.0    65.7      77.6   65.7     64.7
Clinfo.ai w/ 256 chunk size    48.6    66.7      81.6   65.7     65.7
Clinfo.ai w/ top 5 documents   43.4    68.1      77.6   68.6     64.4
Clinfo.ai w/ CoT               48.7    68.6      79.6   68.6     65.0
Table 17: Further experiment in RAG methods with different hyper-parameter settings.
Databases and ontologies
BioThings Explorer: a query engine for a federated
knowledge graph of biomedical APIs
Jackson Callaghan1,†, Colleen H. Xu1,†, Jiwen Xin1,†, Marco Alvarado Cano1, Anders Riutta2, Eric Zhou1, Rohan Juneja1, Yao Yao1, Madhumita Narayan1, Kristina Hanspers2, Ayushi Agrawal2, Alexander R. Pico2, Chunlei Wu1,*, Andrew I. Su1,*
1Department of Integrative Structural and Computational Biology, The Scripps Research Institute, La Jolla, CA 92037, United States
2Data Science and Biotechnology, Gladstone Institutes, University of California, San Francisco, CA 94158, United States
*Corresponding authors. Department of Integrative Structural and Computational Biology, The Scripps Research Institute, La Jolla, CA 92037, United States.
E-mails: cwu@scripps.edu (C.W.) and asu@scripps.edu (A.I.S.)
†These authors contributed equally to this work.
Associate Editor: Peter N. Robinson
Abstract
Summary: Knowledge graphs are an increasingly common data structure for representing biomedical information. These knowledge graphs can
easily represent heterogeneous types of information, and many algorithms and tools exist for querying and analyzing graphs. Biomedical knowl-
edge graphs have been used in a variety of applications, including drug repurposing, identification of drug targets, prediction of drug side effects,
and clinical decision support. Typically, knowledge graphs are constructed by centralization and integration of data from multiple disparate sour-
ces. Here, we describe BioThings Explorer, an application that can query a virtual, federated knowledge graph derived from the aggregated infor-
mation in a network of biomedical web services. BioThings Explorer leverages semantically precise annotations of the inputs and outputs for
each resource, and automates the chaining of web service calls to execute multi-step graph queries. Because there is no large, centralized
knowledge graph to maintain, BioThings Explorer is distributed as a lightweight application that dynamically retrieves information at query time.
Availability and implementation: More information can be found at https://explorer.biothings.io and code is available at https://github.com/bio
things/biothings_explorer.
1 Introduction
While downloadable files are the most common way to share
biomedical data, application programming interfaces (APIs)
are another popular and powerful mechanism for data dis-
semination. Accessing data through APIs has many comple-
mentary advantages relative to downloading local copies of
data. APIs typically allow users to query for specific subsets
of the data that are of interest. API queries are often highly
indexed, leading to efficient data retrieval. Finally, API access
allows for easy incorporation of the most up-to-date data into
other computational applications or workflows.
While APIs offer many advantages in terms of data acces-
sibility, these advantages do not immediately translate into
efficient data integration. APIs generally follow some com-
mon architectures and protocols [e.g. representational state
transfer (REST), output in JavaScript Object Notation
(JSON)], but alignment at this technical level does not guar-
antee either syntactic or semantic interoperability. For ex-
ample, APIs can use different identifiers for the same gene,
different data structures to represent gene attributes, and
different terms to describe the relationships between bio-
medical entities.
There have been some efforts to define and enforce seman-
tic and syntactic standards to achieve data interoperability.
Examples of this approach include the Beacon API from the
GA4GH consortium (Rambla et al. 2022) and the DAS speci-
fication for sharing annotations of genomic features (Dowell
et al. 2001). These efforts rely on the active participation of
API developers in adopting a community standard for their
API endpoints.
Here, we explore the use of semantically precise API anno-
tations as a complementary approach to achieving API inter-
operability. We divided this work into two components. First,
we created an extension of the widely used OpenAPI annota-
tion standard (http://spec.openapis.org/oas/v3.1.0) to seman-
tically describe the APIs’ inputs and outputs, and a registry to
organize these API annotations. Second, we created an appli-
cation called BioThings Explorer to consume the API meta-
data and to execute multi-hop graph queries that span
multiple APIs. Together, this architecture allows users to
query a large, federated knowledge graph based on an inter-
connected network of biomedical APIs. This federated design
offers a unique approach for creating knowledge graphs that
is complementary to the more common strategy of centraliza-
tion and local data integration.
Received: 18 April 2023; Revised: 18 August 2023; Editorial Decision: 11 September 2023; Accepted: 12 September 2023
© The Author(s) 2023. Published by Oxford University Press.
This is an Open Access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which
permits unrestricted reuse, distribution, and reproduction in any medium, provided the original work is properly cited.
Bioinformatics, 2023, 39(9), btad570
https://doi.org/10.1093/bioinformatics/btad570
Advance Access Publication Date: 14 September 2023
Applications Note
Downloaded from https://academic.oup.com/bioinformatics/article/39/9/btad570/7273783 by guest on 11 January 2026
2 A registry of semantically annotated APIs
The first step in creating a network of interoperable APIs is to
annotate each API in a semantically precise way. We built this
API annotation system on the OpenAPI specification, the de
facto standard for documenting API metadata in a human-
and machine-readable format. OpenAPI describes basic API
metadata (e.g. title, description, version, contact info), as well
as key information on the operation of the API endpoints (e.g.
server URL, endpoint input parameters, endpoint response
schemas).
However, this basic OpenAPI specification does not include
key domain-specific information that is necessary to facilitate
downstream API interoperability. Therefore, we defined an
OpenAPI extension to capture semantically precise annota-
tions of each API endpoint. These annotations include the se-
mantic types and identifier namespaces of biomedical entities
that are both used in querying (inputs) and found in the re-
sponse (outputs), the JSON path to the output identifier val-
ues in the JSON response, and the predicate describing the
relationship
between
the
input
and
output
entities
(Supplementary Fig. S1).
We also chose a strategy to map arbitrary JSON data struc-
tures to an established biological data model. In this effort,
we mapped API output to the Biolink Model (Unni et al.
2022), a community-developed data model that was adopted
and extended by the NCATS Translator consortium (Fecho
et al. 2022). The Biolink model defines the universe of seman-
tic types that can be used in BioThings Explorer, the allowed
identifier systems for each semantic type and the allowed
predicates. We provided a guide to writing these API annota-
tions in our GitHub repository (https://github.com/biothings/
biothings_explorer/blob/main/docs/README-writing-x-bte.
md).
To annotate and catalog APIs with our OpenAPI extension,
we leveraged the SmartAPI registry (https://smart-api.info/)
(Zaveri et al. 2017). We created SmartAPI registrations for 34
APIs with semantic annotations. The selection of these APIs
that are queried by BioThings Explorer is set in a local,
instance-specific configuration file. This collection of API
annotations can be thought of as a “meta-knowledge graph”
(meta-KG), where the nodes represent types of biomedical en-
tities (genes, diseases, drugs) and the edges represent APIs that
describe relationships between two types of biomedical enti-
ties. The SmartAPI meta-KG currently contains 35 nodes and
1817 edges. (The complete meta-KG is shown in Supplementary Table S1 and a partial rendering is shown in
Fig. 1.)
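The meta-KG described above is essentially a multigraph whose nodes are semantic types and whose edges record which APIs serve a given association. A minimal sketch (the API names are illustrative examples from the text, not a dump of the real registry):

```python
from collections import defaultdict

# meta_kg[(subject_type, object_type)] -> set of APIs serving that association
meta_kg = defaultdict(set)
meta_kg[("Disease", "Gene")].update({"CTD API", "Biolink API"})
meta_kg[("Gene", "ChemicalEntity")].add("MyChem.info")

def apis_for(subject_type, object_type):
    """Look up which registered APIs can answer one query-graph edge."""
    return sorted(meta_kg[(subject_type, object_type)])
```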
3 API interoperability using BioThings Explorer
The second step in creating our federated biomedical knowl-
edge graph was to create BioThings Explorer, an engine to autonomously query the SmartAPI meta-KG, query the annotated APIs to retrieve associations between biomedical
entities, and integrate those APIs’ responses. The input to
BioThings Explorer is a query graph and the syntax for
encoding the query graph was defined by the NCATS
Translator consortium (Fecho et al. 2022). The topology of
the query graph and the constraints on its nodes and edges de-
fine the query (Fig. 2).
BioThings Explorer executes the query in three distinct
phases: query-path planning, query-path execution, and inte-
gration and scoring.
3.1 Query-path planning
For every edge in a query graph, BioThings Explorer consults
the SmartAPI registry for APIs that serve those types of associ-
ations (Fig. 2). For example, in Fig. 1, associations between
diseases and genes can be found using APIs from the
Comparative Toxicogenomics Database (Davis et al. 2023)
and the Biolink API from the Monarch Initiative (Mungall
et al. 2017, Unni et al. 2022), while associations between
genes and chemicals can be found using MyChem.Info
(https://mychem.info/) (Lelong et al. 2022). The sequence of
API calls that can satisfy the original query is a “query-path
plan.”
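Query-path planning then amounts to resolving each edge of the query graph against the registry lookup just described. A sketch under our own assumptions (the data shapes are ours, not the TRAPI schema):

```python
def plan_query_path(query_edges, registry):
    """For each (subject_type, predicate, object_type) edge of the query
    graph, list the candidate APIs; the choices per edge together form
    the query-path plan."""
    plan = []
    for subj, pred, obj in query_edges:
        apis = registry.get((subj, obj), [])
        if not apis:
            raise ValueError(f"no API serves {subj} -> {obj}")
        plan.append({"edge": (subj, pred, obj), "apis": apis})
    return plan
```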
3.2 Query-path execution
In this phase, BioThings Explorer programmatically and au-
tonomously executes each query in each query-path plan
based on the semantic annotations for each API identified in
the previous phase. BioThings Explorer calls each API, using
the SmartAPI annotation to construct calls with the correct
syntax and appropriate input identifier, and maps the API
responses to the Biolink Model (Unni et al. 2022). BioThings
Explorer also performs ID-to-object translation, which facili-
tates the chaining of API calls from one step in the query-path
to the next step. This ID translation step is critical when suc-
cessive APIs in the query-path plan use different identifiers to
represent the same biomedical entity (e.g. NCBI Gene ID
versus Ensembl Gene ID). ID translation is currently handled
by the Translator Node Normalizer (https://github.com/NCATSTranslator/Translator-All/wiki/Node-Normalizer). The
output of this phase is a set of edges for each step of the query-
path, which represent the associations between biomedical enti-
ties retrieved from the APIs.
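The ID-translation step between hops can be sketched as a normalization table consulted before each successive API call. The identifiers below are illustrative only (APP gene Ensembl/NCBI IDs), not Node Normalizer output, and the function names are ours:

```python
# Map each known identifier to a canonical form, in the spirit of the
# Translator Node Normalizer.
EQUIVALENT_IDS = {
    "ENSG00000142192": "NCBIGene:351",  # APP gene: Ensembl -> NCBI (illustrative)
    "NCBIGene:351": "NCBIGene:351",
}

def normalize(identifier):
    """Return the canonical identifier, falling back to the input if unknown."""
    return EQUIVALENT_IDS.get(identifier, identifier)

def chain_hop(prev_outputs, call_api):
    """Normalize the previous hop's outputs before feeding them to the next API."""
    return [call_api(normalize(i)) for i in prev_outputs]
```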
3.3 Integration and scoring
In this final phase, these sets of edges from the API queries are
assembled into result sub-graphs, each of which matches the
topology of the query graph. Each result is then scored based
on a variety of factors including number of paths, length of
paths, and semantic similarity between concepts based on the
Normalized Google Distance (Cilibrasi and Vitanyi 2007).
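A toy version of the integration-and-scoring phase, rewarding results reached by more paths and by shorter paths. The weighting below is our own invention for illustration; the paper's scorer also folds in semantic similarity via Normalized Google Distance, which is omitted here:

```python
def score_result(paths):
    """Score one result sub-graph: each supporting path contributes
    inversely to its length, so more and shorter paths score higher."""
    return sum(1.0 / len(path) for path in paths)

def rank_results(results):
    """results: {answer: [path, ...]} -> answers sorted by descending score."""
    return sorted(results, key=lambda a: score_result(results[a]), reverse=True)
```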
4 Deployment and usage
The BioThings Explorer knowledge graph is entirely com-
posed from a federated network of APIs. Because there is no
local assembly and storage of a large knowledge graph,
BioThings Explorer is a very lightweight application that can
be easily deployed on almost any standard personal com-
puter. The ability of every user to create a local instance of
BioThings Explorer removes centralized bottlenecks associ-
ated with large queries and/or heavy usage. The code reposi-
tory that describes the installation process is at https://github.
com/biothings/biothings_explorer. BioThings Explorer currently relies on two external dependencies: the Node Normalizer service and the Biolink Model (Unni et al. 2022).
However, nothing in the BioThings Explorer architecture is
reliant on these specific tools, so these dependencies can be
substituted for alternatives if desired.
For users who prefer not to create a local instance of
BioThings Explorer, we also maintain a community instance
for general use through the NCATS Translator Consortium
(https://explorer.biothings.io/).
5 Discussion
Integration of existing data from multiple disparate sources is
a key step in assessing the state of current knowledge. There
are many existing efforts to create biomedical knowledge
graphs by integrating locally downloaded data and standard-
izing it using a common data model (Himmelstein et al. 2017;
Fecho et al. 2021; Mayers et al. 2022; Wood et al. 2022;
Morris et al. 2023). These efforts result in centralized knowl-
edge graphs of substantial size, often with millions of nodes
and tens of millions of edges.
BioThings Explorer offers a unique strategy for data inte-
gration, focusing on creating a federated knowledge graph by
semantically annotating APIs. Rather than bringing all data
into a massive, centralized graph database, this federated de-
sign instead allows knowledge to remain behind each resour-
ce’s API. Data are retrieved at query time by dynamically
executing API calls and semantically parsing the results. This
architecture functionally separates data dissemination (through API creation) from data modeling and data integration (through semantic annotations).
This approach has several advantages. First, by moving the
requirements for interoperability from implementation in
code to semantic API annotation, we significantly lower the
barrier to participation in our API ecosystem. Second, by sep-
arating these roles into distinct layers, we promote the overall
modularity of our system. These components can develop and
evolve in parallel, and these two roles can even be undertaken
by separate teams (e.g. one team semantically annotates an
API that was created by another team). Third, this design
facilitates an iterative approach to API annotation.
Developers and API annotators can first provide a minimal
set of API metadata, which can later be extended based on fu-
ture needs and use cases.
The federated design of BioThings Explorer also has some notable limitations. First, our OpenAPI extensions in SmartAPI to semantically annotate APIs only work on APIs that follow the REST protocol and provide output in JSON format. Second, because the entire federated KG is never instantiated in a single place, reasoning and scoring methods that rely on having the entire knowledge graph in memory cannot be used with BioThings Explorer.
Figure 1. A visualization of the meta-KG for BioThings Explorer. The nodes in this graph are the semantic types of biomedical entities that BioThings Explorer can retrieve associations between (limited to the top eight most common semantic types). The edges between nodes show what associations between biomedical entities exist in the semantic API network that is accessible through BioThings Explorer. The edge label shows the number of APIs that can retrieve those types of associations, which is also represented by the edge width.
Figure 2. Deconstruction of a query in BioThings Explorer. (A) A free-text representation of a query that can be answered by BioThings Explorer. (B) The graph representation of the same query. The exact syntax of this graph query is specified in the Translator Reasoner API standard described in Fecho et al. (2022) and shown in Supplementary Fig. S2. (C) The deconstruction of the graph query into multiple API calls by consulting the meta-KG in the SmartAPI registry.
In sum, we believe that knowledge graphs enable many ex-
citing use cases in biomedical research (Nicholson and Greene
2020), and that BioThings Explorer is complementary to
existing approaches for assembling knowledge graphs, offer-
ing powerful and unique capabilities for both scientific data
analysts and tool developers.
Supplementary data
Supplementary data are available at Bioinformatics online.
Conflict of interest
None declared.
Funding
Support for this work was provided by the National Center
for Advancing Translational Sciences, National Institutes of
Health, through the Biomedical Data Translator program,
awards OT2TR003427 and OT2TR003445.
Data availability
BioThings Explorer is implemented as a NodeJS application.
The primary repository for the BioThings Explorer project is
at https://github.com/biothings/biothings_explorer, which in
turn links to and incorporates other repositories as sub-
modules. All code is released under the Apache 2.0 open-
source software license.
References
Cilibrasi RL, Vitanyi PMB. The google similarity distance. IEEE Trans
Knowl Data Eng 2007;19:370–83.
Davis AP, Wiegers TC, Johnson RJ et al. Comparative toxicogenomics
database (CTD): update 2023. Nucleic Acids Res 2023;51:D1257–62.
Dowell RD, Jokerst RM, Day A et al. The distributed annotation system.
BMC Bioinformatics 2001;2:7.
Fecho K, Bizon C, Miller F et al. A biomedical knowledge graph system
to propose mechanistic hypotheses for real-world environmental
health observations: cohort study and informatics application. JMIR
Med Inform 2021;9:e26714.
Fecho K, Thessen AE, Baranzini SE et al. Progress toward a universal
biomedical data translator. Clin Transl Sci 2022;15:1838–47.
Himmelstein DS, Lizee A, Hessler C et al. Systematic integration
of biomedical knowledge prioritizes drugs for repurposing. Elife
2017;6:e26726.
Lelong S, Zhou X, Afrasiabi C et al. BioThings SDK: a toolkit for building high-performance data APIs in biomedical research. Bioinformatics 2022;38:2077–9.
Mayers M, Tu R, Steinecke D et al. Design and application of a knowl-
edge network for automatic prioritization of drug mechanisms.
Bioinformatics 2022;38:2880–91.
Morris JH, Soman K, Akbas RE et al. The scalable precision medicine
open knowledge engine (SPOKE): a massive knowledge graph of bio-
medical information. Bioinformatics 2023;39(2):btad080.
Mungall CJ, McMurry JA, Köhler S et al. The monarch initiative: an integrative data and analytic platform connecting phenotypes to genotypes across species. Nucleic Acids Res 2017;45:D712–22.
Nicholson DN, Greene CS. Constructing knowledge graphs and their
biomedical applications. Comput Struct Biotechnol J 2020;18:
1414–28.
Rambla J, Baudis M, Ariosa R et al. Beacon v2 and Beacon networks: a
‘lingua franca’ for federated data discovery in biomedical genomics,
and beyond. Hum Mutat 2022;43:791–9.
Unni DR, Moxon SAT, Bada M et al. Biolink Model: a universal schema
for knowledge graphs in clinical, biomedical, and translational sci-
ence. Clin Transl Sci 2022;15:1848–55.
Wood EC, Glen AK, Kvarfordt LG et al. RTX-KG2: a system for build-
ing a semantically standardized knowledge graph for translational
biomedicine. BMC Bioinformatics 2022;23:400.
Zaveri A, Dastgheib S, Wu C et al. smartAPI: towards a more intelligent
network of web APIs. In: Blomqvist, E., Maynard, D., Gangemi, A.,
et al. (eds) The Semantic Web. ESWC 2017. Lecture Notes in
Computer Science, 2017, 154–69. https://link.springer.com/chapter/
10.1007/978-3-319-58451-5_11#chapter-info.
GigaScience , 2025, 14 , 1–12
DOI: 10.1093/gigascience/giae082
RESEARCH
Kno wledge gr aph–based thought: a kno wledge
graph–enhanced LLM framework for pan-cancer
question ans w ering
Yichun Feng
1 ,2 ,‡ , Lu Zhou 2 ,‡ , Chao Ma 3 ,‡ , Yikai Zheng 2 , Ruikun He 4 ,5 , * , and Yixue Li 1 ,2 , *
1 Hangzhou Institute for Advanced Study, University of Chinese Academy of Sciences, 310024 Hangzhou, China
2 Guangzhou National Laboratory, Guangzhou International Bio Island, 510005 Guangzhou, China
3 Smartquerier Gene Technology (Shanghai) Co ., Ltd. , 200100 Shanghai, China
4 BYHEALTH Institute of Nutrition & Health, 510663 Guangzhou, China
5 Shanghai Institute of Nutrition and Health, Chinese Academy of Sciences Shanghai, 200030 Shanghai, China
∗Corr espondence addr ess . Ruikun He , BYHEALTH Institute of Nutrition & Health, 510663 Guangzhou, China. E-mail: herk@by-health.com ; Yixue Li, Guangzhou
National Laboratory, No. 9 XingDaoHuanBei Road, Guangzhou International Bio Island, 510005 Guangzhou. E-mail: yxli@sibs.ac.cn .
‡ Contributed equally.
Abstract
Bac kgr ound: In recent years, large language models (LLMs) have shown promise in various domains, notably in biomedical sciences.
Howev er, their r eal-world application is often limited by issues like erroneous outputs and hallucinatory responses.
Results: We developed the knowledge graph–based thought (KGT) framework, an innov ati v e solution that integrates LLMs with knowl-
edge graphs (KGs) to impr ov e their initial r esponses by utilizing v erifia b le information from KGs, thus significantl y r educing factual
err ors in r easoning. The KGT fr amew ork demonstr ates str ong adapta bility and performs well acr oss v arious open-source LLMs. No-
ta b l y, KGT can facilitate the discov er y of new uses for existing drugs through potential drug–cancer associations and can assist in
pr edicting r esistance by anal yzing r elev ant biomarkers and genetic mechanisms. To ev aluate the kno wledge graph question ans wer-
ing task within biomedicine , w e utilize a pan-cancer knowledge graph to develop a pan-cancer question answering benchmark, named
pan-cancer question answering.
Conclusions: The KGT fr amew ork substantiall y impr ov es the accuracy and utility of LLMs in the biomedical field. This study serves
as a proof of concept, demonstrating its exceptional performance in biomedical question answering.
Ke yw ords: pan-cancer knowledge graph, large language model, knowledge graph question answering, prompt engineering
Ke y P oints:
r We introduce a framework combining large language
models (LLMs) with knowledge gr a phs (KGs) to impr ov e
factual accuracy in LLM reasoning.
r Our system is a flexible arc hitectur e that seamlessly in-
tegr ates v arious LLMs.
r Utilizing a pan-cancer knowledge gr a ph, we hav e pr o-
posed the first knowledge gr a ph question answering
benchmark in the field of biomedicine.
r Case studies r e v eal our method enhanced LLMs in ad-
dressing biomedical challenges such as drug reposition-
ing, r esistance r esearc h, individualized tr eatment, and
biomarker analysis.
r The method performs favor abl y in comparison to exist-
ing methods.
Introduction
With the increasing prominence of large language models (LLMs) in the field of artificial intelligence, the advent of influential models such as ChatGPT [1] and Llama [2] has catalyzed the development of a wide array of applications in biomedicine and health care. However, LLMs still face the challenge of factual hallucination, where they generate incorrect statements due to limited inherent knowledge [3]. Factual hallucination presents a significant challenge for the practical use of LLMs, especially in real-world scenarios where factual accuracy is crucial. Consequently, there is a growing focus on addressing factual hallucinations in LLMs within the field of natural language processing (NLP) [4, 5]. LLMs often struggle to capture and access factual knowledge, primarily due to 3 aspects: the inability to comprehend questions due to the lack of contextual information, insufficient knowledge to generate accurate answers, and the incapacity to recall specific facts [6]. Consequently, researchers consider fine-tuning as a solution to these issues. For example, MedAlpaca [7] builds upon medical data to fine-tune Stanford Alpaca for applications related to medical question answering and dialogue. ChatDoctor [8] is designed to simulate a conversation between a doctor and a patient by fine-tuning
Received: May 27, 2024. Revised: August 14, 2024. Accepted: October 2, 2024
© The Author(s) 2025. Published by Oxford University Press GigaScience. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted reuse, distribution, and reproduction in any medium, provided the original work is properly cited.
Downloaded from https://academic.oup.com/gigascience/article/doi/10.1093/gigascience/giae082/7943459 by guest on 11 January 2026
2 | GigaScience, 2025, Vol. 14
LLaMA with medical literature. Additionally, Med-PaLM [9] shows promising performance on the MedQA exam based on clinical corpora and human feedback. Meanwhile, aiming at the Chinese medical domain, LLMs such as BenTsao [10], DoctorGLM [11], and HuatuoGPT [12] are developed on Chinese medical dialogue data. More recently, Zhongjing [13] and ChiMed-GPT [14] adopted full pipeline training from pretraining and SFT to reinforcement learning with human feedback (RLHF) [15]. While fine-tuning can reduce hallucinations in LLMs, it brings about considerable training expenses. Additionally, it poses a critical challenge known as catastrophic forgetting. This issue manifests when a model forgets its previously learned information as a consequence of parameter modifications during the acquisition of new tasks. This forgetfulness results in a deterioration of performance on prior tasks, consequently constraining the model's practical applicability [16, 17].
In addition to fine-tuning, researchers also enhance the output of LLMs through prompt engineering. Prompt engineering focuses on the creation and optimization of prompts to improve the effectiveness of LLMs across various applications and research domains [18]. It can enhance the capabilities of LLMs in a wide range of complex tasks, including question answering, sentiment classification, and commonsense reasoning. Chain-of-thought (CoT) prompts [19] enable complex reasoning capabilities by incorporating intermediate reasoning steps. The Automatic Prompt Engineer (APE) proposes an automatic prompt generation method aimed at enhancing the performance of LLMs [20]. Prompt engineering offers a straightforward approach to harnessing the potential of LLMs without fine-tuning.
On the other hand, knowledge graphs (KGs) are repositories of vast quantities of high-quality structured data, offering the potential to effectively mitigate the issue of factual hallucinations when integrated with LLMs. Hence, employing KGs for question answering can enhance the precision of the responses and furnish a dependable foundation for the factual verification of information produced by LLMs. Knowledge graph question answering (KGQA) has long been a hot research topic. Before the advent of LLMs, certain studies [21–23] typically began by retrieving a subgraph related to the question to reduce the search space, then performed multihop reasoning on this basis. This retrieval-plus-reasoning paradigm has shown its advantages over direct reasoning across the entire KG [24, 25]. Additionally, researchers tackle KGQA by parsing the question into a structured query language (e.g., SPARQL) and using a query engine to obtain accurate answers [26, 27]. UniKGQA [28] introduces a unified fine-tuning framework for retrieval and reasoning, more closely linking these 2 stages. However, traditional KGQA methods usually perform poorly in accurate semantic understanding and high-quality text generation due to the lack of LLMs for retrieval and reasoning. Hence, recent research is increasingly utilizing external KGs to enhance LLMs in addressing KGQA challenges. For instance, StructGPT [29] navigates through knowledge graphs by identifying pathways from an initial seed entity to the target answer entity, while Think-on-Graph (ToG) [30] introduces iterative exploration of the knowledge graph, which can become inefficient with very large KGs. Additionally, Reasoning on Graphs (RoG) [31] necessitates fine-tuning to accurately generate and plan the relation paths. KG-GPT [32] opts for retrieving an entire subgraph from the knowledge graph and then deduces the answer through inference. Although these methods have achieved gratifying results in general areas, as shown in Fig. 1B, when the intermediate entity in the multihop question is unknown, it is impossible to retrieve the appropriate knowledge from the KG.
In this article, we introduce an innovative framework called knowledge graph–based thought (KGT), which integrates LLMs with KGs by employing LLMs for reasoning on the schema of KGs to mitigate factual hallucinations of LLMs, as shown in Fig. 1C. Unlike traditional methods, KGT does not directly retrieve factual information based on the question. Instead, it uses LLMs to infer entity information on the schema of the knowledge graph, generating an optimal subgraph based on key information directly extracted from the question and inferred information from the schema. Subsequently, the optimal subgraph is used to infer the answer to the question through LLMs. KGT requires no fine-tuning, offers seamless integration with multiple LLMs, and is plug-and-play, facilitating easy deployment. It demonstrates generalizability, making it adaptable for use with diverse knowledge graphs. This framework is tailored for wide-ranging applications in numerous biomedical challenges, such as (i) enhancing clinical decision-making for physicians and medical organizations, (ii) delivering medical advice to patients and health care providers, (iii) uncovering crucial biomarkers for early disease detection and tailored therapy, and (iv) exploring novel therapeutic applications for existing medications through insights into their mechanisms, side effects, and the biological processes of associated diseases. Furthermore, we utilize the SmartQuerier Oncology Knowledge Graph (SOKG), a pan-cancer knowledge graph developed by SmartQuerier, to create a benchmark for the KGQA task within biomedicine, named pan-cancer question answering (PcQA). We release this benchmark and its accompanying knowledge graph, which is a subgraph of the SOKG, in [33]. This benchmark is currently the sole question-answering dataset available in the domain of biomedical knowledge graphs.
Materials and Methods
Knowledge graph introduction
In this work, we tackle the problem of logical reasoning over the KG K: E × R, which stores entities (E) and relations (R). Without loss of generality, a KG can be organized as a set of triplets {(e1, r, e2)} ⊆ K, where each relation r ∈ R exists between a pair of entities (e1, e2) ∈ E × E. We define a relational path {(t1, r, t2)} as a sequence of entity types (T) and the relations between them, where (t1, t2) ∈ T × T. In contrast, a relational chain {(e1, r, e2)} refers to a specific set of relational triplets between entities. To further enrich the KG, attribute information is included through pairs (e, attr), where attr represents an attribute associated with an entity e, thereby enhancing the KG's semantic richness and precision by incorporating detailed characteristics of each entity.
Within the specialized realm of pan-cancer research, we use a subgraph of the SOKG that provides detailed oncological information. As depicted in Table 1, SOKG includes a collection of over 3 million entities, which is substantially larger than the entity count in the compared knowledge graphs, SynLethKG [34] and SDKG [35], with 54,012 and 165,062 entities, respectively. Furthermore, SOKG's nearly 6 million unique concept relations exceed those of SynLethKG and SDKG, which have 2,231,921 and 727,318 relations, respectively. Additionally, SOKG includes 98 distinct attribute types, enriching data comprehension and improving the efficiency and precision of queries, a capability not matched by SynLethKG or SDKG, which do not include comparable attributes. For this research, we utilize only a subgraph of the SOKG, which is available as open data [33], while the full knowledge graph remains proprietary.
Downloaded from https://academic.oup.com/gigascience/article/doi/10.1093/gigascience/giae082/7943459 by guest on 11 January 2026 | {
"author": "Feng Yichun, Zhou Lu, Ma Chao, Zheng Yikai, He Ruikun, Li Yixue",
"creationDate": "D:20250106105103Z",
"creationdate": "2025-01-06T10:51:03+00:00",
"creator": "OUP",
"file_path": "/home/donbr/open-biosciences/biosciences-research/data/raw/giae082.pdf",
"format": "PDF 1.4",
"keywords": "",
"modDate": "D:20260111011531+00'00'",
"moddate": "2026-01-11T01:15:31+00:00",
"page": 1,
"producer": "Acrobat Distiller 24.0 (Windows); modified using iTextSharp.LGPLv2.Core 3.7.4.0",
"source": "/home/donbr/open-biosciences/biosciences-research/data/raw/giae082.pdf",
"subject": "DOI: 10.1093/gigascience/giae082 GigaScience, 14, 0, 00-00-2025. Abstract: In recent years, large language models (LLMs) have shown promise in various domains, notably in biomedical sciences. However, their real-world application is often limited by issues like erroneous outputs and hallucinatory responses.We developed the knowledge graph–based thought (KGT) framework, an innovative solution that integrates LLMs with knowledge graphs (KGs) to improve their initial responses by utilizing verifiable information from KGs, thus significantly reducing factual errors in reasoning. The KGT framework demonstrates strong adaptability and performs well across various open-source LLMs. Notably, KGT can facilitate the discovery of new uses for existing drugs through potential drug–cancer associations and can assist in predicting resistance by analyzing relevant biomarkers and genetic mechanisms. To evaluate the knowledge graph question answering task within biomedicine, we utilize a pan-cancer knowledge graph to develop a pan-cancer question answering benchmark, named pan-cancer question answering.The KGT framework substantially improves the accuracy and utility of LLMs in the biomedical field. This study serves as a proof of concept, demonstrating its exceptional performance in biomedical question answering.",
"title": "Knowledge graph–based thought: a knowledge graph–enhanced LLM framework for pan-cancer question answering",
"total_pages": 12,
"trapped": ""
} |
Knowledge graph–based thought | 3
Figure 1: Illustrative examples contrasting our work with previous efforts. (A) LLM-only-based inference, answering questions solely through the inherent knowledge of LLMs. (B) Subgraph-based inference, enhancing LLMs by retrieving knowledge from KGs based on the question. If intermediate entities are not provided in the multihop question, no appropriate knowledge can be retrieved. (C) Graph schema–based inference, enhancing retrieval capabilities by reasoning intermediary entity types on the schema of the KG, using the knowledge of the KG to enhance LLMs' responses.
Table 1. Comparison of SOKG with SynLethKG and SDKG

           Entity types  Relational types  Nodes      Edges       Attributes
SynLethKG  11            24                54,012     2,231,921   0
SDKG       7             12                165,062    727,318     0
SOKG       24            21                3,640,259  10,656,273  98
Task description
In order to tackle a diverse array of challenges in the field of biomedicine, we have designed 4 categories of problems: 1-hop problems, multihop problems, intersection problems, and attribute problems, as illustrated in Table 2. Based on these 4 types of tasks, we leverage the SOKG to establish a benchmark for the KGQA task within biomedicine, named PcQA. Unlike KGQA tasks in general domains, such as MetaQA [36] and FACTKG [37], which typically provide the entity types of intermediate entities, KGQA problems in the biomedical domain often do not have any information about intermediate entities. Instead, the information about intermediate entities must be inferred from the question itself rather than being directly provided, as shown in Supplementary Table S1. Additionally, our PcQA dataset includes attributes such as whether a drug is targeted therapy or whether a mutated gene is oncogenic. This makes our tasks slightly more challenging and better suited to the actual needs of biomedical KGQA.
One-hop problems
One-hop problems involve single-relation chain reasoning, where the objective is to deduce the tail entity T? given a head entity H1 and a relation R1, or to infer the relation R? when a head entity H1 and a tail entity T1 are known, as depicted in equations (1) and (2).

H1 + R1 → T?   (1)

H1 + T1 → R?   (2)
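For concreteness, equations (1) and (2) reduce to filtering triples on the known slots. The following is an illustrative sketch over a toy triple store; the drug and disease names are invented for the example and are not drawn from SOKG:

```python
# Toy (head, relation, tail) triple store; contents are illustrative only.
triples = [
    ("diethylstilbestrol", "treats", "prostate cancer"),
    ("diethylstilbestrol", "treats", "breast cancer"),
    ("imatinib", "treats", "chronic myeloid leukemia"),
]

def one_hop_tails(h, r, kg):
    """Equation (1): H1 + R1 -> T?  (deduce the tail entities)."""
    return {t for (h2, r2, t) in kg if h2 == h and r2 == r}

def one_hop_relations(h, t, kg):
    """Equation (2): H1 + T1 -> R?  (infer the linking relations)."""
    return {r for (h2, r, t2) in kg if h2 == h and t2 == t}

print(one_hop_tails("diethylstilbestrol", "treats", triples))
# {'prostate cancer', 'breast cancer'}
```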
Table 2. Four different reasoning types of task. Each reasoning type may include overlapping questions, so the sum across the 4 different reasoning types of the task may exceed the total number of questions

Reasoning type  Claim example                                                                             Question number
One-hop         What types of cancer can be treated with diethylstilbestrol?                              243
Multihop        What genetic mutations are present in adenoid cystic carcinoma?                           124
Intersection    Which drugs are ALK in basaloid large cell carcinoma of the lung sensitive to?            37
Attribute       What is the maximum age for recruitment of clinical trials for patients with meningioma?  59
Multihop problems
Multihop problems involve multiple-relation chain reasoning that can be broadly categorized into 2 types. The first category involves deducing potential relationships between entities by navigating through indirect relations. By examining the indirect relations (R1, R2) between a head entity H1 and a tail entity T1, it is possible to infer an unknown or potential relation R? linking them directly. This inference process is encapsulated in the following equation:

H1 + T1 → R1 + R2 → R?   (3)

The second category extends the reasoning to include the discovery of entities themselves by following a path from a head entity through intermediate relations to a final tail entity. Starting with a head entity H1, coupled with an indirect relation R1, an intermediary entity M can be inferred. This intermediary entity M is then applied with an indirect relation R2 to deduce the final tail entity T?. This inference process is summarized in the following equation:

H1 + R1 → M + R2 → T?   (4)
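The second category, equation (4), can be sketched as two chained one-hop lookups over a toy triple store; the entity and relation names below are invented for illustration:

```python
# Toy triple store with the same (head, relation, tail) shape as a real KG;
# the subtype and mutation names are illustrative only.
triples = [
    ("adenoid cystic carcinoma", "has_subtype", "ACC of the salivary gland"),
    ("ACC of the salivary gland", "has_mutation", "MYB-NFIB fusion"),
]

def one_hop(h, r, kg):
    return {t for (h2, r2, t) in kg if h2 == h and r2 == r}

def two_hop_tails(h, r1, r2, kg):
    """Equation (4): H1 + R1 -> M, then each M + R2 -> T?."""
    tails = set()
    for m in one_hop(h, r1, kg):      # infer the intermediary entities M
        tails |= one_hop(m, r2, kg)   # apply R2 to every M
    return tails
```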
Intersection problems
Intersection problems refer to taking the intersection of multiple relational chains. Two head entities (H1, H2) lead to the deduction of 2 sets of tail entities (T1, T2) based on different relations (R1, R2). The final tail entity T? is determined by intersecting these 2 sets of tail entities (T1, T2). This inference process is summarized as follows:

H1 + R1 → T1   (5)

H2 + R2 → T2   (6)

T1 ∩ T2 → T?   (7)
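Equations (5)–(7) amount to a set intersection of two one-hop result sets. A minimal sketch, again over an invented toy triple store:

```python
# Toy triples; all names are illustrative, not from SOKG.
kg = [
    ("ALK", "sensitive_to", "crizotinib"),
    ("ALK", "sensitive_to", "alectinib"),
    ("lung carcinoma X", "treated_with", "crizotinib"),
]

def one_hop(h, r, triples):
    return {t for (h2, r2, t) in triples if h2 == h and r2 == r}

t1 = one_hop("ALK", "sensitive_to", kg)               # H1 + R1 -> T1
t2 = one_hop("lung carcinoma X", "treated_with", kg)  # H2 + R2 -> T2
answer = t1 & t2                                      # T1 ∩ T2 -> T?
print(answer)  # {'crizotinib'}
```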
Attribute problems
Attribute problems refer to the attribute information of the entity, where the task involves retrieving the attributes of a known head entity H1 or determining whether the tail entity T1, identified through a known head entity H1 and relation R1, satisfies the attributes specified in the query, as illustrated in equations (8) and (9).

H1 → P_H1   (8)

H1 + R1 → P_T1   (9)
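Both cases can be sketched with a toy attribute store keyed by entity, mirroring the (e, attr) pairs introduced earlier; the entity names and attribute keys below are hypothetical:

```python
# Hypothetical (e, attr) store; names and keys are illustrative, not from SOKG.
attrs = {
    "osimertinib": {"targeted_therapy": True},
    "EGFR L858R": {"oncogenic": True},
}
kg = [("EGFR-mutant NSCLC", "has_mutation", "EGFR L858R")]

def entity_attrs(e):
    """Equation (8): H1 -> P_H1, retrieve the attributes of a known entity."""
    return attrs.get(e, {})

def tails_satisfying(h, r, key, value):
    """Equation (9): H1 + R1 -> P_T1, keep only tails whose attribute matches."""
    tails = {t for (h2, r2, t) in kg if h2 == h and r2 == r}
    return {t for t in tails if attrs.get(t, {}).get(key) == value}
```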
Datasets
In the continuously evolving field of biomedical research, the integration of LLMs with KGs offers a more efficient and effective method for knowledge discovery and utilization, particularly in advancing cancer research. Nonetheless, we note a scarcity of appropriate datasets for evaluating these sophisticated methodologies within this field. To address this, we leverage the SOKG to establish a benchmark for the KGQA task within biomedicine, named PcQA. Our questions were carefully crafted by experts based on the content of the knowledge graph. GPT-4 [38] was then employed to generate Cypher queries, which were used to retrieve answers from the knowledge graph. The generated Cypher queries and corresponding answers underwent an initial review by a biomedical PhD candidate, who manually verified and corrected the dataset against the knowledge graph. Finally, the entire dataset was thoroughly reviewed by 2 biomedical experts to ensure its accuracy and reliability. This multistep process was meticulously designed to uphold the highest standards of quality throughout the dataset creation. This dataset, along with the accompanying knowledge graph, is completely open source [33]. PcQA includes 405 data entries, covering a wide range of applications in the field of pan-cancer research, including genetic predisposition to cancer, medication treatment planning, drug repositioning, identification of potential drug targets, studies on drug resistance, and predictions of cancer progression and metastasis. By deeply exploring cancer-related reasoning and information retrieval challenges, this dataset can inspire researchers and clinicians to gain a deeper understanding of cancer and explore more effective treatment methods.
KGT framework
The overall framework of KGT is laid out in Fig. 2. When users input their question in natural language, the first step is to analyze the question, extracting the main information with the goal of breaking down the question into smaller, more manageable units. This main information is then passed to an LLM, which applies graph reasoning on the schema graph of the knowledge graph, yielding the optimal relational path. Subsequently, a retrieval statement is generated, and a subgraph is constructed within the
Figure 2: Framework of KGT. (A) Question analysis. Decompose the question and extract its key information. (B) Graph schema–based inference. Input the types of the head and tail entities into the graph schema of the knowledge graph, complete the graph reasoning, and obtain the optimal relational path. (C) Subgraph construction. Generate a query statement and retrieve the subgraph. (D) Inference. Complete the final reasoning and output the results in natural language. Note: The symbol "×" represents content that has been filtered out by the LLM, while "✓" denotes the optimal content selected by the LLM.
KG through search. The relational chains and attributes in the subgraph are then fed back into the LLM to finalize the reasoning and generate an output in natural language.
Question analysis
Key information extraction
The user inputs a question text (Q) in natural language, which is initially deconstructed and parsed. An LLM is applied to analyze the question, resulting in the identification of the head entity name (Hn), the tail entity type (Tt), and the attributes of the tail entity (Ta). The prompt for the LLM to extract key information from the question is presented in Supplementary Fig. S1.
Retrieving key information from the KG
Based on Hn, a fixed Cypher format is set to query the head entity type (Ht), facilitating subsequent reasoning.
Graph schema–based inference
Construction of a graph based on KG schema
Based on the entity types (Et) and the relations (R) between them in the SOKG, an undirected graph G is established where Et serve as nodes N and R act as edges P.
Candidate path search
Breadth-first search (BFS) is employed to identify the shortest paths connecting Ht and Tt in the constructed graph G. Initiate the search at Ht, creating a queue to hold nodes encountered along the way. Simultaneously, form a set to track nodes that have been visited, to avoid revisiting them. Insert Ht into the queue. Continue processing as long as the queue remains nonempty, removing a node from the queue at each step. For each of its unvisited neighbors, enqueue the neighbor, mark it as visited, and log the pathway from Ht to this neighbor. Upon arrival at Tt, use the accumulated path data to compile the set of shortest paths (SPs) from Ht to Tt, with each individual path within the set referred to as an SP. The nodes in each SP represent entity types, while the edges denote the relationships between these entity types.
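The search described above can be sketched as a level-by-level BFS that collects every shortest entity-type path. The three-type schema below is a hypothetical stand-in for the SOKG schema graph:

```python
def shortest_paths(schema, h_t, t_t):
    """Level-by-level BFS over an undirected schema graph given as
    {node: set_of_neighbors}; returns every shortest path (as a list of
    entity types) from head type h_t to tail type t_t."""
    if h_t == t_t:
        return [[h_t]]
    paths, frontier, seen = [], [[h_t]], {h_t}
    while frontier and not paths:
        next_frontier, level_nodes = [], set()
        for path in frontier:
            for nb in schema.get(path[-1], ()):
                if nb in seen:          # skip nodes reached at earlier levels
                    continue
                new_path = path + [nb]
                if nb == t_t:
                    paths.append(new_path)   # shortest path found at this level
                else:
                    next_frontier.append(new_path)
                    level_nodes.add(nb)
        seen |= level_nodes              # mark the whole level as visited
        frontier = next_frontier
    return paths

# Hypothetical schema: Disease, Gene, and Drug types, fully interconnected.
schema = {
    "Disease": {"Gene", "Drug"},
    "Gene": {"Disease", "Drug"},
    "Drug": {"Gene", "Disease"},
}
print(shortest_paths(schema, "Disease", "Drug"))  # [['Disease', 'Drug']]
```

Marking visited nodes only after a whole level is processed (rather than per node) keeps all equal-length shortest paths, matching the "set of shortest paths" the text calls for.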
Optimal path selection
By utilizing embedding technology, textual information is mapped into a low-dimensional space, resulting in N-dimensional real-value vectors. The similarity between each SP and Q is calculated based on their respective real-value vectors, with the SP exhibiting the highest similarity being selected as the optimal path (OP).

Similarity(Q, SP) = (Q · SP) / (||Q|| × ||SP||) = Σ_{i=1}^{n} (Q_i × SP_i) / ( √(Σ_{i=1}^{n} Q_i²) × √(Σ_{i=1}^{n} SP_i²) )   (10)

OP = argmax_{SP} Similarity(Q, SP)   (11)
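A minimal sketch of equations (10) and (11), using hand-written 3-dimensional vectors in place of the embeddings a real text-embedding model would produce:

```python
import math

def cosine(q, sp):
    """Equation (10): cosine similarity between two real-valued vectors."""
    dot = sum(a * b for a, b in zip(q, sp))
    norm_q = math.sqrt(sum(a * a for a in q))
    norm_sp = math.sqrt(sum(b * b for b in sp))
    return dot / (norm_q * norm_sp)

def optimal_path(q_vec, sp_vecs):
    """Equation (11): select the SP whose embedding is most similar to Q.
    sp_vecs maps each candidate path (a tuple of entity types) to its vector."""
    return max(sp_vecs, key=lambda sp: cosine(q_vec, sp_vecs[sp]))

# Toy embeddings; paths and vectors are illustrative only.
q = [1.0, 0.0, 1.0]
candidates = {
    ("Disease", "Gene", "Drug"): [0.9, 0.1, 0.8],
    ("Disease", "Trial", "Drug"): [0.1, 0.9, 0.0],
}
print(optimal_path(q, candidates))  # ('Disease', 'Gene', 'Drug')
```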
Subgraph construction
Query statement generation
Input Ht, Hn, Tt, Ta, and OP into an LLM to generate a query statement, such as Cypher. The Text2Cypher prompt is presented in Supplementary Fig. S2.
Subgraph generation
Execute the query statement against the KG to obtain a reasonable subgraph.
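For illustration only, a one-hop retrieval statement of the kind the LLM might emit could look as follows. The label and relationship names are invented, and in KGT the queries are generated by the LLM from the Text2Cypher prompt rather than by a fixed template:

```python
def build_cypher(h_t, h_n, rel, t_t):
    """Assemble a one-hop Cypher MATCH statement (template is hypothetical;
    the paper's queries are produced by an LLM, not this helper)."""
    return (
        f"MATCH (h:{h_t} {{name: '{h_n}'}})-[:{rel}]->(t:{t_t}) "
        f"RETURN t.name"
    )

query = build_cypher("Drug", "diethylstilbestrol", "TREATS", "Disease")
print(query)
# MATCH (h:Drug {name: 'diethylstilbestrol'})-[:TREATS]->(t:Disease) RETURN t.name
```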
Inference
Subgraph inference
Based on the relational chains and attribute data in the subgraph, determine the relevance to the question text. Prune any erroneous information, retaining only the correct relational chains.
Natural language output
The LLM divides the subgraph into multiple relational chains, each of which outputs a sentence in natural language, and then the LLM generates the natural language output. The LLM Inference and Output Prompt is presented in Supplementary Fig. S3.
Results
Evaluation criteria
We use evaluators based on GPT-4 [38], BERTScore [39], and ROUGE [40] to assess the accuracy of the generated answers. As a scoring bot, GPT-4 evaluates and assigns scores based on the similarity in meaning between 2 sentences. The GPT-4–based Evaluation Prompt is presented in Supplementary Fig. S4. BERTScore evaluates semantic similarity using context-sensitive embeddings, offering a comprehensive evaluation of language model outputs. ROUGE, on the other hand, evaluates the longest common subsequence (LCS) between the generated text and the reference text, focusing on sequence-based similarity to assess fluency and the preservation of semantic content.
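For concreteness, the LCS-based ROUGE-L score can be computed as below. This is a minimal sketch of the metric as commonly defined (precision, recall, and F1 over the LCS of token sequences), not the exact evaluation code used in the study.

```python
# Sketch of ROUGE-L: precision/recall/F1 derived from the longest
# common subsequence (LCS) between generated and reference tokens.

def lcs_len(a, b):
    """Length of the longest common subsequence of two token lists."""
    dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i, x in enumerate(a, 1):
        for j, y in enumerate(b, 1):
            dp[i][j] = dp[i - 1][j - 1] + 1 if x == y else max(dp[i - 1][j], dp[i][j - 1])
    return dp[-1][-1]

def rouge_l(generated, reference):
    """Whitespace-tokenized ROUGE-L (precision, recall, F1)."""
    g, r = generated.split(), reference.split()
    lcs = lcs_len(g, r)
    prec, rec = lcs / len(g), lcs / len(r)
    f1 = 0.0 if lcs == 0 else 2 * prec * rec / (prec + rec)
    return prec, rec, f1
```

Because it rewards long in-order overlaps rather than exact n-grams, ROUGE-L tolerates paraphrase better than ROUGE-N while still penalizing reordered or missing content.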
Baselines
To assess the advantages of our framework, we compare it with several approaches that can be directly applied to KGQA tasks without fine-tuning. We introduce a straightforward baseline approach, named Base, which is similar to KG-GPT [32], currently the leading method in the KGQA field, excluding the sentence segmentation step of KG-GPT. Initially, this involves leveraging an LLM to retrieve relevant information from the KG by generating a query statement. Then, another LLM is used to answer the question with the retrieved information. To enhance the baseline, we incorporate CoT prompting [19] and in-context learning (ICL) techniques [41], collectively referred to as CoT&ICL. The prompts for these methods are illustrated in Supplementary Table S6. Additionally, we implement KG-GPT [32] to enhance the retrieval and reasoning capabilities of the LLMs. For a fair comparison, all methods are based on Code-Llama-13B [42].
To further underscore the efficacy of our framework, we conduct a comparative analysis of KGT, which is built upon Code-Llama-13B, against 2 highly capable large language models that are prominent in the general and biomedical domains: ChatGPT-3.5 [1] and Taiyi [43]. ChatGPT-3.5, a leader in tasks across the general domain, has exhibited competitive performance in a wide range of applications. To compensate for its limited biomedical knowledge, we employ the 2 methodologies previously described, Base and CoT&ICL, as advanced baselines to augment ChatGPT-3.5's capabilities. Taiyi, a cutting-edge LLM in biomedicine, pretrained on 2 trillion tokens, leverages its extensive biomedical knowledge base for direct question answering, bypassing the need for knowledge graph retrieval.
Due to the scarcity of KGQA datasets within the biomedical domain, all experiments are conducted on our newly proposed benchmark, named PcQA.
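The Base baseline described above is a two-stage pipeline: one LLM call turns the question into a KG query, and a second call answers from the retrieved facts. The sketch below shows that control flow with stand-in functions; the `llm` and `run_query` stubs and their canned contents are illustrative assumptions, not the paper's implementation.

```python
# Schematic of the two-stage "Base" baseline: query generation, then
# retrieval-grounded answering. Both LLM calls are stubbed for clarity.

def llm(prompt):
    # Stand-in for a real LLM call (e.g., Code-Llama-13B). The canned
    # outputs below are hypothetical, chosen only to make the flow runnable.
    canned = {
        "query": "MATCH (d:Drug)-[:INHIBITS]->(:Gene {name:'ADRB1'}) RETURN d.name",
        "answer": "Propranolol and carteolol both inhibit ADRB1.",
    }
    return canned["query"] if "Cypher" in prompt else canned["answer"]

def run_query(cypher):
    # Stand-in for executing the generated query against the KG.
    return ["propranolol", "carteolol"]

def base_pipeline(question):
    cypher = llm(f"Write a Cypher query for: {question}")  # stage 1: retrieve
    facts = run_query(cypher)
    return llm(f"Facts: {facts}\nAnswer the question: {question}")  # stage 2: answer

print(base_pipeline("Which drugs inhibit ADRB1?"))
```

KGT differs from this baseline by inserting schema-based inference and subgraph pruning between the two stages, which is what the ablation study later isolates.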
Comparative analysis across different KGQA methods
We evaluated the capabilities of various methods based on Code-Llama-13B, with the experimental results presented in Table 3. The experimental results indicate that the Code-Llama-13B model, enhanced with KGT, consistently surpasses competing methods across all metrics assessed. Notably, KG-GPT improves the F1 score by 15.7% over the previous method CoT&ICL, while our method KGT increases the F1 score by 33% over KG-GPT. Because KG-GPT overlooks the impact of entity types and attributes on answers within the biomedical domain, this achievement positions our approach as a pioneering benchmark in biomedical KGQA, eclipsing previously established best practices.
Comparative analysis across diverse LLMs
We present a comparative study of KGT applied to Code-Llama-13B against 2 highly capable LLMs in the general and biomedical domains, with experimental results displayed in Table 4. Code-Llama-13B, enhanced by KGT, significantly outperforms its peers, achieving the highest marks in every assessment metric: a GPT-4 Eval score of 92.4, a BERTScore of 97.7, and a ROUGE F1 score of 86.8. Remarkably, our approach's F1 score surpasses that of ChatGPT-3.5 with the Base method by 52.7%, the CoT&ICL method by 36.3%, and Taiyi's base model by 67.3%. These results highlight KGT's substantial contribution to improving the performance of large language models for the pan-cancer KGQA task. Even when integrated with open-source general models, KGT exhibits remarkable performance, outstripping both the recognized state-of-the-art closed-source large language models and those specifically tailored for the biomedical domain. This showcases KGT's adeptness at parsing and leveraging knowledge graph data, setting a new standard for future research and applications in the field.
Assessing KGT's effectiveness on diverse LLM platforms
To underscore the adaptability and effectiveness of our KGT framework when applied to a range of large language models, we conduct experiments on several LLMs: Zephyr [44], Llama-2 [2], and Code-Llama [42]. The outcomes, illustrated in Fig. 3, reveal that while the CoT&ICL techniques significantly boost performance in terms of F1 score, our KGT methodology delivers even more substantial enhancements across all evaluated models. This demonstrates not only the effectiveness of CoT&ICL as a performance-enhancing strategy but also highlights the superior advancements and impact of KGT, establishing its dominance and efficiency in knowledge graph question-answering tasks.
Ablation study for dissecting the components of KGT
In our effort to illuminate the individual contributions of the components that constitute our KGT framework and their collective impact on enhancing the performance of LLMs, we define 4 foundational modules: (i) question analysis for the extraction of pivotal information, (ii) graph schema–based inference to identify the optimal relational chains in the knowledge graph, (iii) the generation of query statements to facilitate subgraph construction, and (iv) the inference process coupled with the articulation of results in natural language. This ablation study, grounded on the Code-Llama-13B model, is meticulously designed to evaluate the efficacy of these components. Since graph schema–based inference requires the process of question analysis, the ques-
Table 3. Comparison of results between KGT and other commonly used methods based on Code-Llama-13B. The best results are displayed in bold for each indicator.

Method       GPT-4 Eval (%)  BERTScore (%)  ROUGE Recall (%)  ROUGE Precision (%)  ROUGE F1 (%)
Base         46.6            85.3           25.3              28.5                 24.5
CoT&ICL      57.9            88.8           38.9              39.4                 37.6
KG-GPT       68.2            93.5           55.2              55.8                 53.3
KGT (ours)   92.4            97.7           87.4              87.7                 86.8
Table 4. Comparison of KGT based on Code-Llama-13B with results from other commonly used models. The best results are displayed in bold for each indicator.

Model           Method      GPT-4 Eval (%)  BERTScore (%)  ROUGE Recall (%)  ROUGE Precision (%)  ROUGE F1 (%)
ChatGPT-3.5     Base        65.4            91.0           42.7              32.3                 34.1
ChatGPT-3.5     CoT&ICL     70.3            93.3           57.0              50.6                 50.5
Taiyi           —           40.6            85.3           15.4              39.6                 19.5
Code-Llama-13B  KGT (ours)  92.4            97.7           87.4              87.7                 86.8
Figure 3: Performance of various models using different strategies.
tion analysis module cannot be removed in isolation; simultaneously, subgraph construction is indispensable for knowledge graph retrieval. If the subgraph construction module is independently omitted, the outputs of the initial 2 modules will not impact the final results, making the isolated exclusion of this component illogical. Therefore, we introduce 3 specific ablated configurations for examination: (i) excluding graph schema–based inference (without GSBI), (ii) omitting both question analysis and graph schema–based inference (without QA&GSBI), and (iii) removing question analysis, graph schema–based inference, and subgraph construction (without QA&GSBI&SC), effectively bypassing the structured query of the SOKG and relying solely on the LLM's inherent knowledge for question answering.
The results of the ablation study, as shown in Table 5, demonstrate that when we remove the GSBI, we observe a 20% decrease in the F1 score. Removing both GSBI and QA results in an additional 8.6% decrease in the F1 score compared to removing GSBI alone. Furthermore, removing GSBI, QA, and SC together leads to a 46% decrease in the F1 score compared to removing just GSBI and QA. The experiments reveal that SC is crucial; its absence forces the LLM to rely solely on its inherent knowledge, significantly reducing effectiveness. GSBI is also key, as it aids in navigating complex multihop questions by providing necessary intermediate entity information for subgraph construction. QA is equally essential, ensuring accurate identification of entities and properties for correct subgraph construction. All these variants underperform
Table 5. Ablation study of the KGT framework under Code-Llama-13B.

Method               GPT-4 Eval (%)  BERTScore (%)  ROUGE Recall (%)  ROUGE Precision (%)  ROUGE F1 (%)
KGT (ours)           92.4            97.7           87.4              87.7                 86.8
Without GSBI         71.8            95.5           68.1              69.8                 66.8
Without QA&GSBI      69.7            94.7           55.0              66.3                 58.2
Without QA&GSBI&SC   24.7            77.4           14.8              12.3                 12.2
compared to the complete KGT, indicating that each of the 3 modules is vital for the final performance. Furthermore, such observations confirm that our KGT can indeed leverage knowledge to enhance the final performance of LLMs.
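The quoted decreases are absolute percentage-point drops in ROUGE F1 between successive ablations (Table 5 values), which can be checked directly:

```python
# Quick check that the ablation deltas match Table 5: each "decrease"
# is an absolute drop in ROUGE F1 percentage points, not a relative one.
f1 = {
    "KGT": 86.8,
    "without GSBI": 66.8,
    "without QA&GSBI": 58.2,
    "without QA&GSBI&SC": 12.2,
}
print(round(f1["KGT"] - f1["without GSBI"], 1))                    # 20.0
print(round(f1["without GSBI"] - f1["without QA&GSBI"], 1))        # 8.6
print(round(f1["without QA&GSBI"] - f1["without QA&GSBI&SC"], 1))  # 46.0
```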
Implementation settings
Our knowledge graph is quite large, with a complex schema, and inputs typically stay within 1,300 tokens. Our experiment does not require fine-tuning, and the inference time is related to the model size and computational resources. For example, when using our method, KGT, with the Code-Llama-13B model on an 80 GB A100 GPU, it occupies 33 GB of VRAM. Without any acceleration frameworks, the inference requires 4 passes, each taking around 20 seconds.
Case studies
Drug repositioning
Drug repositioning emerges as a promising strategy to accelerate the process of drug development. This approach involves identifying new therapeutic uses for existing drugs, thereby saving the time and resources typically required to bring a new drug to market [45]. Our system is capable of investigating the potential repositioning of carteolol for the treatment of hemangiomas. The example is shown in Supplementary Table S2 and the relational diagram is shown in Fig. 4A. Utilizing the system's knowledge graph, a relational chain is delineated, illustrating that propranolol, another inhibitor of ADRB1, is effectively employed in the treatment of hemangiomas. The system harnesses this insight to formulate a hypothesis that carteolol, by virtue of its similar mechanism of inhibition, could potentially be repositioned for treating hemangiomas [46]. This hypothesis would serve as a precursor to clinical trials and research, potentially expediting the availability of an additional therapeutic option for patients with hemangiomas.
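The repositioning chain in this example amounts to a shared-target lookup. The sketch below runs the same reasoning pattern over a toy in-memory triple set mirroring the carteolol/propranolol/ADRB1 facts; it is an illustration only, not the system's actual Cypher-based retrieval.

```python
# Toy triple store with the facts from the carteolol case study.
triples = {
    ("propranolol", "inhibits", "ADRB1"),
    ("carteolol", "inhibits", "ADRB1"),
    ("propranolol", "treats", "hemangioma"),
}

def repositioning_candidates(disease):
    """Drugs that share a target with a drug known to treat `disease`
    but are not themselves recorded as treating it."""
    treated_by = {h for h, r, t in triples if r == "treats" and t == disease}
    targets = {t for h, r, t in triples if r == "inhibits" and h in treated_by}
    return sorted(
        h for h, r, t in triples
        if r == "inhibits" and t in targets and h not in treated_by
    )

print(repositioning_candidates("hemangioma"))  # ['carteolol']
```

In the full system the LLM performs this inference over the retrieved subgraph and phrases the result as a hypothesis for expert review, rather than executing a fixed rule.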
Drug resistance research
Drug resistance in cancer treatment poses a significant challenge in clinical oncology. Understanding the genetic basis of resistance can lead to more effective treatment strategies and personalized medicine approaches. Research in drug resistance involves determining why certain cancer-carrying mutated genes are not responsive to specific drugs and finding ways to overcome this resistance [47]. Our system is capable of exploring drug resistance in cancer. The example is shown in Supplementary Table S3, and a relational diagram is shown in Fig. 4B. The KG data indicate that the ALK-p.L1196M mutation, which is associated with gastric cancer, has a known resistance to nalatinib [48, 49]. The LLM processes this information and infers that due to this resistance, nalatinib might not be an effective medication for treating cancers caused by the ALK-p.L1196M mutation. The case highlights the critical importance of understanding specific gene–drug interactions in drug resistance research. It demonstrates how certain gene mutations could render a drug ineffective, which in turn could guide oncologists in choosing alternative treatments or developing new drugs that can bypass or target the resistance mechanisms. By accelerating the process of understanding drug resistance, these artificial intelligence–driven systems can contribute to improved patient outcomes and the optimization of cancer treatment protocols.
Individualized treatment
Details on individualized treatment are provided in Supplementary Case Studies A. It is important to note that this example is included solely to illustrate the technical capabilities of the proposed method. The output generated in this example has not been validated for clinical use, and further validation in clinical settings would be required before any such application.
Selection and understanding of biomarkers
Details on the selection and understanding of biomarkers are provided in Supplementary Case Studies B.
Discussion
In this article, we introduce a novel framework, KGT, which employs LLMs for reasoning on the schema of KGs to enhance the reasoning abilities of LLMs in areas with missing domain data by utilizing domain-specific knowledge graphs, such as oncology knowledge graphs, thereby addressing the issue of factual hallucinations in LLMs. Our method excels in extracting, validating, and refining factual knowledge throughout the LLMs' reasoning process. It seamlessly integrates with various LLMs, including open-source models like Code-Llama, and enhances the capabilities of LLMs solely through prompt engineering and in-context learning, without any fine-tuning. This grants it significant generalizability.
We possess an extensive oncology knowledge graph and have established a benchmark based on it to evaluate the capabilities of various methods. When tested on PcQA using various open-source LLMs, the KGT framework performs exceptionally well, surpassing the current best methods by 33%. This significant improvement positions our approach as a pioneering benchmark in biomedical KGQA, setting a new standard that advances beyond previously established best practices. Additionally, through case studies, our approach has been shown to effectively provide therapeutic plans, generate valuable hypotheses for drug repositioning, identify potential drug targets, and study drug resistance. This underscores the practical value of the KGT framework in delivering insightful contributions that aid in the development and optimization of treatment strategies. Each case study's conclusions are further validated by evidence from previously published research papers, enhancing the credibility and impact of our findings.
However, it is important to note that the constructed QA dataset and the corresponding published subset of the SOKG
Figure 4: (A), (B), (C), and (D) respectively represent the relational diagrams of drug repositioning, drug resistance research, individualized treatment, and selection and understanding of biomarkers.
were specifically designed to validate the effectiveness of the KGT framework within this study. While the dataset is highly relevant to biomedical applications, its scope is primarily focused on validating the proposed method. Therefore, it may not cover all potential use cases. Additionally, our system currently has the drawback of not performing fuzzy matching; if a drug name is misspelled by even 1 letter, it fails to retrieve information from the knowledge graph. Therefore, we plan to improve this aspect in the future to enhance the system's usability and reliability. Our ultimate goal is to create a robust framework applicable to the rapidly evolving domain of medical knowledge, supporting health care professionals in delivering personalized, precise medication tailored to the individual needs of each patient.
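One lightweight way to address the misspelling limitation noted above would be approximate name resolution before the KG lookup, for example with Python's standard difflib. This is a hypothetical extension, not part of the published system; the drug list and cutoff are illustrative.

```python
# Sketch: resolve a possibly misspelled drug name to its closest entry
# in the KG's vocabulary before issuing the graph query.
import difflib

kg_drug_names = ["carteolol", "propranolol", "nalatinib"]  # illustrative subset

def resolve_drug(name, cutoff=0.8):
    """Return the closest KG drug name above `cutoff` similarity, or None."""
    matches = difflib.get_close_matches(name.lower(), kg_drug_names, n=1, cutoff=cutoff)
    return matches[0] if matches else None

print(resolve_drug("cartelol"))  # 'carteolol' despite the missing letter
```

A production system would likely combine such string similarity with entity-linking against synonym tables (e.g., brand names and salts), but even this minimal step removes the single-letter-typo failure mode.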
Finally, we affirm that this study serves as a proof of concept, aiming to showcase the technical feasibility and initial efficacy of the method, which has not been validated in actual clinical practice. In any clinical or medical decision-making, reliance should always be placed on the judgment and guidance of professional health care practitioners.
Additional Files
Supplementary Table S1. Comparison of PcQA with MetaQA and FACTKG in multihop tasks. The types of intermediate entities are indicated in bold.
Supplementary Table S2. Example of drug repositioning.
Supplementary Table S3. Example of drug resistance research.
Supplementary Table S4. Example of individualized treatment.
Supplementary Table S5. Example of selection and understanding of biomarkers.
Supplementary Table S6. Prompts for Base and CoT&ICL.
Supplementary Fig. S1. Prompt for key information extraction.
Supplementary Fig. S2. Prompt for query statement generation.
Supplementary Fig. S3. Prompt for LLM inference and output.
Supplementary Fig. S4. Prompt for GPT-4–based evaluation.
Supplementary Fig. S5. (A), (B), (C), and (D) respectively represent the relational diagrams of drug repositioning, drug resistance research, individualized treatment, and selection and understanding of biomarkers.
Abbreviations
APE: automatic prompt engineer; BFS: breadth-first search; CF: catastrophic forgetting; CoT: chain of thought; GPT: generative pretrained transformer; ICL: in-context learning; KG: knowledge graph; KGQA: knowledge graph question answering; LLM: large language model; NLP: natural language processing; PcQA: pan-cancer question answering; RLHF: reinforcement learning with human feedback; SFT: supervised fine-tuning.
Availability of Source Code and Requirements
- Project name: bioKGQA-KGT
- Project homepage: https://github.com/yichun10/bioKGQA-KGT.git
- Operating system(s): Linux (Ubuntu)
- Resource usage in inference step: a Linux (Ubuntu) system with at least 2 CPU cores and 32 GB of RAM. The GPU card needs at least 60 GB VRAM (either two 32 GB V100s or one 80 GB A100)
- Programming language: Shell Script (Bash) with Python 3.10.13
- Other requirements: Python 3.10.13 with GPU/CPU support, neo4j 5.13.0 (please see more requirements on the GitHub repository)
- License: MIT license
- Research Resource Identifier (RRID): SCR_025176
Ethical Statement
This study involves the generation of a biomedical question-answer dataset derived from a biomedical knowledge graph developed by our team. The knowledge graph has been meticulously constructed using nonpersonalized data obtained from various credible biomedical sources. The data collection and utilization processes strictly comply with all relevant legal regulations and ethical guidelines, ensuring the highest standards of data security and privacy. The dataset adheres rigorously to data protection principles and contains no sensitive personal information or identifiable individual health data. Furthermore, as the data collection and processing activities in this study do not involve human subjects, this research did not require ethical review or approval.
Author Contributions
Y.F. and L.Z. conceived the project. Y.F. proposed a KGQA benchmark, developed the KGT framework, implemented the code, conducted the experiments, and drafted the manuscript. C.M. contributed the SmartQuerier Oncology Knowledge Graph. Y.L. and L.Z. supervised the study. All authors read and approved the final manuscript.
Funding
This work was supported in part by funds from the National Key Research and Development Program of China (Nos. 2022YFF1202101 and 2023YFC3041600), the Chinese Academy of Sciences Research Fund (No. XDB38050200), and the Self-supporting Program of Guangzhou National Laboratory (Nos. SRPG22001 and SRPG22007).
Data Availability
We have publicly provided a subset of the SmartQuerier Oncology Knowledge Graph necessary for reproducing the research. An archival copy of the code and the subgraph of the knowledge graph used in this research is available via Software Heritage [33], and the code and datasets can be accessed via GitHub [50]. Additionally, the prompts used in interactions with LLMs [1, 2, 38, 42–44] during this research are available in the supplemental material. For access to the complete SmartQuerier Oncology Knowledge Graph data, please contact service@smartquerier.com.
Competing Interests
Chao Ma is employed by SmartQuerier Gene Technology (Shanghai) Co., a company active in the biomedical field relevant to the content of this research. The SmartQuerier Oncology Knowledge Graph (SOKG) used in this study is proprietary to SmartQuerier Gene Technology (Shanghai) Co. The other authors declare that they have no competing interests.
References
1. OpenAI. ChatGPT (Nov 30 version) [large language model]. 30 Nov 2022. https://chat.openai.com/chat.
2. Touvron H, Martin L, Stone K, et al. Llama 2: open foundation and fine-tuned chat models [large language model]. arXiv preprint arXiv:2307.09288. 19 Jul 2023. https://doi.org/10.48550/arXiv.2307.09288.
3. Ji Z, Lee N, Frieske R, et al. Survey of hallucination in natural language generation. ACM Comput Surv 2023;55(12):1–38. https://doi.org/10.1145/3571730.
4. Liu T, Zheng X, Chang B, et al. Towards faithfulness in open domain table-to-text generation from an entity-centric view. In: Yang Qiang, ed. Proceedings of the AAAI Conference on Artificial Intelligence. AAAI Press. Vol. 35; 2021:13415–423. https://doi.org/10.48550/arXiv.2102.08585.
5. Kang D, Hashimoto T. Improved natural language generation via loss truncation. arXiv preprint arXiv:2004.14589. 1 May 2020. https://doi.org/10.48550/arXiv.2004.14589.
6. Pan S, Luo L, Wang Y, et al. Unifying large language models and knowledge graphs: a roadmap. IEEE Trans Knowl Data Eng 2024;36(7):3580–99. https://doi.org/10.1109/TKDE.2024.3352100.
7. Han T, Adams LC, Papaioannou JM, et al. MedAlpaca—an open-source collection of medical conversational AI models and training data. arXiv preprint arXiv:2304.08247. 4 Oct 2023. https://doi.org/10.48550/arXiv.2304.08247.
8. Yunxiang L, Zihan L, Kai Z, et al. ChatDoctor: a medical chat model fine-tuned on Llama model using medical domain knowledge. Cureus 2023;15(6):e40895. https://doi.org/10.7759/cureus.40895.
9. Singhal K, Azizi S, Tu T, et al. Large language models encode clinical knowledge. Nature 2023;620:172–80. https://doi.org/10.1038/s41586-023-06291-2.
10. Wang H, Liu C, Xi N, et al. Huatuo: tuning Llama model with Chinese medical knowledge. arXiv preprint arXiv:2304.06975. 14 Apr 2023. https://doi.org/10.48550/arXiv.2304.06975.
11. Xiong H, Wang S, Zhu Y, et al. DoctorGLM: fine-tuning your Chinese doctor is not a herculean task. arXiv preprint arXiv:2304.01097. 17 Apr 2023. https://doi.org/10.48550/arXiv.2304.01097.
12. Zhang H, Chen J, Jiang F, et al. HuatuoGPT, towards taming language model to be a doctor. arXiv preprint
An LLM-based Knowledge Synthesis and Scientific Reasoning Framework
for Biomedical Discovery
Oskar Wysocki1,2, Magdalena Wysocka2, Danilo S. Carvalho2, Alex Bogatu2,
Danilo Gusicuma1, Maxime Delmas1, Harriet Unsworth2, André Freitas1,2,3
1Idiap Research Institute, Switzerland
2National Biomarker Centre, CRUK-MI, Univ. of Manchester, United Kingdom
3Department of Computer Science, Univ. of Manchester, United Kingdom
Correspondence: firstname.lastname@idiap.ch1, firstname.lastname@manchester.ac.uk2
Abstract
We present BioLunar, developed using the Lu-
nar framework, as a tool for supporting bio-
logical analyses, with a particular emphasis
on molecular-level evidence enrichment for
biomarker discovery in oncology. The platform
integrates Large Language Models (LLMs) to
facilitate complex scientific reasoning across
distributed evidence spaces, enhancing the ca-
pability for harmonizing and reasoning over
heterogeneous data sources. Demonstrating
its utility in cancer research, BioLunar lever-
ages modular design, reusable data access and
data analysis components, and a low-code user
interface, enabling researchers of all program-
ming levels to construct LLM-enabled scien-
tific workflows. By facilitating automatic sci-
entific discovery and inference from heteroge-
neous evidence, BioLunar exemplifies the po-
tential of the integration between LLMs, spe-
cialised databases and biomedical tools to sup-
port expert-level knowledge synthesis and dis-
covery.
1 Introduction
Contemporary biomedical discovery represents a
prototypical instance of complex scientific reason-
ing, which requires the coordination of controlled
in-vivo/in-silico interventions, complex multi-step
data analysis pipelines and the interpretation of the
results under the light of previous evidence (avail-
able in different curated databases and in the litera-
ture) (Paananen and Fortino, 2019; Nicholson and
Greene, 2020). This intricacy emerges out of the
inherent complexity of biological mechanisms un-
derlying organism responses, which are defined by
a network of multi-scale inter-dependencies (Bog-
dan et al., 2021). While more granular data is being
generated by the evolution of instruments, assays
and methods, and the parallel abundance of experi-
mental interventions (Dryden-Palmer et al., 2020),
there is a practical barrier to integrating and cohering this evidence space into a specific context of analysis.
Within biomedical discovery, the language in-
terpretation capabilities of Large Language Mod-
els (LLMs) can provide an integrative framework
for harmonising and reasoning over distributed ev-
idence spaces and tools, systematising and low-
ering the barriers to access and reason over mul-
tiple structured databases, textual bases such as
PubMed, enriching the background knowledge
through specialised ontologies and serving as in-
terfaces to external analytical tools (e.g. mechanis-
tic/perturbation models, gene enrichment models,
etc). In this context, LLMs can serve as a linguis-
tic analytical layer which can reduce the syntactic
impedance across diverse functional components:
once an adapter to an external component is built it
can be integrated and reused in different contexts,
creating a monotonic increase of functional compo-
nents. Complementarily, from a Biomedical-NLP
perspective, in order to address real-world prob-
lems, LLMs need to be complemented with mech-
anisms which can deliver contextual control (e.g.
via Retrieval Augmented Generation: RAG: ac-
cess the relevant background knowledge and facts)
and perform the analytical tasks which are integral
to contemporary biomedical inference (’toolform-
ing’).
Emerging LLM-focused coordination frame-
works such as LangChain1, Flowise2 and Lunar3
provide the capabilities to deliver a composition
of functional components, some of them under a
low-code/no-code use environment, using the ab-
straction of workflows. While there are general-
purpose coordination frameworks, there is a lack of
specialised components for addressing biomedical
analyses.
In this paper we demonstrate BioLunar, a suite of
components developed over the Lunar environment
1https://python.langchain.com
2https://github.com/FlowiseAI/Flowise
3https://lunarbase.ai
arXiv:2406.18626v1 [q-bio.QM] 26 Jun 2024
to support biological analyses. We demonstrate the
key functionalities of the platform contextualised
within a real-use case in the context of molecular-
level evidence enrichment for biomarker discovery
in oncology.
2 BioLunar
BioLunar enables the creation of LLM-based
biomedical scientific workflows using software
components with standardised APIs. A workflow
is composed of components and subworkflows con-
nected through input-output relationships, and are
capable of handling multiple inputs. In the user in-
terface, components are clustered according to their
function (see Fig.1). Creating a workflow does not
require programming knowledge since components
are predefined and merely require data inputs or
parameter settings. However, for users who wish to
write custom code, ’Python Coder’ and ’R Coder’
components are provided, enabling the definition
of custom methods. These custom components can
be saved and subsequently accessed in the ’Custom’
group tab.
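The input-output wiring described above can be sketched in a few lines of Python. This is a hypothetical illustration, not Lunar's actual API: the `Component` class, the runner, and the toy `enrich`/`interpret` steps are all ours; the point is only that declaring named inputs is enough for a runner to execute components in dependency order.

```python
from dataclasses import dataclass, field
from typing import Callable, Dict, List

@dataclass
class Component:
    """A workflow node: a name, a function, and the names of its upstream inputs."""
    name: str
    fn: Callable[..., object]
    inputs: List[str] = field(default_factory=list)

def run_workflow(components: List[Component], seed: Dict[str, object]) -> Dict[str, object]:
    """Execute components in dependency order; each output can feed several consumers."""
    results = dict(seed)                     # user-supplied values seed the graph
    pending = list(components)
    while pending:
        ready = [c for c in pending if all(i in results for i in c.inputs)]
        if not ready:
            raise ValueError("cycle or missing input in workflow")
        for c in ready:
            results[c.name] = c.fn(*(results[i] for i in c.inputs))
            pending.remove(c)
    return results

# Toy two-step pipeline: normalise a gene list, then summarise it.
wf = [
    Component("enrich", lambda genes: [g.upper() for g in genes], ["genes"]),
    Component("interpret", lambda hits: f"{len(hits)} enriched terms", ["enrich"]),
]
print(run_workflow(wf, {"genes": ["tp53", "egfr"]})["interpret"])  # prints: 2 enriched terms
```

Because each output is stored under the component's name, one output can feed any number of downstream components, mirroring the fan-out behaviour described later in Section 2.2.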
In this paper we describe an exemplar biomedical workflow designed to integrate evidence and
infer conclusions from bioinformatics pipeline re-
sults. Specifically, the biomedical workflow queries
expert knowledge bases (KBs) that continuously
compile clinical, experimental, and population ge-
netic study outcomes, aligning them with assertions
relevant to the significance of the observed gene or
variant. It then employs Natural Language Infer-
ence (NLI) (via LLM) to integrate and harmonise
the evidence space and interpreting the results, cul-
minating in a comprehensive summary for the en-
tire gene set input. This interpretation takes into
account the bioanalytical context supplied by the
user.
2.1 Exemplar Workflow
Next-generation sequencing (NGS) assays play a
pivotal role in the precise characterisation of tu-
mours and patients in experimental cancer treat-
ments. NGS findings are essential to guide the
design of novel biomarkers and cancer treatments.
Nevertheless, the clinical elucidation of NGS find-
ings subsequent to initial bioinformatics analysis
often requires time-consuming manual analysis pro-
cedures which are vulnerable to errors. The inter-
pretation of molecular signatures that are typically
yielded by genome-scale experiments are often
supported by pathway-centric approaches through
which mechanistic insights can be gained by point-
ing at a set of biological processes. Moreover,
gene and variant enrichment benefits from heteroge-
neous curated data sources which pose challenges
to seamless integration. Furthermore, there are
different levels of supporting evidence and there-
fore prioritising conclusions is crucial. Automating
evidence interpretation, knowledge synthesis and
leveraging evidence-rich gene set reports are fun-
damental for addressing the challenges in precision
oncology and the discovery of new biomarkers.
2.2 User interface
The user interface facilitates an agile workflow
construction by enabling users to select and ar-
range components via drag-and-drop from func-
tionally grouped categories, such as, i.a.: ’Prompt
Query’ featuring NLI components, ’Knowledge
Bases’ components, ’Extractors’ for retrieving files
from zip archives or extracting text and tables from
PDF files, and ’Coders’, which allow for the cre-
ation of custom components using Python or R
scripts.
Components allow for individual execution, editing, or configuration adjustment via a visual interface. Workflows can be executed, saved, or shared.
Each component has designated input and output
capabilities, enabling seamless integration where
the output from one can directly feed into another.
Users have the flexibility to manually input values
if no direct connection is established. Additionally,
a component’s output can feed into multiple compo-
nents. The system’s architecture supports effortless
expansion, adding branches and components with-
out affecting the existing workflow, thus facilitating
scalable customization to meet changing requirements. The user interface, with an example workflow, is presented in Fig.1 and in the demo video
https://youtu.be/Hc6pAA_5Xu8.
2.3 Knowledge bases
The current framework integrates a diverse set
of knowledge bases which are relevant for pre-
cision oncology. To identify gene mutations as
biomarkers for cancer diagnosis, prognosis, and
drug response, we integrated CIViC4 and On-
coKB5. CIViC provides molecular profiles (MPs)
of genes, each linked to clinical evidence, with
4https://civicdb.org
5https://www.oncokb.org
Figure 1: BioLunar interface. An exemplary workflow of Gene Enrichment with an input gene set, knowledge base
query and LLM interpretation components.
a molecular score indicating evidence quality, as-
sessed by human annotators. The Gene Ontology6
(GO) offered gene function insights, and the Hu-
man Protein Atlas7 supplied a list of potential drug
targets and transcription factors. We employed
COSMIC8 for somatic mutation impacts in can-
cer, the largest resource in this field. Our analysis
also included KEGG9, Reactome10, and WikiPath-
ways11 for pathway information, enriching our in-
vestigation with scientific literature via PubMed’s
API 12.
In the following subsections, we showcase ex-
amples of components, subworkflows, and work-
flows constructed using the BioLunar framework,
motivated by the biomarker discovery/precision on-
cology themes.
2.4 Construction and reuse of specialised prompts
BioLunar employs standard LLM interfaces, al-
lowing the use of different models according to
users’ preferences. The prompt components allow
for the composition of specialised prompt chains
which can be later reused, defining a pragmatic
pathway for specialised Natural Language Infer-
ence (NLI) via prompt decomposition/composition.
This approach allows for the creation of reasoning
6https://geneontology.org
7https://www.proteinatlas.org
8https://cancer.sanger.ac.uk/cosmic
9https://www.kegg.jp/kegg/
10https://reactome.org
11https://www.wikipathways.org
12https://pubmed.ncbi.nlm.nih.gov
chains that combine users’ instructions with the
results of database queries and analyses from spe-
cialised tools within the context of the study. An
instantiated example of the Azure OpenAI prompt is shown in Fig.1.
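The prompt decomposition/composition idea can be sketched as follows. The `prompt_step` and `chain` helpers are hypothetical (the real components are configured in the BioLunar interface, and the composed text would be sent to an LLM rather than printed); each step's output is threaded into the next step's `previous` slot, so specialised chains can be saved and reused.

```python
def prompt_step(template: str):
    """Return a reusable prompt builder that fills named slots in a template."""
    return lambda **slots: template.format(**slots)  # unused slots are simply ignored

def chain(*steps):
    """Compose builders: each step receives the accumulated text as 'previous'."""
    def run(**slots):
        text = ""
        for step in steps:
            text = step(previous=text, **slots)
        return text
    return run

# A reusable interpretation chain: evidence framing, then the task instruction.
interpret = chain(
    prompt_step("Context: {context}\nEvidence: {evidence}"),
    prompt_step("{previous}\nTask: summarise the evidence for a clinician."),
)
print(interpret(context="NSCLC, EGFR-targeted therapy",
                evidence="CIViC molecular profile, score 12"))
```

The context and evidence values here stand in for the database-query results and user-provided study context that the real components would supply.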
2.5 Subworkflow component
The subworkflow component enables the reuse
of an existing workflow within another workflow,
functioning as a component with specified inputs
and outputs. This feature simplifies the composi-
tion of more complex workflows and avoids the
repetition of defining identical steps for the same
task. Subworkflows can be selected like other com-
ponents from the left panel in the interface, offering
access to all available workflows for easy integra-
tion. Examples of subworkflows are presented in
Fig.2,3.
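Functionally, a subworkflow is the composite pattern: a saved sequence of steps wrapped so that it exposes the same call interface as a single step. A hypothetical sketch (the helper and step names are ours, not BioLunar's):

```python
from typing import Callable, List

Step = Callable[[object], object]  # every step maps one input payload to one output

def subworkflow(steps: List[Step]) -> Step:
    """Wrap an existing sequence of steps as a single reusable step."""
    def run(data: object) -> object:
        for step in steps:
            data = step(data)  # the output of one step feeds the next
        return data
    return run

# A saved 'enrichment' workflow: de-duplicate a gene list, then summarise it.
enrichment = subworkflow([lambda gs: sorted(set(gs)),
                          lambda gs: {"genes": gs, "n": len(gs)}])
# Reused as one component inside a larger workflow:
report = subworkflow([enrichment, lambda r: f"enriched {r['n']} genes"])
print(report(["TP53", "EGFR", "TP53"]))  # prints: enriched 2 genes
```

Because `subworkflow(...)` returns an ordinary `Step`, nesting is free: a workflow saved today can appear as a single box in tomorrow's workflow without redefining its internal steps.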
2.6 Gene Enrichment subworkflow
One example of a specialised subworkflow is the Gene Enrichment subworkflow (Fig.1, 2A), which begins with uploading the targeted gene sets. A component then accesses a specific KB defined by the user — such as Gene Ontology, KEGG, Reactome, or WikiPathways — using the gprofiler API13.
This component identifies gene groups with a sta-
tistically significant overlap with the input gene set,
according to a Fisher’s test, and calculates p-values,
recall, and precision. The user then specifies a vari-
able to rank these groups and selects the top N for
further analysis. The output includes both an interpretation performed by an NLI component (through LLM) and a table featuring the names, descriptions, and statistics of the top N selected groups.
13https://biit.cs.ut.ee/gprofiler/page/apis
Figure 2: A) Gene Enrichment workflow - uses the gprofiler API to access i.a. Gene Ontology, KEGG, WikiPathways, Reactome; B) Human Protein Atlas workflow. Compares and interprets the input and reference gene sets.
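The statistics named for this subworkflow (a one-sided Fisher's exact test on the query/term overlap, plus precision and recall) can be reproduced locally. The real component delegates this to the g:Profiler service, so the function below is only an illustrative stand-in, with a toy term and an assumed background size.

```python
from math import comb

def enrichment_stats(query, term_genes, background_size):
    """Hypergeometric tail P(overlap >= k), plus precision and recall."""
    query, term = set(query), set(term_genes)
    k, n, K, N = len(query & term), len(query), len(term), background_size
    # One-sided Fisher's exact test: probability of drawing >= k term genes
    # when n genes are sampled at random from a background of N.
    p = sum(comb(K, i) * comb(N - K, n - i)
            for i in range(k, min(n, K) + 1)) / comb(N, n)
    return {"overlap": k,
            "p_value": p,
            "precision": k / n if n else 0.0,  # fraction of the query hit by the term
            "recall": k / K if K else 0.0}     # fraction of the term recovered

stats = enrichment_stats(query=["TP53", "EGFR", "KRAS", "BRCA1"],
                         term_genes=["TP53", "EGFR", "ATM"],  # toy pathway, not real KB data
                         background_size=20000)
print(stats["overlap"], round(stats["precision"], 2))  # prints: 2 0.5
```

Ranking the candidate groups then amounts to sorting these dictionaries by the user-chosen variable (e.g. `p_value`) and keeping the top N.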
2.7 Human Protein Atlas subworkflow
In the Human Protein Atlas subworkflow, given
a gene set, an associated external KB is queried
by selecting ‘Transcription factors’ from the HPA
database using a dedicated query-database connec-
tor. A reusable component, ’Analyze overlap’, then
identifies genes that overlap and calculates relevant
statistics. Similarly to the Gene Enrichment sub-
workflow, the results are interpreted by a prompt-based NLI component and presented alongside a
table summarising the findings (Fig.2B,A.7).
2.8 CIVIC subworkflow
This subworkflow exemplifies a more complex composition of components (Fig.3). It begins by querying the CIViC database for the input genes, yielding, among other things, gene descriptions in clinical contexts, and their variants and
molecular profiles (MPs), which are essential for
the final interpretation. Additionally, users spec-
ify the analysis context, including aspects such as
cancer types or subtypes, treatments, populations,
etc. Initially, gene descriptions are analysed by a
prompt-based NLI component within this defined
context. Subsequently, MPs scored below a prede-
fined threshold (set at a MP score of 10) are tagged
as less known, reflecting lower scientific evidence
and ranking by CIVIC annotators. The evidence
supporting these lesser-known MPs is then inter-
preted by a prompt-based NLI component, con-
sidering the broader analysis context. Conversely,
evidence from well-known MPs, scoring above 10,
undergoes a similar interpretation process.
For genes without identified MPs in CIVIC, a sequence of components performs further evidence retrieval from PubMed. An NLI module generates context-based keywords for PubMed queries, which are combined with the names of genes lacking MPs. A 'PubMed search' component then retrieves N publications, including metadata, citation counts and MeSH terms (used later for context-alignment validation). The abstracts of these publications are interpreted by an NLI module in the context of the analysis.
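The query-construction step can be sketched as below. The exact query format used by the 'PubMed search' component is an assumption; this sketch uses ordinary PubMed boolean syntax.

```python
def build_pubmed_queries(genes_without_mps, context_keywords, max_terms=3):
    """Combine LLM-generated context keywords with each gene lacking a
    CIVIC molecular profile into one PubMed query string per gene."""
    keyword_clause = " OR ".join(f'"{kw}"' for kw in context_keywords[:max_terms])
    return {
        gene: f"({gene}[Title/Abstract]) AND ({keyword_clause})"
        for gene in genes_without_mps
    }

# Genes and keywords taken from the Scenario 2 description
queries = build_pubmed_queries(
    ["DIXDC1", "PDK4"],
    ["breast cancer", "biomarker", "diagnosis", "prognosis"],
)
```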
All clinical evidence interpretations are then succinctly summarised via a prompt component, taking into account the context of the analysis. These interpretations, along with the tabular results, constitute the output.
2.9
Bioworkflow - comprehensive analysis for
a set of genes.
The exemplar bioworkflow composes multiple sub-
workflows (Fig.4), each dedicated to a specific
multi-step and specialised task, which are typically
defined by the composition of heterogeneous com-
ponents, most commonly connectors and query instance components to specialised databases (e.g.
Figure 3: CIVIC evidence analysis workflow. Prompt-based NLI components are fed by both the results and context of the analysis in order to produce relevant evidence-based conclusions.
CIVIC, HPA, PubMed, OncoKB), external spe-
cialised analytical tools (toolformers for gene en-
richment analysis) and chains of specialised in-
terpretation prompts (e.g. selection, filtering, ex-
traction, summarisation). This setup forms a com-
prehensive workflow which exemplifies the close
dialogue between LLMs and genomic analysis, en-
compassing gene enrichment, comparison with ref-
erence gene sets, and access to evidence within
an experimental medicine setting. Additionally,
it queries PubMed publications within the CIVIC
component to seek evidence for molecular profiles
not yet described. Its componentised architecture
facilitates the extensibility of the workflow with
new sources, prompts and external tools. Conclusions drawn from each subworkflow are interpreted within the analysis context and integrated into a comprehensive summary. All findings are compiled in a report, exported as a PDF file.
2.10
Software description
BioLunar uses the LunarVerse backend for its operations. LunarVerse is downloaded and installed by the setup script included with the demonstration code. Some of its components need user-specific configuration to work, such as private API keys, which are defined in a configuration file indicated in the setup instructions. LunarVerse is distributed under an open software license. The workflow can also be operated via a graphical interface (LunarFlow).
A workflow can be run in two ways: i) directly, by calling the LunarVerse engine on a specified workflow descriptor file; ii) through the Web interface, by pressing the "Run" button.
The first way is the default one in the demonstration code. It returns a copy of the workflow descriptor with all component output fields filled, which is then used to extract and filter the desired outputs based on the component labels. It is also the best way to automate multiple workflow runs and to integrate their outputs into other systems. The supporting code is available at https://github.com/neuro-symbolic-ai/lunar-bioverse-demo.
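The output-extraction step after a direct run can be sketched as follows. The descriptor layout and the component labels used here are illustrative assumptions, not LunarVerse's actual schema.

```python
def extract_outputs(workflow_descriptor, wanted_labels):
    """After a direct engine run, the returned descriptor carries filled
    output fields per component; keep only the labelled outputs we want."""
    return {
        comp["label"]: comp["output"]
        for comp in workflow_descriptor["components"]
        if comp["label"] in wanted_labels
    }

# A toy filled descriptor, as might be returned by a direct run
descriptor = {
    "components": [
        {"label": "GeneEnrichment.summary", "output": "TNF signaling enriched"},
        {"label": "CIVIC.conclusion", "output": "ERBB2 amplification reported"},
        {"label": "HPA.table", "output": "(tabular results)"},
    ]
}
outputs = extract_outputs(descriptor, {"GeneEnrichment.summary", "CIVIC.conclusion"})
```

Filtering by label in this way is what makes it easy to feed selected outputs of repeated runs into other systems.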
2.11
Report
The Bioworkflow, as outlined in Section 2.9, generates a PDF report (Fig. 5) that begins by outlining the context of the study, analysis details, dates, and software versions at the top. The report is enhanced with hyperlinks for easy navigation to specific sections.
A "General Statistics" table provides a compre-
hensive overview of key metrics aggregated from
all components, aiming to consolidate information
for each gene throughout the analysis, with hyper-
links directing to the report sections where this
information originates.
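The aggregation behind such a table can be sketched as below; the section names and per-gene values are illustrative, and in the real report each value links back to the section it came from.

```python
def general_statistics(per_component_results):
    """Consolidate per-gene metrics reported by each component into one
    row per gene, keyed by the report section the value came from."""
    table = {}
    for section, rows in per_component_results.items():
        for gene, value in rows.items():
            table.setdefault(gene, {})[section] = value
    return table

stats = general_statistics({
    "OncoKB": {"ERBB2": "oncogene", "ESR1": "oncogene"},
    "HPA": {"ERBB2": "FDA-approved drug target"},
})
```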
Figure 4: Diagram of the Bioworkflow.
Subsequent sections categorise genes into various tables based on biological aspects and the KBs consulted. These include Molecular Function
for genes sharing ontologies, drug-target checks based on the Human Protein Atlas, assessments of cancer-related genes, Pathway Analysis and Mapping via WikiPathways, and classification of gene alterations by clinical relevance. By correlating genes with known functional information, the workflow identifies statistically significant enriched terms and summarizes these findings using an LLM, which also furnishes evidence.
An LLM interprets each table, offering textual conclusions relevant to the analysis context. A final summary, crafted using an LLM, synthesizes all results within the given context. Importantly, all LLM interpretations are grounded in concrete evidence, with sources cited alongside the narrative. This approach underscores the rigor of the analysis by highlighting distinct sources that substantiate the relevance of each gene and variant.
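One way to ground interpretations in cited evidence is to place numbered sources directly in the prompt so the model can cite them; the prompt wording below is a sketch, not the workflow's actual prompt.

```python
def grounded_prompt(context, evidence_items):
    """Assemble an interpretation prompt in which every claim the LLM
    makes can be tied back to a numbered, sourced evidence item."""
    lines = [f"Analysis context: {context}", "Evidence (cite by [n]):"]
    for i, item in enumerate(evidence_items, start=1):
        lines.append(f"[{i}] ({item['source']}) {item['text']}")
    lines.append("Interpret the evidence in context, citing [n] for each claim.")
    return "\n".join(lines)

prompt = grounded_prompt(
    "HER2-low breast cancer",
    [{"source": "OncoKB", "text": "ERBB2 L755S is oncogenic."}],
)
```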
3
Case study
To demonstrate the capabilities of the Biowork-
flow, we analyzed outputs in two different scenar-
ios, each producing a distinct set of genes from
separate bioinformatics analyses. We entered these
gene sets along with their analysis contexts into
the Bioworkflow and executed it. Subsequently,
we qualitatively assessed the output reports (see
Fig.A.8,A.9), considering both the statistical data
and the interpretations provided by the prompt-
based NLI modules.
In Scenario 1, the user aims to explore the unique molecular characteristics of HER2-low breast cancer to determine whether it constitutes a distinct category within breast cancer types; the input genes are ERBB2, ESR1, PIK3CA, CBFB and SF3B1. The report identifies genomic alterations and genomic signatures, including ERBB2 amplification and mutations in PIK3CA and ESR1, which are important biomarkers in the selection of breast cancer treatment. For the remaining two genes, evidence was found confirming that they are newly identified, significantly mutated genes for which there is preclinical evidence of actionability in clinical practice.
In Scenario 2, the user aims to discover new genes that could lead to more accurate breast cancer diagnoses, enhancing treatment strategies and addressing the disease's complexity. The user's numerical analysis resulted in a set of genes (DIXDC1, DUSP6, PDK4, CXCL12, IRF7, ITGA7, NEK2,
NR3C1) that require investigation. The report informs that none of the genes is an oncogene (confirmation according to OncoKB), two of the genes
Figure 5: The BioLunar report’s overview, produced by Bioworkflow.
are potential drug targets, and one is an FDA-approved drug target. According to the KEGG-based enrichment analysis, these genes were mainly enriched in several signaling pathways, including the tumor necrosis factor (TNF) signaling pathway. Using LLMs in conjunction with a PubMed search component, publications describing various gene variants were retrieved from PubMed, and the genes were indicated as prospective biomarkers associated with breast cancer.
Note that in Scenario 2, for genes lacking molecular profiles in the KB, a PubMed search was conducted. This approach enables the workflow to automatically uncover and search for non-obvious and previously undocumented relationships. Essentially, if a gene is absent from the database, its relevance is likely relatively novel and not yet documented. Seeking out the most recent publications that describe this gene within the analysis context is therefore a significant advantage provided by the workflow's integration of various components.
4
Related Work
Bioinformatics Pipelines. Over the past decade, scientific workflow management systems such as Galaxy (gal, 2022), Snakemake (Köster and Rahmann, 2012), and Nextflow (Di Tommaso et al., 2017) have been instrumental in helping bioinformaticians systematise their complex analytical processes. Nextflow targets bioinformaticians and facilitates gene enrichment analysis, annotation of biological sequences, and gene expression
analysis by including modules supported by various
bioinformatics tools. These workflow systems are
currently centred around the composition of spe-
cialised bioinformatics software, configuration pa-
rameters and supporting datasets, facilitating reuse
and reproducibility. In contrast, this paper explores
the concept on using LLMs within a specialised
workflow environment to support the interpretation
and integration of multiple analytical processes.
5
Conclusion
In this paper we provided a demonstration of a scientific workflow based on LLMs to support specialised gene analyses, using oncology and gene enrichment as a driving motivational scenario. The framework is built using the Lunar framework and allows for the composition of specialised analytical workflows, integrating external databases (Retrieval Augmented Generation), external tools (ToolFormers) and contextualised chains of LLM-based interpretation. The paper highlights that a workflow environment with specialised components for RAG, ToolFormers and a set of specialised prompt-based Natural Language Inference components can serve as the foundation for streamlining and automating complex analytical processes within a biomedical setting. We showcase analytical applications within the biomedical domain, particularly in oncology, progressing constructively towards more complex gene analysis workflows. The developed bioworkflow demonstrates that LLMs can be instrumental in enabling a complex, end-to-end, highly specialised analytical workflow in a reproducible manner, supporting the integration of heterogeneous evidence and the synthesis of conclusions while simultaneously documenting and linking to the data sources within a comprehensive output report. The proposed workflow is based on a low-code paradigm that enables domain experts, regardless of their programming skills, to construct scientific workflows enabled by generative AI methods.
Limitations
• The current demonstration uses external LLM-
based APIs but can be adapted to open source
LLM models.
• The LLM-based inferences require critical supporting quantitative evaluation, and hallucinations are possible. The current workflow is motivated by a hypothesis-generation process, which is fully human-supervised and has no direct clinical applications.
Acknowledgements
This work was partially funded by The Ark
foundation, by the European Union’s Horizon
2020 research and innovation program (grant no.
965397) through the Cancer Core Europe DART
project, and by the Manchester Experimental Can-
cer Medicine Centre and the NIHR Manchester
Biomedical Research Centre.
References
2022. The Galaxy platform for accessible, reproducible and collaborative biomedical analyses: 2022 update. Nucleic Acids Research, 50(W1):W345–W351.
Paul Bogdan, Gustavo Caetano-Anollés, Anna Jolles, Hyunju Kim, James Morris, Cheryl A Murphy, Catherine Royer, Edward H Snell, Adam Steinbrenner, and Nicholas Strausfeld. 2021. Biological Networks across Scales—The Theoretical and Empirical Foundations for Time-Varying Complex Networks that Connect Structure and Function across Levels of Biological Organization. Integrative and Comparative Biology, 61(6):1991–2010.
Paolo Di Tommaso, Maria Chatzou, Evan W Floden, Pablo Prieto Barja, Emilio Palumbo, and Cedric Notredame. 2017. Nextflow enables reproducible computational workflows. Nature Biotechnology, 35(4):316–319.
K.D. Dryden-Palmer, C.S. Parshuram, and W.B. Berta. 2020. Context, complexity and process in the implementation of evidence-based innovation: a realist informed review. BMC Health Services Research, 20(81):1472–6963.
Johannes Köster and Sven Rahmann. 2012. Snakemake—a scalable bioinformatics workflow engine. Bioinformatics, 28(19):2520–2522.
David N. Nicholson and Casey S. Greene. 2020. Constructing knowledge graphs and their biomedical applications. Computational and Structural Biotechnology Journal, 18:1414–1428.
Jussi Paananen and Vittorio Fortino. 2019. An omics perspective on drug target discovery platforms. Briefings in Bioinformatics, 21(6):1937–1953.
A
Appendix
Scenario 1
Context:
The analysis focuses on HER2-low breast cancer (HLBC), a subtype that
challenges traditional classifications based on HER2 expression and ERBB2
amplification. Despite being operationally defined, HLBCs constitute a
significant portion of breast cancers, particularly among estrogen receptor-
positive tumors. This study aims to elucidate the molecular characteristics
of HLBCs, examining their mutational and transcriptional profiles. The
research also investigates potential heterogeneity within HLBCs and compares
their genomic landscape with HER2-positive and HER2-negative breast cancers.
By providing insights into the distinct molecular features of HLBCs, this
analysis seeks to establish whether they represent a unique entity in breast
cancer pathology.
List of genes:
ERBB2, ESR1, PIK3CA, CBFB, SF3B1
Scenario 2
Context:
Breast cancer (BC) presents a significant global health challenge, with its
incidence steadily rising and mortality rates remaining high. Its
heterogeneous nature complicates treatment strategies, contributing to issues
like recurrence and drug resistance. Biomarkers play a crucial role in
diagnosing and managing BC, aiding in personalized treatment approaches.
However, existing biomarkers have limitations, necessitating the exploration
of novel markers, particularly in the realm of molecular and genetic
analysis. This study focuses on identifying genes with potential diagnostic
utility in breast cancer, aiming to contribute to the development of more
effective biomarkers and therapies, including immunotherapies, to combat this
disease.
List of genes:
DIXDC1, DUSP6, PDK4, CXCL12, IRF7, ITGA7, NEK2, NR3C1
Figure A.6: User-defined context of the analysis, including aspects like cancer types or subtypes, treatments,
populations, for Scenario 1 and 2.
Figure A.7: Human Protein Atlas workflow in the BioLunar interface.
Figure A.8: The BioLunar report, produced by Bioworkflow for Scenario 1.
Figure A.9: The BioLunar report, produced by Bioworkflow for Scenario 2.
Nucleic Acids Research, 2025, 53, D730–D737
https://doi.org/10.1093/nar/gkae1113
Advance access publication date: 18 November 2024
Database issue
The STRING database in 2025: protein networks with directionality of regulation
Damian Szklarczyk1,2, Katerina Nastou3, Mikaela Koutrouli3, Rebecca Kirsch3, Farrokh Mehryary4, Radja Hachilif1,2, Dewei Hu3, Matteo E. Peluso1,2, Qingyao Huang1,2, Tao Fang1,2, Nadezhda T. Doncheva3, Sampo Pyysalo4, Peer Bork5,6,7,*, Lars J. Jensen3,* and Christian von Mering1,2,*
1 Department of Molecular Life Sciences, University of Zurich, Winterthurerstrasse 190, 8057 Zurich, Switzerland
2 SIB Swiss Institute of Bioinformatics, Amphipôle, Quartier UNIL-Sorge, 1015 Lausanne, Switzerland
3 Novo Nordisk Foundation Center for Protein Research, University of Copenhagen, Blegdamsvej 3B, 2200 Copenhagen N, Denmark
4 TurkuNLP Lab, Department of Computing, University of Turku, Vesilinnantie 5, 20014 Turku, Finland
5 Structural and Computational Biology Unit, European Molecular Biology Laboratory, Meyerhofstrasse 1, 69117 Heidelberg, Germany
6 Max Delbrück Centre for Molecular Medicine, Robert-Rössle-Strasse 10, 13125 Berlin, Germany
7 Department of Bioinformatics, Biozentrum, University of Würzburg, Am Hubland, 97074 Würzburg, Germany
* To whom correspondence should be addressed. Tel: +49 6221 387 8526; Fax: +49 6221 387 517; Email: bork@embl.de
Correspondence may also be addressed to Lars J. Jensen. Email: lars.juhl.jensen@cpr.ku.dk
Correspondence may also be addressed to Christian von Mering. Tel: +41 44 6353147; Fax: +41 44 6356864; Email: mering@imls.uzh.ch
Abstract
Proteins cooperate, regulate and bind each other to achieve their functions. Understanding the complex network of their interactions is essential for a systems-level description of cellular processes. The STRING database compiles, scores and integrates protein–protein association information drawn from experimental assays, computational predictions and prior knowledge. Its goal is to create comprehensive and objective global networks that encompass both physical and functional interactions. Additionally, STRING provides supplementary tools such as network clustering and pathway enrichment analysis. The latest version, STRING 12.5, introduces a new 'regulatory network', for which it gathers evidence on the type and directionality of interactions using curated pathway databases and a fine-tuned language model parsing the literature. This update enables users to visualize and access three distinct network types (functional, physical and regulatory) separately, each applicable to distinct research needs. In addition, the pathway enrichment detection functionality has been updated, with better false discovery rate corrections, redundancy filtering and improved visual displays. The resource now also offers improved annotations of clustered networks and provides users with downloadable network embeddings, which facilitate the use of STRING networks in machine learning and allow cross-species transfer of protein information. The STRING database is available online at https://string-db.org/.
Graphical abstract
Received: September 15, 2024. Revised: October 18, 2024. Editorial Decision: October 21, 2024. Accepted: October 29, 2024
© The Author(s) 2024. Published by Oxford University Press on behalf of Nucleic Acids Research.
This is an Open Access article distributed under the terms of the Creative Commons Attribution License (https: // creativecommons.org / licenses / by / 4.0 / ),
which permits unrestricted reuse, distribution, and reproduction in any medium, provided the original work is properly cited.
Introduction
The function of every living cell is primarily governed by a
complex network of interacting proteins, with each protein’s
role determined not only by its molecular activities but also
by its position within this network (1,2). Connected proteins
work together to contribute to common biological processes
through various interaction types, such as physical binding,
genetic interactions and regulatory influences. These interac-
tions can collectively be categorized as functional associa-
tions , which serve as fundamental operational units within bi-
ological systems. Unraveling protein networks in their various
modalities remains a significant research focus. Consequently,
numerous databases have been developed over the years to
meet specific research needs. These range from carefully cu-
rated pathway databases such as Reactome (3) and KEGG (4), to databases of experimental interaction evidence curated from literature such as BioGRID (5), IntAct (6) and MINT (7), and to those centered on specific modes of interactions, including Complex Portal for protein co-complexes (8) and SIGNOR for regulatory interactions (9). Finally, composite databases such as STRING (10), GeneMANIA (11), FunCoup (12) and HumanNet (13) not only incorporate data from these
sources but also employ an array of computational methods
to predict additional associations, striving to provide the most
comprehensive views of the interactome.
Among these databases, STRING is notable for its many
sources of evidence, its robust scoring system, user-friendly
interface and comprehensive suite of enrichment features. It
is dedicated to assembling a broad set of associations among
proteins for the complete proteomes of thousands of organ-
isms across all domains of life. STRING charts interactions
ranging from highly confident, well-documented associations
to more speculative ones, which are crucial for exploratory
and computational analyses. The scoring system ensures that
data from diverse sources—including automatic text mining,
high- and low-throughput experimental data, and compu-
tational predictions—are directly comparable and weighted
consistently, regardless of their origin. Additionally, STRING
extends and maps networks across species by predicting in-
terologs, thereby broadening the scope of functional associa-
tions to encompass a diverse range of organisms. This includes
uncultured novel bacterial species derived from metagenomic
samples, and even unpublished proteomes uploaded by users.
The development of interaction databases has so far been a
trade-off between detailing the exact mode of an interaction—
whether physical or functional, signaling or structural, stable
or transient—and maintaining a comprehensive set of interac-
tions. Given the limitations of the available data and method-
ologies, STRING has historically focused on broadly defined
functional associations, which provided the most useful set
of interactions for proteome-wide analysis. While these net-
works continue to perform well in downstream tasks ( 14 ,15 ),
they often lack fine-grained resolution, leaving more special-
ized databases better equipped to offer detailed insights into
specific interactions if needed. However, the growing vol-
ume of data and advancements in text-mining technologies
have since enabled the composite, general-purpose databases
to catch up, offering more detailed maps of the interac-
tion space. Recently, the STRING database introduced a co-
complex (physical) interaction network mode, which details
multi-protein assemblies. With the latest update, the database
now also includes the largest set of regulatory (directed) inter-
actions, highlighting the flow of information within cells. This
major enhancement greatly expands the analytical capabilities
of the database, enabling new types of studies that were not
previously possible with nondirected interaction networks.
Interaction and pathway databases are frequently used to
gain insights into the functional context of individual pro-
teins or to understand the biological organization of an entire
protein dataset. Many of these databases enhance the inter-
pretation of protein datasets beyond simple interaction anno-
tations by incorporating an enrichment analysis toolset. This
type of analysis involves comparing observed data against ex-
pected distributions, enabling researchers to identify statisti-
cally significant features or patterns. Enrichment tools typi-
cally utilize established ontologies and annotations, such as
Gene Ontology ( 16 ), MSigDB hallmark sets ( 17 ) or OMIM
( 18 ). A few databases also leverage their annotated datasets
to help understand the functional context of a user’s input.
For example, KEGG ( 4 ) employs pathway and module ab-
stractions to organize genes into datasets for overrepresenta-
tion analysis, while Reactome maps interconnected proteins
into hierarchically nested modules. Uniquely, STRING uses
an unsupervised procedure to hierarchically precluster its en-
tire proteome-wide networks into functional modules. Incor-
porating these network-derived gene sets into the enrichment
analysis facilitates the identification of novel modules, espe-
cially in areas of the proteome where manual curation has
been less comprehensive so far.
Database content
The basic interaction scope in STRING is that of a ‘functional
association’ between pairs of proteins. A functional associa-
tion is defined as a contribution of two non-identical proteins
to a common function ( 19 ,20 ). This can take many forms;
functionally associated proteins can be in physical proxim-
ity to each other, regulate each other, exhibit genetic epistasis
or even work antagonistically (as long as this occurs in the
context of a common function). For the purpose of defining
functional associations, the concept of a common function is
crucial, but difficult to define—it should broadly be thought
of as corresponding to the notion of a ‘pathway’ or ‘function
module’. In practice, STRING roughly follows the functional
granularity of the pathway maps in the KEGG database ( 4 ).
From the set of all functional associations in STRING, sub-
sets are derived that are more specifically annotated with re-
gard to their mechanism of association. Currently, two such
more specific subsets are implemented: ‘physical’ and ‘regula-
tory’. The physical mode refers to pairs of proteins that either
bind directly or are at least subunits of the same complex ( 21 ).
The regulatory mode refers to associated protein pairs that are
known to regulate each other’s activity in at least one direc-
tion. This mode is described in more detail further below; for
the first time in STRING, such connections are annotated not
only with confidence scores but also with a directionality.
All protein–protein associations in STRING are annotated
with ‘confidence scores’. These scores are fully precomputed;
they scale between 0 and 1 and describe the estimated like-
lihood of a postulated association being correct, given the
available evidence. Separate confidence scores are provided
for physical and regulatory modes—these scores provide es-
timates of the likelihood that the proposed association is tak-
ing place and is indeed of the postulated type. The two spe-
cific network modes are fully consistent with the functional
network, such that if an interaction is present in either the
physical or regulatory network, it will, by definition, also be
present in the full functional association network (with an
equal or higher confidence score). To compute the various con-
fidence scores, the available evidence supporting a given inter-
action is first separated into ‘evidence channels’, by the type
of evidence. For the basic functional association confidence
score, seven such channels are used. These include three that
are based on genomic context predictions—neighborhood, fu-
sion and gene co-occurrence—as well as channels dedicated to
co-expression, experimental data, curated databases and text
mining.
The genomic context channels focus on associations in-
ferred from genome sequences alone, detecting evolution-
ary patterns such as gene proximity, fusion events and co-
occurrence across different organisms. For instance, the neigh-
borhood channel assigns an association score to gene pairs
that are located close to each other on the chromosome in
the same orientation (in prokaryotic genomes) ( 22 ). The fu-
sion channel identifies open reading frames that result from
gene fusion events ( 23 ), while the co-occurrence channel ex-
amines whether genes have a nontrivial but shared distribu-
tion across genomes ( 24 ), implying a shared history of hori-
zontal transfers, losses or duplication events and thus likely
related functions. The co-expression channel compiles data
from gene expression studies, analyzing both transcript and
protein abundances across various conditions. By comparing
expression profiles, it identifies gene pairs with similar expres-
sion patterns, suggesting functional linkage ( 10 ,25 ). The ex-
periments channel aggregates interaction evidence from labo-
ratory assays, including biochemical, biophysical and genetic
assays. Data are imported from primary repositories such as
BioGRID ( 5 ) and the IMEx consortium ( 26 ), and the confi-
dence scores are estimated by globally benchmarking the accu-
racy of annotated experimental techniques, as well as within-
dataset performance and consistency for the case of high-
throughput experiments ( 10 ). The database channel is based
on well-described, curated protein–protein associations from
expert-compiled resources, such as KEGG ( 4 ), Reactome ( 3 )
and Gene Ontology Complexes ( 16 ). These resources provide
well-established pathways, complexes and functional relation-
ships, offering a high level of reliability. Unlike other chan-
nels, the database channel assigns a uniformly high confidence
score to associations, reflecting their established nature. Fi-
nally, the text-mining channel utilizes a large corpus of sci-
entific literature, including PubMed abstracts and full-text ar-
ticles, to identify co-mentions of protein names ( 27 ). By statis-
tically analyzing the frequency of these co-mentions in various
textual contexts, this channel uncovers potential associations
that may not be evident from structured data alone. In addi-
tion, for the more specific association modes, dedicated large
language models are employed to detect sentences supporting
either physical ( 21 ) or regulatory associations (see below).
For each evidence channel, the available interaction evi-
dence is translated into a confidence score by first quantifying
the evidence using channel-specific metrics and then convert-
ing these metrics into likelihoods using calibration curves de-
rived from prior knowledge (from pathway-map memberships
in KEGG). For the more specific association modes ‘physical’
and ‘regulatory’, only channels and evidence that are appli-
cable to these modes are considered. After this, all channel
scores that have been computed for a given protein pair in
a given organism are transferred onto related protein pairs
in other organisms, based on the ‘interolog’ concept ( 27 ,28 ).
Lastly, a final, combined confidence score is computed by inte-
grating the channel-specific subscores probabilistically, under
the assumption that evidence in different channels is largely
independent. Users of STRING can directly rely on this com-
bined score for browsing and filtering networks, or they can
alternatively customize their analyses by enabling or disabling
specific channels separately, after which the combined score is
recomputed according to their chosen settings.
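The probabilistic integration described above can be sketched as a noisy-OR combination of the channel scores. A minimal sketch in Python, assuming each channel score is first corrected for a uniform random-expectation prior before combination (the 0.041 prior value used below is an illustrative assumption, not stated in this text):

```python
def combine_scores(channel_scores, prior=0.041):
    """Combine per-channel confidence scores under independence.

    Each score is first stripped of the shared random-expectation
    prior, the remainders are combined with a probabilistic OR
    (1 minus the product of complements), and the prior is added
    back once. The `prior` value is an illustrative assumption.
    """
    corrected = [(s - prior) / (1.0 - prior) for s in channel_scores if s > prior]
    p_no_interaction = 1.0
    for s in corrected:
        p_no_interaction *= 1.0 - s
    return (1.0 - p_no_interaction) * (1.0 - prior) + prior
```

A single channel passes through unchanged (a lone score of 0.5 combines to 0.5), while two independent channels at 0.5 combine to roughly 0.74. Disabling a channel simply removes its score from the input list, mirroring the per-channel customization described above.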
All primary evidence underlying a given interaction can
be inspected interactively, in dedicated evidence viewers on-
line. Furthermore, accessory information is available for
each protein, such as its three-dimensional structure, do-
main composition, annotation and cross-references. Apart
from its core protein-network functionality, STRING also
implements features that allow extensive analysis of up-
loaded user data. These include functional enrichment de-
tection ( 29 ), experimental bias detection ( 21 ), homology
searches, and clustering and organizing large query pro-
tein sets. All data contained in STRING are freely avail-
able for download, under a Creative Commons BY 4.0 li-
cense. Apart from its website, STRING can also be accessed
via a dedicated Cytoscape plugin ( 30 ), through an applica-
tion programming interface (API) ( 31 ), as well as through
an R/Bioconductor package (https://www.bioconductor.org/packages/release/bioc/html/STRINGdb.html).
Regulatory networks
Molecular networks, like those available through STRING,
have become invaluable tools in biomedical research, offering
powerful insights into how molecules work together within
cells. The functional association networks, while useful for
many applications, do not specify the interaction type (e.g.
complex formation or transcriptional regulation) nor the di-
rection of interaction (i.e. who regulates whom). Constructing
networks with more detailed interaction types and directions
can significantly expand their utility, particularly for building
mathematical models of biological systems or for making log-
ical inferences. The directionality of interactions is essential
for interpretation of omics data, which typically reveals more
about downstream effects of conditions such as diseases than
it does about the upstream events that led to them and thus could
be targeted therapeutically.
To incorporate interactions with specific directionality, sign
and type into STRING, we leveraged the novel dataset and
deep learning-based relation extraction method described in
RegulaTome ( 32 ). The RegulaTome corpus provides a rich
and diverse dataset that includes 16 961 relations between
54 951 entities annotated across over 2500 documents. This
corpus is utilized to train a deep learning-based method used
to annotate and extract the regulatory events throughout the
entire available literature.
On top of undirected physical interactions—which are al-
ready included in STRING—the following types of directed
interactions can be extracted using the system developed for
RegulaTome: Regulation , with the signed subtypes of Posi-
tive Regulation and Negative Regulation , Regulation of Gene
Expression , Regulation of Degradation and Catalysis of Post-
Translational Modifications , covering six subtypes: Catalysis
of Small Protein Conjugation , Catalysis of Small Protein Re-
moval, Catalysis of Phosphorylation, Catalysis of Dephosphorylation, Catalysis of Small Molecule Conjugation and Catalysis of Small Molecule Removal. To enhance the interoperability of the annotations, these chosen relation types align with Gene Ontology ( 32 ).

Figure 1. Illustration of the new ‘regulatory network’ mode in STRING, where the network edges visually indicate the direction, confidence and sources of each regulatory interaction. Clicking on an edge within the network brings up a pop-up window with a detailed overview of the available evidence and the annotated type of regulation. Users can further explore the data behind the interactions by accessing the specific evidence viewers linked within the pop-up, including the text-mining evidence viewer. This viewer presents excerpts from literature pertaining to the inspected interactions, as well as an automatically generated large language model summary of the presented excerpts.
To identify and classify a wide array of interaction types, we
fine-tuned the RoBERTa-large-PM-M3-Voc language model,
a transformer-based model well suited for biological tasks.
This model was trained on the RegulaTome dataset for multi-
label extraction of the directed, typed and signed interactions
mentioned above, and achieved an average F1 score of 73.5%
(with a precision of 75.2% and a recall of 71.8%) on a held-
out test set, although the exact performance varied across dif-
ferent types of relationships. This level of accuracy is substan-
tial, considering the complexity of the underlying task. Using
this model, we processed over 1.2 billion sentence-level pairs
extracted from all available PubMed abstracts and PMC Open
Access full-text documents, and assigned 3.5% of them ( ∼43
million, of which ∼18 million in human) with at least one
positive label indicating directed (72.9%) or signed (33.1%)
relationships among the proteins. Furthermore, we made use
of the existing curated knowledge of regulatory interactions in
the database channel, by parsing regulatory information from
the SIGNOR, KEGG and Reactome databases.
To integrate these interactions into STRING, we followed
a similar score aggregation and benchmarking approach as
for physical interactions ( 21 ). Here, we perform benchmark-
ing in five categories for which we can derive gold stan-
dard datasets of known human regulatory interactions from
SIGNOR: regulation , upregulation , downregulation , tran-
scriptional regulation and phosphorylation . The resulting
calibration functions are then applied to extracted regula-
tions of all types to produce the final regulatory confidence
scores.
The new regulatory network has been seamlessly integrated
into the existing STRING framework, complementing the
‘functional’ and ‘physical’ network types. Directional edges
within this network are visually represented by arrows, depict-
ing both bidirectional and unidirectional relationships. The
network visualization can show varying confidence levels be-
tween directions (in the ‘confidence’ view) or the specific di-
rectionality of the sources (in the ‘evidence’ view). The user
interface retains its intuitive design, enabling users to access
the type of regulatory event and the evidence for the interac-
tion by clicking on the edge (Figure 1 ). Additionally, all API
functions have been updated to fully support the new net-
work type, which can be accessed by specifying the parameter
network_type=regulatory in the API call.
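The parameter can be exercised from any HTTP client; a minimal sketch that builds such an API request URL (the /api/tsv/network endpoint layout and the carriage-return identifier separator follow the public STRING API conventions, and should be treated as assumptions rather than something this text guarantees):

```python
from urllib.parse import urlencode

def string_network_url(identifiers, species=9606, network_type="regulatory"):
    # Build a STRING /network API request URL. Identifiers are joined
    # with carriage returns, as the public API expects; the endpoint
    # layout is an assumption based on the documented API, not on this
    # article.
    params = {
        "identifiers": "\r".join(identifiers),
        "species": species,
        "network_type": network_type,
    }
    return "https://string-db.org/api/tsv/network?" + urlencode(params)
```

Requesting the resulting URL returns the regulatory network for the given proteins in tab-separated form; swapping `network_type` for `physical` or `functional` selects the other two network modes described above.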
Expanded co-expression networks
The co-expression channel in STRING identifies genes with
similar expression profiles across various tissues and condi-
tions, revealing their involvement in shared biological pro-
cesses. It compiles data from large-scale gene expression stud-
ies, analyzing transcript and protein levels to detect gene pairs
with coordinated expression. This method highlights poten-
tial functional relationships between genes. Importantly, the
analysis is free from study bias, as it directly uses raw omics
data instead of relying on manually curated or experimental
datasets, providing a more precise and objective view of gene
interactions.
For the upcoming version 12.5, we are expanding the
co-expression channel by generating additional networks
through the application of FAVA (Functional Association of
Variational Autoencoders) ( 25 ) on a wider range of single-
cell RNA sequencing data. Specifically, we are incorporating
single-cell data from the cellxgene Atlas ( 33 ) to enhance the
human and mouse networks and from the EBI Single Cell Ex-
pression Atlas to extend coverage to more model organisms
( 34 ). This expansion will provide more detailed, organism-
specific co-expression maps, enabling more precise identifi-
cation of gene co-expression patterns and cross-species com-
parisons, particularly for development, immune response and
disease-related gene expression dynamics.
Improved enrichment analysis
Gene set enrichment analysis is a critical component of high-
throughput biological studies, providing key insights into the
biological characteristics of datasets. This analysis is pivotal in
identifying essential aspects such as biological processes, path-
ways and phenotypes, which are crucial for understanding the
biological implications of experiments. Typically, overrepre-
sentation analysis is employed, wherein a subset of data ex-
ceeding a specific cutoff (empirical or customary) is compared
against an experimental background. By default, STRING
uses the whole genome / proteome background; however, it
is recommended that users provide a custom background
list representing a more realistic universe of genes / proteins
detected by their assay ( 35 ). Alternatively, the entire sorted
dataset can be analyzed using gene set enrichment analysis to
determine whether the distribution of genes at the top, bottom
or both ends deviates significantly from what would be ex-
pected by chance. STRING supports both of these enrichment
methods. The most recent update to STRING introduces vari-
ous improvements, including a new false discovery rate (FDR)
correction, enhanced filtering options and a new graphical vi-
sualization.
FDR correction
In its analyses, STRING performs a test for each path-
way (‘term’) within a given pathway collection (‘category’).
Like many other enrichment tools, STRING employs the
Benjamini–Hochberg correction ( 36 ) to adjust for multiple
hypothesis testing. This statistical correction is essential for
managing the FDR effectively. However, the larger the term
count and the greater the diversity of the terms, the higher
the required correction and the lower the chance of exceed-
ing the alpha level (significance threshold) for a given term.
One method used to address this issue involves restricting the
analysis to a subset of terms, such as ‘GO slims’ that focus
on broad, high-level terms with a simplified ontology struc-
ture ( 16 ). However, this approach has some disadvantages: it
might exclude terms particularly relevant to a tested gene set,
it may not cover all functions by design and it is mainly ap-
plicable to hierarchical classification systems such as the Gene
Ontology.
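The Benjamini–Hochberg correction mentioned above is straightforward to implement; a minimal sketch computing adjusted p-values with the standard step-up monotonicity enforcement:

```python
def benjamini_hochberg(pvals):
    # Step-up Benjamini-Hochberg: the adjusted p-value at rank i is
    # the minimum over ranks j >= i of p_(j) * m / j, so adjusted
    # values are monotone in the ranked p-values.
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])
    adjusted = [0.0] * m
    running_min = 1.0
    for rank, idx in reversed(list(enumerate(order, start=1))):
        running_min = min(running_min, pvals[idx] * m / rank)
        adjusted[idx] = running_min
    return adjusted
```

A term is reported as enriched when its adjusted p-value falls below the chosen FDR threshold; as the text notes, the more terms are tested, the larger the `m / rank` penalty each term must overcome.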
STRING utilizes several hierarchical ontologies, such as
the Brenda Tissue Ontology, Disease Ontology, Human Phe-
notype Ontology and the aforementioned Gene Ontology;
they typically contain many more leaf-ward nodes than nodes
closer to the root. The parental nodes must contain all the
genes included in the child terms, creating an imbalance in
the ontology with many more smaller terms and fewer larger
terms. Removing smaller terms can substantially increase sta-
tistical power . However , simply eliminating all small terms
could negatively impact discoverability, especially for small
query gene sets where the smaller terms might be the most rel-
evant for biological interpretation. The ontologies are usually
large, with > 10 000 terms (and therefore tests) in the Biolog-
ical Process branch of the Gene Ontology tree. Such strong
corrections have the effect that for larger query gene sets, or
for backgrounds that cover only part of the proteome, it might
not be statistically possible to detect enrichments for a subset
of the terms; these can therefore be removed from considera-
tion.
In its new version, STRING takes a more flexible
approach—by only testing terms that have a priori any statis-
tical potential for enrichment. This decision is based on sev-
eral parameters: the size of the term, the size of the query set,
the background size and the number of tests conducted. By
calculating the smallest and largest term sizes that could the-
oretically be enriched given the FDR correction, STRING can
determine which terms are viable for inclusion in the anal-
ysis. Terms outside these bounds are excluded from testing.
This methodical exclusion significantly enhances the statisti-
cal power of the enrichment analysis, without omitting any
terms that could be enriched. For smaller query sets, this strat-
egy will maintain all terms, as even the smallest tested sets
(term size = 2) might still be theoretically enriched. How-
ever, for larger, less specific sets or for tests against a cus-
tom smaller background, this approach markedly improves
the statistical power, enabling STRING to perform more fo-
cused analyses without the drawbacks of increased false posi-
tive rates or omitting potentially biologically interesting terms.
Although this method leverages the characteristics of ontolo-
gies, it does not require a hierarchy (parent–child relationships
between the terms) to function and can be applied to any
pathway / term collection with an imbalance of small terms,
such as PMID-derived gene sets and STRING neighborhood
clusters.
Term filtering
One key insight from our online user surveys is that the of-
ten overwhelming number of enrichment results can make it
challenging to discern the most relevant enriched terms. We
have thus enhanced the display of enrichment results, allow-
ing users to filter and sort them based on (i) FDR, (ii) strength,
(iii) signal, (iv) term size and (v) term similarity.
The newest additions to our filtering options are the ‘term
similarity’ and ‘signal’ filters. The ‘term similarity’ filter uses
the Jaccard index to measure the similarity between the gene
sets of terms within the same category. The method proceeds
by sorting the terms according to their P -values, prioritizing
those with the strongest statistical significance. The algorithm
then iterates through the list of terms; any term that exhibits
a similarity exceeding a predetermined, user-defined threshold
relative to terms already included in the results is subsequently
excluded. This approach requires no additional information
beyond what is already available in the dataset and can be
uniformly applied across all categories of terms. The ‘signal’
filter is defined as a weighted harmonic mean of the ratio of
observed to expected gene occurrences in an enriched term
and its −log(FDR). It balances the impact of both metrics,
provides a more intuitive ordering of enriched terms and has
been implemented as the default sorting criterion.
These filters are designed to exclude potentially less relevant results, such as terms that are marginally significant, small, or show substantial overlap in gene content. This enhancement streamlines the search for relevant observations, helping users focus on the most meaningful results.
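The two newest filters can be sketched in a few lines. The greedy redundancy filter follows the procedure described above; for the 'signal' score, the exact weighting STRING uses is not stated in the text, so the `weight` parameter here is illustrative, as are the term names and gene sets.

```python
from math import log10

def jaccard(a, b):
    """Jaccard index between two gene sets."""
    return len(a & b) / len(a | b)

def signal(observed, expected, fdr, weight=0.5, eps=1e-300):
    """Weighted harmonic mean of the observed/expected enrichment ratio
    and -log10(FDR). `weight` is an illustrative parameter; the text
    does not specify the weights STRING uses."""
    ratio = observed / expected
    neglog = -log10(max(fdr, eps))
    return 1.0 / (weight / ratio + (1.0 - weight) / neglog)

def filter_redundant(terms, threshold=0.5):
    """Greedy redundancy filter: sort terms by P-value, then drop any
    term whose gene set exceeds `threshold` Jaccard similarity to a
    term already kept."""
    kept = []
    for term in sorted(terms, key=lambda t: t["p"]):
        if all(jaccard(term["genes"], k["genes"]) <= threshold for k in kept):
            kept.append(term)
    return kept

# Toy enrichment results: the broader "cell cycle" term is redundant
# with the more significant "mitotic cell cycle" term and is dropped.
terms = [
    {"name": "mitotic cell cycle", "p": 1e-8, "genes": {"CDK1", "CCNB1", "PLK1"}},
    {"name": "cell cycle", "p": 1e-6, "genes": {"CDK1", "CCNB1", "PLK1", "CDC20"}},
    {"name": "DNA repair", "p": 1e-4, "genes": {"BRCA1", "RAD51"}},
]
kept = [t["name"] for t in filter_redundant(terms)]
```

Because the filter keeps the most significant representative of each redundant group, it needs only the gene sets and P-values already present in every enrichment category.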
Graphical visualization
In response to user feedback requesting more intuitive, graphical visualizations of the detected enrichments, STRING has now adopted an interactive dot plot as the primary display method for enrichment analysis outcomes. We chose the dot plot for its clarity in representing enriched functions across three dimensions: (i) the enrichment signal along the X-axis, (ii) the FDR, indicated through the color coding of the dots, and (iii) the term's protein count in the network, represented by the size of each dot. The terms (listed on the Y-axis) are ranked by their enrichment signal or, if grouped by similarity, by the enrichment signal of their group first (Figure 2). This visualization reflects all filter settings chosen by the user to manage the volume of results displayed. The plots can additionally be customized in the interface, and the resulting graphs can be downloaded as PNG for immediate use or as SVG for further modification.
Downloaded from https://academic.oup.com/nar/article/53/D1/D730/7903368 by guest on 11 January 2026 | {
"author": "Szklarczyk Damian, Nastou Katerina, Koutrouli Mikaela, Kirsch Rebecca, Mehryary Farrokh, Hachilif Radja, Hu Dewei, Peluso Matteo E., Huang Qingyao, Fang Tao, Doncheva Nadezhda T., Pyysalo Sampo, Bork Peer, Jensen Lars J., vonMering Christian",
"creationDate": "D:20241230125932+05'30'",
"creationdate": "2024-12-30T12:59:32+05:30",
"creator": "OUP",
"file_path": "/home/donbr/open-biosciences/biosciences-research/data/raw/gkae1113.pdf",
"format": "PDF 1.4",
"keywords": "",
"modDate": "D:20260111010257+00'00'",
"moddate": "2026-01-11T01:02:57+00:00",
"page": 4,
"producer": "Acrobat Distiller 24.0 (Windows); modified using iTextSharp 5.5.10 ©2000-2016 iText Group NV (AGPL-version); modified using iTextSharp.LGPLv2.Core 3.7.4.0",
"source": "/home/donbr/open-biosciences/biosciences-research/data/raw/gkae1113.pdf",
"subject": "DOI: 10.1093/nar/gkae1113 , 53, Database issue, 18-11-2024. Abstract: Proteins cooperate, regulate and bind each other to achieve their functions. Understanding the complex network of their interactions is essential for a systems-level description of cellular processes. The STRING database compiles, scores and integrates protein–protein association information drawn from experimental assays, computational predictions and prior knowledge. Its goal is to create comprehensive and objective global networks that encompass both physical and functional interactions. Additionally, STRING provides supplementary tools such as network clustering and pathway enrichment analysis. The latest version, STRING 12.5, introduces a new ‘regulatory network’, for which it gathers evidence on the type and directionality of interactions using curated pathway databases and a fine-tuned language model parsing the literature. This update enables users to visualize and access three distinct network types—functional, physical and regulatory—separately, each applicable to distinct research needs. In addition, the pathway enrichment detection functionality has been updated, with better false discovery rate corrections, redundancy filtering and improved visual displays. The resource now also offers improved annotations of clustered networks and provides users with downloadable network embeddings, which facilitate the use of STRING networks in machine learning and allow cross-species transfer of protein information. The STRING database is available online at https://string-db.org/.",
"title": "The STRING database in 2025: protein networks with directionality of regulation",
"total_pages": 8,
"trapped": ""
} |
Nucleic Acids Research , 2025, Vol. 53, Database issue
D 735
Figure 2. Enrichment analysis interface from the STRING database website. Left: Schematic of the enrichment analysis tab highlighting various sections of the webpage. Top right: A zoomed-in view of the analysis table with two user-highlighted terms (colored lines). Bottom right: A visualization of the enrichment depicting the two terms highlighted in corresponding colors.
Furthermore, a distinctive feature of STRING's visualization is the incorporation of similarity groups, which visually group related terms on the plot, adding an additional dimension to the enrichment results. These groups utilize the computed similarities to cluster terms together, highlighting relationships between the terms and aiding the analysis. The clustering employs the average linkage method based on the Jaccard index, with a user-specified cutoff applied to form groups that, on average, share the specified similarity. The groups are sorted by the maximum signal of their terms, with each term within a group subsequently sorted by its individual signal. This grouping is visually highlighted on the right side of the plot.
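The grouping step can be sketched with a small agglomerative average-linkage clustering over Jaccard distances. This is an illustrative re-implementation with toy gene sets; STRING's production code may differ in details.

```python
def jaccard_distance(a, b):
    """1 minus the Jaccard index of two gene sets."""
    return 1.0 - len(a & b) / len(a | b)

def average_linkage_groups(gene_sets, cutoff=0.5):
    """Repeatedly merge the closest pair of groups (smallest average
    pairwise Jaccard distance) until no pair is closer than 1 - cutoff,
    so that members of a group share, on average, at least `cutoff`
    similarity. Returns groups as lists of indices into `gene_sets`."""
    n = len(gene_sets)
    d = [[jaccard_distance(gene_sets[i], gene_sets[j]) for j in range(n)]
         for i in range(n)]
    groups = [[i] for i in range(n)]

    def avg_dist(g1, g2):
        return sum(d[i][j] for i in g1 for j in g2) / (len(g1) * len(g2))

    while len(groups) > 1:
        dist, a, b = min((avg_dist(g1, g2), a, b)
                         for a, g1 in enumerate(groups)
                         for b, g2 in enumerate(groups) if a < b)
        if dist > 1.0 - cutoff:
            break
        groups[a] = groups[a] + groups[b]
        del groups[b]
    return groups

# Two near-identical terms and one unrelated term (toy data): the
# first two are grouped, the third stays on its own.
sets = [{"CDK1", "CCNB1", "PLK1"},
        {"CDK1", "CCNB1", "PLK1", "CDC20"},
        {"BRCA1", "RAD51"}]
groups = average_linkage_groups(sets, cutoff=0.5)
```

Sorting the resulting groups by the maximum signal of their members, as described above, then yields the ordering shown on the plot.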
STRING clustering and gene set descriptions
STRING offers network clustering options for user-submitted gene lists. This feature visually connects nodes that are more interconnected with each other than with other nodes, reflecting the functional modularity of the user's gene list. By grouping proteins that closely interact, clustering aids in the discovery of functional modules and facilitates hypothesis generation. Clustering is based on the connectivity between nodes, incorporating edge weights (combined scores) reflecting the confidence level of the interactions. Only visible edges (those corresponding to active channels with confidence higher than the user-specified cutoff) are considered, ensuring that clustering is consistent with the network being shown.
Users have the choice of two different clustering algorithms. 'K-means clustering' is implemented via the k-medoids algorithm [pyclustering package (37)], allowing users to specify the desired number of clusters (k). This method forms clusters based on a distance matrix derived from the most probable paths between nodes. In case of multiple disconnected components within the network, STRING aims for a balanced distribution of cluster sizes by iteratively dividing the largest components first. In contrast, 'MCL clustering', implemented with standalone binaries (release 14-137), automatically generates natural divisions based on the network's inherent structure. It utilizes the Markov cluster algorithm (38) to simulate stochastic flow in graphs, identifying groups of highly interconnected nodes as clusters based on their combined STRING scores. The user-controlled inflation parameter influences the granularity of the clustering. This method is particularly effective in capturing the true modular nature of biological networks, allowing for intuitive groupings that correspond to biological functions and relationships.
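The MCL option can be illustrated with a minimal pure-Python Markov clustering sketch. STRING itself uses the standalone MCL binaries, so this is only a didactic approximation of the expansion/inflation cycle and of how the inflation parameter controls granularity.

```python
def mcl(adj, inflation=2.0, iterations=50):
    """Minimal Markov cluster algorithm (didactic, not the MCL binary).
    Alternates expansion (matrix squaring, spreading random-walk flow)
    and inflation (element-wise power, strengthening strong flows);
    `inflation` controls the granularity of the clustering."""
    n = len(adj)
    m = [[float(adj[i][j]) for j in range(n)] for i in range(n)]
    for i in range(n):
        m[i][i] = max(m[i][i], 1.0)        # self-loops stabilize convergence

    def normalize(mat):
        for j in range(n):                  # make each column stochastic
            s = sum(mat[i][j] for i in range(n))
            for i in range(n):
                mat[i][j] /= s

    normalize(m)
    for _ in range(iterations):
        m = [[sum(m[i][k] * m[k][j] for k in range(n)) for j in range(n)]
             for i in range(n)]                         # expansion
        m = [[x ** inflation for x in row] for row in m]  # inflation
        normalize(m)

    clusters = set()                        # read clusters off attractor rows
    for row in m:
        members = tuple(j for j in range(n) if row[j] > 1e-6)
        if members:
            clusters.add(members)
    return sorted(clusters)

# Two triangles (0-1-2 and 3-4-5) joined by a single bridge edge 2-3;
# the weak bridge dissolves and the triangles emerge as clusters.
adj = [[0, 1, 1, 0, 0, 0],
       [1, 0, 1, 0, 0, 0],
       [1, 1, 0, 1, 0, 0],
       [0, 0, 1, 0, 1, 1],
       [0, 0, 0, 1, 0, 1],
       [0, 0, 0, 1, 1, 0]]
clusters = mcl(adj)
```

Raising the inflation parameter makes the element-wise power sharper, so flow concentrates faster and the algorithm produces smaller, tighter clusters, which is the behavior exposed to STRING users.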
Researchers often analyze the biological context of the obtained clusters to interpret and draw conclusions from network data. To aid this process, we have developed a novel gene set naming feature that significantly enhances the interpretability of cluster analyses. This feature automatically assigns the best description to each cluster based on STRING's robust statistical enrichment analysis, simplifying the often cumbersome task of manually interpreting cluster data. It ranks the enriched terms by their enrichment signal (see the 'Improved enrichment analysis' section), choosing the primary, secondary and tertiary names among various enrichment categories. This prioritization ensures that the names reflect the most statistically significant and biologically pertinent attributes of each cluster, providing a clear, immediate understanding of its functional characteristics. The enrichment primarily draws from categories such as Gene Ontology Biological Processes, while incorporating a range of other categories for broader annotations. In case the enrichment analysis yields no significant results, or if the cluster contains only one gene, STRING assigns canonical gene names as descriptions to ensure that clusters are easily identifiable. This functionality extends beyond cluster analysis and is applicable to any gene set. As such, it is also available through an API, complementing our suite of other API methods. The new API function, named geneset_description, requires only a set of genes as input and automatically performs enrichment analysis. It returns up to three descriptions (primary, secondary and tertiary) based on relevance and availability, identical to the cluster naming in the user interface. These descriptions are filled sequentially, with the 'primary' always representing the most relevant term. One application of this API is in the stringApp Cytoscape plugin, which utilizes it to automatically overlay the assigned descriptions onto each network cluster after clustering is executed.
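A request to this method can be sketched by building the URL in STRING's usual api/&lt;format&gt;/&lt;method&gt; pattern. The method name geneset_description comes from the text above, but the parameter names (identifiers joined by carriage returns, an NCBI taxon as species) are assumptions mirroring STRING's other endpoints and should be checked against the current API documentation.

```python
from urllib.parse import urlencode

def geneset_description_url(genes, species=9606, fmt="json"):
    """Build a request URL for the geneset_description API method,
    following STRING's api/<format>/<method> URL pattern. Parameter
    names are assumptions based on STRING's other endpoints."""
    params = {"identifiers": "\r".join(genes), "species": species}
    return (f"https://string-db.org/api/{fmt}/geneset_description?"
            + urlencode(params))

url = geneset_description_url(["CDK1", "CCNB1", "PLK1"])
# Fetching this URL should return up to three descriptions
# (primary, secondary, tertiary), as described above.
```

Only the URL is constructed here; issuing the request (e.g. with urllib.request or requests) and parsing the JSON response would follow the same pattern as STRING's other API methods.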
Network and protein embeddings
To facilitate the use of STRING in machine learning applications, users can now directly download precomputed ProtT5 (39) sequence embeddings and cross-species protein network embeddings derived from STRING, for all eukaryotes, and utilize the embeddings on their own labeled datasets. The embeddings encode information from two aspects: sequence embeddings can capture protein domains and shorter sequence motifs, while network embeddings can complement these with information on cellular organization, such as protein complexes and pathways, that may not be evident from sequence data alone.
While protein sequence embeddings are inherently comparable across species due to the universal nature of amino acid sequences, the primary obstacle to using network embeddings is ensuring that network embeddings from different species are directly comparable. This challenge arises because protein networks are independent for each species, and traditional network embedding methods are not designed to address cross-species comparability. The lack of comparability between network embeddings prevents the effective transfer of knowledge and findings across different species. To address this, we developed a method to align eukaryotic network embeddings across species using orthologous relationships, based on FedCoder (40). This alignment technique creates a unified embedding space where proteins from different eukaryotic species can be directly compared. As a result, it enhances cross-species protein predictions, particularly in tasks such as subcellular localization and function prediction.
In the protein machine learning field, having precomputed sequence embeddings and cross-species compatible network embeddings from STRING enables researchers to use these resources directly, eliminating the need to calculate embeddings themselves. This strategy not only reduces the energy footprint resulting from redundant calculations across research groups, but also enhances reproducibility in computational biology research, promoting more sustainable and consistent scientific practices.
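What the aligned embedding space enables can be sketched with nearest-neighbour annotation transfer by cosine similarity. The vectors and labels below are toy data standing in for real high-dimensional embeddings; this illustrates the idea of cross-species transfer, not STRING's prediction pipeline.

```python
from math import sqrt

def cosine(u, v):
    """Cosine similarity between two embedding vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = sqrt(sum(a * a for a in u))
    nv = sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def transfer_label(query_vec, target_vecs, target_labels):
    """Nearest-neighbour annotation transfer in a shared embedding
    space: give the query protein the label of its most similar
    target-species protein."""
    best = max(range(len(target_vecs)),
               key=lambda i: cosine(query_vec, target_vecs[i]))
    return target_labels[best]

# Toy 3-d vectors standing in for aligned network embeddings: a yeast
# protein is compared against annotated human proteins.
yeast_protein = [0.9, 0.1, 0.0]
human_vecs = [[1.0, 0.0, 0.1], [0.0, 1.0, 0.0]]
human_labels = ["nucleus", "plasma membrane"]
predicted = transfer_label(yeast_protein, human_vecs, human_labels)
```

Without the alignment step described above, embeddings from independently trained per-species models would not live in a common space, and this kind of direct comparison would be meaningless.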
Data availability
The STRING database is freely available online at https://string-db.org/.
Acknowledgements
The authors wish to thank Yan P. Yuan (EMBL Heidelberg)
and Dandan Xue (University of Copenhagen) for IT support.
Thomas Rattei (University of Vienna) is thanked for extensive
computational work toward resolving orthology relations. We
thank the CSC—IT Center for Science, Finland, for generous
computational resources.
Funding
Swiss Institute of Bioinformatics; Novo Nordisk Foundation [NNF14CC0001, NNF20SA0035590]; European Molecular Biology Laboratory (EMBL Heidelberg); HORIZON EUROPE Marie Skłodowska-Curie Actions [101023676 to K.N.]; Academy of Finland [332844 to F.M. and S.P.]. Funding for open access charge: University of Zurich.
Conflict of interest statement
None declared.
References
1. Marsh, J.A. and Teichmann, S.A. (2015) Structure, dynamics,
assembly, and evolution of protein complexes. Annu. Rev.
Biochem., 84 , 551–575.
2. Cesnik, A. , Schaffer, L.V. , Gaur, I. , Jain, M. , Ideker, T. and
Lundberg,E. (2024) Mapping the multiscale proteomic
organization of cellular and disease phenotypes. Annu. Rev.
Biomed. Data Sci., 7 , 369–389.
3. Milacic, M. , Beavers, D. , Conley, P. , Gong, C. , Gillespie, M. , Griss, J. ,
Haw, R. , Jassal, B. , Matthews, L. , May, B. , et al. (2024) The
Reactome Pathway Knowledgebase 2024. Nucleic Acids Res., 52 ,
D672–D678.
4. Kanehisa, M. , Furumichi, M. , Sato, Y. , Kawashima, M. and
Ishiguro-Watanabe,M. (2023) KEGG for taxonomy-based analysis
of pathways and genomes. Nucleic Acids Res. , 51 , D587–D592.
5. Oughtred, R. , Stark, C. , Breitkreutz, B.-J. , Rust, J. , Boucher, L. ,
Chang, C. , Kolas, N. , O’Donnell, L. , Leung, G. , McAdam, R. , et al.
(2019) The BioGRID interaction database: 2019 update. Nucleic
Acids Res., 47 , D529–D541.
6. Del Toro, N. , Shrivastava, A. , Ragueneau, E. , Meldal, B. , Combe, C. ,
Barrera, E. , Perfetto, L. , How, K. , Ratan, P. , Shirodkar, G. , et al.
(2022) The IntAct database: efficient access to fine-grained
molecular interaction data. Nucleic Acids Res. , 50 , D648–D653.
7. Licata, L. , Briganti, L. , Peluso, D. , Perfetto, L. , Iannuccelli, M. ,
Galeota, E. , Sacco, F. , Palma, A. , Nardozza, A.P. , Santonico, E. , et al.
(2012) MINT, the molecular interaction database: 2012 update.
Nucleic Acids Res., 40 , D857–D861.
8. Meldal, B.H.M. , Perfetto, L. , Combe, C. , Lubiana, T. , Ferreira
Cavalcante, J.V. , Bye-A-Jee, H. , Waagmeester, A. , Del-Toro, N. ,
Shrivastava, A. , Barrera, E. , et al. (2022) Complex Portal 2022:
new curation frontiers. Nucleic Acids Res. , 50 , D578–D586.
9. Lo Surdo, P. , Iannuccelli, M. , Contino, S. , Castagnoli, L. , Licata, L. ,
Cesareni, G. and Perfetto, L. (2023) SIGNOR 3.0, the SIGnaling
network open resource 3.0: 2022 update. Nucleic Acids Res., 51 ,
D631–D637.
10. Szklarczyk, D. , Kirsch, R. , Koutrouli, M. , Nastou, K. , Mehryary, F. ,
Hachilif, R. , Gable, A.L. , Fang, T. , Doncheva, N.T. , Pyysalo, S. , et al.
(2023) The STRING database in 2023: protein–protein
association networks and functional enrichment analyses for any
sequenced genome of interest. Nucleic Acids Res. , 51 , D638–D646.
11. Warde-Farley, D. , Donaldson, S.L. , Comes, O. , Zuberi, K. ,
Badrawi, R. , Chao, P. , Franz, M. , Grouios, C. , Kazi, F. , Lopes, C.T. ,
et al. (2010) The GeneMANIA prediction server: biological
network integration for gene prioritization and predicting gene
function. Nucleic Acids Res. , 38 , W214–W220.
12. Persson, E. , Castresana-Aguirre, M. , Buzzao, D. , Guala, D. and
Sonnhammer,E.L.L. (2021) FunCoup 5: functional association
networks in all domains of life, supporting directed links and
tissue-specificity. J. Mol. Biol., 433 , 166835.
13. Kim, C.Y. , Baek, S. , Cha, J. , Yang, S. , Kim, E. , Marcotte, E.M. , Hart, T.
and Lee,I. (2022) HumanNet v3: an improved database of human
gene networks for disease research. Nucleic Acids Res., 50 ,
D632–D639.
14. Huang, J.K. , Carlin, D.E. , Yu, M.K. , Zhang, W. , Kreisberg, J.F. ,
Tamayo, P. and Ideker, T. (2018) Systematic evaluation of molecular
networks for discovery of disease genes. Cell Syst. , 6 , 484–495.e5.
15. Wright, S.N. , Colton, S. , Schaffer, L.V. , Pillich, R.T. , Churas, C. ,
Pratt, D. and Ideker, T. (2024) State of the interactomes: an
evaluation of molecular networks for generating biological
insights. bioRxiv doi: https://doi.org/10.1101/2024.04.26.587073, 29 April 2024, preprint: not peer reviewed.
16. Gene Ontology Consortium, Aleksander, S.A. , Balhoff, J. , Carbon, S. ,
Cherry, J.M. , Drabkin, H.J. , Ebert, D. , Feuermann, M. , Gaudet, P. ,
Harris, N.L. , et al. (2023) The Gene Ontology knowledgebase in
2023. Genetics , 224 , iyad031.
17. Liberzon, A. , Birger, C. , Thorvaldsdóttir, H. , Ghandi, M. ,
Mesirov, J.P. and Tamayo, P. (2015) The Molecular Signatures
Database hallmark gene set collection. Cell Syst. , 1 , 417–425.
18. Amberger, J.S. , Bocchini, C.A. , Schiettecatte, F. , Scott, A.F. and
Hamosh,A. (2015) OMIM.org: Online Mendelian Inheritance in
Man (OMIM ®), an online catalog of human genes and genetic
disorders. Nucleic Acids Res. , 43 , D789–D798.
19. Snel, B. , Bork, P. and Huynen, M.A. (2002) The identification of
functional modules from the genomic association of genes. Proc.
Natl Acad. Sci. U.S.A., 99 , 5890–5895.
20. Guala, D. , Ogris, C. , Müller, N. and Sonnhammer, E.L.L. (2020)
Genome-wide functional association networks: background, data
& state-of-the-art resources. Brief. Bioinform., 21 , 1224.
21. Szklarczyk, D. , Gable, A.L. , Nastou, K.C. , Lyon, D. , Kirsch, R. ,
Pyysalo, S. , Doncheva, N.T. , Legeay, M. , Fang, T. , Bork, P. , et al.
(2021) The STRING database in 2021: customizable
protein–protein networks, and functional characterization of
user-uploaded gene/measurement sets. Nucleic Acids Res., 49 ,
D605–D612.
22. von Mering,C. (2003) STRING: a database of predicted functional
associations between proteins. Nucleic Acids Res. , 31 , 258–261.
23. Enright, A.J. , Iliopoulos, I. , Kyrpides, N.C. and Ouzounis, C.A.
(1999) Protein interaction maps for complete genomes based on
gene fusion events. Nature , 402 , 86–90.
24. Franceschini, A. , Lin, J. , von Mering, C. and Jensen, L.J. (2016)
SVD-phy: improved prediction of protein functional associations
through singular value decomposition of phylogenetic profiles.
Bioinformatics , 32 , 1085–1087.
25. Koutrouli, M. , Nastou, K. , Piera Líndez, P. , Bouwmeester, R. ,
Rasmussen, S. , Martens, L. and Jensen, L.J. (2024) FAVA:
high-quality functional association networks inferred from
scRNA-seq and proteomics data. Bioinformatics , 40 , btae010.
26. Porras, P. , Barrera, E. , Bridge, A. , Del-Toro, N. , Cesareni, G. ,
Duesbury, M. , Hermjakob, H. , Iannuccelli, M. , Jurisica, I. ,
Kotlyar, M. , et al. (2020) Towards a unified open access dataset of
molecular interactions. Nat. Commun., 11 , 6144.
27. Franceschini, A. , Szklarczyk, D. , Frankild, S. , Kuhn, M. ,
Simonovic, M. , Roth, A. , Lin, J. , Minguez, P. , Bork, P. , von Mering, C. ,
et al. (2013) STRING v9.1: protein–protein interaction networks,
with increased coverage and integration. Nucleic Acids Res., 41 ,
D808–D815.
28. Yu, H. , Luscombe, N.M. , Lu, H.X. , Zhu, X. , Xia, Y. , Han, J.-D.J. ,
Bertin, N. , Chung, S. , Vidal, M. and Gerstein, M. (2004) Annotation
transfer between genomes: protein–protein interologs and
protein–DNA regulogs. Genome Res. , 14 , 1107–1118.
29. Szklarczyk, D. , Gable, A.L. , Lyon, D. , Junge, A. , Wyder, S. ,
Huerta-Cepas, J. , Simonovic, M. , Doncheva, N.T. , Morris, J.H. ,
Bork, P. , et al. (2019) STRING v11: protein–protein association
networks with increased coverage, supporting functional discovery
in genome-wide experimental datasets. Nucleic Acids Res., 47 ,
D607–D613.
30. Doncheva, N.T. , Morris, J.H. , Gorodkin, J. and Jensen, L.J. (2019)
Cytoscape StringApp: network analysis and visualization of
proteomics data. J. Proteome Res., 18 , 623–632.
31. Jensen, L.J. , Kuhn, M. , Stark, M. , Chaffron, S. , Creevey, C. , Muller, J. ,
Doerks, T. , Julien, P. , Roth, A. , Simonovic, M. , et al. (2009) STRING
8—a global view on proteins and their functional interactions in
630 organisms. Nucleic Acids Res. , 37 , D412–D416.
32. Nastou, K. , Mehryary, F. , Ohta, T. , Luoma, J. , Pyysalo, S. and
Jensen,L.J. (2024) RegulaTome: a corpus of typed, directed, and
signed relations between biomedical entities in the scientific
literature. Database , 2024 , baae095.
33. Megill, C. , Martin, B. , Weaver, C. , Bell, S. , Prins, L. , Badajoz, S. ,
McCandless, B. , Pisco, A.O. , Kinsella, M. , Griffin, F. , et al. (2021)
cellxgene: a performant, scalable exploration platform for high
dimensional sparse matrices. bioRxiv doi: https://doi.org/10.1101/2021.04.05.438318, 6 April 2021, preprint: not peer reviewed.
34. Moreno, P. , Fexova, S. , George, N. , Manning, J.R. , Miao, Z. ,
Mohammed, S. , Muñoz-Pomer, A. , Fullgrabe, A. , Bi, Y. , Bush, N. ,
et al. (2022) Expression Atlas update: gene and protein expression
in multiple species. Nucleic Acids Res. , 50 , D129–D140.
35. Timmons, J.A. , Szkop, K.J. and Gallagher, I.J. (2015) Multiple
sources of bias confound functional enrichment analysis of global
-omics data. Genome Biol. , 16 , 186.
36. Benjamini, Y. and Hochberg, Y. (1995) Controlling the false
discovery rate: a practical and powerful approach to multiple
testing. J. R. Stat. Soc. Ser. B Stat. Methodol., 57 , 289–300.
37. Novikov,A. (2019) PyClustering: data mining library. J. Open
Source Softw., 4 , 1230.
38. Van Dongen,S. (2008) Graph clustering via a discrete uncoupling
process. SIAM J. Matrix Anal. Appl., 30 , 121–141.
39. Elnaggar, A. , Heinzinger, M. , Dallago, C. , Rehawi, G. , Wang, Y. ,
Jones, L. , Gibbs, T. , Feher, T. , Angerer, C. , Steinegger, M. , et al.
(2022) ProtTrans: toward understanding the language of life
through self-supervised learning. IEEE Trans. Pattern Anal. Mach.
Intell., 44 , 7112–7127.
40. Baumgartner, M. , Dell’Aglio, D. , Paulheim, H. and Bernstein, A.
(2023) Towards the Web of Embeddings: integrating multiple
knowledge graph embedding spaces with FedCoder. J. Web
Semant., 75 , 100741.
Received: September 15, 2024. Revised: October 18, 2024. Editorial Decision: October 21, 2024. Accepted: October 29, 2024
© The Author(s) 2024. Published by Oxford University Press on behalf of Nucleic Acids Research.
This is an Open Access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted reuse, distribution, and reproduction in any medium, provided the original work is properly cited.
Research and Applications
Improving large language model applications in
biomedicine with retrieval-augmented generation: a
systematic review, meta-analysis, and clinical
development guidelines
Siru Liu, PhD1,2, Allison B. McCoy, PhD1, Adam Wright, PhD1,3
1Department of Biomedical Informatics, Vanderbilt University Medical Center, Nashville, TN 37212, United States; 2Department of Computer Science, Vanderbilt University, Nashville, TN 37212, United States; 3Department of Medicine, Vanderbilt University Medical Center, Nashville, TN 37212, United States
Corresponding author: Siru Liu, PhD, Department of Biomedical Informatics, Vanderbilt University Medical Center, 2525 West End Ave #1475, Nashville, TN 37212, United States (siru.liu@vumc.org)
Abstract
Objective: The objectives of this study are to synthesize findings from recent research on retrieval-augmented generation (RAG) and large language models (LLMs) in biomedicine and to provide clinical development guidelines to improve effectiveness.
Materials and Methods: We conducted a systematic literature review and a meta-analysis. The report was created in adherence to the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) 2020 statement. Searches were performed in 3 databases (PubMed, Embase, PsycINFO) using terms related to "retrieval augmented generation" and "large language model," for articles published in 2023 and 2024. We selected studies that compared baseline LLM performance with RAG performance. We developed a random-effects meta-analysis model, using the odds ratio as the effect size.
Results: Among 335 studies, 20 were included in this literature review. The pooled effect size was 1.35, with a 95% confidence interval of 1.19-1.53, indicating a statistically significant effect (P = .001). We reported clinical tasks, baseline LLMs, retrieval sources and strategies, as well as evaluation methods.
Discussion: Building on our literature review, we developed Guidelines for Unified Implementation and Development of Enhanced LLM Applications with RAG in Clinical Settings to inform clinical applications using RAG.
Conclusion: Overall, RAG implementation was associated with an odds ratio of 1.35 for improved performance compared with baseline LLMs. Future research should focus on (1) system-level enhancement: the combination of RAG and agents; (2) knowledge-level enhancement: deep integration of knowledge into the LLM; and (3) integration-level enhancement: integrating RAG systems within electronic health records.
Key words: large language model; retrieval augmented generation; systematic review; meta-analysis.
Introduction
Large language models (LLMs) have shown remarkable performance in question-answering, summarization, and text generation.1 Given this, researchers have explored their potential in biomedical areas.2 For example, several studies reported the ability of LLMs to answer patient messages,3 to analyze alert logic in clinical decision support,4 and to make discharge summaries more readable to patients.5 However, several challenges remain.
LLMs are trained on fixed datasets, which restrict their knowledge to information available up to the training cut-off date. For example, GPT-4o's training data only includes information up to October 2023, making it unable to respond accurately to findings that emerged afterward. LLM training datasets are also generally broad and lack the specificity required for biomedical applications. Finally, not all sources used to train LLMs are reliable and trustworthy.
To address these limitations, researchers have applied fine-tuning and retrieval-augmented generation (RAG) techniques. Fine-tuning can adapt LLMs to specific domains, but it is resource-intensive and does not allow for real-time updates. In contrast, RAG maintains the original LLM architecture while incorporating relevant context directly into queries, offering more flexibility and control. In addition, RAG's unique advantage in biomedical applications lies in its ability to adapt to dynamic environments by delivering up-to-date information and efficiently integrating external knowledge sources with high interpretability.6
Another limitation of using LLMs directly is the risk of hallucination, where the model generates incorrect or fabricated information.7 To mitigate such issues, researchers have proposed RAG as a solution that integrates up-to-date, relevant information, enhancing both the accuracy and reliability of LLM-generated responses.8,9 For example, when ChatGPT
Received: November 19, 2024; Revised: December 17, 2024; Editorial Decision: January 2, 2025; Accepted: January 3, 2025
© The Author(s) 2025. Published by Oxford University Press on behalf of the American Medical Informatics Association.
This is an Open Access article distributed under the terms of the Creative Commons Attribution-NonCommercial License (https://creativecommons.org/licenses/
by-nc/4.0/), which permits non-commercial re-use, distribution, and reproduction in any medium, provided the original work is properly cited. For commercial
re-use, please contact reprints@oup.com for reprints and translation rights for reprints. All other permissions can be obtained through our RightsLink service via
the Permissions link on the article page on our site—for further information please contact journals.permissions@oup.com.
Journal of the American Medical Informatics Association, 2025, 32(4), 605–615
https://doi.org/10.1093/jamia/ocaf008
Advance access publication 15 January 2025
Downloaded from https://academic.oup.com/jamia/article/32/4/605/7954485 by guest on 11 January 2026
was asked about medications for peripheral artery disease patients without increased bleeding risk, it initially omitted low-dose rivaroxaban. After integrating retrieved text from the 2024 American College of Cardiology/American Heart Association Guideline for the Management of Lower Extremity Peripheral Artery Disease,10 the model correctly recommended rivaroxaban.
Several guidelines exist for evaluating Artificial Intelligence (AI) applications and LLMs in healthcare, including DECIDE-AI (Developmental and Exploratory Clinical Investigations of DEcision support systems driven by Artificial Intelligence),11 CLAIM (Checklist for Artificial Intelligence in Medical Imaging),12 and CONSORT-AI (Consolidated Standards of Reporting Trials-AI).13 In addition, Tam et al. introduced QUEST, a framework specifically for human evaluation of LLMs in healthcare.14 However, these guidelines do not cover RAG applications in clinical settings, emphasizing the need for a more specific guideline.
Despite the promise of RAG in improving LLM performance in clinical settings, there is limited understanding of its overall effectiveness compared with baseline LLMs, its adoption in clinical domains, and optimal strategies for its development in biomedical applications. The aim of this study is to synthesize findings from recent research on RAG and LLMs in biomedicine and provide clinical development guidelines to improve effectiveness as well as transparency in future research.
Materials and methods
Study design
We conducted a systematic literature review. The report was created in adherence to the Preferred Reporting Items for Systematic Reviews and Meta-Analyses 2020 checklist (File S1).15 We also performed a meta-analysis following the Cochrane Handbook guidelines.16
Screening papers
We searched 3 databases (PubMed, Embase, and PsycINFO), using terms related to "retrieval augmented generation" and "large language model." PubMed and Embase were selected based on recommendations from the Cochrane Handbook, as they are widely recognized for their extensive coverage of biomedical literature.17 PsycINFO was included to capture articles from the psychological and behavioral sciences. To maintain a high standard of quality and reliability, we focused on peer-reviewed articles and excluded preprints. The specific search terms used for each database are provided in File S2. Given that ChatGPT was released on November 30, 2022, we set the publication filter to search papers published in 2023 and 2024. The search was performed on December 12, 2024. The inclusion criteria were: (1) the study must compare baseline LLM performance with RAG performance and (2) the study must address a biomedical question. The exclusion criteria were: (1) literature reviews, editorial comments, or viewpoint papers, (2) studies focusing on LLMs in languages other than English, or (3) studies centered on a multi-agent system without a focus on RAG. SL screened titles and abstracts, then conducted a full-text review of papers meeting the criteria.
Data extraction
For each included study, we extracted the following information: author, title, publication year, journal, clinical task, and specialty. Regarding RAG techniques, we gathered details about the baseline LLM, retrieval sources, and strategies used in the pre-retrieval, retrieval, and post-retrieval stages. For evaluation, we extracted the evaluation method (human, automated, or a combination of both), the number of evaluators, the evaluation dataset, and the evaluation metrics.
Meta-analysis
Effect size was defined as a metric quantifying the relationship between variables, including both direction and magnitude.18 For each included study, we calculated the effect size between baseline LLM performance and RAG-enhanced LLM performance. The outcomes focused on the performance of generation results, such as accuracy and usefulness. Metrics related to the retrieval process, cost, or speed were not included as outcomes in the meta-analysis. For continuous outcomes, we used Cohen's d, the standardized between-group mean difference (SMD), calculated as the difference in means divided by the pooled standard deviation. The standard error (SE) of the SMD was calculated using formula (1), where n1 and n2 represent the sample sizes of each group.19 For dichotomous measurements, we calculated the log-odds ratio, obtained by transforming the odds ratio (OR) with the natural logarithm, and the associated SE was calculated using formula (2), where a, b, c, and d represent the number of successful and failed events in the baseline LLM and RAG-enhanced LLM approaches. For studies reporting multiple outcomes, we used the overall outcome to calculate effect size. If no overall outcome was reported, we averaged the effect sizes of all reported outcomes. We excluded outcomes with a sample size of less than 30 to avoid small-sample bias.
$$SE_{SMD} = \sqrt{\frac{n_1 + n_2}{n_1 n_2} + \frac{SMD^2}{2(n_1 + n_2)}} \quad (1)$$

$$SE_{\log OR} = \sqrt{\frac{1}{a} + \frac{1}{b} + \frac{1}{c} + \frac{1}{d}} \quad (2)$$
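The two standard-error formulas translate directly into code. The following is a minimal Python sketch of formulas (1) and (2), together with the log odds ratio from a 2x2 table; the function names are illustrative, not from the paper:

```python
import math

def se_smd(smd: float, n1: int, n2: int) -> float:
    """Standard error of the standardized mean difference, formula (1)."""
    return math.sqrt((n1 + n2) / (n1 * n2) + smd**2 / (2 * (n1 + n2)))

def log_odds_ratio(a: int, b: int, c: int, d: int) -> float:
    """Log odds ratio from a 2x2 table: (a/b) vs (c/d), where a, b and
    c, d are success/failure counts in the two arms."""
    return math.log((a * d) / (b * c))

def se_log_or(a: int, b: int, c: int, d: int) -> float:
    """Standard error of the log odds ratio, formula (2)."""
    return math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
```

With balanced counts of 10 per cell, for example, formula (2) gives sqrt(0.4), roughly 0.632.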
We developed a random-effects meta-analysis model because of the variability in RAG architectures and evaluation datasets among the included studies. The random-effects model was used because individual study effects likely contained additional sources of variance beyond sampling error. Between-study heterogeneity was assessed using Higgins & Thompson's I² statistic, where 25% indicated low heterogeneity, 50% moderate, and 75% substantial.20
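A random-effects pool of this kind can be sketched in a few lines. The function below implements DerSimonian-Laird pooling, one common random-effects estimator (the authors used the R "meta" package, which supports several), and also reports the I² statistic described above; it is an illustrative sketch, not the paper's code:

```python
import math

def random_effects_pool(effects, ses):
    """DerSimonian-Laird random-effects pooling of study effect sizes
    (e.g. log odds ratios). Returns (pooled effect, its SE, I^2 in %)."""
    w = [1 / se**2 for se in ses]                       # fixed-effect weights
    fixed = sum(wi * yi for wi, yi in zip(w, effects)) / sum(w)
    q = sum(wi * (yi - fixed)**2 for wi, yi in zip(w, effects))  # Cochran's Q
    k = len(effects)
    c = sum(w) - sum(wi**2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - (k - 1)) / c)                  # between-study variance
    w_star = [1 / (se**2 + tau2) for se in ses]         # random-effects weights
    pooled = sum(wi * yi for wi, yi in zip(w_star, effects)) / sum(w_star)
    se_pooled = math.sqrt(1 / sum(w_star))
    i2 = max(0.0, (q - (k - 1)) / q) * 100 if q > 0 else 0.0
    return pooled, se_pooled, i2
```

When all studies agree, tau² and I² collapse to zero and the result equals the fixed-effect estimate; heterogeneous studies inflate both.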
We conducted subgroup analyses to explore performance variations across different factors. First, we analyzed the influence of the baseline LLM, referring to the foundation model (eg, GPT-4 or Llama2) that provides the core architecture for the system. Second, we examined data retrieval strategies, categorizing them as simple or complex. Simple strategies included fixed-length chunking and basic similarity search, and we performed a subgroup analysis to compare these with complex retrieval strategies. Third, we analyzed differences based on evaluation methods, distinguishing between human evaluations, such as Likert scale ratings for helpfulness and accuracy, and automatic evaluation metrics, including ROUGE-1 and BLEU. Finally, we conducted a subgroup analysis based on the type of task, classifying studies
into clinical decision-making and medical question-answering. These analyses provided insights into how variations in model architecture, retrieval strategies, evaluation methods, and task types affect system outcomes.
To visualize the meta-analysis outcomes, we generated a forest plot. This plot displayed the effect size and confidence interval for each study, as well as the pooled effect and predicted effect size. We evaluated publication bias using a contour-enhanced funnel plot to investigate small-study effects. This scatter plot had the effect size on the x-axis and the inverted SE on the y-axis, with contours indicating P-values (<.1, .05, and .01).21 Symmetry in the funnel plot suggested no publication bias, and asymmetry was quantified using Egger's regression test.22 We used the "meta" package in R to conduct the meta-analysis and perform statistical analyses.
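Egger's test can be approximated outside R as well. The sketch below, a simplified stand-in for the "meta" package's implementation, regresses each study's standardized effect (effect/SE) on its precision (1/SE); an intercept far from zero suggests small-study effects:

```python
def egger_test(effects, ses):
    """Egger's regression asymmetry test via ordinary least squares:
    regress standardized effect (effect / SE) on precision (1 / SE).
    The intercept estimates funnel-plot asymmetry (0 under no bias)."""
    y = [e / s for e, s in zip(effects, ses)]
    x = [1 / s for s in ses]
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    slope = sxy / sxx          # estimate of the underlying pooled effect
    intercept = my - slope * mx  # asymmetry term tested by Egger's test
    return intercept, slope
```

For a perfectly symmetric funnel (the same true effect at every precision), the intercept is zero; the paper's observed intercept of 1.1 is what flags potential publication bias.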
Results
Study selection
A total of 335 studies were identified from 3 databases: PubMed, Embase, and PsycINFO. After removing duplicates, 251 studies were screened. Of these, 20 studies were included in this literature review, all of which were published in 2024. One of the included studies was a conference paper.23 The flow diagram depicting the study selection process is shown in Figure 1. For each included study, the author, title, publication year, journal, clinical task, specialty, and retrieval sources are listed in Table S1 of File S2.
Meta-analysis
The pooled effect size was 1.35, with a 95% confidence interval of 1.19-1.53, indicating a statistically significant effect (P = .001). All outcomes and associated SEs are listed in File S2. The I² value was 37%, indicating low to moderate heterogeneity among the studies. The prediction interval ranged from 1.01 to 1.8. The forest plot is shown in Figure 2. The contour-enhanced funnel plot is presented in File S2. In Egger's regression test, the intercept (β0) was 1.1, with a 95% confidence interval of [0.56, 1.64] and a P-value of .001, indicating the presence of small-study effects and potential publication bias.
Figure 1. Flow diagram of included studies illustrating the systematic review process. Out of 335 records identified from PubMed, Embase, and PsycINFO, 84 duplicates were removed, leaving 251 records screened. After excluding 200 records, 51 reports were assessed for eligibility; 31 reports were excluded for reasons such as lack of RAG-LLM comparison, non-clinical focus, or commentary. Ultimately, 20 studies were included in the final review. LLM, large language model; RAG, retrieval-augmented generation.
Clinical applications of RAG
RAG techniques have been applied across a broad range of medical specialties, as shown in Table 1. These applications include clinical decision-making and medical question-answering. In clinical decision-making, RAG has supported personalized treatment,23,24 emergency triage,25 and disease management.26,27 For medical question-answering, RAG's capability has been explored to address complex treatment guideline questions,28 as well as queries focused on specific areas, such as head and neck surgery-related questions,29 and patient questions regarding diabetes.30 In the subgroup analysis, 13 studies focused on clinical decision-making (OR 1.46, 95% CI [1.16, 1.71]) and 7 studies focused on medical question-answering (OR 1.32, 95% CI [1.08, 1.63]), with no statistically significant difference observed between these 2 groups.
Baseline LLMs
The baseline LLMs varied across studies, with GPT-4 being the most common, used in 14 studies (OR: 1.58, 95% CI: 1.21-2.04). GPT-3.5, used in 6 studies, showed an OR of 1.43 (95% CI: 1.06-1.93). Llama2 was applied in 5 studies (OR: 1.25, 95% CI: 1.08-1.44).
Retrieval sources
Retrieval sources were categorized as pre-stored documents and real-time online browsing. Regarding pre-stored documents, 6 studies used clinical guidelines, such as the Emergency Severity Index (ESI) Ver.3 Field Triage.25 Five studies used academic articles from sources like PubMed abstracts or full texts, or document sets such as the Radiographics Top 10 Reading List on Gastrointestinal Imaging.31 Three studies used specialized knowledge bases, including ChatENT, OncoKB, and RareDis Corpus, while one study employed a general biomedical knowledge graph (Scalable Precision Medicine Open Knowledge Engine [SPOKE]). SPOKE
Table 1. Distribution of RAG applications by medical specialty and task type.

Medical Specialty            Frequency
Internal medicine            4
General medicine             3
Oncology                     3
Emergency medicine           2
Gastroenterology             2
Otolaryngology               1
Hepatology                   1
Rare diseases                1
Orthopedics                  1
Neurology                    1
Ophthalmology                1

Task                         Frequency (%)
Clinical decision-making     13 (65%)
Medical question-answering   7 (35%)
Figure 2. A forest plot showing the odds ratio (OR) of total impacts of the RAG-enhanced system compared with the baseline LLM system in clinical
tasks. The left panel lists individual studies (e.g., Zakka et al., Murugan et al., Long et al.) along with their corresponding ORs and 95% confidence
intervals (CIs). The right panel visualizes these ORs, with square markers proportional to study weights and horizontal lines representing CIs. The overall
pooled OR is 1.35 (95% CI: 1.19–1.53), represented by a diamond at the bottom.
integrates over 40 publicly available biomedical knowledge sources across separate domains, such as genes, proteins, drugs, compounds, and diseases, along with their known relationships.32 Two studies used textbooks, such as Harrison's Principles of Internal Medicine, while 3 others utilized electronic health record (EHR) data. Additionally, Zakka et al. added over 500 markdown files from MDCalc to improve clinical calculation capabilities in the LLM.28 Two studies employed real-time online browsing to search academic sites, such as PubMed and UpToDate. The amount of retrieval resources varied across studies, ranging from a small dataset specific to 6 osteoarthritis guidelines to a large dataset of EHR data from 7 hospitals.
Retrieval strategies
Identified retrieval strategies were grouped based on the RAG stages: pre-retrieval, retrieval, and post-retrieval. Figure 3 presents an example of how RAG is applied and lists identified strategies within each stage.
In the pre-retrieval stage, 50% of studies (n = 10) reported strategies such as query rewriting, document preprocessing, and assessing the necessity of retrieval. Zakka et al. simplified queries by rephrasing text into search terms better suited for website browsing,28 while Wang et al. focused on techniques such as correcting errors, expanding abbreviations, and matching synonyms in user queries.30 Soman et al. extracted disease entities from queries and retrieved corresponding nodes from a knowledge graph.33 Document preprocessing involved removing non-textual elements from PMC papers (eg, figures, references, and author disclosures),30 extracting tables from PDFs using pdfplumber, and structuring the content with pydantic for seamless integration.25 In addition to query modification and document preprocessing, Jeong et al. fine-tuned a model to determine whether retrieval was necessary for a given query.34
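As a concrete illustration of the query-rewriting techniques reported above (abbreviation expansion, synonym matching), the sketch below shows one minimal approach. The abbreviation and synonym tables are invented for the example; a production system would draw on a clinical terminology resource rather than hard-coded maps:

```python
import re

# Hypothetical lookup tables, for illustration only.
ABBREVIATIONS = {
    "pad": "peripheral artery disease",
    "t2dm": "type 2 diabetes mellitus",
}
SYNONYMS = {
    "heart attack": "myocardial infarction",
}

def rewrite_query(query: str) -> str:
    """Pre-retrieval query rewriting: lowercase the query, expand known
    abbreviations (whole-word matches only), and map lay terms to
    canonical clinical terms before passing the query to the retriever."""
    q = query.lower()
    for abbr, full in ABBREVIATIONS.items():
        q = re.sub(rf"\b{abbr}\b", full, q)
    for lay, canonical in SYNONYMS.items():
        q = q.replace(lay, canonical)
    return q
```

For example, `rewrite_query("Medications for PAD after heart attack?")` yields "medications for peripheral artery disease after myocardial infarction?".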
During the data retrieval stage, 85% of studies (n = 17) reported strategies regarding indexing, aligning queries with documents, and ranking retrieved chunks. Chunking methods ranged from fixed-size chunks35 to recursive splits.36 Embedding models such as Text-embedding-ada-002,24,28-30,36,37 MiniLM, and PubMedBERT33 were commonly used to convert sentences into vectors. Cosine similarity was the primary metric for measuring query-document alignment. Two studies adopted Maximal Marginal Relevance for search and highlighted its improved performance over similarity-based methods.24,35 A domain-specific retriever, MedCPT, was used in one study.34 Another study used a multi-vector retriever that leveraged summarized document sections to identify the original content for final answer generation.25 The retrieval cutoff parameters varied widely, with probability thresholds up to 0.83 and the number of retrieved chunks ranging from 3 to 90.28,36,38 Vector databases like FAISS and Chroma were frequently reported, and LangChain was widely used for document processing and retrieval.23,25,35,38 In the subgroup analysis, 12 studies used simple data retrieval strategies (OR 1.30, 95% CI [1.16, 1.45]), while 5 studies used complex data retrieval strategies (OR 1.30, 95% CI [1.07, 1.24]), with no statistically significant difference observed between the 2 approaches.
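Cosine similarity and Maximal Marginal Relevance, both mentioned above, can be sketched over pre-computed embedding vectors. This is an illustrative implementation, not code from any of the reviewed studies; `lam` trades off relevance to the query against redundancy with already-selected chunks:

```python
import math

def cosine(u, v):
    """Cosine similarity between two embedding vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def mmr(query_vec, doc_vecs, k=3, lam=0.7):
    """Maximal Marginal Relevance: greedily pick up to k chunk indices,
    balancing relevance to the query (weight lam) against similarity to
    chunks already selected (weight 1 - lam)."""
    selected, remaining = [], list(range(len(doc_vecs)))
    while remaining and len(selected) < k:
        def score(i):
            rel = cosine(query_vec, doc_vecs[i])
            red = max((cosine(doc_vecs[i], doc_vecs[j]) for j in selected),
                      default=0.0)
            return lam * rel - (1 - lam) * red
        best = max(remaining, key=score)
        selected.append(best)
        remaining.remove(best)
    return selected
```

With `lam=1.0` MMR reduces to plain similarity ranking; lowering `lam` makes it skip a near-duplicate of an already-selected chunk in favor of a less similar but more diverse one.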
In the post-retrieval stage, 65% of studies (n = 13) implemented specific strategies to refine outputs. Murugan et al. tailored prompts by providing clear context, defining roles (eg, distinguishing between healthcare providers and patients to deliver appropriately detailed information), and incorporating relevant citations from retrieval sources such as the Clinical Pharmacogenetics Implementation Consortium guidelines and Food and Drug Administration (FDA) labeling.24 Soman et al. utilized prompt engineering to integrate accurate knowledge sources and statistical evidence, such as P-values and z-scores, from the SPOKE knowledge graph into their outputs.33 Wang et al. outlined a detailed process in the post-retrieval stage using prompt engineering, which involved decomposing retrieved text into individual claims, verifying each claim with external knowledge sources,
Figure 3. An example of using RAG in clinical applications, with identified strategies in 3 stages: pre-retrieval, retrieval, and post-retrieval. The user query
seeks recommended medications for symptomatic peripheral artery disease without increased bleeding risk. The system retrieves evidence from clinical
guidelines, processes it through embeddings and a vector database, and outputs a response, including rivaroxaban with low-dose aspirin, as recommended
by the retrieved guideline information. In this example, GPT-4 suggested a dose of 75 mg Aspirin, but the common low-dose Aspirin is 81 mg.
"title": "Improving large language model applications in biomedicine with retrieval-augmented generation: a systematic review, meta-analysis, and clinical development guidelines",
"total_pages": 11,
"trapped": ""
} |
conducting safety checks by applying 24 predefined rules to ensure ethical and factual accuracy, and summarizing the results.30 Glicksberg et al. developed an ensemble model that combined structured and unstructured data to predict hospital admission probabilities. These predicted probabilities, along with similar historical cases, were incorporated into the prompt to enhance the performance of the LLM.37 Chen et al. used Chain-of-Thought (CoT) prompting to improve LLM reasoning capabilities.39 Kresevic et al. customized prompts to help the model interpret structured guidelines, combined with few-shot learning using 54 question-answer pairs.27 Jeong et al. fine-tuned LLMs to assess the relevance of retrieved evidence, ensure all statements were evidence-based, and confirm that the response effectively addressed the query.34
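The decompose-verify-summarize pattern described above can be sketched as follows. This is a hypothetical illustration: `split_into_claims`, `is_supported`, and `violates_rules` are stand-ins for the LLM- or rule-based components the studies actually used, and the sample claims are invented:

```python
# Hypothetical sketch of a post-retrieval verification loop in the spirit of
# the decompose -> verify -> safety-check -> summarize process above.

def split_into_claims(text):
    # Naive stand-in: one claim per sentence.
    return [s.strip() for s in text.split(".") if s.strip()]

def is_supported(claim, knowledge):
    # Stand-in for verification against an external knowledge source.
    return claim in knowledge

def violates_rules(claim, banned_terms):
    # Stand-in for predefined safety rules.
    return any(term in claim.lower() for term in banned_terms)

def refine(text, knowledge, banned_terms):
    """Keep only claims that are supported and pass the safety rules."""
    kept = [c for c in split_into_claims(text)
            if is_supported(c, knowledge) and not violates_rules(c, banned_terms)]
    return ". ".join(kept) + ("." if kept else "")

knowledge = {"Aspirin inhibits platelet aggregation"}
out = refine("Aspirin inhibits platelet aggregation. Aspirin cures diabetes.",
             knowledge, banned_terms={"cures"})
```

Only the supported, rule-compliant claim survives; the unsupported one is dropped before summarization.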
Evaluation
Nine studies used human evaluation, 8 relied on automated evaluation (eg, similarity comparisons between generated sentences and original answers), and 3 used a mix of both. Outcomes from human evaluation showed an overall OR of 1.65 (95% CI: 1.36-2.03), while automatic evaluation resulted in an OR of 1.20 (95% CI: 1.1-1.41). The difference between the 2 was statistically significant (P < .01). There were 4 human evaluators on average, with the range spanning from 1 to 10. Most human evaluators were physicians from relevant specialties according to the study focus. In one case, 3 diabetic patients were involved in evaluating the understandability of diabetes-related patient queries.30
Twelve studies used self-curated datasets focused on research tasks. Examples included the ClinicalQA benchmark, which comprised 314 open-ended questions about treatment guidelines and clinical calculations generated by physicians,28 and 43 diabetes-related questions sourced from the National Institute of Diabetes and Digestive and Kidney Diseases website.30 Simulated cases from medical examinations were also utilized.25 Three studies used EHR data.35,37,40 Six studies used public benchmark datasets, such as US board exam practice questions, MedMCQA,29,34 and long-form question-answering benchmarks (eg, LiveQA, MedicationQA).34 The self-curated datasets averaged 76 questions, ranging from 7 to 314. The length of public benchmark datasets varied significantly, from 102 questions in the LiveQA dataset28 to 194 000 questions in the MedMCQA dataset.34
Most studies reported evaluation metrics for the final response generation, while 4 (25%) also included specific metrics to evaluate the retrieval process. For instance, 1 study measured recall in context retrieval,24 another evaluated retrieval accuracy,33 and a fine-tuned LLM was developed to assess the relevance of retrieved information to the user's query.34 Additionally, 1 study evaluated the accuracy of using LLMs to extract text from figures and tables during document preprocessing.27 The final evaluation metrics focused on the generated responses, consistent with those used in LLM-only systems. These metrics could be categorized as accuracy, completeness, user perception, safety, hallucination, citation, bias, and language. Accuracy was the most frequently reported metric, covering Likert scale ratings, match rates, correct treatment percentages,9 AUC, AUPRC, and F1 scores, as well as text similarity metrics like ROUGE (ROUGE-1, ROUGE-2, ROUGE-L), BLEU, METEOR, and BERTScore,21 which compared LLM-generated responses to expert-provided answers. Completeness metrics assessed whether responses included all necessary information, typically using Likert scales. User perception captured subjective feedback from both healthcare providers and patients on understandability, helpfulness, and whether responses met user intent, usually using Likert scales. Safety metrics focused both on user-related and system-related aspects. These metrics assessed potential harm, adversarial safety, and risk management,24 ensuring that outputs were free of harmful content or risks. Scientific validity and adherence to evidence were also evaluated.29 One study used adversarial prompting, defined as intentionally adding harmful directives to a prompt, to evaluate the safety of the RAG system.28 Hallucinations were primarily identified through manual review, with definitions varying across studies. Some studies defined hallucinations as nonfactual information, while one study added 2 other types of hallucinations: input-conflicting (content deviating from user-provided input) and contextual-conflicting (content conflicting with previously generated information).27,41 Citation metrics measured the accuracy of provided references, with valid references considered those that pointed to established publications, guidelines, or research. Bias and language were evaluated for clarity and neutrality, ensuring responses were unbiased and empathetic to patient concerns.24
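At their core, n-gram overlap metrics such as ROUGE-1 reduce to token-overlap precision and recall. A minimal sketch of a unigram F1 score (a simplification for illustration, not a substitute for the full ROUGE implementation):

```python
from collections import Counter

def token_f1(generated, reference):
    """Unigram-overlap F1 between a generated answer and a reference,
    similar in spirit to the ROUGE-1 F-measure."""
    gen = Counter(generated.lower().split())
    ref = Counter(reference.lower().split())
    # Counter intersection keeps the minimum count of each shared token.
    overlap = sum((gen & ref).values())
    if overlap == 0:
        return 0.0
    precision = overlap / sum(gen.values())
    recall = overlap / sum(ref.values())
    return 2 * precision * recall / (precision + recall)

score = token_f1("rivaroxaban with low dose aspirin",
                 "rivaroxaban plus low dose aspirin")
```

In this example 4 of 5 tokens overlap in each direction, so precision and recall are both 0.8, as is the F1.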
Discussion
This study presents a systematic review of current research on RAG for clinical tasks. Overall, RAG implementation increased outcomes by 1.35 times compared to baseline LLM. We analyzed clinical tasks, baseline LLMs, retrieval sources and strategies, as well as evaluation methods. Despite the potential benefits of RAG systems, there remains room for improvement. Building on our literature review, we developed GUIDE-RAG (Guidelines for Unified Implementation and Development of Enhanced LLM Applications with RAG in Clinical Settings) for future clinical applications using RAG (Figure 4).
GUIDE-RAG:
1) Define clear clinical tasks and evaluation datasets.
Future research should clearly define clinical tasks and questions to maximize the effectiveness of RAGs. Ambiguity in questions can hinder performance, particularly in less powerful LLMs, making it challenging to achieve significant improvements in response generation, even with improved knowledge selection.42 For example, one study in the review constructed the evaluation dataset using 30 case reports on rare diseases from PubMed.38 The authors did not report human performance on the self-built dataset. The questions themselves might have been inherently challenging or ambiguous. As expected, the reported performance showed modest improvement, with an OR of 1.31.
2) Identify appropriate external resources for specific clinical tasks.
The first step in developing a RAG-based clinical system is to identify external resources that fill the knowledge gaps of the baseline LLM in relation to specific clinical tasks. The external knowledge should complement the LLM's existing capabilities to effectively address task
requirements. For instance, in question-answering tasks related to broad medical exams for physicians, clinical guidelines (eg, StatPearls) and textbooks proved more useful than PubMed abstracts as external sources.43 Another example from our review involved a task focused on medical question-answering in internal medicine. The study used a single source, Harrison's Principles of Internal Medicine, as the knowledge retrieval source, and the reported improvement was marginal (OR: 1.14).36 Expanding the knowledge base to include additional resources, such as clinical guidelines, could potentially enhance the performance of the RAG system for such tasks.
3) Preprocess multi-modal content in documents.
Clinical guidelines and medical literature often contain complex information presented through flowcharts, graphs, and tables. Accurately parsing this multi-modal content is essential for effective retrieval. Relying solely on LLMs for text extraction may be insufficient; a preliminary study found that GPT-4 Turbo had only a 16% accuracy rate in extracting table data.27 Comprehensive document preprocessing should systematically extract relevant information from text, tables, and figures to ensure accuracy and clarity. Only 3 studies in our review explicitly mentioned extracting text from tables or figures during the pre-retrieval process.25,27,44
4) Optimize chunking and indexing based on corpus structure.
The structure of clinical knowledge corpora should be carefully considered during chunking and indexing. Fixed-length chunking can introduce noise by fragmenting related information, which can reduce retrieval accuracy. Researchers should optimize the chunking granularity based on a thorough review of the clinical knowledge corpus, to ensure the completeness of retrieved information. An alternative approach is dynamic chunking, which adjusts chunk boundaries based on semantic similarity changes.45 Other approaches include recursive chunking, which hierarchically divides text into smaller chunks using delimiters like headings, subheadings, paragraphs, and sentences. Sliding window chunking enables layered retrieval by overlapping chunks of text, allowing the system to capture and merge contextually related information across different segments.46 Context-enriched chunking enhances retrieval by incorporating concise summaries within each segment to provide additional context for downstream tasks.47 In indexing, while dense indexing (converting text to vectors) is widely used, it may miss global information. The structure of a clinical knowledge corpus, such as headings and keywords, can be used for sparse indexing and further combined with dense indexing. This hybrid approach, combining dense and sparse indexing, can improve retrieval performance by capturing both global and local information.48,49
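The sliding-window strategy above can be sketched as follows; this is a minimal illustration using whitespace tokens, whereas production systems would tokenize properly and size chunks to the model's context limits:

```python
def sliding_window_chunks(tokens, size, overlap):
    """Split a token sequence into overlapping windows so that related
    information spanning a chunk boundary also appears in the next chunk."""
    assert 0 <= overlap < size
    step = size - overlap
    chunks = []
    for start in range(0, len(tokens), step):
        chunks.append(tokens[start:start + size])
        if start + size >= len(tokens):  # last window reached the end
            break
    return chunks

tokens = "fixed length chunking can fragment related clinical information".split()
chunks = sliding_window_chunks(tokens, size=4, overlap=2)
```

With a window of 4 and an overlap of 2, each consecutive pair of chunks shares 2 tokens, so a sentence split by one boundary is kept whole in the neighboring window.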
5) Explore advanced retrieval strategies, including iterative and graph-based retrieval.
Iterative retrieval improves accuracy by refining results through multiple rounds. Parameters such as the number of retrieved chunks or cutoff thresholds should be optimized based on specific clinical questions, as retrieval needs can vary; some questions may not require external knowledge at all. Researchers should evaluate retrieval requirements in advance and adapt retrieval parameters accordingly. Graph-based retrieval, which structures entities and relationships into a graph, can improve information synthesis from multiple sources. For example, GraphRAG identified entities and relationships from documents and built a graph using an LLM, then applied a clustering algorithm to
Figure 4. Overview of GUIDE-RAG: This framework streamlines RAG in clinical applications through three iterative stages. In the pre-retrieval stage, it
focuses on defining tasks, identifying relevant resources, and preprocessing content. The retrieval stage enhances data retrieval with optimized chunking,
indexing, and advanced strategies such as graph-based retrieval. The post-retrieval stage emphasizes system evaluation, monitoring, knowledge updates,
and implementing few-shot learning for complex clinical reasoning, ensuring robust and adaptive performance. CoT, chain-of-thought.
offer global information based on the user query, achieving better performance than naïve RAG over traditional vector databases.50
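Iterative retrieval, as described in this guideline, can be sketched as a loop that re-queries until a relevance cutoff or a round limit is hit. The `search` and `relevance` callables below are hypothetical stand-ins for a real retriever and relevance scorer:

```python
def iterative_retrieve(query, search, relevance, cutoff=0.8, max_rounds=3, k=5):
    """Refine retrieval over multiple rounds: stop once the accumulated
    context is judged relevant enough, or the round limit is reached."""
    context = []
    q = query
    for _ in range(max_rounds):
        hits = search(q, k=k)
        if not hits:
            break
        context.extend(hits)
        if relevance(query, context) >= cutoff:
            break
        # Expand the query with the top hit so far and retry.
        q = query + " " + hits[0]
    return context
```

The cutoff and round limit are exactly the parameters the guideline suggests tuning per clinical question; a cutoff of 0 reduces this to single-shot retrieval.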
6) Implement few-shot learning with CoT for complex clinical reasoning.
Few-shot learning has been shown to enhance LLMs' reasoning capabilities by teaching specific reasoning that may not have been included in their original training. Similarly, CoT techniques can improve complex reasoning in clinical tasks.51,52 Researchers should generate high-quality examples and incorporate CoT strategies into the final query to refine specialized reasoning.
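Assembling a few-shot CoT prompt can be sketched as below; the example content is illustrative only (a hypothetical drug interaction, not clinical guidance), and real systems would draw examples from expert-curated question-answer pairs:

```python
def build_cot_prompt(examples, question):
    """Build a few-shot chain-of-thought prompt: each example pairs a
    question with explicit reasoning before the answer, so the model
    imitates the reasoning pattern on the final question."""
    parts = []
    for ex in examples:
        parts.append("Q: %s\nReasoning: %s\nA: %s"
                     % (ex["question"], ex["reasoning"], ex["answer"]))
    # End with the new question and an open "Reasoning:" cue.
    parts.append("Q: %s\nReasoning:" % question)
    return "\n\n".join(parts)

examples = [{
    "question": "Can drug X be combined with warfarin?",
    "reasoning": "Drug X inhibits CYP2C9; warfarin is a CYP2C9 substrate, "
                 "so co-administration raises bleeding risk.",
    "answer": "Avoid the combination or monitor INR closely.",
}]
prompt = build_cot_prompt(examples, "Can drug Y be combined with warfarin?")
```

Leaving the final "Reasoning:" open prompts the model to produce its own reasoning chain before committing to an answer.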
7) Evaluate all stages of the RAG system development.
Most current studies focus only on final performance, overlooking the importance of evaluating each stage of development. It is crucial to formally assess and report performance at the pre-retrieval, retrieval, and post-retrieval stages. Evaluating the knowledge boundaries of the baseline LLM, potential conflicts between the LLM and external knowledge, and the accuracy and coverage of retrieved information helps ensure replicability and transparency. This level of evaluation enables other researchers to understand why a RAG system works (or does not) and facilitates reproducibility.
8) Consider cost and time constraints.
Advanced retrieval strategies can improve performance but often increase processing time and computational costs. For example, graph-based RAG requires substantial resources for developing knowledge graphs, and responses from global summaries may take longer than with naïve RAG methods.50 Another example is fine-tuning LLMs to evaluate the need for and the quality of retrieval.34 In terms of computational cost, this process is expensive, especially when scaling the method to larger datasets or deploying it in a real-time system. The extra processing steps also slow the overall pipeline. Long response times can have a nonnegligible negative impact in situations that require quick answers, which are common in clinical settings. Researchers should balance performance improvements with time and cost considerations. Only 1 study in our review, which focused on gastrointestinal radiology diagnosis based on imaging descriptions, compared the cost and response time between LLMs and LLMs with RAG.31 The mean response time was 29.8 s for the LLM with RAG vs 15.7 s for the LLM alone, with costs of $0.15 and $0.02 per case, respectively. Another study that used EHR data to predict cognitive decline reported costs of $4.49 for the LLM alone and $12.51 for the LLM with RAG.53
9) Establish monitoring and knowledge update strategies.
Algorithmovigilance, an important concept in AI applications in healthcare defined as "scientific methods and activities relating to the evaluation, monitoring, understanding, and prevention of adverse effects of algorithms in health care,"54 should also be considered in RAG applications. Researchers need to develop long-term monitoring strategies for RAG system performance, especially in clinical applications. In addition, current studies use fixed external datasets. Researchers should update external knowledge sources as the latest information becomes available. Clear strategies for updating knowledge should be defined, specifying when and how updates will occur.
For future studies, the first direction could be system-level enhancement: the combination of RAG and LLM-powered agents. LLM-powered agents are AI systems that use LLMs with complex reasoning and planning capabilities, memory management, interactive capabilities with the environment, and actions to execute tasks.55,56 Recent research points to the emerging trend of combining RAG and LLM-powered agents, where agents can assist in planning and decision making for complex tasks, rather than simple retrieval.57 For example, clinicians and patients have diverse information access needs, some needing to analyze text from a knowledge base, others needing to incorporate structured data from an EHR. RAG will eventually become only one of the methods for agents to access information. Moreover, future research could focus on the usage of internal and external functions and tools, long-term and short-term memory modules, and self-learning modules. For example, a study developed an agent to answer questions related to rare diseases by expanding beyond RAG with additional tool functions, such as querying phenotypes and performing web searches. This approach improved the overall correctness from 0.48 to 0.75 compared to the GPT-4 baseline LLM.58
The second future direction could focus on knowledge-level enhancement: deep integration of external knowledge into the LLM. LLMs exhibit knowledge boundaries. RAG approaches retrieve external knowledge and then integrate it into LLMs in the form of prompts for the final generation, enhancing the capabilities of LLMs in perceiving knowledge boundaries.59 However, the integration of external knowledge into LLM reasoning is typically limited to providing the retrieved data as additional context for the LLM's query during generation. This approach keeps retrieval and generation loosely connected, and the LLM's output can still be influenced by its inherent knowledge boundaries or by noise in the retrieved text, leading to incorrect answers. Additionally, when the external knowledge source is EHR data, this enhancement becomes even more important. Current EHR data is organized in a "problem-oriented medical record" (POMR) format, which collects and displays information in a structured manner.60 LLMs excel in free-form contexts, and their ability to perform clinical tasks depends on access to unstructured text that provides a comprehensive view of the patient. Achieving this within the structured POMR format in modern EHR systems poses a significant challenge.61 Therefore, investigating how to realize the deep integration of external knowledge with LLM reasoning is an important direction for future research in clinical applications.
The final direction is integration-level enhancement, focusing on integrating RAG systems within EHRs. Current research has primarily focused on development and testing outside of EHR systems. To seamlessly provide support for healthcare providers and patients, future efforts should prioritize embedding RAG systems into EHR interfaces. This requires collaboration with EHR vendors to ensure the necessary infrastructure is available. Researchers also can facilitate this integration using data exchange frameworks, such as SMART on FHIR.62
Limitations
This study was limited to peer-reviewed publications available in biomedical databases (eg, PubMed, Embase), excluding preprint articles from repositories like ArXiv. Additionally, only studies in the English language were included, which might have excluded relevant studies in other languages. We did not include sources such as IEEE Xplore or Google Scholar, which might have additional relevant studies. However, our focus was on biomedicine, and we prioritized databases specifically tailored to biomedical research to maintain the relevance and quality of the included studies. Furthermore, we used free-text searches in the databases, which activated automatic mapping to Medical Subject Headings (MeSH) and Emtree terms, improving retrieval accuracy. However, the limitations of automatic term mapping cannot be ignored, as it may introduce variability if the underlying algorithms change. To address this, we have documented all identified papers from our search. The title, publication year, PMID, PUI, and database source for each study are provided in File S3.
Conclusion
We conducted a systematic literature review of studies exploring the use of RAG and LLMs in clinical tasks. RAG implementation showed a 1.35 odds ratio increase in performance compared to baseline LLMs. To improve performance and transparency in future studies, we developed guidelines for improving clinical RAG applications based on current research findings. Future research could focus on these 3 directions: (1) system-level enhancement: the combination of RAG and agent, (2) knowledge-level enhancement: deep integration of knowledge into LLM, and (3) integration-level enhancement: integrating RAG systems within EHRs.
Author contributions
Siru Liu (Conceptualization, Data curation, Formal analysis, Investigation, Methodology, Software, Visualization, Writing – original draft), Allison B. McCoy (Conceptualization, Writing – review & editing), Adam Wright (Conceptualization, Writing – review & editing).
Supplementary material
Supplementary material is available at Journal of the American Medical Informatics Association online.
Funding
This work was supported by National Institutes of Health
grants: R00LM014097-02 and R01LM013995-01.
Conflicts of interest
The authors do not have conflicts of interest related to this
study.
Data availability
The characteristics and outcomes for each included study are reported in File S2.
References
01. Raiaan MAK, Mukta MSH, Fatema K, et al. A review on large lan
guage models: architectures, applications, taxonomies, open issues
and challenges. IEEE Access. 2024;12:26839-26874. https://doi.
org/10.1109/ACCESS.2024.3365742
02. Thirunavukarasu AJ, Ting DSJ, Elangovan K, et al. Large language
models in medicine. Nat Med. 2023;29:1930-1940. https://doi.
org/10.1038/s41591-023-02448-8
03. Ayers JW, Poliak A, Dredze M, et al. Comparing physician and artificial intelligence chatbot responses to patient questions posted to a public social media forum. JAMA Intern Med. 2023;183:589-596. https://doi.org/10.1001/jamainternmed.2023.1838
04. Liu S, Wright AP, Patterson BL, et al. Using AI-generated suggestions from ChatGPT to optimize clinical decision support. J Am Med Inform Assoc. 2023;30:1237-1245. https://doi.org/10.1093/jamia/ocad072
05. Zaretsky J, Kim JM, Baskharoun S, et al. Generative artificial intelligence to transform inpatient discharge summaries to patient-friendly language and format. JAMA Netw Open. 2024;7:e240357. https://doi.org/10.1001/jamanetworkopen.2024.0357
06. Gao Y, Xiong Y, Gao X, et al. Retrieval-augmented generation for
large language models: a survey. 2023, preprint: not peer reviewed.
https://arxiv.org/abs/2312.10997
07. Xu Z, Jain S, Kankanhalli M. Hallucination is inevitable: an innate
limitation of large language models. January 22, 2024, preprint:
not peer reviewed. https://arxiv.org/abs/2401.11817
08. Shuster K, Poff S, Chen M, et al. Retrieval augmentation reduces hallucination in conversation. Findings of the Association for Computational Linguistics: EMNLP 2021. Association for Computational Linguistics (ACL); 2021:3784-3803.
09. Ayala O, Bechard P. Reducing hallucination in structured outputs via retrieval-augmented generation. Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. Association for Computational Linguistics; 2024:228-238. https://doi.org/10.18653/v1/2024.naacl-industry.19
10. Gornik HL, Aronow HD, Goodney PP, et al. 2024 ACC/AHA/AACVPR/APMA/ABC/SCAI/SVM/SVN/SVS/SIR/VESS guideline for the management of lower extremity peripheral artery disease: a report of the American College of Cardiology/American Heart Association Joint Committee on Clinical Practice Guidelines. Circulation. 2024;149:e1313-e1410. https://doi.org/10.1161/CIR.0000000000001251
11. Vasey B, Nagendran M, Campbell B, et al. Reporting guideline for
the early stage clinical evaluation of decision support systems
driven by artificial intelligence: DECIDE-AI. BMJ. 2022;377:
e070904. https://doi.org/10.1136/bmj-2022-070904
12. Mongan J, Moy L, Kahn CE. Checklist for Artificial Intelligence in Medical Imaging (CLAIM): a guide for authors and reviewers. Radiol Artif Intell. 2020;2:e200029. https://doi.org/10.1148/ryai.2020200029
13. Martindale APL, Ng B, Ngai V, et al. Concordance of randomised controlled trials for artificial intelligence interventions with the CONSORT-AI reporting guidelines. Nat Commun. 2024;15:6376-6311. https://doi.org/10.1038/s41467-024-45355-3
14. Tam TYC, Sivarajkumar S, Kapoor S, et al. A framework for human evaluation of large language models in healthcare derived from literature review. NPJ Digit Med. 2024;7:258. https://doi.org/10.1038/s41746-024-01258-7
15. Shamseer L, Moher D, Clarke M, et al. Preferred reporting items for systematic review and meta-analysis protocols (PRISMA-P) 2015: elaboration and explanation. BMJ. 2015;350:g7647. https://doi.org/10.1136/bmj.g7647
16. Higgins JPT, Thomas J, Chandler J, et al. (editors). Cochrane Handbook for Systematic Reviews of Interventions version 6.5 (updated August 2024). Cochrane, 2024. Available from www.training.cochrane.org/handbook. Date accessed December 11, 2024.
Journal of the American Medical Informatics Association, 2025, Vol. 32, No. 4
613
Downloaded from https://academic.oup.com/jamia/article/32/4/605/7954485 by guest on 11 January 2026 | {
"author": "",
"creationDate": "D:20250322110116+05'30'",
"creationdate": "2025-03-22T11:01:16+05:30",
"creator": "Servigistics Arbortext Advanced Print Publisher 11.1.4667/W",
"file_path": "/home/donbr/open-biosciences/biosciences-research/data/raw/ocaf008.pdf",
"format": "PDF 1.5",
"keywords": "large language model; retrieval augmented generation; systematic review; meta-analysis",
"modDate": "D:20260111011327+00'00'",
"moddate": "2026-01-11T01:13:27+00:00",
"page": 8,
"producer": "PDFlib+PDI 9.0.7p3 (C++/Win32); modified using iTextSharp.LGPLv2.Core 3.7.4.0",
"source": "/home/donbr/open-biosciences/biosciences-research/data/raw/ocaf008.pdf",
"subject": "Doi: 10.1093/jamia/ocaf008 Journal of the American Medical Informatics Association, 32, 4, 2025 Publication Date: 15/01/2025 Abstract Objective The objectives of this study are to synthesize findings from recent research of retrieval-augmented generation (RAG) and large language models (LLMs) in biomedicine and provide clinical development guidelines to improve effectiveness.Materials and Methods We conducted a systematic literature review and a meta-analysis. The report was created in adherence to the Preferred Reporting Items for Systematic Reviews and Meta-Analyses 2020 analysis. Searches were performed in 3 databases (PubMed, Embase, PsycINFO) using terms related to “retrieval augmented generation” and “large language model,” for articles published in 2023 and 2024. We selected studies that compared baseline LLM performance with RAG performance. We developed a random-effect meta-analysis model, using odds ratio as the effect size.Results Among 335 studies, 20 were included in this literature review. The pooled effect size was 1.35, with a 95% confidence interval of 1.19-1.53, indicating a statistically significant effect (P = .001). We reported clinical tasks, baseline LLMs, retrieval sources and strategies, as well as evaluation methods.Discussion Building on our literature review, we developed Guidelines for Unified Implementation and Development of Enhanced LLM Applications with RAG in Clinical Settings to inform clinical applications using RAG.Conclusion Overall, RAG implementation showed a 1.35 odds ratio increase in performance compared to baseline LLMs. Future research should focus on (1) system-level enhancement: the combination of RAG and agent, (2) knowledge-level enhancement: deep integration of knowledge into LLM, and (3) integration-level enhancement: integrating RAG systems within electronic health records.",
"title": "Improving large language model applications in biomedicine with retrieval-augmented generation: a systematic review, meta-analysis, and clinical development guidelines",
"total_pages": 11,
"trapped": ""
} |
17. Lefebvre C, Glanville J, Briscoe S, et al. Chapter 4: searching for
and selecting studies. Cochrane Handbook for Systematic Reviews
of Interventions Version, Vol. 6. Cochrane, 2024. https://training.
cochrane.org/handbook/current/chapter-04
18. Chapter 3 Effect Sizes | Doing Meta-Analysis in R. Accessed October 13, 2024. https://bookdown.org/MathiasHarrer/Doing_Meta_Analysis_in_R/effects.html
19. Borenstein M, Hedges LV, Higgins JPT, et al. Introduction to
Meta-Analysis. John Wiley & Sons; 2011.
20. Higgins JPT, Thompson SG. Quantifying heterogeneity in a meta-
analysis. Stat Med. 2002;21:1539-1558. https://doi.org/10.1002/
sim.1186
21. Peters JL, Sutton AJ, Jones DR, et al. Contour-enhanced meta-analysis funnel plots help distinguish publication bias from other causes of asymmetry. J Clin Epidemiol. 2008;61:991-996. https://doi.org/10.1016/j.jclinepi.2007.11.010
22. Egger M, Smith GD, Schneider M, et al. Bias in meta-analysis detected by a simple, graphical test. BMJ. 1997;315:629-634. https://doi.org/10.1136/bmj.315.7109.629
23. Kreimeyer K, Canzoniero JV, Fatteh M, et al. Using retrieval-
augmented generation to capture molecularly-driven treatment
relationships for precision oncology. Stud Health Technol Inform.
2024;316:983-987. https://doi.org/10.3233/SHTI240575
24. Murugan M, Yuan B, Venner E, et al. Empowering personalized
pharmacogenomics with generative AI solutions. J Am Med Inform
Assoc. 2024;31:1356-1366. https://doi.org/10.1093/jamia/ocae039
25. Yazaki M, Maki S, Furuya T, et al. Emergency patient triage
improvement through a retrieval-augmented generation enhanced
large-scale language model. Prehosp Emerg Care. 2024;1-7.
https://doi.org/10.1080/10903127.2024.2374400
26. Malik S, Kharel H, Dahiya DS, et al. Assessing ChatGPT4 with
and without retrieval-augmented generation in anticoagulation
management for gastrointestinal procedures. Ann Gastroenterol.
2024;37:514-526. https://doi.org/10.20524/aog.2024.0907
27. Kresevic S, Giuffre M, Ajcevic M, et al. Optimization of hepatological clinical guidelines interpretation by large language models: a retrieval augmented generation-based framework. NPJ Digit Med. 2024;7:102-109. https://doi.org/10.1038/s41746-024-01091-y
28. Zakka C, Shad R, Chaurasia A, et al. Almanac: retrieval-augmented language models for clinical medicine. NEJM AI. 2024;1(2). https://doi.org/10.1056/aioa2300068
29. Long C, Subburam D, Lowe K, et al. ChatENT: augmented large
language model for expert knowledge retrieval in otolaryngology–
head and neck surgery. Otolaryngol Head Neck Surg.
2024;171:1042-1051. https://doi.org/10.1002/ohn.864
30. Wang D, Liang J, Ye J, et al. Enhancement of the performance of large language models in diabetes education through retrieval-augmented generation: comparative study. J Med Internet Res. 2024;26:e58041. https://doi.org/10.2196/58041
31. Rau S, Rau A, Nattenmüller J, et al. A retrieval-augmented chatbot based on GPT-4 provides appropriate differential diagnosis in gastrointestinal radiology: a proof of concept study. Eur Radiol Exp. 2024;8:60. https://doi.org/10.1186/s41747-024-00457-x
32. Morris JH, Soman K, Akbas RE, et al. The scalable precision medicine open knowledge engine (SPOKE): a massive knowledge graph of biomedical information. Bioinformatics. 2023;39(2):btad080. https://doi.org/10.1093/bioinformatics/btad080
33. Soman K, Rose PW, Morris JH, et al. Biomedical knowledge graph-optimized prompt generation for large language models. Commun ACM. 2023;66:7-7. https://doi.org/10.1145/3606337
34. Jeong M, Sohn J, Sung M, et al. Improving medical reasoning through retrieval and self-reflection with retrieval-augmented large language models. Bioinformatics. 2024;40:i119-i129. https://doi.org/10.1093/bioinformatics/btae238
35. Alkhalaf M, Yu P, Yin M, et al. Applying generative AI with retrieval augmented generation to summarize and extract key clinical information from electronic health records. J Biomed Inform. 2024;156:104662. https://doi.org/10.1016/j.jbi.2024.104662
36. Tarabanis C, Zahid S, Mamalis M, et al. Performance of publicly
available large language models on internal medicine board-style
questions. PLOS Digit Health. 2024;3:e0000604. https://doi.org/
10.1371/journal.pdig.0000604
37. Glicksberg BS, Timsina P, Patel D, et al. Evaluating the accuracy of a state-of-the-art large language model for prediction of admissions from the emergency room. J Am Med Inform Assoc. 2024;31:1921-1928. https://doi.org/10.1093/jamia/ocae103
38. Zelin C, Chung WK, Jeanne M, et al. Rare disease diagnosis using knowledge guided retrieval augmentation for ChatGPT. J Biomed Inform. 2024;157:104702. https://doi.org/10.1016/j.jbi.2024.104702
39. Chen X, Wang L, You MK, et al. Evaluating and enhancing large language models' performance in domain-specific medicine: development and usability study with DocOA. J Med Internet Res. 2024;26:e58158. https://doi.org/10.2196/58158
40. Du X, Novoa-Laurentiev J, Plasek JM, et al. Enhancing early detection of cognitive decline in the elderly: a comparative study utilizing large language models in clinical notes. medRxiv. 2024;109:105401. https://doi.org/10.1101/2024.04.03.24305298
41. Zhang Y, Li Y, Cui L, et al. Siren’s song in the AI Ocean: a survey
on hallucination in large language models. 2023, preprint: not
peer reviewed. https://arxiv.org/abs/2309.01219
42. Li X, Ouyang J. A systematic investigation of knowledge retrieval and selection for retrieval augmented generation. 2024, preprint: not peer reviewed. https://arxiv.org/abs/2410.13258
43. Xiong G, Jin Q, Lu Z, et al. Benchmarking retrieval-augmented generation for medicine. Findings of the Association for Computational Linguistics: ACL 2024, 6233-6251. Bangkok, Thailand: Association for Computational Linguistics.
44. Hewitt KJ, Wiest IC, Carrero ZI, et al. Large language models as a
diagnostic support tool in neuropathology. J Pathol Clin Res.
2024;10:e70009. https://doi.org/10.1002/2056-4538.70009
45. Allahverdiyev R, Taha M, Akalin A, et al. ChunkRAG: novel
LLM-chunk filtering method for RAG systems. October 25, 2024,
preprint: not peer reviewed. https://arxiv.org/abs/2410.19572
46. Cai B, Zhang FL, Wang C. Research on chunking algorithms of data de-duplication. Advances in Intelligent Systems and Computing, Vol. 181. 2013:1019-1025. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-31698-2_144
47. Optimizing RAG with Advanced Chunking Techniques. Accessed December 15, 2024. https://antematter.io/blogs/optimizing-rag-advanced-chunking-techniques-study
48. Chen J, Xiao S, Zhang P, et al. M3-Embedding: multi-lingual, multi-functionality, multi-granularity text embeddings through self-knowledge distillation. Findings of the Association for Computational Linguistics ACL 2024. 2024:2318-2335. Association for Computational Linguistics. https://doi.org/10.18653/V1/2024.FINDINGS-ACL.137
49. Sawarkar K, Mangal A, Solanki SR. Blended RAG: improving RAG (Retriever-Augmented Generation) accuracy with semantic search and hybrid query-based retrievers. 2024 IEEE 7th International Conference on Multimedia Information Processing and Retrieval (MIPR), 155-161. San Jose, CA, USA: IEEE. https://doi.org/10.1109/MIPR62202.2024.00031
50. Edge D, Trinh H, Cheng N, et al. From local to global: a graph
RAG approach to query-focused summarization. 2024, preprint:
not peer reviewed. https://arxiv.org/abs/2404.16130
51. Wu Z, Hasan A, Wu J, et al. KnowLab_AIMed at MEDIQA-CORR 2024: Chain-of-Thought (CoT) prompting strategies for medical error detection and correction. Proceedings of the 6th Clinical Natural Language Processing Workshop. 2024:353-359. Association for Computational Linguistics. https://doi.org/10.18653/v1/2024.clinicalnlp-1.33
52. Kwon T, Tzu-Iunn Ong K, Kang D, et al. Large language models
are clinical reasoners: reasoning-aware diagnosis framework with
prompt-generated rationales. Proceedings of the AAAI Conference on Artificial Intelligence. Association for the Advancement of Artificial Intelligence; 2024:18417-18425.
53. Du X, Novoa-Laurentiev J, Plasek JM, et al. Enhancing early
detection of cognitive decline in the elderly: a comparative study
utilizing large language models in clinical notes. EBioMedicine.
2024;109:105401. https://doi.org/10.1016/j.ebiom.2024.105401
54. Embi PJ. Algorithmovigilance—advancing methods to analyze and
monitor artificial intelligence–driven health care for effectiveness
and equity. JAMA Netw Open. 2021;4:e214622. https://doi.org/
10.1001/jamanetworkopen.2021.4622
55. Xi Z, Chen W, Guo X, et al. The rise and potential of large language model based agents: a survey. September 14, 2023, preprint: not peer reviewed.
56. Wang L, Ma C, Feng X, et al. A survey on large language model based autonomous agents. Front Comput Sci. 2024;18:1-26. https://doi.org/10.1007/s11704-024-40231-1
57. Li X, Wang S, Zeng S, et al. A survey on LLM-based multi-agent systems: workflow, infrastructure, and challenges. Vicinagearth. 2024;1:9. https://doi.org/10.1007/s44336-024-00009-2
58. Yang J, Shu L, Duan H, et al. RDguru: a conversational intelligent agent for rare diseases. IEEE J Biomed Health Inform. Published online September 19, 2024. https://doi.org/10.1109/JBHI.2024.3464555
59. Ren R, Wang Y, Qu Y, et al. Investigating the factual knowledge boundary of large language models with retrieval augmentation. 2023, preprint: not peer reviewed. https://arxiv.org/abs/2307.11019
60. Weed LL. Medical records that guide and teach. N Engl J Med. 1968;278:593-600. https://doi.org/10.1056/NEJM196803142781105
61. McCoy LG, Manrai AK, Rodman A. Large language models and the degradation of the medical record. N Engl J Med. 2024;391:1561-1564. https://doi.org/10.1056/NEJMp2405999
62. Mandel JC, Kreda DA, Mandl KD, et al. SMART on FHIR: a standards-based, interoperable apps platform for electronic health records. J Am Med Inform Assoc. 2016;23:899-908. https://doi.org/10.1093/jamia/ocv189
© The Author(s) 2025. Published by Oxford University Press on behalf of the American Medical Informatics Association.
This is an Open Access article distributed under the terms of the Creative Commons Attribution-NonCommercial License (https://creativecommons.org/licenses/by-nc/
4.0/), which permits non-commercial re-use, distribution, and reproduction in any medium, provided the original work is properly cited. For commercial re-use, please
contact reprints@oup.com for reprints and translation rights for reprints. All other permissions can be obtained through our RightsLink service via the Permissions link
on the article page on our site—for further information please contact journals.permissions@oup.com.
Joy et al.
Federated Knowledge Retrieval Elevates Large Language Model Performance on
Biomedical Benchmarks
Janet Joy1,2, Andrew I. Su1,2
1Department of Integrative Structural and Computational Biology, Scripps Research, La
Jolla, CA, USA.
2Correspondence: Janet Joy (jjoy@scripps.edu) and Andrew I. Su (asu@scripps.edu)
Main Figures: 4
Supplemental Figures: 11
Supplemental Table: 1
Abstract
Background
Large language models (LLMs) have significantly advanced natural language processing in biomedical research; however, their reliance on implicit, statistical representations often results in factual inaccuracies or hallucinations, posing significant concerns in high-stakes biomedical contexts.
Results
To overcome these limitations, we developed BTE-RAG, a retrieval-augmented
generation framework that integrates the reasoning capabilities of advanced language
models with explicit mechanistic evidence sourced from BioThings Explorer, an API
federation of more than sixty authoritative biomedical knowledge sources. We
systematically evaluated BTE-RAG in comparison to traditional LLM-only methods
across three benchmark datasets that we created from DrugMechDB. These datasets
specifically targeted gene-centric mechanisms (798 questions), metabolite effects (201
questions), and drug–biological process relationships (842 questions). On the gene-
centric task, BTE-RAG increased accuracy from 51% to 75.8% for GPT-4o mini and
from 69.8% to 78.6% for GPT-4o. In metabolite-focused questions, the proportion of
responses with cosine similarity scores of at least 0.90 rose by 82% for GPT-4o mini
and 77% for GPT-4o. While overall accuracy was consistent in the drug–biological
process benchmark, the retrieval method enhanced response concordance, producing a
greater than 10% increase in high-agreement answers (from 129 to 144) using GPT-4o.
Conclusion
Federated knowledge retrieval provides transparent improvements in accuracy for large
language models, establishing BTE-RAG as a valuable and practical tool for
mechanistic exploration and translational biomedical research.
bioRxiv preprint doi: https://doi.org/10.1101/2025.08.01.668022; this version posted August 2, 2025. The copyright holder for this preprint (which was not certified by peer review) is the author/funder, who has granted bioRxiv a license to display the preprint in perpetuity. It is made available under a CC-BY 4.0 International license. | {
bioRxiv preprint | {
"author": "janet",
"creationDate": "D:20250731211435-07'00'",
"creationdate": "2025-07-31T21:14:35-07:00",
"creator": "Appligent AppendPDF Pro 5.5",
"file_path": "/home/donbr/open-biosciences/biosciences-research/data/raw/2025.08.01.668022v1.full.pdf",
"format": "PDF 1.7",
"keywords": "",
"modDate": "D:20260110171154-08'00'",
"moddate": "2026-01-10T17:11:54-08:00",
"page": 0,
"producer": "Microsoft® Word for Microsoft 365",
"source": "/home/donbr/open-biosciences/biosciences-research/data/raw/2025.08.01.668022v1.full.pdf",
"subject": "",
"title": "32182735",
"total_pages": 31,
"trapped": ""
} |
1 Introduction
Large language models (LLMs) have rapidly advanced the state of natural-language
processing, reaching or surpassing expert performance across a wide range of
biomedical tasks, including cell type annotation, protein-structure prediction and
automated synthesis of clinical-trial results 1–6. However, the underlying generative
methodology of these models, which sequentially predict tokens based on statistical
patterns learned from massive text corpora, renders them susceptible to hallucinations,
defined as outputs that are syntactically fluent yet factually incorrect 7,8. Such
inaccuracies pose significant risks in biomedicine, where even minor errors can
misdirect research efforts, delay critical therapeutic discoveries, or compromise patient
safety 7,9–11. Indeed, recent assessments underscore that hallucination rates remain too
high for safe and effective deployment in clinical and research-intensive environments
12,13.
Efforts to mitigate these hallucinations through domain-specific pre-training and prompt
engineering have yielded only incremental improvements, as these approaches
continue to embed knowledge implicitly within opaque model parameters and fail to
reliably surface evidence provenance 14–16. Retrieval-augmented generation (RAG) has
emerged as a promising solution, explicitly grounding model-generated responses by
dynamically incorporating external, verifiable evidence into prompts 17–19. Within
biomedical question-answering contexts, RAG approaches consistently reduce
hallucinations and elevate factual accuracy compared to parameter-only models.
Nonetheless, the efficacy of RAG hinges critically on the precision, comprehensiveness,
and currency of the retrieved contextual evidence 20–22.
Knowledge graphs (KGs) are particularly compelling resources for RAG because they
explicitly represent biological entities and their relationships, support multi-hop
mechanistic reasoning, and maintain persistent identifiers that simplify provenance
tracking 23–26. Yet most biomedical KGs are tuned to a narrow slice of biology (for
example, protein–protein interactions) or require extensive curation to remain current,
limiting their utility for cross-domain mechanistic reasoning. To address these
challenges, BioThings Explorer (BTE) integrates and federates 61 authoritative
biomedical APIs into a continuously updated meta-knowledge graph that encompasses
genes, pathways, drugs, diseases, phenotypes, and more 27. The API-centric framework
of BTE returns structured JSON triples annotated with semantic types and evidence
citations from reputable biomedical databases such as Gene Ontology, DrugBank, and
PubMed Central, using the Translator Reasoner API (TRAPI) specification 28–30.
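As background on the TRAPI format just mentioned, a one-hop query graph has the shape sketched below. The CURIE and predicate are illustrative placeholders chosen for this example, not values taken from the paper.

```python
# A minimal one-hop TRAPI query graph asking which chemicals treat a disease.
# The disease CURIE and the biolink predicate are illustrative placeholders.
trapi_query = {
    "message": {
        "query_graph": {
            "nodes": {
                "n0": {"ids": ["MONDO:0005148"],        # e.g. a disease CURIE
                       "categories": ["biolink:Disease"]},
                "n1": {"categories": ["biolink:ChemicalEntity"]},
            },
            "edges": {
                "e0": {"subject": "n1", "object": "n0",
                       "predicates": ["biolink:treats"]},
            },
        }
    }
}
```

A client would POST this dictionary as JSON to a TRAPI endpoint such as BTE's /v1/query; the response's knowledge graph then lists matching nodes and edges with provenance.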
Here, we introduce BTE–RAG (BioThings Explorer–Retrieval-Augmented Generation),
a novel framework that integrates the conversational fluency and reasoning capabilities
of advanced LLMs with the explicit, multi-domain mechanistic knowledge captured by
BTE. BTE–RAG dynamically executes targeted, query-focused graph traversals to
retrieve concise, mechanistically pertinent evidence, formulates this evidence into
declarative context statements, and augments model prompts accordingly.
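The retrieve-then-augment step described above can be sketched as follows. The triple format and the prompt wording are assumptions made for illustration, not BTE-RAG's actual implementation.

```python
# Turn retrieved (subject, predicate, object) triples into declarative
# context lines and prepend them to the user's question.
def augment_prompt(question, triples):
    context = "\n".join(
        # Strip the biolink prefix and underscores for readability.
        f"- {s} {p.replace('biolink:', '').replace('_', ' ')} {o}"
        for s, p, o in triples
    )
    return f"Context:\n{context}\n\nQuestion: {question}"

prompt = augment_prompt(
    "How does metformin lower blood glucose?",
    [("metformin", "biolink:affects", "AMPK signaling")],
)
```

The augmented prompt is then submitted to the same LLM used in the LLM-only route, so any accuracy difference is attributable to the retrieved context.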
To rigorously assess the performance of BTE-RAG in biomedical question answering,
we systematically created three specialized benchmark datasets from DrugMechDB, a
curated knowledge base containing 5,666 expert-annotated mechanistic pathways with
literature validation 31. These datasets consist of gene-centric (n = 798), metabolite-
centric (n = 201), and drug-centric (n = 842) question–answer pairs, each explicitly
reflecting the causal flow from drug through intermediate biological nodes to disease
outcomes. Across all three DrugMechDB-derived benchmarks, BTE–RAG robustly improves factual grounding and accelerates convergence to correct responses across diverse biomedical entities relative to an LLM-only baseline.
Collectively, these findings establish BTE–RAG as a powerful, practical tool for reducing
hallucination risks and enhancing mechanistic clarity, significantly advancing the
transparency, reliability, and utility of language model-driven biomedical discovery and
clinical decision-making.
2 Materials & Methods
2.1 BTE–RAG Framework and Baseline Comparison
The BTE–RAG framework systematically compares two distinct inference routes to
evaluate the impact of structured, mechanistic context on large language model (LLM)
outputs (Figure 1A). The first inference route, labeled "LLM-only," directly submits user-
generated questions to the language model without external context augmentation. The
second route, labeled "BTE–RAG," integrates structured mechanistic evidence retrieved
from BioThings Explorer prior to submitting an enriched, evidence-supported prompt to
the same language model. This dual-path design allows rigorous evaluation of how
explicitly retrieved context influences both answer accuracy and the factual grounding of
model-generated responses. The BTE–RAG architecture comprises three key phases:
entity recognition, knowledge-graph-based retrieval via BTE, and generative inference
utilizing context-augmented LLM prompting.
Entity Recognition
The retrieval phase begins with precise identification of biomedical entities mentioned
within each input question. For the current benchmarks, entities such as drugs,
diseases, metabolites, and biological processes were pre-annotated and standardized
to established knowledge graph identifiers, enabling automated recognition at runtime.
Additionally, the framework includes a zero-shot entity extraction module that leverages
a specialized task-oriented prompting approach. This module is currently optimized for
retrieving drugs and diseases from queries, with potential to extend extraction
capabilities to include other biomedical entities as needed.
Knowledge Graph Retrieval
Identified biomedical entities are translated into structured queries interfacing directly
with BTE. BTE integrates 61 authoritative biomedical databases under a unified
knowledge graph schema, accessible via the programmatic API endpoint (/v1/query).
Each query to BTE specifies an input entity (e.g., disease, drug, or biological process)
along with desired output entity categories, following the TRAPI query format. In
response, BTE returns structured JSON data that includes a detailed knowledge graph
containing two key components: "nodes," which describe biomedical entities along with
their semantic categories and standardized names; and "edges," which specify the
bioRxiv preprint doi: https://doi.org/10.1101/2025.08.01.668022; this version posted August 2, 2025. The copyright holder for this preprint (which was not certified by peer review) is the author/funder, who has granted bioRxiv a license to display the preprint in perpetuity. It is made available under a CC-BY 4.0 International license.
explicit relationships (predicates) between pairs of entities, supplemented by
provenance details indicating the primary knowledge sources.
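The one-hop retrieval step described above can be sketched as follows. This is a minimal illustration, not the paper's client code: the helper name `build_trapi_query`, the example CURIE (MONDO:0005148), and the endpoint URL in the comment are assumptions; only the /v1/query endpoint and the TRAPI message layout (query_graph with "nodes" and "edges") come from the text.

```python
# Sketch of a one-hop TRAPI query against BioThings Explorer (BTE).
# The CURIE and endpoint URL are illustrative placeholders.

def build_trapi_query(input_curie: str, input_category: str, output_category: str) -> dict:
    """Assemble a TRAPI message asking for entities of `output_category`
    linked to the pinned input entity."""
    return {
        "message": {
            "query_graph": {
                "nodes": {
                    "n0": {"ids": [input_curie], "categories": [input_category]},
                    "n1": {"categories": [output_category]},
                },
                "edges": {
                    "e0": {"subject": "n0", "object": "n1"},
                },
            }
        }
    }

query = build_trapi_query("MONDO:0005148", "biolink:Disease", "biolink:Gene")
# The query would then be POSTed to BTE's /v1/query endpoint, e.g.
#   requests.post("https://bte.transltr.io/v1/query", json=query)
# and the response's knowledge graph would carry "nodes" and "edges".
```

The same builder, called once per pre-annotated entity with a different output category, reproduces the benchmark-specific query patterns described in the next paragraph.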
For each benchmark dataset, targeted queries were structured to retrieve
mechanistically relevant context. Specifically, in the gene-centric benchmark, queries
separately utilized disease and drug entities to retrieve directly linked gene and protein
nodes. In the metabolite-centric benchmark, disease and chemical (drug) entities were
queried independently to identify connected biochemical entities. For the drug–biological process benchmark, separate queries using disease entities and biological process entities were conducted to retrieve associated chemical entities (drugs).

Figure 1: Retrieval-augmented generation workflow and derivation of mechanistic evaluation benchmarks.
(A) Schematic of the BTE-RAG pipeline, which augments large language model (LLM) responses with context retrieved from the BioThings Explorer (BTE) knowledge graph. In the LLM-only pathway, the model generates a response using only the input question. In contrast, BTE-RAG operates in two phases: a Retrieval Phase, where relevant entities are extracted from the question and queried against BTE to collect mechanistically relevant subject–predicate–object triples, and a Generative Phase, where this curated context is appended to the input question and passed to the LLM. The resulting outputs (LLM-only or BTE-RAG) can be directly compared to assess the impact of knowledge-augmented generation.
(B) Construction of benchmark datasets from DrugMechDB, a curated biomedical knowledge graph of drug–disease mechanisms. Directed paths connecting a drug to a disease were mined and transformed into structured questions targeting different mechanistic facets: (i) gene nodes (Mechanistic Gene Benchmark), (ii) biochemical entities or metabolites (Metabolite Benchmark), and (iii) drug–biological process–disease paths (Drug Benchmark). Each benchmark provides paired questions and gold-standard labels for rigorous, domain-specific evaluation of retrieval-augmented generation.

Upon
receiving the structured knowledge graph responses from BTE, both node and edge
information were systematically processed. Nodes were extracted along with their
semantic categories and descriptive names, while edges were parsed to identify
subject-object pairs, predicates, and associated primary knowledge sources. Nodes and
edges were subsequently merged to construct coherent statements that succinctly
describe each mechanistic relationship (e.g., "drug X inhibits gene Y"). These concise,
natural-language context statements collectively formed the mechanistic evidence
provided to the language models during the generative inference phase, significantly
enhancing the transparency, interpretability, and accuracy of the generated outputs.
Supplementary Figure S1 provides a detailed schematic illustrating the complete BTE–
RAG pipeline workflow, demonstrating a representative query and the subsequent
processing and integration steps.
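The node/edge merging step can be sketched in a few lines. The field layout follows the TRAPI-style response described above; the sample triple, its predicate, and the source label are invented for illustration.

```python
# Sketch: merge TRAPI-style "nodes" and "edges" into short natural-language
# context statements of the form "subject predicate object (source)".
# The sample data below are invented placeholders.

nodes = {
    "CHEBI:45783": {"name": "imatinib", "categories": ["biolink:SmallMolecule"]},
    "NCBIGene:25": {"name": "ABL1", "categories": ["biolink:Gene"]},
}
edges = {
    "e0": {"subject": "CHEBI:45783", "object": "NCBIGene:25",
           "predicate": "biolink:inhibits", "source": "ChEMBL"},
}

def verbalize(nodes: dict, edges: dict) -> list:
    """Resolve each edge's endpoints to node names and emit one sentence per edge."""
    statements = []
    for edge in edges.values():
        subj = nodes[edge["subject"]]["name"]
        obj = nodes[edge["object"]]["name"]
        pred = edge["predicate"].removeprefix("biolink:").replace("_", " ")
        statements.append(f"{subj} {pred} {obj} (source: {edge['source']})")
    return statements

statements = verbalize(nodes, edges)
# e.g. "imatinib inhibits ABL1 (source: ChEMBL)"
```

The resulting sentence list is exactly the kind of context block that is later appended to the prompt in the generative phase.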
Context Selection
Two distinct evidence-inclusion strategies were systematically assessed for each
question. The first strategy incorporates the entire set of sentences retrieved by BTE,
leveraging the extensive 128,000-token context window of GPT-4o [32]. The second
strategy employs sentence-level cosine similarity filtering using ‘S-PubMedBert-MS-
MARCO’ embeddings, retaining only sentences whose similarity scores with the query
exceed a predefined percentile threshold [33]. Running these two strategies concurrently
enables a direct evaluation of the impact of comprehensive versus selectively pruned
contextual evidence under identical experimental conditions.
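The second strategy can be sketched as follows. In the paper the vectors come from the 'S-PubMedBert-MS-MARCO' sentence embedder; here tiny hand-made 2-d vectors stand in so the ranking-and-cutoff logic can be shown end to end, and the `prune_context` helper name is an assumption.

```python
from math import sqrt

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (sqrt(sum(a * a for a in u)) * sqrt(sum(b * b for b in v)))

def prune_context(query_vec, sentences, drop_fraction=0.10):
    """Rank context sentences by cosine similarity to the query and drop the
    lowest-scoring fraction (the paper removes a bottom percentile band)."""
    ranked = sorted(sentences, key=lambda s: cosine(query_vec, s["vec"]), reverse=True)
    n_keep = len(ranked) - int(len(ranked) * drop_fraction)
    return [s["text"] for s in ranked[:n_keep]]

# Toy data: nine roughly query-aligned statements and one anti-aligned one.
query_vec = [1.0, 0.0]
sentences = [{"text": f"relevant statement {i}", "vec": [1.0, 0.05 * i]} for i in range(9)]
sentences.append({"text": "irrelevant statement", "vec": [-1.0, 0.2]})
kept = prune_context(query_vec, sentences, drop_fraction=0.10)
# The single anti-aligned statement is the one discarded.
```

Varying `drop_fraction` from 0.10 to 0.90 corresponds to the percentile sweep evaluated later in the Results.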
Generative Inference
For the generative phase, selected context sentences and the original query were
concatenated to form an enriched prompt submitted to both GPT-4o and GPT-4o-mini
models. Models were configured deterministically (temperature set to 0) to produce
reproducible outputs. Parallel runs of the LLM-only baseline used identical questions
without the BTE-derived context. To streamline downstream analyses and ensure
objective comparisons, language models were instructed explicitly to output structured
JSON responses devoid of extraneous explanatory text. Detailed system prompts are
shown in Supplementary Table S1.
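A minimal sketch of the request construction, assuming the OpenAI chat-completions payload shape: the system prompt text and the `build_request` helper are placeholders (the real prompts are in Supplementary Table S1), while temperature 0 and JSON-only output come from the Methods.

```python
# Sketch of the deterministic, JSON-constrained request described above.
# Placeholder system prompt; temperature 0 and JSON output are from Methods.

def build_request(question: str, context_statements: list, model: str = "gpt-4o-2024-08-06") -> dict:
    prompt = question
    if context_statements:  # BTE-RAG route: prepend retrieved evidence
        prompt = "Context:\n" + "\n".join(context_statements) + "\n\nQuestion: " + question
    return {
        "model": model,
        "temperature": 0,                            # deterministic decoding
        "response_format": {"type": "json_object"},  # structured JSON answers only
        "messages": [
            {"role": "system", "content": "You are a biomedical assistant. Answer in JSON."},
            {"role": "user", "content": prompt},
        ],
    }

req = build_request(
    "Which gene plays the most significant mechanistic role in how Drug 'X' treats Disease 'Y'?",
    ["drug X inhibits gene Y (source: ChEMBL)"],
)
```

Calling the same builder with an empty context list reproduces the LLM-only baseline under otherwise identical settings.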
Evaluation
Generated outputs were benchmarked against curated ground-truth annotations from
the constructed mechanistic datasets. For entity-specific tasks (e.g., gene identification),
correctness was evaluated via exact, case-insensitive string matching. For semantically
nuanced responses (e.g., metabolites and drugs), BioBERT-based embeddings
(‘BioBERT-mnli-snli-scinli-scitail-mednli-stsb’) quantified the semantic similarity between
model-generated outputs and reference answers [34]. Answers surpassing a
predetermined similarity threshold were classified as accurate. Collectively, these
standardized evaluation methodologies ensure scalable, objective, and reproducible
assessment of the fidelity and biological coherence of model predictions, rigorously
testing the utility and impact of knowledge graph-enhanced prompting in biomedical
reasoning contexts.
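The two scoring modes reduce to a few lines. The 0.90 default threshold and the toy vectors below are illustrative assumptions; the paper states only that a predetermined threshold over BioBERT-based embedding similarity was used.

```python
# Sketch of the two correctness criteria: exact case-insensitive matching for
# gene symbols, and thresholded cosine similarity for semantically variable
# answers (metabolites, drugs). Threshold value is an assumed placeholder.

def is_correct_exact(prediction: str, gold: str) -> bool:
    """Exact, case-insensitive string match (gene identification tasks)."""
    return prediction.strip().lower() == gold.strip().lower()

def is_correct_semantic(pred_vec, gold_vec, threshold=0.90) -> bool:
    """Embedding cosine similarity at or above the threshold counts as correct."""
    dot = sum(a * b for a, b in zip(pred_vec, gold_vec))
    norms = (sum(a * a for a in pred_vec) ** 0.5) * (sum(b * b for b in gold_vec) ** 0.5)
    return dot / norms >= threshold
```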
2.2 Construction of Mechanistic Question–Answer Benchmarks from DrugMechDB
DrugMechDB is a rigorously curated biomedical knowledge graph designed to represent
therapeutic mechanisms through explicit stepwise paths. These pathways originate from
drug nodes, traverse biologically meaningful intermediate entities, and culminate at
disease nodes, collectively delineating mechanisms underlying drug–disease
interactions [31]. The current version of DrugMechDB contains 5,666 curated mechanistic
pathways, providing comprehensive coverage for 4,583 distinct drug–disease
indications. Each node within DrugMechDB is systematically mapped to a standardized
Biolink category and anchored to stable identifiers, while each relationship (edge) is
annotated with a controlled predicate [35]. This structured, granular, and provenance-rich
resource enables robust benchmarking of computational models focused on
mechanistic inference rather than simple associative or co-occurrence patterns.
To comprehensively evaluate the BTE–RAG framework across multiple levels of
biological resolution, DrugMechDB was systematically transformed into three
complementary mechanistic question–answer (QA) benchmarks, each highlighting a
distinct biological focus: genes, metabolites, and drugs (Figure 1B).
Gene-Centric Benchmark: Mechanistic pathways were initially filtered to retain those
containing exactly one internal node annotated as a Gene entity. Gene identifiers were
resolved into standardized HGNC symbols using MyGene.info services; pathways
containing deprecated or ambiguous identifiers were systematically excluded [36]. Each
remaining mechanistic pathway was converted into a structured question of the form:
"Which gene plays the most significant mechanistic role in how Drug 'X' treats or
impacts Disease 'Y'?" The corresponding HGNC gene symbol served as the definitive
ground truth. Following deduplication across different indications, this dataset
comprised 798 unique QA pairs.
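The path-to-question transformation can be sketched as below. The path representation (a list of nodes with Biolink-style categories), the example path, and the `path_to_qa` helper are invented stand-ins; the question template and the exactly-one-gene filter come from the text.

```python
# Sketch of the gene-centric benchmark construction: keep only paths with
# exactly one internal Gene node, then template a QA pair. Sample path data
# are invented placeholders.

QUESTION = ("Which gene plays the most significant mechanistic role in how "
            "Drug '{drug}' treats or impacts Disease '{disease}'?")

def path_to_qa(path):
    """Convert a drug -> ... -> disease path into a QA pair, or None if the
    exactly-one-gene filter rejects it."""
    genes = [n for n in path[1:-1] if n["category"] == "Gene"]
    if len(genes) != 1:
        return None
    drug, disease = path[0]["name"], path[-1]["name"]
    return {"question": QUESTION.format(drug=drug, disease=disease),
            "answer": genes[0]["name"]}  # HGNC symbol serves as ground truth

path = [
    {"name": "imatinib", "category": "Drug"},
    {"name": "ABL1", "category": "Gene"},
    {"name": "chronic myelogenous leukemia", "category": "Disease"},
]
qa = path_to_qa(path)
```

The metabolite- and drug-centric benchmarks below follow the same pattern with different node filters (CHEBI-prefixed metabolites, single BiologicalProcess nodes) and question templates.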
Metabolite-Centric Benchmark: To capture downstream biochemical effects, pathways
exclusively containing taxonomic relationships (such as “subclass” predicates) were
removed to ensure mechanistic specificity. Selected pathways included exactly one
metabolite node, identified specifically by filtering node identifiers prefixed with "CHEBI:"
to denote biochemical entities. Records containing multiple mechanistic pathways were
excluded to maintain dataset simplicity and clarity. Each qualifying pathway was
formulated into the structured question: "Which biochemical entity is affected by Drug 'X'
via its mechanism of action in treating Disease 'Y'?" The metabolite node identified via
CHEBI identifiers served as the ground truth answer, yielding a final dataset of 201
unique QA pairs.
Drug-Centric Benchmark: A third benchmark dataset was developed to evaluate the
ability of computational models to infer therapeutic agents when provided with a disease
and a mediating biological process. Pathways were selected specifically if they included
exactly one BiologicalProcess node, and drugs lacking resolvable identifiers from
DrugBank or MESH databases were excluded to ensure accurate and standardized
identification. Each qualifying path was structured into the question: "Which drug can be
used in the treatment of Disease 'Y' by targeting Biological Process 'P'?" The
corresponding drug node served as the ground truth. After thorough harmonization and
stringent quality control measures, this benchmark comprised 842 unique QA pairs.
The resulting benchmarks thus offer a robust, multiscale evaluation platform specifically
designed to probe the mechanistic inference capabilities of knowledge-graph-
augmented language models comprehensively and rigorously.
2.3 Use of Large Language Models
All natural-language processing steps were carried out with two OpenAI models,
GPT-4o-mini (snapshot 2024-07-18) and GPT-4o (snapshot 2024-08-06) [32,37]. Both
models were invoked through the OpenAI API. The temperature parameter was fixed at
0.0 for every request, thereby forcing deterministic decoding and facilitating
reproducible evaluation. Each model accepts up to 128,000 input tokens and can return
a maximum of 16,384 completion tokens. Although GPT-4o-mini is substantially smaller
in parameter count than GPT-4o, both models share the same context window size,
permitting a controlled comparison of model capacity while holding prompt length
constant [32,37]. At the time the experiments were executed, GPT-4o-mini was priced at
0.15 USD per million input tokens and 0.60 USD per million output tokens. The
corresponding prices for GPT-4o were 2.50 USD and 10.00 USD, respectively. Model
versions were pinned by explicit snapshot identifiers to eliminate the possibility of
version drift during the study period. Snapshot documentation is archived at
https://platform.openai.com/docs/models/gpt-4o-mini and
https://platform.openai.com/docs/models/gpt-4o. ChatGPT was used to assist with
grammar correction and to improve conciseness in the manuscript.
Prompt engineering
Each request began with a concise system prompt defining the model’s role [38–40]. Two
distinct system prompts were prepared per dataset: one for the standalone LLM
baseline, and one tailored for the retrieval-augmented BTE–RAG workflow. Queries
were provided directly to the model without additional contextual examples, employing a
zero-shot prompting approach. To facilitate efficient and accurate downstream
processing, the model was instructed to produce responses strictly in a predefined
JSON format, omitting supplementary explanatory text.
3 Results
We developed BTE–RAG, a retrieval-augmented generation framework designed to
enhance large language models (LLMs) by integrating mechanistic evidence from
BioThings Explorer (BTE), a federated biomedical knowledge graph. BTE–RAG embeds
structured, graph-derived context into prompts to improve mechanistic accuracy, ensure
explicit provenance, and facilitate higher-order reasoning. We benchmarked the
performance of BTE–RAG versus an LLM-only baseline across three distinct
mechanistic reasoning tasks: gene identification, drug–metabolite interactions, and
drug–biological-process relationships.
3.1 Mechanistic Gene Prediction
We first assessed the effect of knowledge graph augmentation on gene-level
mechanistic inference using 798 curated drug–disease pairs from DrugMechDB.
Queries were structured as: "Which gene plays the most significant mechanistic role in
how Drug 'X' treats or impacts Disease 'Y'?" Two models, GPT-4o and the smaller GPT-4o-mini, were evaluated in two experimental conditions: (i) LLM-only, providing no
additional context, and (ii) BTE–RAG, incorporating evidence retrieved via BTE.
Under the LLM-only condition, GPT-4o-mini correctly answered 407 queries
(Supplementary Figure S2B), achieving an accuracy of 51%. Augmenting prompts with
BTE-derived evidence markedly increased accuracy to 75.8% (Figure 2A), a substantial
absolute improvement of 24.8 percentage points (Supplementary Figure S2C). The
larger GPT-4o model demonstrated an accuracy of 69.8% in the baseline condition,
which increased to 78.6% (627 correct answers, Supplementary Figure S3B) when
supplemented with BTE context (Figure 2B), reflecting an absolute gain of 8.8
percentage points (Supplementary Figure S3C).
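The reported percentages follow directly from the correct-answer counts over the 798 gene-benchmark questions; the counts marked "about" are derived from the rounded percentages rather than stated in the text.

```python
# Consistency check of the reported gene-benchmark accuracies (n = 798).

TOTAL = 798
assert round(407 / TOTAL * 100) == 51        # GPT-4o-mini, LLM-only
assert round(627 / TOTAL * 100, 1) == 78.6   # GPT-4o, BTE-RAG
# GPT-4o-mini with BTE-RAG (75.8%) corresponds to about 605 correct answers:
assert round(0.758 * TOTAL) == 605
```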
Figure 2: Retrieval-augmented generation with BTE-RAG markedly improves factual accuracy on the gene-centric benchmark using GPT-4o models.
(A) For the compact GPT-4o-mini model, introducing the BTE-RAG retrieval layer raised overall accuracy from 51% (hatched bar, LLM-only baseline) to 75.8% (solid bar).
(B) The same intervention applied to the larger GPT-4o model increased accuracy from 69.8% to 78.6%. Accuracy was calculated as the proportion of correct answers across the composite biomedical question-answering benchmark described in Methods.

Because knowledge-graph queries can return superfluous triples, we evaluated a simple similarity-based pruning strategy. Specifically, both the user queries and the context statements were embedded using the sentence embedding model 'S-PubMedBert-MS-MARCO' [33]. Context statements were then ranked based on cosine similarity scores relative to the embedded query, and those statements falling within the
lowest 10% similarity scores were removed to retain only the most relevant context
lines. This lightweight filtering strategy preserved, and in some cases slightly enhanced, performance across all evaluated accuracy metrics (Supplementary Figure S2A, S3A),
suggesting that excluding the least relevant context statements can beneficially impact
the accuracy of gene-level reasoning tasks. Cross-tabulation provided quantitative detail
on the effect of retrieval augmentation, illustrating that BTE context flipped 245
previously incorrect answers to correct for GPT-4o-mini and 119 for GPT-4o
(Supplementary Figure S2D, S3D).
Together, these findings illustrate that structured mechanistic context provided through
BTE significantly enhances gene-level reasoning performance, particularly amplifying
the capabilities of smaller-scale models such as GPT-4o-mini. The accuracy
improvements observed even in GPT-4o highlight that state-of-the-art models retain
latent knowledge gaps effectively bridged by integrating curated biomedical graphs and
selectively pruning irrelevant content.
3.2 Prediction of Drug–Metabolite Relationships
To gauge whether retrieval augments the mechanistic fidelity of metabolite‑level
reasoning, we posed 201 queries of the form “Which biochemical entity is affected by
Drug X via its mechanism of action in treating Disease Y?” using the DrugMechDB‑
derived Drug → Metabolite → Disease paths. Because metabolite names are much less
standardized than gene names, we scored the answer quality by computing a semantic
concordance between each model answer and the gold standard metabolite. Semantic
concordance was based on cosine similarity of text embeddings using the BioBERT-
STSB text embedding model, a metric that rewards graded lexical and semantic overlap
rather than exact string identity [34].
Rank‑ordered similarity curves in Figure 3A immediately reveal the effect of
augmentation: for both gpt‑4o‑mini (orange) and gpt‑4o (blue), the BTE‑RAG trace
(solid line) departs from the prompt‑only baseline (dashed line) after ~130 ranked
questions (cosine ≈ 0.70) and widens steadily, nearly doubling the number of answers
that reach the high‑fidelity zone (cosine ≥ 0.90).
Histograms for the prompt‑only condition (Figure 3B, gpt4o; Supplementary Figure S4,
gpt-4o-mini) reveal a pronounced left‑skew: both gpt‑4o‑mini and gpt‑4o peak in the
0.15–0.30 similarity bins, with medians below 0.30. Only 15 % of answers fall in the
high‑similarity regime (≥ 0.90), indicating that the LLMs frequently retrieve metabolites
that are semantically distant from the curated ground truth.
Appending BTE evidence shifts the distributions rightward across similarity bins (Figure
3C (gpt-4o), Supplementary Figure S5, S6). For GPT-4o-mini, applying a stringent
context similarity threshold (>80th percentile) increased the number of high-fidelity
answers (cosine similarity 0.90–1.00) from 28 to 51 (+82%). Similarly, GPT-4o exhibited
an increase from 30 to 53 (+77%) under the same conditions. Simultaneously, counts in
the mid‑similarity interval (0.40–0.70) contract (Supplementary Figure S5, S6),
confirming that retrieval largely converts borderline predictions into highly concordant
hits rather than merely redistributing low‑score failures.
Because voluminous context can inflate token budgets, we assessed performance
when progressively discarding lower‑ranked context lines (10th to 90th percentile
cut‑offs). Rank‑ordered similarity traces (Supplementary Figure S7) show that the
BTE‑RAG curves remain above or coincide with the prompt‑only baseline throughout
the distribution even when 90 % of context is withheld. Histograms (Supplementary
Figure S5, S6) reinforce this observation: the ≥0.90 similarity bin consistently retains ≥40 hits for both models across all pruning levels, demonstrating that a concise subset of top-ranked evidence lines is sufficient to drive the bulk of the performance gains.

Figure 3: Retrieval-augmented context increases semantic concordance with ground-truth metabolites.
(A) Cosine-similarity scores between each generated answer and the corresponding reference metabolite (sentence-transformer embeddings; see Methods) are plotted for all 201 questions in the Metabolite Benchmark, ordered from lowest to highest similarity. Dashed traces represent the LLM-only baseline, whereas solid traces include BioThings Explorer (BTE) retrieval-augmented context. Orange curves denote GPT-4o-mini; blue curves denote GPT-4o. For both model sizes, BTE-RAG systematically shifts the similarity distribution upward, indicating improved semantic alignment with the curated biochemical ground truth.
(B) Score distribution: GPT-4o, LLM-only. Histogram of cosine-similarity scores for GPT-4o answers generated without external context. Bar heights and numeric labels denote the number of questions (n = 201) falling in each bin; the overlaid KDE line summarizes the distribution.
(C) Score distribution: GPT-4o + BTE-RAG. Same format as panel B but for GPT-4o answers generated with BTE-RAG's context. The right-shifted, more peaked distribution highlights the improvement in semantic alignment achieved by retrieval-augmented generation.
3.3 Drug–Biological Process Reasoning
We next asked 842 DrugMechDB questions of the form “Which drug can be used in the
treatment of Disease Y by targeting Biological Process P?”. Answer fidelity was again
scored with BioBERT-STSB cosine similarity [34].

Figure 4: Retrieval-augmented generation maintains overall parity yet excels in the high-fidelity regime of drug-centric mechanistic answers.
(A) Cosine-similarity scores (sentence-transformer embeddings; see Methods) between each generated answer and the reference drug→biological-process pathway are plotted for all 842 questions in the Drug Benchmark, ordered from lowest to highest similarity. Dashed traces (LLM-only) and solid traces (BTE-RAG) follow nearly overlapping trajectories across most of the distribution, indicating broadly comparable performance between the two inference modes. However, above a cosine similarity threshold of ≈0.7, both GPT-4o-mini (orange) and GPT-4o (blue) curves generated with BTE context surge ahead of their prompt-only counterparts, revealing a marked advantage in producing highly concordant mechanistic explanations.
(B) Score distribution: GPT-4o, LLM-only. Histogram of cosine-similarity scores for GPT-4o answers generated without external context. The hatched bar at 0.90–1.00 marks the high-fidelity zone, capturing 129 near-perfect matches produced by the baseline model.
(C) Score distribution: GPT-4o + BTE-RAG. Same format as panel B but for GPT-4o answers produced with BTE-RAG's context. The distribution is right-shifted, and the solid bar in the 0.90–1.00 high-fidelity zone now contains 144 answers, highlighting the enrichment of top-tier mechanistic concordance achieved through retrieval-augmented generation.

In rank-ordered plots (Figure 4A), the prompt-only (dashed) and BTE-RAG (solid) curves for both GPT-4o-mini (orange) and GPT-4o (blue) are nearly superimposable through the first ≈600 ranked queries
(cosine < 0.70). Beyond this inflection point, the BTE‑augmented traces bend upward
more steeply, yielding a clear margin in the high‑fidelity zone (cosine ≥ 0.80). Thus,
retrieval does not alter overall parity but selectively boosts the most mechanistically
demanding subset of questions.
Prompt‑only histograms (Figure 4B; Supplementary Figure S8, gpt-4o-mini) peak in the
0.20–0.35 range, with ~15 % of answers falling in the ≥ 0.90 bin. Appending the full BTE
context nudges the entire distribution rightward (Figure 4C; Supplementary Figure S9-
top‑left panels). The ≥ 0.90 bin increases by ≈ 5–10 % for both model sizes. These
shifts, though smaller than those seen for gene‑ and metabolite tasks, account for the
late‑stage separation observed in Figure 4A.
Unlike the previous tasks, performance here depends on retaining a broad evidentiary
window. When the lowest‑ranked 10–20 % of context lines are removed, the uplift in the
≥ 0.90 bin attenuates, and the rank‑ordered curves progressively converge toward the
baseline (Supplementary Figure S9, S10 & S11). Deeper cuts (> 40 %) essentially erase
the retrieval advantage. This suggests that pathway‑level questions draw on a more
diffuse set of graph triples than gene or metabolite queries, and aggressive trimming
can discard critical relational clues. For drug → biological‑process reasoning, BTE‑RAG
delivers targeted gains in the top decile of similarity scores, provided the complete
knowledge‑graph context is supplied.
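The trimming experiments above can be sketched as a simple percentile filter. `prune_context` is a hypothetical helper, not taken from the BTE-RAG codebase, that drops the lowest-ranked fraction of linearized context lines by their similarity score:

```python
def prune_context(lines, similarities, drop_fraction):
    """Keep context lines above the given drop percentile.

    lines: linearized knowledge-graph triples (strings)
    similarities: cosine similarity of each line to the question
    drop_fraction: e.g. 0.2 removes the lowest-ranked 20% of lines
    """
    ranked = sorted(zip(lines, similarities), key=lambda p: p[1], reverse=True)
    keep = max(1, int(round(len(ranked) * (1.0 - drop_fraction))))
    return [line for line, _ in ranked[:keep]]

context = ["A treats B", "A inhibits C", "C part_of D", "D related_to E"]
scores = [0.91, 0.75, 0.60, 0.42]
# Dropping 25% of four lines removes the single lowest-ranked line.
pruned = prune_context(context, scores, 0.25)
```

Under this sketch, deeper cuts (larger `drop_fraction`) discard progressively more of the diffuse relational evidence that pathway-level questions appear to depend on.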
These findings reinforce that optimal evidence granularity is task‑dependent: concise,
high‑relevance snippets suffice for gene‑ and metabolite‑level inference, whereas
pathway‑level queries benefit from a richer contextual fabric. By grounding LLM outputs
within curated, biologically meaningful pathways, BTE–RAG consistently accelerates
accurate inference, reduces residual errors, and demonstrates considerable promise for
advancing automated biomedical hypothesis generation and therapeutic repurposing
workflows.
4 Discussion
The rapid advancement of large language models (LLMs) has profoundly reshaped
biomedical natural language processing 41. Despite these advances, current LLMs
predominantly operate as opaque systems with implicit knowledge representation,
rendering their factual accuracy challenging to verify and limiting their applicability in
high-stakes biomedical environments. Recent efforts, such as the knowledge-graph
augmented retrieval approach 21, have successfully enhanced biomedical reasoning by
integrating disease-specific embeddings from specialized knowledge graphs such as
SPOKE 42. We developed BTE–RAG, a novel retrieval-augmented generation pipeline
that strategically incorporates explicit mechanistic evidence from BTE 27. By leveraging
the extensive and federated biomedical knowledge graph of BTE, our method
substantially broadens the applicability of knowledge-graph augmented strategies to
address diverse query types, including those involving genes, proteins, metabolites,
biological processes, diseases and chemical substances. This capability allows BTE–
RAG to support complex, multi-domain biomedical inquiries, significantly extending
beyond disease-centric queries alone. Our comparative analysis, pitting a direct
"LLM-only" approach against the BTE-augmented strategy (Figure 1A) across three rigorously
constructed DrugMechDB benchmarks (Figure 1B), demonstrates that incorporating
explicit, structured context significantly elevates answer accuracy, enhances
transparency, and allows smaller, more computationally efficient models to perform
competitively with leading-edge systems. The granularity, explicit mechanistic
grounding, and high-quality source attribution inherent in these benchmarks uniquely
position them for probing the causal inference capabilities of language models.
Comparable mechanistically focused datasets remain scarce in the biomedical domain,
as existing resources like PubMedQA or Natural Questions predominantly target
document-level retrieval or summarization rather than deep mechanistic inference 43,44.
Traditional LLMs accumulate domain-specific knowledge implicitly during pre-training by
statistically modeling large collections of biomedical texts. Although this method yields
linguistically coherent responses, it inherently exposes models to the risk of
hallucinations, particularly in scenarios involving sparse biomedical facts or multi-step
mechanistic reasoning. By contrast, retrieval-augmented generation explicitly anchors
model predictions in verifiable external sources, constraining generation to well-
substantiated evidence. BTE–RAG advances this paradigm by dynamically federating
61 authoritative biomedical APIs into a single cohesive meta-graph, thereby enabling
real-time inclusion of newly curated knowledge in generated responses and ensuring
reproducible benchmarking through cached retrievals.
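Cached retrieval of this kind can be approximated with a memoizing wrapper. `fetch_bte` below is a stand-in for the real BioThings Explorer client, included only to illustrate how keying a cache on the serialized query yields reproducible benchmark runs:

```python
import functools
import json

@functools.lru_cache(maxsize=None)
def fetch_bte(query_json: str) -> str:
    # Stand-in for a live BioThings Explorer call: a real client would POST
    # the TRAPI query graph and return the response body. Keying the cache
    # on the serialized query string makes repeated runs deterministic.
    return json.dumps({"query": json.loads(query_json), "results": []})

# Serialize with sort_keys so semantically identical queries share a cache key.
q = json.dumps({"nodes": {"n0": {"ids": ["MESH:D006852"]}}}, sort_keys=True)
first = fetch_bte(q)
second = fetch_bte(q)  # served from the cache, not re-fetched
```

In a production pipeline the cache would be persisted to disk so that a benchmark can be replayed against the exact evidence originally retrieved.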
Four critical design principles underpin the efficacy of the BTE–RAG framework. First,
the framework leverages an API-centric federation layer that integrates trusted
biomedical data sources, including MyGene.info, Gene Ontology, CTD, PubMed Central, ChEBI, Disease Ontology, DrugBank, and more, through the unified interface of BTE
29,36,45–47. Second, it employs semantic query templates aligned with the Translator
Reasoner API (TRAPI) standard, selectively retrieving only the most relevant
relationships for each question, thereby avoiding extraneous contextual noise. Third,
retrieved knowledge graph triples are translated into succinct, directionally explicit
declarative statements, seamlessly integrating structured knowledge with natural-language prompts. Fourth, BTE–RAG incorporates flexible context-selection strategies: full-context utilization, and cosine-similarity-based pruning for scenarios requiring concise, highly relevant context subsets.
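The third principle, rendering triples as directionally explicit sentences, amounts to a small templating step. This sketch assumes triples arrive as (subject, predicate, object) tuples, which is not necessarily the exact internal representation used by BTE-RAG:

```python
def linearize(triples):
    """Render subject-predicate-object triples as declarative sentences,
    preserving the edge direction (subject acts on object)."""
    return [f"{s} {p.replace('_', ' ')} {o}." for s, p, o in triples]

triples = [
    ("hydrochlorothiazide", "treats", "hypertension"),
    ("hydrochlorothiazide", "decreases_activity_of", "SLC12A3"),
]
sentences = linearize(triples)
# → ["hydrochlorothiazide treats hypertension.",
#    "hydrochlorothiazide decreases activity of SLC12A3."]
```

Because each sentence keeps the subject before the object, the model receives the direction of the mechanistic edge, not just an undirected association.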
Across diverse mechanistic tasks, including gene-centric, metabolite-centric, and drug-
centric benchmarks derived from DrugMechDB 31, BTE-augmented prompting
consistently outperformed the LLM-only approach. Notably, the smaller GPT-4o-mini
model achieved over a sixty percent improvement in accuracy on the gene-centric task and an eighty-two percent improvement on the metabolite task when provided with
structured BTE evidence. Even GPT-4o, the larger flagship model, demonstrated
substantial accuracy gains, underscoring that high-quality, explicit mechanistic context
can effectively mitigate the need for extremely large model sizes, suggesting a cost-
efficient pathway toward domain-specific accuracy.
While BTE offers comprehensive coverage across numerous biomedical domains,
certain areas such as single-cell data, epigenomic profiles, and microbiome interactions
remain sparsely represented. Furthermore, variations in curation quality across
federated APIs could inadvertently propagate erroneous edges into model-generated
contexts. Although our evaluation leveraged the meticulously curated, high-confidence
knowledge graph of DrugMechDB, real-world applications may require strategies for
managing lower-confidence or conflicting evidence. Our study employed deterministic
prompting to maintain comparability; exploring guided, chain-of-thought prompting
strategies could further enhance complex reasoning capabilities but may simultaneously
reintroduce hallucinatory risks.
Future developments of BTE–RAG may involve integration into autonomous agent
systems capable of iterative querying, generation, self-critiquing, and re-querying, thus
facilitating automated self-verification workflows. Expanding the underlying knowledge
graph to incorporate resources such as LINCS transcriptomic signatures, tissue-specific
interaction networks, and multi-omics datasets would further enrich the mechanistic
coverage and broaden applicability 48. Expanding benchmarking efforts beyond
DrugMechDB to encompass open-world biomedical queries could rigorously evaluate
and strengthen the capacity of BTE–RAG for reliable, contextually grounded inference.
Furthermore, adopting frameworks like the Model Context Protocol could harmonize
comparisons across diverse generative models, facilitate rigorous auditing, and support
real-time decision-making in clinical and regulatory contexts.
In conclusion, BTE–RAG demonstrates the substantial value derived from strategically
integrating explicit mechanistic evidence into biomedical language modeling workflows.
By significantly improving answer accuracy, interpretability, and computational efficiency,
this approach provides a scalable, transparent, and robust foundation for future
biomedical AI systems, effectively balancing accuracy, affordability, and trustworthiness.
Supplementary File:
Supplementary Figure S1: Detailed pipeline for BTE-RAG
Supplementary Figure S2: Performance of BTE‑RAG versus an LLM‑only baseline on
the gene‑centric benchmark using gpt‑4o‑mini.
Supplementary Figure S3: Performance of BTE‑RAG versus an LLM‑only baseline on
the gene‑centric benchmark using gpt‑4o.
Supplementary Figure S4: Cosine-similarity profile for the metabolite-centric
benchmark using GPT-4o-mini in LLM-only mode.
Supplementary Figure S5: Distribution of answer similarities for the metabolite-centric
benchmark using GPT-4o-mini in BTE-RAG mode.
Supplementary Figure S6: Distribution of answer similarities for the metabolite-centric
benchmark using GPT-4o in BTE-RAG mode.
Supplementary Figure S7: Rank-ordered cosine similarities between model
predictions and ground-truth answers on the metabolite-centric benchmark, across
context filtering thresholds.
Supplementary Figure S8: Cosine-similarity profile for the drug-centric benchmark
using GPT-4o-mini in LLM-only mode.
Supplementary Figure S9: Distribution of answer similarities for the drug-centric
benchmark using GPT-4o-mini in BTE-RAG mode.
Supplementary Figure S10: Distribution of answer similarities for the drug-centric
benchmark using GPT-4o in BTE-RAG mode.
Supplementary Figure S11: Rank-ordered cosine similarities between model
predictions and ground-truth answers on the drug-centric benchmark, across context
filtering thresholds.
Supplementary Table S1: System prompts used for each task and model
Availability of Data and Materials
The source code, datasets, and analysis workflows described in this manuscript are
publicly available in the GitHub repository: Project Name: BTE-RAG. Repository URL:
https://github.com/janjoy/BTE-RAG
Author contributions
J.J. and A.S. conceived the project and proposed the benchmark curation. J.J.
implemented the code, created the benchmarks and wrote the manuscript. A.S.
supervised the study. All authors read and approved the final manuscript.
Acknowledgements
We thank Jackson Callaghan, Mikhael Astorga, and Karthik Soman for insightful
discussions, and Everaldo Rodolpho for technical support with high-performance
computing resources and server infrastructure.
Competing Interests
The authors declare no competing interests.
Funding
Support for this work was provided by the National Institute on Aging (award
R01AG066750) and by the National Center for Advancing Translational Sciences
through the Biomedical Data Translator program (awards 1OT2TR003427 and
1OT2TR005710). Any opinions expressed in this document do not necessarily reflect
the views of NIA, NCATS, NIH, individual Translator team members, or affiliated
organizations and institutions.
References
1.
Hou, W. & Ji, Z. Assessing GPT-4 for cell type annotation in single-cell RNA-seq analysis. Nat Methods 21, 1462–1465 (2024).
2.
Rives, A. et al. Biological structure and function emerge from scaling
unsupervised learning to 250 million protein sequences. Proc Natl Acad Sci U S A
118, e2016239118 (2021).
3.
Lin, Z. et al. Evolutionary-scale prediction of atomic-level protein structure with a language model. Science 379, 1123–1130 (2023).
4.
Meier, J. et al. Language models enable zero-shot prediction of the effects of
mutations on protein function. in Proceedings of the 35th International Conference
on Neural Information Processing Systems (Curran Associates Inc., Red Hook,
NY, USA, 2021).
5.
Zheng, Y. et al. Large language models in drug discovery and development: from disease mechanisms to clinical trials. arXiv preprint arXiv:2409.04481 (2024).
6.
Miller, K. et al. Dynamic few-shot prompting for clinical note section classification
using lightweight, open-source large language models. Journal of the American
Medical Informatics Association 32, 1164–1173 (2025).
7.
Ji, Z. et al. Survey of Hallucination in Natural Language Generation. ACM Comput
Surv 55, (2023).
8.
Vaswani, A. et al. Attention Is All You Need (Transformer Architecture). (2023).
9.
Kim, Y. et al. Medical Hallucinations in Foundation Models and Their Impact on
Healthcare. (2025).
10.
Idnay, B. et al. Environment scan of generative AI infrastructure for clinical and
translational science. npj Health Systems 2, 1–11 (2025).
11.
Ibrahim, M. et al. Generative AI for synthetic data across multiple medical
modalities: A systematic review of recent developments and challenges. Comput
Biol Med 189, 109834 (2025).
12.
Maynez, J., Narayan, S., Bohnet, B. & McDonald, R. On Faithfulness and
Factuality in Abstractive Summarization. Proceedings of the Annual Meeting of the
Association for Computational Linguistics 1906–1919 (2020)
doi:10.18653/v1/2020.acl-main.173.
13.
Yang, Y., Jin, Q., Huang, F. & Lu, Z. Adversarial Attacks on Large Language Models in Medicine. (2024).
14.
Luo, R. et al. BioGPT: generative pre-trained transformer for biomedical text
generation and mining. Brief Bioinform 23, (2022).
15.
Kojima, T., Gu, S. S., Reid, M., Matsuo, Y. & Iwasawa, Y. Large Language Models
are Zero-Shot Reasoners. Adv Neural Inf Process Syst 35, (2022).
16.
Brown, T. B. et al. Language Models are Few-Shot Learners. Adv Neural Inf
Process Syst 2020-December, (2020).
17.
Lewis, P. et al. Retrieval-Augmented Generation for Knowledge-Intensive NLP
Tasks. Adv Neural Inf Process Syst 2020-December, (2020).
18.
Izacard, G. & Grave, E. Leveraging Passage Retrieval with Generative Models for
Open Domain Question Answering. EACL 2021 - 16th Conference of the
European Chapter of the Association for Computational Linguistics, Proceedings
of the Conference 874–880 (2020) doi:10.18653/v1/2021.eacl-main.74.
19.
Karpukhin, V. et al. Dense Passage Retrieval for Open-Domain Question
Answering. EMNLP 2020 - 2020 Conference on Empirical Methods in Natural
Language Processing, Proceedings of the Conference 6769–6781 (2020)
doi:10.18653/v1/2020.emnlp-main.550.
20.
Zhang, G. et al. Leveraging long context in retrieval augmented language models
for medical question answering. NPJ Digit Med 8, 239 (2025).
21.
Soman, K. et al. Biomedical knowledge graph-optimized prompt generation for
large language models. Bioinformatics 40, (2024).
22.
Hou, W. & Ji, Z. GeneTuring tests GPT models in genomics. bioRxiv (2023) doi:10.1101/2023.03.11.532238.
23.
Bizon, C. et al. ROBOKOP KG AND KGB: Integrated Knowledge Graphs from
Federated Sources. J Chem Inf Model 59, 4968 (2019).
24.
Mungall, C. J. et al. The Monarch Initiative: An integrative data and analytic
platform connecting phenotypes to genotypes across species. Nucleic Acids Res
45, D712–D722 (2017).
25.
Pan, S. et al. Unifying Large Language Models and Knowledge Graphs: A
Roadmap. IEEE Trans Knowl Data Eng 36, 3580–3599 (2024).
26.
Evangelista, J. E. et al. Toxicology knowledge graph for structural birth defects. Commun Med 3, 1–14 (2023).
27.
Callaghan, J. et al. BioThings Explorer: a query engine for a federated knowledge
graph of biomedical APIs. Bioinformatics 39, (2023).
28.
Carbon, S. et al. The Gene Ontology resource: enriching a GOld mine. Nucleic
Acids Res 49, D325–D334 (2021).
29.
Knox, C. et al. DrugBank 6.0: the DrugBank Knowledgebase for 2024. Nucleic
Acids Res 52, D1265–D1275 (2024).
30.
Fecho, K. et al. Progress toward a universal biomedical data translator. Clin Transl
Sci 15, 1838–1847 (2022).
31.
Gonzalez-Cavazos, A. C. et al. DrugMechDB: A Curated Database of Drug
Mechanisms. Sci Data 10, 1–7 (2023).
32.
OpenAI. GPT-4o System Card. (2024).
33.
Deka, P., Jurek-Loughrey, A. & Padmanabhan, D. Improved methods to aid
unsupervised evidence-based fact checking for online health news. Journal of
Data Intelligence 3, 474–505 (2022).
34.
Deka, P., Jurek-Loughrey, A. & Padmanabhan, D. Evidence extraction to validate medical claims in fake news detection. in International Conference on Health Information Science, LNCS 13705, 3–15 (Springer, 2022).
35.
Unni, D. R. et al. Biolink Model: A universal schema for knowledge graphs in
clinical, biomedical, and translational science. Clin Transl Sci 15, 1848 (2022).
36.
Wu, C., MacLeod, I. & Su, A. I. BioGPS and MyGene.info: organizing online,
gene-centric information. Nucleic Acids Res 41, D561–D565 (2013).
37.
OpenAI et al. GPT-4 Technical Report. (2023).
38.
Chen, Y. et al. Iterative Prompt Refinement for Mining Gene Relationships from
ChatGPT. bioRxiv 2023.12.23.573201 (2023) doi:10.1101/2023.12.23.573201.
39.
Marvin, G., Hellen, N., Jjingo, D. & Nakatumba-Nabende, J. Prompt Engineering
in Large Language Models. 387–402 (2024) doi:10.1007/978-981-99-7962-2_30.
40.
Sahoo, P. et al. A Systematic Survey of Prompt Engineering in Large Language
Models: Techniques and Applications. (2024).
41.
Li, M., Kilicoglu, H., Xu, H. & Zhang, R. BiomedRAG: A retrieval augmented large
language model for biomedicine. J Biomed Inform 162, 104769 (2025).
42.
Morris, J. H. et al. The scalable precision medicine open knowledge engine
(SPOKE): a massive knowledge graph of biomedical information. Bioinformatics
39, (2023).
43.
Jin, Q. et al. PubMedQA: a dataset for biomedical research question answering. arXiv preprint arXiv:1909.06146 (2019).
44.
Kwiatkowski, T. et al. Natural Questions: A Benchmark for Question Answering
Research. Trans Assoc Comput Linguist 7, 452–466 (2019).
45.
Davis, A. P. et al. Comparative Toxicogenomics Database (CTD): update 2023.
Nucleic Acids Res 51, D1257–D1262 (2023).
46.
Schriml, L. M. et al. Disease Ontology: a backbone for disease semantic
integration. Nucleic Acids Res 40, D940–D946 (2012).
47.
Bateman, A. et al. UniProt: the Universal Protein Knowledgebase in 2025. Nucleic
Acids Res 53, D609–D617 (2025).
48.
Pilarczyk, M. et al. Connecting omics signatures and revealing biological
mechanisms with iLINCS. Nat Commun 13, 4678 (2022).
Supplementary Figures
Figure S1: Detailed pipeline for BTE-RAG
Supplementary Figure S1 depicts the end-to-end workflow through which the BTE-RAG retrieval module converts
a biomedical question into the evidence snippets ultimately supplied to the language-model reasoner. Beginning
with an example query, “Which gene plays the most significant mechanistic role in how the
drug hydrochlorothiazide treats or impacts the disease hypertension?”, the system first performs named-entity
recognition, normalizing the detected concepts to controlled identifiers (Drug: MESH:D006852; Disease:
MESH:D006973). Each entity is then submitted to BioThings Explorer (BTE) as part of a query graph that
requests mechanistically relevant genes and proteins; independent queries are executed for the drug and for the
disease. BTE returns knowledge-graph sub-graphs whose nodes and edges represent subject-predicate-object
triples grounded in the biomedical literature. These triples are linearized into plain-text sentences, yielding two
preliminary corpora (one for the disease, one for the drug) that list, for example, genes such as INS, REN,
SLC12A3 and PTGS2 with their associated predicates. Finally, the complete set of sentences, or a percentile-filtered subset thereof, is ranked by cosine similarity to the original question, and the highest-scoring lines are selected as the "retrieved context" passed forward for answer generation.