- AI-Driven Electronic Health Records System for Enhancing Patient Data Management and Diagnostic Support in Egypt. Digital healthcare infrastructure is crucial for global medical service delivery, yet Egypt faces barriers to EHR adoption: only 314 hospitals had such systems as of Oct 2024, limiting data management and clinical decision-making. This project introduces an EHR system for Egypt's Universal Health Insurance and broader healthcare ecosystem. It simplifies data management by centralizing medical histories on a scalable microservices architecture with polyglot persistence for real-time access and provider communication. Clinical workflows are enhanced via patient examination and history tracking. The system uses the Llama3-OpenBioLLM-70B model to summarize medical histories, power a chatbot, and generate AI-based medical reports, enabling efficient searches during consultations. A Vision Transformer (ViT) aids in pneumonia classification. Evaluations show the AI excels at capturing details (high recall) but needs improvement in producing concise narratives. With further optimization (retrieval-augmented generation, fine-tuning on local data, interoperability protocols), this AI-driven EHR could enhance diagnostic support, decision-making, and healthcare delivery in Egypt. 6 authors · Feb 8
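The retrieval step behind such consultation-time searches can be illustrated with a minimal sketch. This is not the paper's implementation: the record data, the `retrieve` helper, and the term-overlap scoring are all hypothetical, standing in for the retrieval half of a retrieval-augmented-generation pipeline over centralized medical histories.

```python
import re
from collections import Counter

def tokenize(text):
    # lowercase and keep alphanumeric tokens only
    return re.findall(r"[a-z0-9]+", text.lower())

def overlap_score(query, doc):
    # number of query tokens also present in the document (with multiplicity)
    q, d = Counter(tokenize(query)), Counter(tokenize(doc))
    return sum(min(count, d[tok]) for tok, count in q.items())

def retrieve(query, records, k=2):
    # rank patient-history snippets by overlap with the clinician's query
    ranked = sorted(records, key=lambda r: overlap_score(query, r["note"]),
                    reverse=True)
    return ranked[:k]

# hypothetical patient-history snippets
records = [
    {"id": 1, "note": "Patient reports chronic cough and fever; chest X-ray ordered."},
    {"id": 2, "note": "Follow-up for hypertension; blood pressure stable."},
    {"id": 3, "note": "Pneumonia suspected after fever and productive cough."},
]

top = retrieve("fever cough pneumonia", records)
# the two matching snippets are returned, most relevant first (ids 3, then 1)
```

In a production system the retrieved snippets would then be passed to the LLM as context for summarization or report generation; a real deployment would replace the term-overlap score with embedding-based similarity.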
- Biomedical Large Languages Models Seem not to be Superior to Generalist Models on Unseen Medical Data. Large language models (LLMs) have shown potential in biomedical applications, leading to efforts to fine-tune them on domain-specific data. However, the effectiveness of this approach remains unclear. This study evaluates biomedically fine-tuned LLMs against their general-purpose counterparts on a variety of clinical tasks: clinical case challenges from the New England Journal of Medicine (NEJM) and the Journal of the American Medical Association (JAMA), plus several applied tasks (e.g., information extraction, document summarization, and clinical coding). Using benchmarks specifically chosen to be likely outside the fine-tuning datasets of biomedical models, we found that biomedical LLMs mostly perform worse than their general-purpose counterparts, especially on tasks not focused on medical knowledge. While larger models showed similar performance on case tasks (e.g., OpenBioLLM-70B: 66.4% vs. Llama-3-70B-Instruct: 65% on JAMA cases), smaller biomedical models underperformed more sharply (e.g., OpenBioLLM-8B: 30% vs. Llama-3-8B-Instruct: 64.3% on NEJM cases). Similar trends were observed across the CLUE (Clinical Language Understanding Evaluation) benchmark tasks, with general-purpose models often performing better on text generation, question answering, and coding tasks. Our results suggest that fine-tuning LLMs on biomedical data may not provide the expected benefits and may even reduce performance, challenging prevailing assumptions about domain-specific adaptation of LLMs and highlighting the need for more rigorous evaluation frameworks in healthcare AI. Alternative approaches, such as retrieval-augmented generation, may be more effective at enhancing the biomedical capabilities of LLMs without compromising their general knowledge. 11 authors · Aug 25, 2024
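The head-to-head percentages quoted above reduce to simple per-case accuracy: the fraction of case challenges where a model's chosen answer matches the gold label. A minimal sketch, with entirely hypothetical gold labels and model answers (the real evaluation used NEJM and JAMA cases):

```python
def accuracy(predictions, gold):
    # fraction of cases where the model's answer matches the gold label
    assert len(predictions) == len(gold), "one prediction per case"
    correct = sum(p == g for p, g in zip(predictions, gold))
    return correct / len(gold)

# hypothetical multiple-choice answers for five cases
gold       = ["B", "A", "D", "C", "A"]
biomedical = ["B", "C", "D", "A", "A"]  # hypothetical fine-tuned model outputs
generalist = ["B", "A", "D", "A", "A"]  # hypothetical general-purpose outputs

acc_bio = accuracy(biomedical, gold)  # 3/5 = 0.6
acc_gen = accuracy(generalist, gold)  # 4/5 = 0.8
```

On the real benchmarks the same computation runs over hundreds of cases per journal, which is what yields comparisons like 30% vs. 64.3% on NEJM cases for the 8B models.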