Evaluating a large language model’s ability to answer clinicians’ requests for evidence summaries

Authors

DOI:

https://doi.org/10.5195/jmla.2025.1985

Keywords:

Large Language Models, LLMs, Generative AI, Artificial Intelligence, Evidence Synthesis, Library Science, Information Science, Biomedical Informatics

Abstract

Objective: This study investigated the performance of a generative artificial intelligence (AI) tool using GPT-4 in answering clinical questions in comparison with medical librarians’ gold-standard evidence syntheses.

Methods: Questions were extracted from an in-house database of clinical evidence requests previously answered by medical librarians. Multi-part questions were subdivided into individual topics. A standardized prompt was developed using the COSTAR framework. Librarians submitted each question to aiChat, an internally managed chat tool built on GPT-4, and recorded the responses. The aiChat-generated summaries were evaluated on whether they contained the critical elements of the librarians' established gold-standard summaries. A randomly selected subset of questions was used to verify the references aiChat provided.
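As an illustration of the kind of workflow described above, a COSTAR-structured prompt (Context, Objective, Style, Tone, Audience, Response format) can be assembled programmatically before each question is submitted. This is a minimal sketch only: all field names and wording below are invented for the example and do not reproduce the study's actual standardized prompt.

```python
# Hypothetical sketch of assembling a COSTAR-structured prompt
# (Context, Objective, Style, Tone, Audience, Response format).
# All field text below is illustrative, not the study's actual prompt.
COSTAR_FIELDS = {
    "Context": "You are supporting a medical librarian who answers clinicians' evidence requests.",
    "Objective": "Summarize the best available evidence addressing the question.",
    "Style": "Concise evidence synthesis with citations to primary literature.",
    "Tone": "Neutral and professional.",
    "Audience": "Practicing clinicians at an academic medical center.",
    "Response": "A short structured summary followed by a reference list.",
}

def build_costar_prompt(question: str) -> str:
    """Join the COSTAR fields and append the clinical question."""
    sections = "\n\n".join(f"{name}: {text}" for name, text in COSTAR_FIELDS.items())
    return f"{sections}\n\nQuestion: {question}"
```

Keeping the fields in a single template like this makes it straightforward to send every question through an identical prompt, which is the point of standardization in a study design of this kind.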

Results: Of the 216 evaluated questions, aiChat’s response was assessed as “correct” for 180 (83.3%) questions, “partially correct” for 35 (16.2%) questions, and “incorrect” for 1 (0.5%) question. No significant differences were observed in question ratings by question category (p=0.73). For a subset of 30% (n=66) of questions, 162 references were provided in the aiChat summaries, and 60 (37%) were confirmed as nonfabricated.
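The reported proportions follow directly from the stated counts; as a quick arithmetic check using only the figures given above:

```python
# Recompute the percentages reported in the Results from the raw counts.
total = 216
correct, partial, incorrect = 180, 35, 1
assert correct + partial + incorrect == total

print(round(100 * correct / total, 1))    # 83.3
print(round(100 * partial / total, 1))    # 16.2
print(round(100 * incorrect / total, 1))  # 0.5

# Reference-verification subset: 60 of 162 references confirmed.
print(round(100 * 60 / 162))              # 37
```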

Conclusions: Overall, the performance of the generative AI tool was promising. However, many of the included references could not be independently verified, and no attempt was made to assess whether additional concepts introduced by aiChat were factually accurate. We therefore envision this as the first in a series of investigations designed to further our understanding of how current and future versions of generative AI can be used in, and integrated into, medical librarians' workflows.

Published

2025-01-14

Section

Original Investigation