Foundation models for generalist medical artificial intelligence

M Moor, O Banerjee, ZSH Abad, HM Krumholz… - Nature, 2023 - nature.com
The exceptionally rapid development of highly flexible, reusable artificial intelligence (AI)
models is likely to usher in newfound capabilities in medicine. We propose a new paradigm …

A brief overview of ChatGPT: The history, status quo and potential future development

T Wu, S He, J Liu, S Sun, K Liu… - IEEE/CAA Journal of …, 2023 - ieeexplore.ieee.org
ChatGPT, an artificial intelligence generated content (AIGC) model developed by OpenAI,
has attracted world-wide attention for its capability of dealing with challenging language …

BLIP-2: Bootstrapping language-image pre-training with frozen image encoders and large language models

J Li, D Li, S Savarese, S Hoi - International conference on …, 2023 - proceedings.mlr.press
The cost of vision-and-language pre-training has become increasingly prohibitive due to
end-to-end training of large-scale models. This paper proposes BLIP-2, a generic and …

Visual instruction tuning

H Liu, C Li, Q Wu, YJ Lee - Advances in neural information …, 2024 - proceedings.neurips.cc
Instruction tuning large language models (LLMs) using machine-generated instruction-
following data has been shown to improve zero-shot capabilities on new tasks, but the idea …

QLoRA: Efficient finetuning of quantized LLMs

T Dettmers, A Pagnoni, A Holtzman… - Advances in Neural …, 2024 - proceedings.neurips.cc
We present QLoRA, an efficient finetuning approach that reduces memory usage enough to
finetune a 65B parameter model on a single 48GB GPU while preserving full 16-bit …

A survey of large language models

WX Zhao, K Zhou, J Li, T Tang, X Wang, Y Hou… - arXiv preprint arXiv …, 2023 - arxiv.org
Language is essentially a complex, intricate system of human expressions governed by
grammatical rules. It poses a significant challenge to develop capable AI algorithms for …

MiniGPT-4: Enhancing vision-language understanding with advanced large language models

D Zhu, J Chen, X Shen, X Li, M Elhoseiny - arXiv preprint arXiv …, 2023 - arxiv.org
The recent GPT-4 has demonstrated extraordinary multi-modal abilities, such as directly
generating websites from handwritten text and identifying humorous elements within …

Large language models encode clinical knowledge

K Singhal, S Azizi, T Tu, SS Mahdavi, J Wei, HW Chung… - Nature, 2023 - nature.com
Large language models (LLMs) have demonstrated impressive capabilities, but the bar for
clinical applications is high. Attempts to assess the clinical knowledge of models typically …

PaLM 2 technical report

R Anil, AM Dai, O Firat, M Johnson, D Lepikhin… - arXiv preprint arXiv …, 2023 - arxiv.org
We introduce PaLM 2, a new state-of-the-art language model that has better multilingual and
reasoning capabilities and is more compute-efficient than its predecessor PaLM. PaLM 2 is …

HuggingGPT: Solving AI tasks with ChatGPT and its friends in Hugging Face

Y Shen, K Song, X Tan, D Li, W Lu… - Advances in Neural …, 2024 - proceedings.neurips.cc
Solving complicated AI tasks with different domains and modalities is a key step toward
artificial general intelligence. While there are numerous AI models available for various …