User profiles for "author:Q Le"

Quoc V. Le

- Verified email at stanford.edu - Cited by 257396

Quynh-Thu Le

- Verified email at stanford.edu - Cited by 43330

Quang Bao Le

- Verified email at cgiar.org - Cited by 6127

Nasopharyngeal carcinoma

YP Chen, ATC Chan, QT Le, P Blanchard, Y Sun, J Ma - The Lancet, 2019 - thelancet.com
Nasopharyngeal carcinoma is characterised by distinct geographical distribution and is
particularly prevalent in east and southeast Asia. Epidemiological trends in the past decade …

Food system resilience: Defining the concept

DM Tendall, J Joerin, B Kopainsky, P Edwards… - Global food security, 2015 - Elsevier
In a world of growing complexity and uncertainty, the security of food supplies is threatened
by many factors. These include multiple processes of global change (e.g. climate change …

Non–small cell lung cancer

DS Ettinger, W Akerley, G Bepler, MG Blum… - Journal of the National …, 2010 - jnccn.org
Lung cancer is the leading cause of cancer-related death in the United States. An estimated
219,440 new cases (116,090 men; 103,350 women) of lung and bronchus cancer were …

Google's neural machine translation system: Bridging the gap between human and machine translation

Y Wu, M Schuster, Z Chen, QV Le, M Norouzi… - arXiv preprint arXiv …, 2016 - arxiv.org
Neural Machine Translation (NMT) is an end-to-end learning approach for automated
translation, with the potential to overcome many of the weaknesses of conventional phrase …

Scaling instruction-finetuned language models

HW Chung, L Hou, S Longpre, B Zoph, Y Tay… - Journal of Machine …, 2024 - jmlr.org
Finetuning language models on a collection of datasets phrased as instructions has been
shown to improve model performance and generalization to unseen tasks. In this paper we …

XLNet: Generalized autoregressive pretraining for language understanding

Z Yang, Z Dai, Y Yang, J Carbonell… - Advances in neural …, 2019 - proceedings.neurips.cc
With the capability of modeling bidirectional contexts, denoising autoencoding based
pretraining like BERT achieves better performance than pretraining approaches based on …

Chain-of-thought prompting elicits reasoning in large language models

J Wei, X Wang, D Schuurmans… - Advances in neural …, 2022 - proceedings.neurips.cc
We explore how generating a chain of thought (a series of intermediate reasoning steps)
significantly improves the ability of large language models to perform complex reasoning. In …

Sequence to sequence learning with neural networks

I Sutskever, O Vinyals, QV Le - Advances in neural …, 2014 - proceedings.neurips.cc
Deep Neural Networks (DNNs) are powerful models that have achieved excellent
performance on difficult learning tasks. Although DNNs work well whenever large labeled …

Finetuned language models are zero-shot learners

J Wei, M Bosma, VY Zhao, K Guu, AW Yu… - arXiv preprint arXiv …, 2021 - arxiv.org
This paper explores a simple method for improving the zero-shot learning abilities of
language models. We show that instruction tuning: finetuning language models on a …

Searching for MobileNetV3

A Howard, M Sandler, G Chu… - Proceedings of the …, 2019 - openaccess.thecvf.com
We present the next generation of MobileNets based on a combination of complementary
search techniques as well as a novel architecture design. MobileNetV3 is tuned to mobile …