Nasopharyngeal carcinoma
YP Chen, ATC Chan, QT Le, P Blanchard, Y Sun, J Ma - The Lancet, 2019 - thelancet.com
Nasopharyngeal carcinoma is characterised by distinct geographical distribution and is
particularly prevalent in east and southeast Asia. Epidemiological trends in the past decade …
Food system resilience: Defining the concept
DM Tendall, J Joerin, B Kopainsky, P Edwards… - Global food security, 2015 - Elsevier
In a world of growing complexity and uncertainty, the security of food supplies is threatened
by many factors. These include multiple processes of global change (eg climate change …
Non–small cell lung cancer
DS Ettinger, W Akerley, G Bepler, MG Blum… - Journal of the National …, 2010 - jnccn.org
Lung cancer is the leading cause of cancer-related death in the United States. An estimated
219,440 new cases (116,090 men; 103,350 women) of lung and bronchus cancer were …
Google's neural machine translation system: Bridging the gap between human and machine translation
Neural Machine Translation (NMT) is an end-to-end learning approach for automated
translation, with the potential to overcome many of the weaknesses of conventional phrase …
Scaling instruction-finetuned language models
Finetuning language models on a collection of datasets phrased as instructions has been
shown to improve model performance and generalization to unseen tasks. In this paper we …
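A minimal sketch of what "phrasing datasets as instructions" can look like in practice; the template wording, task names, and toy examples below are illustrative assumptions, not templates from the paper.

```python
# Illustrative sketch: turning ordinary labeled examples into
# instruction-style (prompt, target) pairs for finetuning.
# Templates and example content here are invented for illustration.

def to_instruction_example(task, example):
    """Map a raw labeled example to an instruction-phrased pair."""
    if task == "sentiment":
        prompt = (
            "Classify the sentiment of the following review as positive or negative.\n\n"
            f"Review: {example['text']}\nSentiment:"
        )
        target = example["label"]
    elif task == "translation":
        prompt = f"Translate the following sentence to German:\n\n{example['text']}"
        target = example["translation"]
    else:
        raise ValueError(f"unknown task: {task}")
    return {"prompt": prompt, "target": target}

raw = {"text": "The battery lasts all day and the screen is great.", "label": "positive"}
print(to_instruction_example("sentiment", raw)["prompt"])
```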
Xlnet: Generalized autoregressive pretraining for language understanding
With the capability of modeling bidirectional contexts, denoising autoencoding based
pretraining like BERT achieves better performance than pretraining approaches based on …
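A toy sketch of the two pretraining objectives this snippet contrasts (denoising autoencoding with masked tokens versus left-to-right autoregressive prediction); it illustrates the objectives only, not XLNet's permutation-based method, and the sentence and masked positions are made up.

```python
# Contrast of the two objectives: BERT-style denoising autoencoding corrupts
# some tokens and predicts them with both-sided context; autoregressive
# pretraining predicts each token from the tokens to its left only.
tokens = ["the", "model", "reads", "the", "whole", "sentence"]
mask_positions = {1, 4}  # toy choice of corrupted positions

# Denoising-autoencoding view: mask a few positions, targets are the originals.
masked = [("[MASK]" if i in mask_positions else tok) for i, tok in enumerate(tokens)]
targets = {i: tokens[i] for i in mask_positions}
print("input  :", masked)
print("predict:", targets)  # context on both sides of each mask is visible

# Autoregressive view: at step t the context is tokens[:t] and the target is tokens[t].
for t in range(1, len(tokens)):
    print(f"context={tokens[:t]} -> target={tokens[t]!r}")
```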
Chain-of-thought prompting elicits reasoning in large language models
We explore how generating a chain of thought---a series of intermediate reasoning steps---
significantly improves the ability of large language models to perform complex reasoning. In …
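A small sketch of the prompting difference: a standard few-shot exemplar ends with the answer, while a chain-of-thought exemplar spells out intermediate reasoning steps before the answer. The exemplar and question text are invented for illustration, not taken from the paper.

```python
# Answer-only vs chain-of-thought few-shot exemplars for the same question.
question = "A pack has 12 pencils. Dana buys 3 packs and gives away 5 pencils. How many remain?"

standard_exemplar = (
    "Q: Tom has 4 boxes of 6 apples. How many apples does he have?\n"
    "A: 24\n"
)

cot_exemplar = (
    "Q: Tom has 4 boxes of 6 apples. How many apples does he have?\n"
    "A: Each box has 6 apples and there are 4 boxes, so 4 * 6 = 24. The answer is 24.\n"
)

standard_prompt = standard_exemplar + f"Q: {question}\nA:"
cot_prompt = cot_exemplar + f"Q: {question}\nA:"

print("--- answer-only prompt ---")
print(standard_prompt)
print("--- chain-of-thought prompt ---")
print(cot_prompt)  # the exemplar nudges the model to produce reasoning steps first
```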
Sequence to sequence learning with neural networks
Deep Neural Networks (DNNs) are powerful models that have achieved excellent
performance on difficult learning tasks. Although DNNs work well whenever large labeled …
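A minimal encoder-decoder sketch of the sequence-to-sequence idea, assuming PyTorch is available; the vocabulary sizes, dimensions, and single-layer LSTMs are placeholders, and teacher forcing is assumed rather than the paper's exact setup.

```python
# Minimal seq2seq sketch: encode the source into a fixed-size state, then
# decode the target conditioned on that state (teacher forcing on tgt_in).
import torch
import torch.nn as nn

class Seq2Seq(nn.Module):
    def __init__(self, src_vocab=1000, tgt_vocab=1000, emb=64, hidden=128):
        super().__init__()
        self.src_emb = nn.Embedding(src_vocab, emb)
        self.tgt_emb = nn.Embedding(tgt_vocab, emb)
        self.encoder = nn.LSTM(emb, hidden, batch_first=True)
        self.decoder = nn.LSTM(emb, hidden, batch_first=True)
        self.out = nn.Linear(hidden, tgt_vocab)

    def forward(self, src, tgt_in):
        _, state = self.encoder(self.src_emb(src))        # (h, c) summarizes the source
        dec_out, _ = self.decoder(self.tgt_emb(tgt_in), state)
        return self.out(dec_out)                           # (batch, tgt_len, tgt_vocab) logits

model = Seq2Seq()
src = torch.randint(0, 1000, (2, 7))     # toy batch of source token ids
tgt_in = torch.randint(0, 1000, (2, 5))  # shifted target token ids
print(model(src, tgt_in).shape)          # torch.Size([2, 5, 1000])
```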
Finetuned language models are zero-shot learners
This paper explores a simple method for improving the zero-shot learning abilities of
language models. We show that instruction tuning--finetuning language models on a …
Searching for mobilenetv3
We present the next generation of MobileNets based on a combination of complementary
search techniques as well as a novel architecture design. MobileNetV3 is tuned to mobile …
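A quick sketch of trying the architecture via torchvision, which ships small and large MobileNetV3 variants; this assumes a reasonably recent torchvision and uses random initialization rather than pretrained weights.

```python
# Instantiate torchvision's MobileNetV3-Small and run a dummy forward pass.
import torch
from torchvision import models

net = models.mobilenet_v3_small(num_classes=1000)  # randomly initialized
net.eval()

params = sum(p.numel() for p in net.parameters())
print(f"parameters: {params / 1e6:.1f}M")

with torch.no_grad():
    logits = net(torch.randn(1, 3, 224, 224))  # dummy image batch
print(logits.shape)  # torch.Size([1, 1000])
```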