NeMo
Jul 12, 2024
Train Generative AI Models More Efficiently with New NVIDIA Megatron-Core Functionalities
First introduced in 2019, NVIDIA Megatron-LM sparked a wave of innovation in the AI community, enabling researchers and developers to use the underpinnings of...
11 MIN READ
Jul 10, 2024
Customizing NVIDIA NIMs for Domain-Specific Needs with NVIDIA NeMo
Large language models (LLMs) adopted for specific enterprise applications most often benefit from model customization. Enterprises need to tailor LLMs for...
11 MIN READ
Jul 10, 2024
Understanding Diffusion Models: An Essential Guide for AEC Professionals
Generative AI, the ability of algorithms to process various types of inputs—such as text, images, audio, video, and code—and generate new content, is...
13 MIN READ
Jul 10, 2024
Curating Non-English Datasets for LLM Training with NVIDIA NeMo Curator
Data curation plays a crucial role in the development of effective and fair large language models (LLMs). High-quality, diverse training data directly...
12 MIN READ
Jul 08, 2024
Deploy Multilingual LLMs with NVIDIA NIM
Multilingual large language models (LLMs) are increasingly important for enterprises operating in today's globalized business landscape. As businesses expand...
9 MIN READ
Jul 02, 2024
Addressing Hallucinations in Speech Synthesis LLMs with the NVIDIA NeMo T5-TTS Model
NVIDIA NeMo has released the T5-TTS model, a significant advancement in text-to-speech (TTS) technology. Based on large language models (LLMs), T5-TTS produces...
4 MIN READ
Jun 28, 2024
Create RAG Applications Using NVIDIA NIM and Haystack on Kubernetes
Step-by-step guide to build robust, scalable RAG apps with Haystack and NVIDIA NIMs on Kubernetes.
1 MIN READ
Jun 18, 2024
Leverage the Latest Open Models for Synthetic Data Generation with NVIDIA Nemotron-4 340B
Since the introduction and subsequent wide adoption of large language models (LLMs), data has been the lifeblood of businesses building accurate and safe AI...
9 MIN READ
Jun 17, 2024
Video: Talk to Your Supply Chain Data Using NVIDIA NIM
NVIDIA operates one of the largest and most complex supply chains in the world. The supercomputers we build connect tens of thousands of NVIDIA GPUs with...
2 MIN READ
Jun 12, 2024
NVIDIA Sets New Generative AI Performance and Scale Records in MLPerf Training v4.0
Generative AI models have a variety of uses, such as helping write computer code, crafting stories, composing music, generating images, producing videos, and...
11 MIN READ
Jun 07, 2024
Seamlessly Deploying a Swarm of LoRA Adapters with NVIDIA NIM
The latest state-of-the-art foundation large language models (LLMs) have billions of parameters and are pretrained on trillions of tokens of input text. They...
11 MIN READ
May 31, 2024
Building Safer LLM Apps with LangChain Templates and NVIDIA NeMo Guardrails
An easily deployable reference architecture can help developers get to production faster with custom LLM use cases. LangChain Templates are a new way of...
7 MIN READ
May 29, 2024
Generative AI Agents Developer Contest: Top Tips for Getting Started
Join our contest that runs through June 17 and showcase your innovation by building cutting-edge generative AI-powered applications using NVIDIA and LangChain...
3 MIN READ
May 24, 2024
Accelerating Transformers with NVIDIA cuDNN 9
The NVIDIA CUDA Deep Neural Network library (cuDNN) is a GPU-accelerated library of deep learning primitives with state-of-the-art performance....
12 MIN READ
May 21, 2024
Curating Custom Datasets for LLM Training with NVIDIA NeMo Curator
Data curation is the first, and arguably the most important, step in the pretraining and continuous training of large language models (LLMs) and small language...
14 MIN READ
May 17, 2024
Training Localized Multilingual LLMs with NVIDIA NeMo, Part 2
In Part 1, we discussed how to train a monolingual tokenizer and merge it with a pretrained LLM’s tokenizer to form a multilingual tokenizer. In this post, we...
8 MIN READ