
Nvidia doubles down on AI language models and inference as a substrate for the Metaverse, in data centers, the cloud and at the edge

November 9, 2021


GTC, Nvidia's flagship event, is always a source of announcements around all things AI, and the fall 2021 edition is no exception. CEO Jensen Huang's keynote emphasized what Nvidia calls the Omniverse, the company's virtual world simulation and collaboration platform for 3D workflows, bringing its technologies together.

Based on what we've seen, we would describe the Omniverse as Nvidia's take on the Metaverse. You will be able to read more about the Omniverse in Stephanie Condon and Larry Dignan's coverage here on ZDNet. What we can say is that, indeed, for something like this to work, a confluence of technologies is needed.

So let's go through some of the updates in Nvidia's technology stack, focusing on components such as large language models (LLMs) and inference.


See also: Everything announced at Nvidia's Fall GTC 2021.


NeMo Megatron, Nvidia's open source large language model platform

Nvidia unveiled what it calls the Nvidia NeMo Megatron framework for training language models. In addition, Nvidia is making available the Megatron LLM, a model with 530 billion parameters that can be trained for new domains and languages.

Bryan Catanzaro, Vice President of Applied Deep Learning Research at Nvidia, said that "building large language models for new languages and domains is likely the largest supercomputing application yet, and now these capabilities are within reach for the world's enterprises".

While LLMs are certainly seeing lots of traction and a growing number of applications, this particular offering's utility warrants some scrutiny. First off, training LLMs is not for the faint of heart and requires deep pockets. It has been estimated that training a model such as OpenAI's GPT-3 costs around $12 million.

OpenAI has partnered with Microsoft and made an API around GPT-3 available in order to commercialize it. And there are a number of questions to ask around the feasibility of training one's own LLM. The obvious one is whether you can afford it, so let's just say that Megatron is not aimed at the enterprise in general, but a specific subset of enterprises at this point.

The second question would be -- what for? Do you really need your own LLM? Catanzaro notes that LLMs "have proven to be flexible and capable, able to answer deep domain questions, translate languages, comprehend and summarize documents, write stories and compute programs".

Powering impressive AI feats relies on an array of software and hardware advances, and Nvidia is addressing both. (Image: Nvidia)

We would not go as far as to say that LLMs "comprehend" documents, for example, but let's acknowledge that LLMs are sufficiently useful and will keep getting better. Huang claimed that LLMs "will be the biggest mainstream HPC application ever".


The real question is -- why build your own LLM? Why not use GPT-3's API, for example? Competitive differentiation may be a legitimate answer to this question. The cost-to-value calculation may be another, in yet another incarnation of the age-old "buy versus build" question.
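On the "buy" side, using GPT-3 through OpenAI's hosted API is a few lines of code. Purely as a hedged illustration -- the engine name, prompt and parameters below are placeholders, and the API has evolved since -- a call looked roughly like this in 2021:

```python
# Hedged sketch of calling GPT-3 via OpenAI's hosted API (circa 2021), as an
# alternative to training your own LLM. Engine name, prompt and parameters
# are placeholders, not a recommendation.
import openai

openai.api_key = "YOUR_API_KEY"  # assumption: key provisioned via the OpenAI dashboard

response = openai.Completion.create(
    engine="davinci",              # one of the GPT-3 engines available at the time
    prompt="Summarize the following support ticket:\n...",
    max_tokens=100,
    temperature=0.2,
)
print(response.choices[0].text)
```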

In other words, if you are convinced you need an LLM to power your applications, and you're planning on using GPT-3 or another LLM with similar usage terms often enough, it may be more economical to train your own. Nvidia mentions use cases such as building domain-specific chatbots, personal assistants and other AI applications.

To do that, it would make more sense to start from a pre-trained LLM and tailor it to your needs via transfer learning rather than train one from scratch. Nvidia notes that NeMo Megatron builds on advancements from Megatron, an open-source project led by Nvidia researchers studying efficient training of large transformer language models at scale.
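To make that transfer-learning path concrete, here is a minimal sketch using the Hugging Face transformers library and the small GPT-2 model for brevity. This illustrates the general technique, not the NeMo Megatron API, and the corpus file and hyperparameters are placeholders:

```python
# Illustrative transfer-learning sketch: adapt a small pre-trained GPT-2 to a
# domain corpus. Uses Hugging Face transformers for brevity; it is not the
# NeMo Megatron API. File name and hyperparameters are placeholders.
from transformers import (AutoModelForCausalLM, AutoTokenizer, TextDataset,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

# Assumption: domain_corpus.txt holds your in-domain text.
dataset = TextDataset(tokenizer=tokenizer, file_path="domain_corpus.txt", block_size=128)
collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=False)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="gpt2-domain", num_train_epochs=1,
                           per_device_train_batch_size=4),
    data_collator=collator,
    train_dataset=dataset,
)
trainer.train()
```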

The company adds that the NeMo Megatron framework enables enterprises to overcome the challenges of training sophisticated natural language processing models. So, the value proposition seems to be -- if you decide to invest in LLMs, why not use Megatron? Although that sounds like a reasonable proposition, we should note that Megatron is not the only game in town.

Recently, EleutherAI, a collective of independent AI researchers, open-sourced its 6 billion parameter GPT-J model. In addition, if you are interested in languages beyond English, there is now a large European language model from Aleph Alpha that is fluent in English, German, French, Spanish, and Italian. Wudao is a Chinese LLM which is also the largest LLM, with 1.75 trillion parameters, and HyperCLOVA is a Korean LLM with 204 billion parameters. Plus, there are always other, slightly older and smaller open source LLMs, such as GPT-2 or BERT and its many variations.

Aiming at AI model inference addresses the total cost of ownership and operation

One caveat is that when it comes to LLMs, bigger (as in having more parameters) does not necessarily mean better. Another one is that even with a basis such as Megatron to build on, LLMs are expensive beasts to train and operate. Nvidia's offering is set to address both of these aspects by specifically targeting inference, too.

Megatron, Nvidia notes, is optimized to scale out across the large-scale accelerated computing infrastructure of Nvidia DGX SuperPOD. NeMo Megatron automates the complexity of LLM training with data processing libraries that ingest, curate, organize and clean data. Using advanced technologies for data, tensor and pipeline parallelism, it enables the training of large language models to be distributed efficiently across thousands of GPUs.
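To illustrate the tensor-parallel part of that recipe, here is a conceptual sketch of how a single layer's weight matrix can be sharded column-wise so that two devices each compute part of the output. NumPy stands in for GPUs here purely to show the math; real systems exchange the partial results with NCCL collectives:

```python
# Conceptual sketch of tensor parallelism (Megatron-style): a layer's weight
# matrix is split column-wise across two "devices", each computes its slice of
# the output, and the slices are concatenated. NumPy is used only to show the
# math; real implementations run this across GPUs with NCCL collectives.
import numpy as np

x = np.random.randn(8, 1024)          # a batch of activations
W = np.random.randn(1024, 4096)       # a full feed-forward weight matrix

W0, W1 = np.split(W, 2, axis=1)       # column-wise shards for "GPU 0" and "GPU 1"
y0 = x @ W0                           # each device computes its partial output
y1 = x @ W1
y = np.concatenate([y0, y1], axis=1)  # the all-gather step in a real system

assert np.allclose(y, x @ W)          # the sharded result matches the full matmul
```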

But what about inference? After all, in theory at least, you only train an LLM once, but the model is used many, many times to infer -- to produce results. The inference phase accounts for about 90% of the total energy cost of operating AI models. So having inference that is both fast and economical is of paramount importance, and that applies beyond LLMs.

Nvidia is addressing this by announcing major updates to its Triton Inference Server, as 25,000+ companies worldwide deploy Nvidia AI inference. The updates include new capabilities in the open source Nvidia Triton Inference Server software, which provides cross-platform inference on all AI models and frameworks, and Nvidia TensorRT, which optimizes AI models and provides a runtime for high-performance inference on Nvidia GPUs.

Nvidia introduces a number of improvements for the Triton Inference Server. The most obvious tie to LLMs is that Triton now has multi-GPU multinode functionality. This means Transformer-based LLMs that no longer fit in a single GPU can be inferenced across multiple GPUs and server nodes, which Nvidia says provides real-time inference performance.
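From the application's point of view, all of this stays behind the Triton client API: the client sends a request to the server and gets tensors back, regardless of how many GPUs or nodes the model is sharded across. A minimal sketch with the tritonclient Python package, where the model name and tensor names are hypothetical and must match the server's model repository:

```python
# Minimal Triton client sketch using the tritonclient Python package. The model
# name ("my_llm") and tensor names ("INPUT_IDS", "LOGITS") are hypothetical;
# they must match the model's configuration in the Triton model repository.
import numpy as np
import tritonclient.http as httpclient

client = httpclient.InferenceServerClient(url="localhost:8000")

input_ids = np.array([[101, 2023, 2003, 1037, 3231, 102]], dtype=np.int64)
infer_input = httpclient.InferInput("INPUT_IDS", list(input_ids.shape), "INT64")
infer_input.set_data_from_numpy(input_ids)

result = client.infer(
    model_name="my_llm",
    inputs=[infer_input],
    outputs=[httpclient.InferRequestedOutput("LOGITS")],
)
print(result.as_numpy("LOGITS").shape)
```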

90% of the total energy required for AI models comes from inference

The Triton Model Analyzer is a tool that automates a key optimization task by helping select the best configurations for AI models from hundreds of possibilities. According to Nvidia, it achieves optimal performance while ensuring the quality of service required for applications.

RAPIDS FIL is a new back-end for GPU or CPU inference of random forest and gradient-boosted decision tree models, which provides developers with a unified deployment engine for both deep learning and traditional machine learning with Triton.
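In practice, the workflow is to train the tree model with a framework such as XGBoost, export it, and place it in a Triton model repository for the FIL backend to serve. A rough sketch of the training and export side, with paths and parameters as placeholders rather than an official recipe:

```python
# Sketch of the model-preparation side for tree-model serving: train a
# gradient-boosted model with XGBoost and save it so it can be placed in a
# Triton model repository for the FIL backend. Paths, parameters and the
# repository layout are placeholders, not an official recipe.
import numpy as np
import xgboost as xgb

X = np.random.randn(1000, 20)
y = (X[:, 0] + X[:, 1] > 0).astype(int)

dtrain = xgb.DMatrix(X, label=y)
booster = xgb.train({"objective": "binary:logistic", "max_depth": 6}, dtrain,
                    num_boost_round=100)

# The saved model file would go under e.g. <model_repository>/<model_name>/1/
booster.save_model("xgboost.model")
```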

Last but not least, on the software front, Triton now comes with Amazon SageMaker Integration, enabling users to easily deploy multi-framework models using Triton within SageMaker, AWS's fully managed AI service.
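As a hedged sketch of what that looks like with the SageMaker Python SDK -- the container image URI, S3 path and instance type below are placeholders, and the exact Triton serving image comes from AWS's published list:

```python
# Hedged sketch of deploying a Triton-served model on Amazon SageMaker with the
# SageMaker Python SDK. The container image URI, S3 model path and instance
# type are placeholders; the Triton image to use comes from AWS's published list.
import sagemaker
from sagemaker.model import Model

role = sagemaker.get_execution_role()

model = Model(
    image_uri="<account>.dkr.ecr.<region>.amazonaws.com/sagemaker-tritonserver:<tag>",
    model_data="s3://my-bucket/triton-model-repository.tar.gz",
    role=role,
)
predictor = model.deploy(initial_instance_count=1, instance_type="ml.g4dn.xlarge")
```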

On the hardware front, Triton now also supports Arm CPUs, in addition to Nvidia GPUs and x86 CPUs. The company also introduced the Nvidia A2 Tensor Core GPU, a low-power, small-footprint accelerator for AI inference at the edge that Nvidia claims offers up to 20X more inference performance than CPUs.

Triton provides AI inference on GPUs and CPUs in the cloud, the data center, the enterprise edge, and embedded devices; it is integrated into AWS, Google Cloud, Microsoft Azure and Alibaba Cloud, and is included in Nvidia AI Enterprise. To help deliver services based on Nvidia's AI technologies to the edge, Huang announced Nvidia Launchpad.

Nvidia moving proactively to maintain its lead with its hardware and software ecosystem

And that is far from everything Nvidia unveiled today. Nvidia Modulus builds and trains physics-informed machine learning models that can learn and obey the laws of physics. Graphs -- a key data structure in modern data science -- can now be projected into deep neural network frameworks with the Deep Graph Library, or DGL, a new Python package.
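To give a flavor of what projecting graphs into deep learning frameworks means, here is a minimal DGL sketch with PyTorch as the backend; the toy graph and feature sizes are arbitrary, chosen only to show the API:

```python
# Minimal DGL sketch: build a small graph and run one graph-convolution layer
# over it, using PyTorch as the backend. Graph edges and feature sizes are
# arbitrary, chosen only to illustrate the API.
import dgl
import torch
from dgl.nn import GraphConv

# A toy graph with 4 nodes and a few directed edges (source -> destination).
g = dgl.graph((torch.tensor([0, 1, 2, 3]), torch.tensor([1, 2, 3, 0])))
g = dgl.add_self_loop(g)                 # GraphConv expects nodes with in-degree > 0

features = torch.randn(4, 8)             # one 8-dimensional feature vector per node
conv = GraphConv(in_feats=8, out_feats=4)
out = conv(g, features)                  # message passing + linear projection
print(out.shape)                         # torch.Size([4, 4])
```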

Huang also introduced three new libraries: ReOpt, for the $10 trillion logistics industry; cuQuantum, to accelerate quantum computing research; and cuNumeric, to accelerate NumPy for scientists, data scientists, and machine learning and AI researchers in the Python community. And Nvidia is introducing 65 new and updated SDKs at GTC.
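cuNumeric's pitch is that it is a drop-in replacement: swap the import and existing NumPy code runs on GPUs. A hedged sketch, assuming cuNumeric is installed, launched through its Legate runtime, and covers the operations used here:

```python
# Hedged sketch of cuNumeric's drop-in idea: swap the NumPy import and keep the
# rest of the code unchanged, letting the library run the work on GPU(s).
# Assumes cuNumeric is installed and launched through its Legate runtime.
import cunumeric as np   # instead of: import numpy as np

a = np.ones((4096, 4096))
b = np.ones((4096, 4096))
c = a @ b                # same NumPy semantics, executed by cuNumeric
print(c.sum())
```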

So, what to make of all that? Although we cherry-picked, each of these items would probably warrant its own analysis. The big picture is that, once again, Nvidia is moving proactively to maintain its lead, in a concerted effort to tie its hardware to its software.

LLMs may seem exotic for most organizations at this point. Still, Nvidia is betting that they will see more interest and practical applications, and is positioning itself as an LLM platform for others to build on. Although alternatives exist, an offering that is curated, supported, and bundled with Nvidia's software and hardware ecosystem and brand will probably seem like an attractive proposition to many organizations.

The same goes for the focus on inference. In the face of increasing competition from an array of hardware vendors building architectures designed specifically for AI workloads, Nvidia is doubling down on inference -- the part of AI model operation that plays the biggest part in the total cost of ownership and operation. And Nvidia is, once again, doing it in its signature style: leveraging hardware and software into an ecosystem.


