GTC March 2024 Keynote with NVIDIA CEO Jensen Huang

NVIDIA · 2-minute read

Nvidia plays a crucial role in various innovative fields, emphasizing accelerated computing and AI applications in industries like healthcare, transportation, and manufacturing. Companies like AWS, Google, Oracle, and Microsoft partner with Nvidia to advance AI capabilities and infrastructure, leading to collaborations in cloud services and accelerated computing.

Insights

  • Nvidia is involved in various innovative roles, such as illuminating galaxies, guiding the blind, transforming renewable power, training robots, healing patients, and generating virtual scenarios.
  • The emergence of generative AI in 2023 marks the beginning of a new industry focused on producing software that never existed before.
  • Blackwell, a new GPU from Nvidia, offers enhanced performance with features like the FP4 tensor core, the new Transformer engine, and the NVLink switch, facilitating faster communication among GPUs.
  • NIMs (NVIDIA inference microservices) are changing how software is built: teams of AIs are assembled rather than code written from scratch, improving productivity and efficiency across various tasks.


Summary

00:00

Nvidia's Innovations in AI and Supercomputing

  • Nvidia is involved in various innovative roles, such as illuminating galaxies, guiding the blind, transforming renewable power, training robots, healing patients, and generating virtual scenarios.
  • A doctor clarifies that certain antibiotics are safe for a patient allergic to penicillin as they do not contain it.
  • Nvidia's CEO, Jensen Huang, welcomes attendees to the developer conference, highlighting the diverse scientific fields present and the focus on AI applications.
  • The conference features presentations from various industries utilizing accelerated computing, including life sciences, healthcare, genomics, transportation, retail, logistics, and manufacturing.
  • Nvidia's journey since its founding in 1993 is outlined, with key milestones like the development of CUDA, AI advancements, and the introduction of the DGX-1 supercomputer.
  • The emergence of generative AI in 2023 marks the beginning of a new industry focused on producing software that never existed before.
  • Nvidia announces partnerships with companies like Ansys, Synopsys, and Cadence to accelerate their ecosystems and connect them to the Omniverse digital twin platform.
  • The importance of accelerated computing is emphasized, particularly in the simulation tools industry, to drive up computing scale and create digital twins for products.
  • Large language models, like the OpenAI model cited at 1.8 trillion parameters, require significant computational power, leading to the need for larger GPUs and innovative systems like Nvidia's supercomputers (a rough back-of-envelope estimate follows this list).
  • Nvidia's continuous innovation in building supercomputers and developing software to distribute computation efficiently across thousands of GPUs is crucial for advancing AI capabilities and training larger models.
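
To put the 1.8-trillion-parameter figure in perspective, here is a rough back-of-envelope estimate using the common 6·N·D approximation for training FLOPs. Everything below except the parameter count (the token budget, per-GPU throughput, utilization, and cluster size) is an illustrative assumption, not a keynote number.

```python
# Rough training-compute estimate for a 1.8-trillion-parameter model.
# Only the parameter count comes from the keynote; all other figures
# are assumptions chosen purely for illustration.

params = 1.8e12            # model parameters (cited in the keynote)
tokens = 10e12             # assumed training tokens
total_flops = 6 * params * tokens   # common 6 * N * D rule of thumb

per_gpu_flops = 1.0e15     # assumed sustained FLOP/s per GPU
utilization = 0.4          # assumed fraction of that actually achieved
gpus = 10_000              # assumed cluster size

gpu_seconds = total_flops / (per_gpu_flops * utilization)
days = gpu_seconds / gpus / 86_400
print(f"total FLOPs ~ {total_flops:.1e}")
print(f"~{days:.0f} days on {gpus:,} GPUs")
```

Even under generous assumptions the answer lands at hundreds of GPU-years, which is why the keynote keeps coming back to bigger GPUs and software that distributes work efficiently across thousands of them.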

24:14

"Blackwell GPU Enhances AI Training Efficiency"

  • Training AI models involves using texts, images, graphs, and charts to ground them in physics and common sense.
  • Synthetic data generation is utilized to simulate learning processes, similar to how humans use imagination to learn.
  • A new GPU named Blackwell, with 208 billion transistors, is introduced and designed to work alongside the existing Hopper GPU.
  • Blackwell chips are integrated into systems compatible with Hopper, enhancing computational efficiency.
  • A prototype Grace Blackwell board is shown with two Blackwell GPU packages (four Blackwell dies) connected to a Grace CPU.
  • A second-generation Transformer engine dynamically rescales numerical formats so that lower-precision math keeps its accuracy in AI computations (a conceptual sketch follows this list).
  • The fifth-generation NVLink in Blackwell is twice as fast as Hopper's, aiding synchronization and collective computations among GPUs.
  • A reliability, availability, and serviceability (RAS) engine conducts self-tests on every gate and memory bit in Blackwell, ensuring high system utilization.
  • Secure AI features encryption of data at rest, in transit, and during computation, along with a high-speed compression engine for efficient data movement.
  • Blackwell offers two and a half times the FP8 training performance per chip compared to Hopper, along with enhanced inference and token-generation capability.
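
As a rough illustration of the dynamic-rescaling idea behind the second-generation Transformer engine, the sketch below quantizes a small-magnitude tensor onto a coarse numeric grid with and without a per-tensor scale factor. It is a conceptual toy in plain NumPy, not NVIDIA's implementation and not a real FP4/FP8 encoding.

```python
import numpy as np

# Toy illustration of dynamic rescaling before low-precision storage.
# "levels" stands in for the handful of magnitudes a very narrow format
# can represent; this is not an actual FP4/FP8 format.

def quantize(x, levels=8, scale=None):
    if scale is None:                       # dynamic: pick the scale per tensor
        scale = np.max(np.abs(x)) / levels
    q = np.clip(np.round(x / scale), -levels, levels)
    return q * scale                        # dequantized values

rng = np.random.default_rng(0)
activations = rng.normal(scale=0.01, size=4)    # small-magnitude tensor

print("original       :", activations)
print("with rescaling :", quantize(activations))            # values survive
print("fixed scale = 1:", quantize(activations, scale=1.0)) # all round to zero
```

The point is that choosing the scale per tensor (or per block) keeps small activations and gradients representable, which is what allows training to drop to narrower formats without losing precision.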

44:52

"Blackwell: Revolutionizing AI with NVIDIA and AWS"

  • A new NVLink switch chip connects GPUs over a coherent link so they function as one giant GPU, with signaling that drives copper cables directly.
  • The DGX system has evolved significantly, from 170 teraflops in the first model to 720 petaflops, almost an exaflop, in the latest (a quick arithmetic check follows this list).
  • The back of the DGX features the NVLink spine, providing 130 terabytes per second of bandwidth, comparable to the internet's aggregate bandwidth.
  • Using the NVLink switch over copper instead of optics saves 20 kilowatts, power that can instead go to computation in the 120-kilowatt liquid-cooled rack.
  • The latest GPUs, like Blackwell, are designed for generative AI, offering 30 times the inference capability of previous models like Hopper for large language models.
  • Blackwell's performance is enhanced by features like the FP4 tensor core, the new Transformer engine, and the NVLink switch, facilitating faster communication among GPUs.
  • Data centers, envisioned as AI Factories, will focus on generating intelligence, with Blackwell expected to be a game-changer in the AI industry.
  • Blackwell has garnered immense excitement and interest from entities worldwide, including cloud service providers (CSPs) and OEMs.
  • AWS is preparing to launch Blackwell, aiming to build secure AI GPU offerings and a 222-exaflop system, marking a significant partnership in the AI industry.
  • The collaboration between NVIDIA and AWS extends beyond infrastructure, with joint efforts in CUDA and other areas to advance AI capabilities.
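
A quick arithmetic check on the DGX evolution figures quoted above; the eight-year span used for the per-year growth estimate (DGX-1 in 2016 to this announcement in 2024) is an assumption, not a keynote figure.

```python
# Overall and per-year speedup implied by the DGX figures in the summary.

dgx1_flops = 170e12        # 170 teraflops (first DGX)
latest_flops = 720e15      # 720 petaflops (latest system)

overall = latest_flops / dgx1_flops
years = 8                  # assumed span between the two systems
per_year = overall ** (1 / years)
print(f"overall speedup : {overall:,.0f}x")    # ~4,235x
print(f"per-year growth : ~{per_year:.1f}x")   # ~2.8x per year
```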

01:01:32

Accelerated Computing Partnerships Drive Innovation in Tech

  • Amazon Robotics is working with Nvidia, using Omniverse and Isaac Sim on AWS accelerated computing.
  • Google Cloud (GCP) already runs a fleet of Nvidia CUDA GPUs, including A100s, H100s, T4s, and L4s.
  • Google's newly announced Gemma model is being optimized with Nvidia, alongside work to accelerate various aspects of GCP.
  • Oracle partners with Nvidia on accelerated services such as Nvidia CUDA and Nvidia DGX Cloud.
  • Microsoft accelerates services in Microsoft Azure with Nvidia's assistance.
  • Wistron, a manufacturing partner, builds digital twins of Nvidia DGX and HGX factories using custom software developed with Omniverse SDKs and APIs.
  • Wistron's factory efficiency increased by 51% during construction using the Omniverse digital twin.
  • Wistron reduced end-to-end cycle times by 50% and defect rates by 40% with Nvidia AI and Omniverse.
  • Nvidia introduces CorrDiff, a generative AI model for high-resolution weather forecasting.
  • Nvidia Healthcare focuses on understanding the language of life through machine learning, aiding protein structure prediction and drug discovery.

01:18:29

"NVIDIA's Nims: AI Revolution in Software"

  • A new way of packaging software is introduced: optimized, pre-built software packages available for download from ai.nvidia.com, called NIMs (NVIDIA inference microservices).
  • The future of software building involves assembling teams of AIs rather than writing code from scratch, with super AIs breaking down missions into execution plans for different tasks.
  • NIMs can understand languages like SAP's ABAP, retrieve information from platforms like ServiceNow, and perform tasks like optimization algorithms or numerical analysis.
  • Reports on various topics can be generated daily by assembling tasks from different NIMs, which work together efficiently on systems with NVIDIA GPUs.
  • NVIDIA is deploying NIMs across the company, including a chip-designer chatbot, enhancing productivity and efficiency in various tasks.
  • NIMs can be customized using the NeMo microservices to curate and fine-tune data, evaluate performance, and deploy on different infrastructures like DGX Cloud or on-premises.
  • NVIDIA's focus is on inventing AI technology, providing tools for modification, and offering infrastructure for fine-tuning and deployment, creating an AI Foundry for industry-wide AI development.
  • NIMs can be trained to understand proprietary information within a company, encoding it into a vector database for smart interactions and data retrieval (a minimal retrieval sketch follows this list).
  • Collaboration with companies like SAP, ServiceNow, Cohesity, Snowflake, and NetApp is ongoing to build co-pilots and chatbots using NVIDIA's NIMs and NeMo microservices.
  • The next wave of AI involves physical AI and robotics, where AI systems watch and learn from human examples to adapt to the physical world, requiring specialized systems like DGX, AGX, and Jetson for training and deployment.
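
The vector-database bullet above describes a standard retrieval pattern: embed documents, embed the query, and return the most similar documents. Here is a minimal sketch of that mechanism using a toy bag-of-words embedding; a real deployment would use a learned embedding model and a proper vector store, and the documents and query below are made up for illustration.

```python
import numpy as np

# Minimal sketch of vector-database retrieval. The bag-of-words embedding
# is a toy stand-in for a learned embedding model.

def embed(text, vocab):
    words = text.lower().split()
    v = np.array([words.count(w) for w in vocab], dtype=float)
    n = np.linalg.norm(v)
    return v / n if n else v

docs = [
    "supply agreement terms with vendor",
    "gpu provisioning runbook for internal clusters",
    "chip design review checklist",
]
vocab = sorted({w for d in docs for w in d.lower().split()})
index = np.stack([embed(d, vocab) for d in docs])   # the "vector database"

query = "internal gpu clusters runbook"
scores = index @ embed(query, vocab)                # cosine similarity
print(docs[int(np.argmax(scores))])                 # best-matching document
```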

01:33:24

"Omniverse: AI in Virtual Warehouses and Beyond"

  • The virtual world Omniverse is run by a computer called OVX, hosted in the Azure Cloud.
  • Three computer systems are built on top of this setup, each running its own AI algorithms.
  • An example of AI and Omniverse working together is shown through a robotic building: a warehouse.
  • The warehouse itself acts as an air-traffic controller for everything moving inside it, including people, forklifts, and autonomous robots.
  • The Omniverse digital twin of a 100,000-square-foot warehouse operates as a simulation environment.
  • AI agents help robots, workers, and infrastructure navigate unpredictable events in industrial spaces.
  • The AI system enables real-time updates and route planning for autonomous mobile robots (AMRs) and other systems (a toy path-planning sketch follows this list).
  • Omniverse Cloud hosts the virtual simulation, with the AI running on DGX Cloud in real time.
  • Siemens, the industrial engineering platform company, is connecting its crown jewel, the Xcelerator platform, to Nvidia Omniverse.
  • BYD, the world's largest EV company, is adopting Nvidia's next-generation AV computer called Thor.
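
The route-planning bullet above boils down to re-planning paths on a map of the warehouse whenever something unexpected blocks an aisle. The snippet below is a generic breadth-first-search toy on a tiny grid, included only to illustrate that idea; it is not the Omniverse or warehouse software itself.

```python
from collections import deque

# Toy grid path planner: 0 = free cell, 1 = blocked aisle (for example, a
# stopped forklift). BFS returns a shortest route around the obstruction.

def plan(grid, start, goal):
    rows, cols = len(grid), len(grid[0])
    prev = {start: None}
    queue = deque([start])
    while queue:
        cell = queue.popleft()
        if cell == goal:
            path = []
            while cell is not None:
                path.append(cell)
                cell = prev[cell]
            return path[::-1]
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] == 0 and (nr, nc) not in prev):
                prev[(nr, nc)] = cell
                queue.append((nr, nc))
    return None                                   # no route available

warehouse = [[0, 0, 0],
             [0, 1, 0],
             [0, 0, 0]]
print(plan(warehouse, (0, 0), (2, 2)))            # route around the blockage
```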

01:51:41

Nvidia's GR00T: AI-Powered Robot Learning Model

  • Nvidia Project GR00T is a general-purpose foundation model for humanoid robot learning, using multimodal instructions and past interactions to guide the robot's actions.
  • Isaac Lab, an application developed by Nvidia, trains the GR00T model on Omniverse Isaac Sim and uses OSMO, a compute orchestration service, to coordinate workflows for training and simulation.
  • The GR00T model enables robots to learn from human demonstrations, assisting with everyday tasks and emulating human movement by observing humans.
  • Powered by the Jetson Thor robotics chip, GR00T is designed for the future, providing building blocks for AI-powered robotics, with a focus on generative AI and new types of software distribution.