Nvidia unveils foundational AI models running on RTX AI PCs




Nvidia today announced foundation models that run locally on Nvidia RTX AI PCs, supercharging digital people, content creation, productivity and development.

GeForce has long been an important platform for AI developers. The first GPU-accelerated deep learning network, AlexNet, was trained on a GeForce GTX 580 in 2012 — and last year, more than 30% of published AI research papers cited the use of GeForce RTX. Jensen Huang, CEO of Nvidia, made the announcement during his CES 2025 opening keynote.

Now, with generative AI and RTX AI PCs, anyone can become a developer. A new wave of low-code and no-code tools, such as AnythingLLM, ComfyUI, Langflow and LM Studio, enables enthusiasts to apply AI models to complex workflows through simple graphical user interfaces.

NIM microservices connected to these GUIs will make it effortless to access and deploy the latest generative AI models. Nvidia AI Blueprints, built on NIM microservices, provide easy-to-use, preconfigured reference workflows for digital people, content creation and more.

To meet the growing demand from developers and AI enthusiasts, every leading PC manufacturer and system builder has launched NIM-ready RTX AI PCs.

“AI is developing at breakneck speed, from vision AI to generative AI and now agentic AI,” Huang said. “NIM microservices and AI Blueprints give PC developers and enthusiasts the building blocks to explore the magic of AI.”

NIM microservices are also available on Nvidia Digits, a personal AI supercomputer that provides AI researchers, data scientists and students around the world with access to the power of Nvidia Grace Blackwell. Project Digits features the new Nvidia GB10 Grace Blackwell Superchip, which delivers a petaflop of AI computing performance for prototyping, fine-tuning and running large AI models.

Making AI NIMble

How AI is getting smarter

Foundation models – neural networks trained on large amounts of raw data – are the building blocks for generative AI.

Nvidia will release a pipeline of NIM microservices for RTX AI PCs from top model developers such as Black Forest Labs, Meta, Mistral and Stability AI. Use cases include large language models (LLMs), vision language models, image generation, speech, embedding models for retrieval-augmented generation (RAG), PDF extraction and computer vision.

“Making FLUX an Nvidia NIM microservice increases the rate at which AI can be deployed and experienced by many users, while delivering incredible performance,” said Robin Rombach, CEO of Black Forest Labs, in a statement.

Nvidia also today announced the Llama Nemotron family of open models that provide high accuracy on a wide range of agentic tasks. The Llama Nemotron Nano model will be offered as a NIM microservice for RTX AI PCs and workstations, and excels at agentic AI tasks such as instruction following, function calling, chat, coding and math. NIM microservices include the key components for running AI on PCs and are optimized for deployment on Nvidia GPUs — whether on RTX PCs and workstations or in the cloud.

Developers and enthusiasts can quickly download, set up and run these NIM microservices on Windows 11 PCs with Windows Subsystem for Linux (WSL).

“AI is driving Windows 11 PC transformation at a rapid pace, and the Windows Subsystem for Linux (WSL) offers a great cross-platform environment for AI development in Windows 11 with the Windows Copilot Runtime,” said Pavan Davuluri, corporate vice president of Windows at Microsoft, in a statement. “Nvidia NIM microservices, optimized for Windows PCs, provide developers and enthusiasts with ready-to-integrate AI models for their Windows apps, further accelerating the deployment of AI capabilities to Windows users.”

NIM microservices, running on RTX AI PCs, will be compatible with top AI development and agent frameworks, including AI Toolkit for VSCode, AnythingLLM, ComfyUI, CrewAI, Flowise AI, LangChain, Langflow and LM Studio. Developers can connect applications and workflows built in these frameworks to AI models running as NIM microservices through industry-standard endpoints, enabling them to use the latest technology through a unified interface across cloud, data centers, workstations and PCs.
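As a rough sketch of what "industry-standard endpoints" means in practice: NIM microservices expose an OpenAI-style chat-completions API, so a local deployment could be queried with plain HTTP. The port, endpoint path and model identifier below are illustrative assumptions, not details confirmed by this article:

```python
import json
import urllib.request


def build_chat_request(model: str, prompt: str) -> dict:
    """Build an OpenAI-style chat-completion payload for a local NIM endpoint."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "temperature": 0.2,
    }


def query_local_nim(prompt: str,
                    model: str = "meta/llama-3.1-8b-instruct",  # hypothetical model id
                    url: str = "http://localhost:8000/v1/chat/completions"):  # hypothetical port
    """POST the payload to a locally running NIM microservice and return the reply text."""
    payload = json.dumps(build_chat_request(model, prompt)).encode("utf-8")
    req = urllib.request.Request(url, data=payload,
                                 headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]
```

Because the request shape follows the OpenAI convention, the same client code works whether the model runs on a local RTX PC, a workstation or in the cloud — only the URL changes.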

Enthusiasts can also experience a variety of NIM microservices using the upcoming Nvidia ChatRTX tech demo release.

Putting a Face to Agentic AI

Nvidia AI Blueprints

To demonstrate how enthusiasts and developers can use NIM to create AI agents and assistants, Nvidia is now previewing Project R2X, a vision-enabled PC avatar that can put information at a user's fingertips, assist with desktop apps and video conference calls, read and summarize documents, and more.

The avatar is rendered using Nvidia RTX Neural Faces, a new generative AI algorithm that augments traditional rasterization with fully rendered pixels. The face is then animated with a new diffusion-based Nvidia Audio2Face-3D model that enhances lip and tongue movement. R2X can be connected to cloud AI services such as OpenAI’s GPT-4o and xAI’s Grok, as well as NIM microservices and AI Blueprints, such as PDF retrievers or alternative LLMs, through developer frameworks such as CrewAI, Flowise AI and Langflow.

AI Blueprints Coming to PC

A wafer full of Nvidia Blackwell chips.

NIM microservices are also available to PC users through AI Blueprints – reference AI workflows that can run locally on RTX PCs. With these blueprints, developers can create podcasts from PDF documents, create stunning images guided by 3D scenes and more.

The PDF-to-podcast blueprint extracts text, images and tables from a PDF to create a podcast script that users can edit. It can also create a full audio recording from the script using voices included in the blueprint or based on a user’s voice sample. Additionally, users can have a real-time conversation with the AI podcast host to learn more.

The blueprint uses NIM microservices such as Mistral-Nemo-12B-Instruct for language, Nvidia Riva for text-to-speech and automatic speech recognition, and the NeMo Retriever collection of microservices for PDF extraction.
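The flow described above is a three-stage chain: extract content from the PDF, draft a script with an LLM, then synthesize speech. As a minimal sketch of that data flow — the function names and stand-in stages here are illustrative placeholders, not the blueprint's actual API:

```python
def pdf_to_podcast(pdf_text: str, extract, write_script, synthesize):
    """Chain the blueprint's stages: extraction -> script generation -> speech.

    extract      -- e.g. a NeMo Retriever-style PDF extraction step
    write_script -- e.g. an LLM such as Mistral-Nemo-12B-Instruct
    synthesize   -- e.g. a Riva-style text-to-speech step
    """
    content = extract(pdf_text)      # pull text/tables out of the PDF
    script = write_script(content)   # draft an editable podcast script
    return synthesize(script)        # render audio from the script


# Toy stand-ins just to show how data moves through the pipeline:
audio = pdf_to_podcast(
    "raw pdf text",
    extract=lambda t: t.upper(),
    write_script=lambda c: f"HOST: {c}",
    synthesize=lambda s: f"<audio:{s}>",
)
```

In the real blueprint each stage would be a call to the corresponding NIM microservice; the point of the sketch is only that the stages compose through plain text handoffs, with the editable script as the middle artifact.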

The AI Blueprint for 3D-guided generative AI gives artists better control over image creation. While AI can create amazing images from simple text prompts, controlling image composition using only words can be challenging. With this blueprint, creators can use simple 3D objects placed in a 3D renderer like Blender to guide AI image generation.

The artist can create a 3D asset by hand or generate it with AI, place it in the scene and position the 3D viewport camera. Then, a pre-packaged workflow run by the FLUX NIM microservice uses the current composition to generate high-quality images that match the 3D scene.

Nvidia NIM microservices and AI Blueprints will be available starting in February. NIM-ready RTX AI PCs are available from Acer, ASUS, Dell, GIGABYTE, HP, Lenovo, MSI, Razer and Samsung, and from local system builders Corsair, Falcon Northwest, LDLC, Maingear, Mifcom, Origin PC, PCS and Scan.


