AI news
March 26, 2024

Nvidia's new Chat with RTX can run an AI chatbot on your local PC

Create your own personalized AI chatbot.

by Jim Clyde Monge

Nvidia is going beyond just building chips designed for AI tasks. Today, the company released Chat with RTX, a demo app that lets you personalize a GPT large language model (LLM) connected to your own content — docs, notes, videos, or other data.

This means you can now run an AI chatbot without an internet connection and without paying a dime for services like ChatGPT or Gemini. Chat with RTX can use open-source LLMs like Mistral or Llama.
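
To make that concrete, here is a minimal sketch (not Chat with RTX itself) of what running an open-source model like Mistral locally can look like in Python with the Hugging Face transformers library. The model ID, prompt, and generation settings are illustrative placeholders, and the weights are assumed to already be downloaded to disk.

```python
# A minimal sketch (not Chat with RTX itself) of running an open-source
# chat model such as Mistral locally with Hugging Face transformers.
# Assumes the weights are already on disk and a GPU with enough VRAM is
# available; the model ID and prompt are placeholders.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "mistralai/Mistral-7B-Instruct-v0.2"  # any local instruct model works

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Format the conversation with the model's own chat template
messages = [{"role": "user", "content": "Summarize my meeting notes in three bullets."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

# Generation happens entirely on the local machine -- no API calls, no fees
output = model.generate(inputs, max_new_tokens=200, do_sample=False)
print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))
```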

Here are the software and hardware requirements needed to run it on your machine:

System requirements:

  • GPU: NVIDIA GeForce™ RTX 30 or 40 Series GPU or NVIDIA RTX™ Ampere or Ada Generation GPU with at least 8GB of VRAM
  • CPU: Latest-Gen Intel® Core™ i7 or AMD Ryzen™ 7 processor or better
  • Memory: 32GB of RAM or more
  • Storage: 2TB of free storage space
  • Operating system: Windows 10 or 11

How to run it:

  1. Download and install the Chat with RTX software from the Nvidia website.
  2. Launch the app and choose the open-source model you want to use (for example, Mistral or Llama 2).
  3. Point it at the content you want it to draw on (for example, a folder of your own documents or notes).
  4. Let the app index that content so the model can retrieve from it.
  5. Once indexing finishes, you can start chatting with it! (A simplified sketch of this retrieval step follows below.)
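
For the curious, here is a rough, simplified sketch of the retrieval idea behind connecting a model to your own files: embed the documents, find the passages most similar to the question, and paste them into the prompt. Chat with RTX automates this pipeline (with TensorRT-LLM handling the generation); the folder path, embedding model, and question below are illustrative placeholders, not the app's actual internals.

```python
# A simplified sketch of retrieval-augmented generation (RAG) over your
# own files, the same basic idea Chat with RTX automates. The folder path,
# embedding model, and question are placeholders for illustration.
from pathlib import Path
from sentence_transformers import SentenceTransformer, util

embedder = SentenceTransformer("all-MiniLM-L6-v2")  # small local embedding model

# 1. Read and embed your own documents (here: plain-text notes in a folder)
docs = [p.read_text(encoding="utf-8") for p in Path("my_notes").glob("*.txt")]
doc_embeddings = embedder.encode(docs, convert_to_tensor=True)

# 2. Embed the question and retrieve the most relevant document
question = "What did we decide about the Q3 roadmap?"
q_embedding = embedder.encode(question, convert_to_tensor=True)
best = util.cos_sim(q_embedding, doc_embeddings).argmax().item()

# 3. Hand the retrieved text plus the question to a local LLM as its prompt
prompt = f"Answer using only this context:\n{docs[best]}\n\nQuestion: {question}"
print(prompt)  # feed this to the locally running model from the earlier sketch
```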

Chat with RTX for Developers

The Chat with RTX tech demo is built from a publicly available developer reference project on GitHub called TensorRT-LLM RAG. This offers some exciting possibilities for developers (a short sketch of the underlying pattern follows after this list):

  • Build Custom Applications: Developers can use the same building blocks from Chat with RTX to create their own AI-powered applications tailored to specific needs. These applications can take advantage of Nvidia’s powerful RTX GPUs, making them run super fast with the optimizations within TensorRT-LLM.
  • Specialized AI Chatbots: Think about developers creating highly specialized chatbots focused on a specific domain of expertise. For example, this could be a medical chatbot offering insights into complex diseases or a coding assistant giving advice on tricky parts of code.
  • Unique User Experiences: With the ability to train an AI language model on unique data and deploy it locally on powerful RTX systems, developers could offer innovative user experiences that don’t rely on internet connections and provide greater privacy to users.
Note: Working with these tools likely requires a solid background in AI development. It’s not exactly a drag-and-drop system where everyone can make custom applications quickly.
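
As a hedged illustration of that developer workflow, the basic pattern can be prototyped with the LlamaIndex library (which the reference project also builds on): index a folder of documents, then query it. LlamaIndex defaults to hosted models, so a fully local setup like Chat with RTX would also need a local LLM and embedding model wired in; that configuration is omitted here, and the folder name and question are placeholders.

```python
# A rough sketch of the RAG developer workflow the reference project builds
# on: index a folder of documents with LlamaIndex, then ask questions over
# it. The folder name and question are placeholders. Note: LlamaIndex uses
# hosted models by default; a fully local setup (as in Chat with RTX) would
# configure a local LLM and embedding model, which is not shown here.
from llama_index.core import SimpleDirectoryReader, VectorStoreIndex

documents = SimpleDirectoryReader("my_docs").load_data()  # docs, notes, transcripts...
index = VectorStoreIndex.from_documents(documents)        # vector index over your files

query_engine = index.as_query_engine()
print(query_engine.query("Which GPU does the installation guide recommend?"))
```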

Meant For Powerhouse Users

Not many people will be able to use it, because the hardware requirements are steep: an NVIDIA GeForce RTX 30 or 40 Series GPU (or an NVIDIA RTX Ampere or Ada Generation GPU) with at least 8GB of VRAM, a latest-gen Intel Core i7 or AMD Ryzen 7 processor or better, 32GB of RAM or more, and 2TB of free storage space.

If you do have the hardware, though, this is a game-changer. Nvidia has the resources to keep improving the technology, so we can expect even more powerful and versatile local AI chatbots that can be used for a wide variety of tasks.

