ChatRTX Upgrade: More Models, More Languages, More Control
Nvidia is enhancing its experimental ChatRTX chatbot, which runs locally on Windows PCs, by expanding the range of AI models available to RTX GPU owners. Originally launched in February as a demo app called “Chat with RTX,” the chatbot lets users run powerful AI models such as Mistral or Llama 2 against their personal documents. The update adds support for Google’s Gemma, ChatGLM3, and OpenAI’s CLIP model, broadening the chatbot’s ability to search personal photos and documents directly on a user’s computer.
The inclusion of Google’s Gemma model in ChatRTX is particularly notable because Gemma was designed to run directly on powerful laptops or desktop PCs, which aligns with Nvidia’s goal of making advanced AI models easier to use locally. According to a report at theverge.com, the update makes ChatRTX a more versatile tool by letting users choose the model that best suits their data analysis or search needs. The chatbot’s interface, accessed through a web browser, works against local files and offers search tools that return summaries and detailed responses based on the user’s personal data.
Additionally, the latest version of ChatRTX supports ChatGLM3, a large bilingual language model that understands both English and Chinese and is built on the General Language Model (GLM) framework, broadening the app’s usability for a more diverse user base. The integration of OpenAI’s CLIP model adds image search: the chatbot can recognize and interact with photos stored locally, and users can refine how it understands and categorizes their photo data.
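To make the CLIP-based photo search more concrete, here is a minimal sketch of how a text query can be matched against local images with the open-source CLIP weights. The Hugging Face transformers library, the "openai/clip-vit-base-patch32" checkpoint, and the folder path are illustrative assumptions; ChatRTX ships its own, TensorRT-optimized integration rather than this exact code.

    # Sketch: rank local photos by similarity to a text query using open-source CLIP.
    # Library, model checkpoint, and paths are assumptions for illustration only.
    from pathlib import Path

    import torch
    from PIL import Image
    from transformers import CLIPModel, CLIPProcessor

    model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
    processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

    def rank_photos(query: str, photo_dir: str, top_k: int = 5):
        """Return the top_k local photos most similar to the text query."""
        paths = sorted(Path(photo_dir).expanduser().glob("*.jpg"))
        images = [Image.open(p).convert("RGB") for p in paths]
        inputs = processor(text=[query], images=images,
                           return_tensors="pt", padding=True)
        with torch.no_grad():
            outputs = model(**inputs)
        # logits_per_image holds one similarity score per image for the query.
        scores = outputs.logits_per_image.squeeze(1)
        best = scores.argsort(descending=True)[:top_k]
        return [(str(paths[i]), scores[i].item()) for i in best]

    print(rank_photos("a birthday cake", "~/Pictures"))

The key idea is that CLIP embeds the text query and each photo into the same vector space, so no manual tags or metadata are needed to search a personal photo library.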
Expanding the app’s functionality further, Nvidia has also added support for voice queries by integrating Whisper, an AI speech recognition system, so users can search their data with spoken commands. The app requires an RTX 30- or 40-series GPU with at least 8GB of VRAM and is available as a 36GB download from Nvidia’s website, offering a robust, locally run AI chatbot for managing and exploring personal data.
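As a rough illustration of the voice-query flow, the open-source Whisper package can turn a recorded question into text that is then handed to the local language model as an ordinary prompt. The openai-whisper package, the "base" model size, and the file name below are assumptions for illustration; they are not ChatRTX’s internal implementation.

    # Sketch: transcribe a spoken query with open-source Whisper.
    # Package, model size, and audio file name are illustrative assumptions.
    import whisper

    model = whisper.load_model("base")            # downloads weights on first run
    result = model.transcribe("voice_query.wav")  # path to a recorded voice query
    query_text = result["text"].strip()
    print("Transcribed query:", query_text)
    # The transcribed text can then be sent to the local LLM like any typed prompt.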
read more at theverge.com