GPT4All and the rise of open-source language models
GPT4All has revolutionized the process of installing a language model on your PC. Simply download the installer from gpt4all.io, install it, launch the app, select the models you want to run, and you’re ready to chat. It provides access to nearly 15 different language models, all available with a single click (including, notably, OpenAI’s GPT models – though those can only be used with an API key and an internet connection). Compatible with Windows, Linux, and macOS, GPT4All provides a convenient and powerful chat platform. However, the current quality of the available free language models ranges from ineffective to mediocre, although rapid improvements are expected. Importantly, GPT4All’s language models run locally and offline – a feature not available with Bing Chat, ChatGPT, or Google Bard.

From what I’ve played around with, gpt4all-l13b-snoozy, nous-gpt4-vicuna-13b, and stable-vicuna-13B.q4_2 – all trained with ≈13 billion parameters each – give reasonably satisfactory results. The rate of local output generation depends largely on the specifications of your device. By default, GPT4All uses the CPU’s AVX2 (Advanced Vector Extensions 2) instruction set instead of the GPU, which is supported by most CPUs from 2014 onward. Therefore, the more CPU power your PC or laptop has, the faster GPT4All’s language models will run – the bigger, the better!
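Since generation speed hinges on that AVX2 instruction set, you can quickly check whether your CPU advertises it before installing. Here’s a minimal sketch – assuming a Linux machine, where the kernel exposes CPU feature flags via `/proc/cpuinfo` (on Windows or macOS you’d need a different method):

```python
def cpu_supports_avx2(cpuinfo_path="/proc/cpuinfo"):
    """Return True if the CPU advertises the 'avx2' flag (Linux only)."""
    try:
        with open(cpuinfo_path) as f:
            for line in f:
                # The 'flags' line lists every instruction-set
                # extension the CPU supports, e.g. sse4_2, avx, avx2.
                if line.startswith("flags"):
                    return "avx2" in line.split()
    except OSError:
        # /proc/cpuinfo not available (e.g. non-Linux OS)
        pass
    return False

if __name__ == "__main__":
    print("AVX2 supported:", cpu_supports_avx2())
```

If this prints `False`, GPT4All’s default CPU backend will likely refuse to run or crawl along, so it’s worth checking first.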
So which language models does GPT4All provide? Well, first some background: apps like ChatGPT are based on Large Language Models (LLMs). In simple terms, these are programs that use artificial neural networks to generate text. Such a network has millions to billions of parameters, which are tuned on large amounts of text data from the web, documents, books, transcripts, and so on. This data infusion and processing stage is called pre-training, and it’s the most time-consuming part of creating a new language model. In theory, anyone can train a language model themselves, but in practice, professional LLMs are trained on thousands of expensive GPUs that continuously compute and train the model for weeks at a time, often costing millions of dollars.

Yet the result is a rudimentary large language model, not a chatbot. Using such a model as a chatbot requires additional training so it learns to complete sentences, answer questions, and understand that Nazis are considered assholes. The difference between ChatGPT and Microsoft’s Bing Chat, which both use the GPT-4 LLM, lies in this retraining (and some additional parameters on top of it). So these basic LLMs, technically called foundation models, only become practical tools like chatbots after a process known as fine-tuning. However, fine-tuning is much less demanding than pre-training and can be done within a few days on limited hardware, with the increasing prospect of running it on our own PCs.
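To make the pre-training idea concrete: at its core, a language model learns which token is likely to follow the previous ones. Here’s a deliberately toy sketch of that objective using plain word-pair counting – real LLMs learn these distributions with neural networks over billions of parameters, but the “predict the next word from data” principle is the same (the tiny corpus and function names are mine, purely for illustration):

```python
from collections import Counter, defaultdict

def train_bigram(corpus):
    """Count, for each word, which words tend to follow it.
    A stand-in for pre-training: the 'parameters' here are just counts."""
    counts = defaultdict(Counter)
    for sentence in corpus:
        words = sentence.lower().split()
        for current, following in zip(words, words[1:]):
            counts[current][following] += 1
    return counts

def predict_next(model, word):
    """Return the most frequent follower of `word`, or None if unseen."""
    followers = model.get(word.lower())
    return followers.most_common(1)[0][0] if followers else None

corpus = [
    "the model predicts the next word",
    "the next word depends on the previous word",
]
model = train_bigram(corpus)
print(predict_next(model, "the"))  # prints "next" (seen twice after "the")
```

Fine-tuning, by analogy, would mean nudging those learned statistics afterwards on a much smaller, task-specific dataset – far cheaper than building the counts from scratch.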
Open-source has joined the game
Historically, the open-source community has lacked high-quality base LLMs, but the landscape is evolving. Meta, for example, has made its LLaMa model (Large Language Model Meta AI) available for non-commercial research. It may not be as powerful as Google’s PaLM 2 (Pathways Language Model) or OpenAI’s GPT-4 (Generative Pre-trained Transformer 4), but it offers the freedom to tinker. As a result, more and more LLaMa-based chatbot applications are emerging. In addition, there are more truly open-source base language models with an Apache 2 license, such as Open Assistant, MPT-7B, Cerebras-GPT, and GPT-J from the non-profit research group EleutherAI, which allow for commercial use.
Nevertheless, GPT-4 remains the most powerful model for the time being. But its creator, the ironically named “Open”AI, is far from open: we can’t train it independently, nor can we access the foundation model’s data. Meta’s LLaMa, on the other hand, is open for vaguely defined “non-commercial research”. You have to apply to Meta for access to LLaMa, but it’s now unofficially available in many places out there (just google it). This brings us back to GPT4All, which offers several LLaMa-based chatbot language models, including Vicuna and GPT4All-13b-snoozy, as well as the aforementioned GPT-J. Speaking of GPT-J, GPT4All also offers several fine-tuned language models based on it, such as gpt4all-j-v1.3-groovy. However, there is a clear trend: the more open a model, the worse its quality – at least for now. For example, the fully open GPT-J chatbots make glaring mistakes and even spelling errors, so in their current form in GPT4All they are only suitable for, well, let’s call it “entertainment”.
On the bright side, the non-commercial LLaMa models like Vicuna and GPT4All-13b-snoozy are quite good, loosely comparable to the free ChatGPT variant based on GPT-3.5, but not GPT-4. Moreover, the pace of development is surprisingly fast. The prospect of open-source chatbots, especially the ability to retrain open-source base models with personal data, seems promising. Imagine training a chatbot with your diary, allowing you to chat with a version of your past self or a representation of a loved one based on their letters and documents. It may sound creepy, but it’s technically feasible. What’s more, such specialized and locally run chatbots are extremely valuable to businesses in terms of data control and privacy (#GDPR), leading to a proliferation of startups and apps.
Power to the people
Open-source models are gaining real momentum, posing a significant challenge to established companies like OpenAI, Meta, and Google. Google’s recently leaked internal memo revealed that they see the rapidly advancing open-source community – not OpenAI or Meta – as their primary competitor in both language and image generation (though there are also Adobe Firefly and Midjourney, which are dramatically changing the image-generation game right now). Google’s quality lead in AI (if you want to call it that – looking at you, Bard) is eroding at an unprecedented rate. The bottom line of the memo is that even the largest companies can’t compete with a thriving crowd of hundreds of thousands of individuals experimenting with and programming AI models for personal or business use. Google’s strategy, the memo argues, should therefore shift to partnering with the open-source community rather than trying to monopolize AI products, mirroring its successful approach with Android. And that’s quite a statement!
Coming back to GPT4All: this small but powerful app lets us explore our very own chatbots instead of relying on platforms like ChatGPT or Bard. It’s your chatbot – it keeps your data, and you can customize it to your needs, controlling parameters such as voice, tone, and creative freedom (and hence unpredictability). At the moment, however, the models included in GPT4All lag behind the commercial versions in terms of quality, even compared to GPT-3.5. But this will change quickly, maybe even within weeks. And soon we’ll all have our own open-source-powered “ChatGPT-with-a-GPT-4-like” assistant on our laptops, PCs, or smartphones, capable of processing all our data and documents without sending anything to third-party vendors or cloud services (hopefully). My data, my language model, my bot, my control. The power of a customized ChatGPT in your pocket is a revolutionary milestone with the potential to secure freedom and rights. And the first step towards it is GPT4All – so download it and start experimenting with the future today 💻🔥🤓
Hero image: (Much wow. A robot. Such creativity. Thug imagination…) Created with Bing Image Creator.