How To Integrate a Local LLM Into VS Code

If your IDE of choice is VS Code and you’ve wondered what it would be like to pair it with a local AI large language model (LLM), you’re in luck. But why would you do this? You might want to use AI to enhance your workflow, help debug your code or even learn a new programming language. Whatever your reason, integrating VS Code with a locally run LLM via Ollama (my go-to AI tool) is not nearly as hard as you might think. The end result is a powerful tool you can leverage for whatever coding or coding-adjacent need you have.

The benefits of integrating with a local LLM are that you don’t have to use an API, you get more privacy, it works offline and you can select your model of choice.

I’m going to show you how to set this up with Ollama on a Linux machine (Pop!_OS, to be exact). Yes, you can do this on both macOS and Windows as well; the only difference is the Ollama installation process. On Windows, installing Ollama is as simple as downloading the app and going through the usual installation process.

What You’ll Need

To make this work, you’ll need VS Code installed and (for Linux) a user with sudo privileges. That’s it. Let’s get to work.

Installing Ollama on Linux

To install Ollama on Linux, you only have to run the command:

curl -fsSL https://ollama.com/install.sh | sh

On macOS, the command is:

brew install ollama

When that command finishes, you’re ready to pull a model. Before you do that, check out the Ollama Model Library and find the model you want to use. One of the better models for code work is called codellama, which can be pulled with the command:

ollama pull codellama

Installing the Required Extension

The extension in question is called Continue. To install Continue, open VS Code and hit the Ctrl+P keyboard shortcut (for Linux and Windows) or the Cmd+P shortcut (for macOS). When the search bar appears, type the following into it:

ext install Continue.continue

The Continue extension should appear in the left sidebar. Click Install (Figure 1) and the extension will install.

Figure 1. Installing Continue for VS Code.

Configuring the Extension

With VS Code open, click the Continue icon in the left sidebar, and the Continue pop-up should appear. In that window, click “Or, configure your own models” (Figure 2).

Figure 2. You can also log in with a Continue Hub account, but we’re using local models, so it’s not necessary.

In the next window, scroll to the bottom and click “Click here to view more providers.” In the resulting window, select Ollama as the provider and then select CodeLlama from the Model drop-down (Figure 3).

Figure 3. Connecting VS Code to Ollama.

Click Connect, and a tutorial will appear. I suggest reading through it to get a better idea of how everything works. You’ll find instructions on how to use the autocomplete, edit, chat and agent features.

Using Continue/Ollama

Let’s say you want Ollama to write a bit of Python code for you that accepts user input and saves it to a file. In the Continue prompt (accessed with Ctrl+I), type something like this:

write a Python program that accepts input from a user and saves it to a file

Your chosen model will go to work and not only create the script, but also explain how it works (a sketch of what that script might look like appears below).

You’ll notice there’s a run button in the output (which is actually the Apply Code button). Once the code has been applied, you can ask a follow-up like:

run the above code

You should see sample output from running the code. If it works, you know the code is golden.

You can also opt to open a terminal window (via VS Code), open a new file, copy/paste the code into the file, save it and then run it as you normally would.
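To give you an idea of what to expect, here’s a minimal sketch of the kind of script the model might hand back for that prompt. The filename user_input.txt and the quit-to-stop loop are my illustrative assumptions; your model’s actual output (and its explanation of it) will vary:

# A minimal sketch of what a model like codellama might generate for the
# prompt above. The output filename and the "quit" sentinel are
# illustrative assumptions, not guaranteed model output.

def main():
    filename = "user_input.txt"
    print("Enter lines of text (type 'quit' to finish):")
    with open(filename, "a", encoding="utf-8") as outfile:
        while True:
            line = input("> ")
            if line.strip().lower() == "quit":
                break
            outfile.write(line + "\n")
    print(f"Saved your input to {filename}")

if __name__ == "__main__":
    main()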
You can also go back to the main VS Code window, copy/paste the code into a new project and run or debug it there.

Speaking of debugging, if you want to debug code with Continue, go back to the chat window, type something like “fix the following Python code”, paste the code into the window and hit Enter. Your local model should work through the code, fix whatever issues it finds and present the results. (If you want something to test this workflow with, a deliberately broken snippet appears at the end of this article.)

The nice thing about using VS Code with a local model is that it gives you the chance to learn about the language you’re using as you work. You can ask questions such as “What is a Python tuple?” The results are surprisingly helpful.

And that, my friends, is how you integrate a local LLM into VS Code and use AI to improve your skills.
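As promised, here’s a small, deliberately broken snippet you can paste after a prompt like “fix the following Python code.” Both bugs (a misspelled accumulator variable and an unguarded division) are my own illustrative examples, not from the article; any reasonably capable local model should spot them:

# Deliberately buggy code for exercising Continue's fix workflow.
# Bug 1: the accumulator is misspelled, so this raises a NameError.
# Bug 2: even once that's fixed, an empty list raises ZeroDivisionError.

def average(numbers):
    total = 0
    for n in numbers:
        totl += n
    return total / len(numbers)

print(average([1, 2, 3]))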