This post is about running VS Code AI code assist locally, replacing Copilot or another hosted service. You might run local models to guarantee that none of your code ends up on external servers, or because you don't want to maintain an ongoing AI subscription. We are going to use LM Studio and VS Code. This was tested on Windows 11 with an RTX 3060 Ti with 8GB of VRAM. 8GB really limits the number and size of the models we can use, but LM Studio's simple hosting model of one LLM plus one embedding model works for us in this situation; a rough configuration sketch appears after the related links below. You want a big card. 8GB is a tiny card.

Related blog articles and videos

Several related blogs and videos cover VS Code and local LLMs.

Blog

Get AI code assist VSCode with local LLMs using Ollama and the Continue.dev extension - Mac
Get AI code assist VSCode with local LLMs using LM Studio and the Continue.dev extension - Windows
Rocking an older Titan RTX 24GB as my local AI Code assist on Windows 11, Ollama and VS Code

YouTube Video

Using loc...
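For reference, here is a minimal sketch of what the wiring can look like on the Continue.dev side, assuming Continue's config.json format and LM Studio's local server at its default address of http://localhost:1234. The model names are placeholders for whatever you have downloaded in LM Studio, not recommendations from this post.

```json
{
  "models": [
    {
      "title": "LM Studio (chat)",
      "provider": "lmstudio",
      "model": "your-chat-model"
    }
  ],
  "tabAutocompleteModel": {
    "title": "LM Studio (autocomplete)",
    "provider": "lmstudio",
    "model": "your-autocomplete-model"
  },
  "embeddingsProvider": {
    "provider": "lmstudio",
    "model": "your-embedding-model"
  }
}
```

With only 8GB of VRAM, the chat and autocomplete entries will usually point at the same small model, so LM Studio only has to keep one LLM plus the embedding model resident at a time.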