watsonx-code-assistant-individual


IBM watsonx Code Assistant Individual

Features

Watsonx Code Assistant Individual is an innovative, lightweight AI coding companion built for IBM’s state-of-the-art Granite large language models. This companion offers robust, contextually aware AI coding assistance for popular programming languages such as C, C++, Go, Java, JavaScript, Python, and TypeScript. Seamlessly integrated into Visual Studio Code, watsonx Code Assistant Individual accelerates development productivity and simplifies coding tasks by providing powerful AI support hosted locally on the developer’s laptop or workstation using Ollama.

Chat with code models


Code completion

Complete the line that you’re currently typing:

Single-line completion in watsonx Code Assistant Individual

And even full methods and functions:

Multi-line completion in watsonx Code Assistant Individual

Turn comments into code

Create a comment that describes a function, method, or piece of logic in your editor, and watsonx Code Assistant Individual generates the corresponding code.

Comment to code generation in watsonx Code Assistant Individual

Everything is local, configurable by you

Setup

Watsonx Code Assistant Individual accesses models through Ollama, a widely used local inference engine for LLMs. Ollama wraps llama.cpp, the underlying model-serving project.

Install Ollama

macOS, Linux, Windows: download and run the Ollama installer.

If Ollama is already installed, update it to the latest stable version.

If you installed Ollama with the package installer:

Ollama on macOS and Windows automatically downloads updates. Click the taskbar (Windows) or menu bar (macOS) icon, then click “Restart to update” to apply the update. You can also install updates by downloading the latest version manually.

On macOS, if you installed Ollama with Homebrew, run the following command to upgrade:

brew upgrade ollama

On Linux, re-run the install script:

curl -fsSL https://ollama.com/install.sh | sh

Start the Ollama inference server

In a terminal window, run:

ollama serve

Leave that window open while you use Ollama.

If you receive the message Error: listen tcp 127.0.0.1:11434: bind: address already in use, the Ollama server is already started. There’s nothing else that you need to do.
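If you want to check programmatically whether something is already listening before starting the server, the following Python sketch probes the port. The port number 11434 is Ollama's documented default; the function and messages are otherwise illustrative.

```python
import socket

def port_in_use(host: str = "127.0.0.1", port: int = 11434) -> bool:
    """Return True if something is already listening on host:port."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(1.0)
        # connect_ex returns 0 when the TCP connection succeeds
        return s.connect_ex((host, port)) == 0

if port_in_use():
    print("A server is already listening on 127.0.0.1:11434 -- no need to start another.")
else:
    print("Nothing is listening on 127.0.0.1:11434 -- run `ollama serve` first.")
```

This only confirms that some process owns the port, not that it is Ollama specifically.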

Install the Granite code model

Get started with watsonx Code Assistant Individual by installing the granite-code:8b model available in the Ollama library.

  1. Open a new terminal window.
  2. On the command line, type ollama run granite-code:8b to download and deploy the model. You see output similar to the following example:

    pulling manifest 
    pulling 8718ec280572... 100% ▕███████████████████████▏ 4.6 GB
    pulling e50df8490144... 100% ▕███████████████████████▏  123 B
    pulling 58d1e17ffe51... 100% ▕███████████████████████▏   11 KB
    pulling 9893bb2c2917... 100% ▕███████████████████████▏  108 B
    pulling 0e851433eda0... 100% ▕███████████████████████▏  485 B
    verifying sha256 digest 
    writing manifest 
    removing any unused layers 
    success 
    >>> 
    
  3. Type /bye after the >>> prompt to exit the Ollama command shell.
  4. Try the model by typing:

    ollama run granite-code:8b "How do I create a python class?"
    
  5. You should see a response similar to the following:

    To create a Python class, you can define a new class using the "class" keyword followed by the name of the class and a colon. Inside the class definition, you can specify the methods and attributes that the class will have. Here is an example: ...
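The elided part of the response typically ends with a small example like the following. This snippet is illustrative, not the model's verbatim output:

```python
class Customer:
    """A minimal Python class with an attribute and a method."""

    def __init__(self, name: str):
        self.name = name  # attribute set in the initializer

    def greet(self) -> str:
        return f"Hello, {self.name}!"

print(Customer("Ada").greet())  # Hello, Ada!
```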
    

Install the watsonx Code Assistant Individual extension

  1. Open the watsonx Code Assistant extension page in the Visual Studio Marketplace.
  2. Click Install on the Marketplace page.
  3. In Visual Studio Code, click Install on the extension.

Configure the Ollama host

By default, the Ollama server runs on IP address 127.0.0.1, port 11434, over HTTP. If you change the IP address or the port where Ollama is available:

  1. Open the extension settings.
  2. Locate the entry for API Host.
  3. Add the host IP and port.

Configure the Granite models to use

By default, watsonx Code Assistant Individual uses the granite-code:8b model for both chat and code completion. If your environment has capacity, install the granite-code:8b-base model and use it as the Local Code Gen Model. To switch models:

  1. Install the granite-code:8b-base model. See Install the Granite code model.
  2. Open the extension settings.
  3. Update the Local Code Gen Model setting to granite-code:8b-base.

Securing your setup

Your Visual Studio Code environment

Watsonx Code Assistant Individual does not provide additional security controls. We recommend the following steps to properly secure your setup:

Connecting watsonx Code Assistant Individual and Ollama

By default, the Ollama server runs on IP address 127.0.0.1, port 11434, over HTTP on your local device. To use HTTPS instead, or to route traffic through a proxy server, follow the Ollama documentation.

Chat conversation storage

Watsonx Code Assistant Individual stores all your chat conversations locally in your file system under <your home directory>/.wca/chat.db, in SQLite database format. Watsonx Code Assistant Individual does not share conversations with anyone. This file is not encrypted beyond any encryption that your file system provides, so safeguard it against improper access.
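Because the file is a standard SQLite database, you can inspect it with stock tooling. The Python sketch below lists the table names read-only; the schema itself is internal to watsonx Code Assistant Individual and may change between releases, so treat anything you find inside as undocumented.

```python
import sqlite3
from pathlib import Path

def list_tables(db_path: Path) -> list[str]:
    """Return the table names in a SQLite database, opened read-only."""
    con = sqlite3.connect(f"file:{db_path}?mode=ro", uri=True)
    try:
        rows = con.execute(
            "SELECT name FROM sqlite_master WHERE type='table' ORDER BY name"
        ).fetchall()
        return [name for (name,) in rows]
    finally:
        con.close()

chat_db = Path.home() / ".wca" / "chat.db"
if chat_db.exists():
    print(list_tables(chat_db))
```

Opening with `mode=ro` avoids accidentally modifying the conversation history.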

Telemetry data

Watsonx Code Assistant Individual does not collect any telemetry data. In general, watsonx Code Assistant Individual does not send any data that it processes to a third party, IBM included.

Using chat with the Granite code model

Starting the chat

  1. Open the watsonx Code Assistant Individual view by selecting View -> Open View -> watsonx Code Assistant in the menu, or by clicking the watsonx Code Assistant icon in the sidebar.
  2. The chat panel opens to the left of the Visual Studio Code editor.
  3. To move the chat, drag the icon to the right or bottom of the editor.

Interacting with the chat

Use natural language

Enter a free-text question or instruction and press Enter. Watsonx Code Assistant Individual sends your input to the code model and shows the response in the chat.

watsonx Code Assistant Individual Chat

Reference code

To ask questions about, or refine, a specific file, class, function, or method in your workspace, use code references. These references give the LLM important context and can increase the accuracy of the answer.

  1. Type the @ sign as part of your chat message.
  2. A pop-up lists all files, classes, and methods from your workspace.
  3. Type the characters of the file, class, or method name that you want to reference. The list filters automatically.
  4. Select the reference.

watsonx Code Assistant Individual code references

Watsonx Code Assistant Individual sends the contents of the reference automatically to the model as part of your message.

Chat message examples:

  - Generate a function based on an existing function: "Create a method send_translate_message that is similar to @send_code_explanation_message"
  - Generate a unit test that follows existing unit tests: "Create a unit test for @getName that is similar to the unit tests in @testLoadTablesChildConnectionReceiverJob.h"
  - Enhance existing functions: "Add error handling and log statements to @format_documents"
  - Enhance existing functions: "Update @setEmail with error handling for null strings"
  - Explain code: "What does @main.py do"
  - Explain code: "Explain the logic of @send_invoice"
  - Generate documentation for functions and classes: "Add javadoc to @Customer"
References: Indexing your workspace

When you open a workspace folder, watsonx Code Assistant Individual creates an in-memory index of its files, classes, and methods so you can reference them in the chat. The IDE also indexes files that you add or change during your Visual Studio Code session. The index contains up to 1,000 of the most recent files in 7 programming languages: C, C++, Go, Java, JavaScript, Python, and TypeScript.
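The selection rule can be sketched roughly as follows. The extension set and the use of modification times to define "most recent" are assumptions for illustration, not the extension's actual implementation:

```python
from pathlib import Path

# Assumed extensions for the 7 indexed languages; the real mapping
# used by the extension is not documented here.
INDEXED_EXTS = {".c", ".h", ".cpp", ".hpp", ".go", ".java", ".js", ".py", ".ts"}
MAX_FILES = 1000

def pick_index_files(paths_with_mtimes, limit=MAX_FILES):
    """Keep the most recently modified files in supported languages.

    `paths_with_mtimes` is an iterable of (path, mtime) pairs; the real
    extension scans the workspace itself, so this is only an illustration.
    """
    supported = [(p, m) for p, m in paths_with_mtimes
                 if Path(p).suffix.lower() in INDEXED_EXTS]
    supported.sort(key=lambda pm: pm[1], reverse=True)  # newest first
    return [p for p, _ in supported[:limit]]
```

For example, a workspace with a mix of source files and text notes would keep only the source files, newest first, up to the 1,000-file cap.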

Chat conversations

Each chat message is part of a chat conversation. We highly recommend keeping conversations focused around a specific subject or task. Create a new chat conversation to get more relevant results for your questions when you switch your context, for example, to another programming language, another project in your workspace, or a different programming task.

To create a new chat conversation:

  1. Open the menu at the top of the chat.
  2. Select New Chat.

To switch between chat conversations:

  1. Open the menu at the top of the chat.
  2. Select Chat Sessions.
  3. Select the conversation.

To delete a chat conversation:

  1. Open the menu at the top of the chat.
  2. Select Chat Sessions.
  3. Select the menu on the right of the conversation.
  4. Click Delete.

To rename a chat conversation:

  1. Open the menu at the top of the chat.
  2. Select Chat Sessions.
  3. Select the menu on the right of the conversation.
  4. Click Rename.

watsonx Code Assistant Individual chat options

Writing effective chat messages

Watsonx Code Assistant Individual and the Granite code models are designed to answer questions related to code, general programming, and software engineering. While the IDE doesn't restrict your questions or prompts, the Granite code models are not designed for general language tasks. Any such use is at your own risk; results can be unreliable, so validate all output independently and consider deploying a Hate, Abuse, and Profanity (HAP) filter.

Using in-editor code completion and comment-to-code

Single-line completion

  1. Start typing a line of code.
  2. Watsonx Code Assistant Individual adds a code suggestion to complete the line that you typed.
  3. Press Tab to accept the suggestion.

Single-line completion in watsonx Code Assistant Individual

Multi-line completion

  1. Start typing a line of code.
  2. Press Option + . (Mac) or Alt + . (Windows).
  3. Watsonx Code Assistant Individual suggests code to complete the line you typed, plus additional lines.
  4. Press Tab to accept the suggestion.

Multi-line completion in watsonx Code Assistant Individual

Comment-to-code

  1. Type a comment.
  2. Press Option + . (Mac) or Alt + . (Windows).
  3. Watsonx Code Assistant Individual adds a code suggestion based on your comment.
  4. Press Tab to accept the suggestion.

Comment to code generation in watsonx Code Assistant Individual

Tips for generating code

Chat for larger code blocks. Use in-editor code generation for refinement and boilerplate code

Use chat to:

Use in-editor code generation to:

Comment-to-code: use descriptive comments, not instructions

When you use comment-to-code in the editor, write a comment that describes the intended behavior, as you would when commenting code you wrote yourself. For example, use //return even numbers from an arraylist or //method that returns even numbers from an arraylist. Don't write your comment as an instruction, such as //write a method that returns even numbers from an arraylist. Granite models are trained to complete code on data that contains many typical, descriptive comments, so these kinds of comments yield better results.
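In Python terms (the examples above use //-style comments; this is an illustrative analogue), a descriptive comment and the kind of completion it tends to produce:

```python
# Good: describes the behavior, as a typical code comment would.
# return even numbers from a list
def even_numbers(values):
    return [v for v in values if v % 2 == 0]

# Avoid: phrased as an instruction to the assistant, e.g.
# "write a method that returns even numbers from a list"

print(even_numbers([1, 2, 3, 4, 5, 6]))  # [2, 4, 6]
```

The function body here is one plausible completion, not guaranteed model output.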

Context used for in-editor code generation

The key ingredient for code generation is context: the surrounding code that is passed to the model. For in-editor code generation, watsonx Code Assistant Individual uses the following context:

  1. The 20 lines of code before the line where the generation is triggered.
  2. The 20 lines of code after the line where the generation is triggered.
  3. Up to 200 lines from the beginning of the current file where generation is triggered.
  4. Up to 5 code snippets that are similar to the code that surrounds the line where generation was triggered. These snippets are taken from the last 10 files that you opened that are in the same programming language as the current file.
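The first three items can be sketched as the following simplified Python function. Snippet retrieval from recently opened files (item 4) is omitted, and the field names are illustrative, not the extension's internals:

```python
def build_context(lines, cursor, before=20, after=20, head=200):
    """Simplified sketch of context assembly for in-editor generation.

    `lines` is the current file as a list of strings and `cursor` is the
    0-based index of the line where generation is triggered.
    """
    return {
        "head": lines[:head],                             # up to 200 lines from file start
        "before": lines[max(0, cursor - before):cursor],  # 20 lines before the cursor
        "after": lines[cursor + 1:cursor + 1 + after],    # 20 lines after the cursor
    }
```

This makes the practical implication concrete: code more than 20 lines away from the cursor (and past the first 200 lines of the file) is only visible to the model through the similar-snippet mechanism.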

To improve the results of in-editor code generation: