The license for IBM watsonx Code Assistant Individual can be found in the product-licenses folder in this repository.
You can use this repository to add issues for watsonx Code Assistant Individual. The license for issues, discussion, and any files or samples shared in issues can be found in the LICENSE file.
Watsonx Code Assistant Individual is an innovative, lightweight AI coding companion built for IBM’s state-of-the-art Granite large language models. This companion offers robust, contextually aware AI coding assistance for popular programming languages such as C, C++, Go, Java, JavaScript, Python, and TypeScript. Seamlessly integrated into Visual Studio Code, watsonx Code Assistant Individual accelerates development productivity and simplifies coding tasks by providing powerful AI support hosted locally on the developer’s laptop or workstation using Ollama.
Complete the line that you’re currently typing:
And even full methods and functions:
Create a comment that describes a function, method, or piece of logic in your editor, and have watsonx Code Assistant Individual create it.
Watsonx Code Assistant Individual accesses models through Ollama, which is a widely used local inferencing engine for LLMs. Ollama wraps the underlying model-serving project llama.cpp.
macOS, Linux, Windows: Download and run the Ollama installer.
Ollama on macOS and Windows automatically downloads updates. Click the taskbar or menu bar item, and then click "Restart to update" to apply the update. You can also install updates by downloading the latest version manually.
On macOS, if you installed Ollama with Homebrew:

```shell
brew upgrade ollama
```

On Linux, rerun the install script:

```shell
curl -fsSL https://ollama.com/install.sh | sh
```
In a terminal window, run:
```shell
ollama serve
```
Leave that window open while you use Ollama.
If you receive the message `Error: listen tcp 127.0.0.1:11434: bind: address already in use`, the Ollama server is already started. There's nothing else that you need to do.
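Before starting another server, you can check whether something is already listening on Ollama's default port. This is a minimal sketch (not part of the product) that probes the port with a plain TCP connection; `port_in_use` is a hypothetical helper name:

```python
import socket

def port_in_use(host="127.0.0.1", port=11434):
    """Return True if something is already listening on the given port."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(0.5)
        # connect_ex returns 0 when the connection succeeds.
        return s.connect_ex((host, port)) == 0
```

If this returns `True`, the Ollama server (or another process) already owns the port and you can skip `ollama serve`.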
Get started with watsonx Code Assistant Individual by installing the `granite-code:8b` model available in the Ollama library. On the command line, type `ollama run granite-code:8b` to download and deploy the model. You see output similar to the following example:
```
pulling manifest
pulling 8718ec280572... 100% ▕███████████████████████▏ 4.6 GB
pulling e50df8490144... 100% ▕███████████████████████▏  123 B
pulling 58d1e17ffe51... 100% ▕███████████████████████▏  11 KB
pulling 9893bb2c2917... 100% ▕███████████████████████▏  108 B
pulling 0e851433eda0... 100% ▕███████████████████████▏  485 B
verifying sha256 digest
writing manifest
removing any unused layers
success
>>>
```
Type `/bye` after the `>>>` prompt to exit the Ollama command shell. Try the model by typing:
```shell
ollama run granite-code:8b "How do I create a python class?"
```
You should see a response similar to the following:
To create a Python class, you can define a new class using the "class" keyword followed by the name of the class and a colon. Inside the class definition, you can specify the methods and attributes that the class will have. Here is an example: ...
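The model's answer describes the standard pattern: the `class` keyword, attributes set in `__init__`, and behavior in methods. A minimal, self-contained example of that pattern (the `Customer` class here is illustrative, not verbatim model output):

```python
class Customer:
    """A small example class: attributes are set in __init__, behavior lives in methods."""

    def __init__(self, name, email):
        self.name = name
        self.email = email

    def greeting(self):
        return f"Hello, {self.name}!"

c = Customer("Ada", "ada@example.com")
print(c.greeting())  # Hello, Ada!
```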
By default, the Ollama server runs on IP address `127.0.0.1`, port `11434`, using `http` as the protocol. If you change the IP address or the port where Ollama is available:
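Ollama reads the `OLLAMA_HOST` environment variable at startup to decide where to bind; a sketch of running the server on a non-default port (the address and port here are examples):

```shell
# Bind the Ollama server to a non-default port.
# `ollama serve` reads OLLAMA_HOST when it starts.
export OLLAMA_HOST=127.0.0.1:11500
ollama serve
```

Remember to point watsonx Code Assistant Individual at the same address in its settings.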
By default, watsonx Code Assistant Individual uses the `granite-code:8b` model for both chat and code completion. If your environment has capacity, install the `granite-code:8b-base` model, and use it as the Local Code Gen Model as follows.
To use a different model:

1. Install the `granite-code:8b-base` model. See Install the Granite code model.
2. Set the Local Code Gen Model setting to `granite-code:8b-base`.
Watsonx Code Assistant Individual does not provide additional security controls. We recommend the following steps to properly secure your setup:

- Watsonx Code Assistant Individual stores its log files in `<your home directory>/.wca`. These files are not encrypted, aside from the encryption that your file system provides. Safeguard the logs against improper access.
- By default, the Ollama server runs on IP address `127.0.0.1`, port `11434`, using `http` as the protocol, on your local device. To use `https` instead, or to go through a proxy server, follow the Ollama documentation.
Watsonx Code Assistant Individual stores all your chat conversations locally in your file system under `<your home directory>/.wca/chat.db`, in a database format defined by SQLite. Watsonx Code Assistant Individual does not share conversations with anyone. This file is not encrypted, aside from the encryption that your file system provides. Safeguard this file against improper access.
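Because `chat.db` is a standard SQLite file, you can inspect it with Python's built-in `sqlite3` module. The schema of `chat.db` is not documented here, so this sketch only lists whatever tables the file contains, opening it read-only to avoid modifying your history; `list_tables` is a hypothetical helper:

```python
import sqlite3

def list_tables(db_path):
    """List table names in a SQLite database, opened read-only."""
    conn = sqlite3.connect(f"file:{db_path}?mode=ro", uri=True)
    try:
        rows = conn.execute(
            "SELECT name FROM sqlite_master WHERE type='table' ORDER BY name"
        ).fetchall()
        return [name for (name,) in rows]
    finally:
        conn.close()
```

For example, `list_tables(Path.home() / ".wca" / "chat.db")` would show the tables in your local conversation store.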
Watsonx Code Assistant Individual does not collect any telemetry data. In general, watsonx Code Assistant Individual does not send any data that it processes to a third party, IBM included.
Enter a free-text question or instruction and press Enter. Watsonx Code Assistant Individual sends your input to the code model and shows the response in the chat.
To ask questions or refine a specific file, class, function, or method in your workspace, you can use code references. These references provide important context for the LLM, and can help to increase the accuracy of the answer.
Enter the `@` sign as part of your chat message. Watsonx Code Assistant Individual sends the contents of the reference automatically to the model as part of your message.
Chat message examples:
| Use case | Example message |
|---|---|
| Generate a function based on an existing function | Create a method `send_translate_message` that is similar to `@send_code_explanation_message` |
| Generate a unit test that follows existing unit tests | Create a unit test for `@getName` that is similar to the unit tests in `@testLoadTablesChildConnectionReceiverJob.h` |
| Enhance existing functions | Add error handling and log statements to `@format_documents` |
| Enhance existing functions | Update `@setEmail` with error handling for null strings |
| Explain code | What does `@main.py` do |
| Explain code | Explain the logic of `@send_invoice` |
| Generate documentation for functions and classes | Add javadoc to `@Customer` |
When you open a workspace folder, watsonx Code Assistant Individual creates an index of these items in memory so you can reference these files and functions in the chat. The IDE also indexes files that you add or change during your Visual Studio Code session. The index contains up to 1,000 of the most recent files in 7 programming languages: C, C++, Go, Java, JavaScript, Python, and TypeScript.
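The selection rule described above — the most recently changed source files in the supported languages, capped at 1,000 — can be sketched as follows. This is an illustration, not the product's actual implementation, and the extension-to-language mapping is an assumption:

```python
from pathlib import Path

# File extensions for the seven indexed languages (assumed mapping).
INDEXED_EXTS = {".c", ".h", ".cpp", ".go", ".java", ".js", ".py", ".ts"}

def files_to_index(workspace, limit=1000):
    """Pick the most recently modified source files, capped at `limit`."""
    candidates = [
        p for p in Path(workspace).rglob("*")
        if p.is_file() and p.suffix in INDEXED_EXTS
    ]
    # Newest first, so the cap keeps the most recent files.
    candidates.sort(key=lambda p: p.stat().st_mtime, reverse=True)
    return candidates[:limit]
```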
Each chat message is part of a chat conversation. We highly recommend keeping conversations focused around a specific subject or task. Create a new chat conversation to get more relevant results for your questions when you switch your context, for example, to another programming language, another project in your workspace, or a different programming task.
To create a new chat conversation:
To switch between chat conversations:
To delete a chat conversation:
To rename a chat conversation:
Add a reference such as `@<method>` to your message.

Watsonx Code Assistant Individual and the Granite code models are created to answer questions that are related to code, general programming, and software engineering. While the IDE doesn't restrict your questions or prompts, the Granite code models are not designed for language tasks. Any such use is at your own risk, and results can be unreliable, so validate all output independently and consider deploying a Hate Abuse Profanity (HAP) filter.
Press `Option` + `.` (Mac) or `Alt` + `.` (Windows) to trigger in-editor code generation.

Use chat to:
Use in-editor code generation to:
When you use comment-to-code in the editor, write a comment that describes the intended behavior, as you do when you write comments for code you wrote. For example, use `//return even numbers from an arraylist` or `//method that returns even numbers from an arraylist`. Don't write your comment as an instruction, such as `//write a method that returns even numbers from an arraylist`. Granite models are trained to complete code on data that contains many "typical", descriptive comments, so these kinds of comments yield better results.
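For a concrete sense of the recommended style, here is a descriptive comment (rendered in Python) and the kind of completion it steers the model toward. The function body is illustrative, not verbatim model output:

```python
# return even numbers from an arraylist
def even_numbers(values):
    return [v for v in values if v % 2 == 0]

print(even_numbers([1, 2, 3, 4, 5, 6]))  # [2, 4, 6]
```

The comment states what the code does, the way a maintainer would document it, rather than instructing the model.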
The key ingredient for code generation is the context, that is, the surrounding code you pass to the model. For in-editor code generation, watsonx Code Assistant Individual uses the following context:
To improve the results of in-editor code generation: