Large Language Models (LLMs)
Large Language Models (LLMs) have the potential to provide personalized support to learners and transform the programming experience. Java-Editor provides built-in support for LLM-assisted programming, available in two forms:
- A programming assistant available in the editor
- An integrated chat for interacting with an LLM
Both cloud-based and local LLMs are supported:
- Cloud-based LLMs
- OpenAI models such as GPT-3.5 Turbo and GPT-4o
- Gemini models such as 1.5 Flash and 1.5 Pro from Google
- Local LLMs
- llama3 and codellama from Meta
- gemma from Google
- starcoder2 from Nvidia
- phi and wizardlm from Microsoft
- and many, many others
LLM integration prerequisites
OpenAI
You need to register with OpenAI and create an API key. The API key can be either a project key or a legacy user key. Please note that this is a paid service, so you will need to either:
- sign up for a payment plan or
- add credits on a pay-as-you-go basis
A $10 credit can be very helpful when using OpenAI with Java-Editor. See also the OpenAI pricing. Note that the older GPT-3.5 Turbo model is 10 times cheaper than the newer GPT-4o model, which in turn is much cheaper than its predecessors gpt-4-turbo and gpt-4.
Gemini
You will need an API key. Although Gemini is a commercial service, the Text Embedding 004 model has a free pricing tier, making it well suited for use with Java-Editor.
Ollama
Unlike the cloud-based providers OpenAI and Gemini, Ollama models are run locally.
Why would you want to run LLMs locally? Here are a few reasons:
- Wide selection of open source LLM models.
- Save money as it is free.
- You may want to dive deeper and play with model parameters like temperature and penalty.
- Train models using your or your company's data.
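One of the advantages listed above is experimenting with model parameters. With a local model you can talk to the Ollama service directly over its HTTP API, which listens on http://localhost:11434 by default. The sketch below (Java 11+) shows how such a request could look; the model name and parameter values are illustrative, not recommendations:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class OllamaDemo {

    // Build the JSON body for Ollama's /api/generate endpoint.
    // "temperature" and "repeat_penalty" are the kinds of knobs
    // you can tune when running models locally.
    static String buildBody(String model, String prompt, double temperature) {
        return "{"
             + "\"model\": \"" + model + "\","
             + "\"prompt\": \"" + prompt + "\","
             + "\"stream\": false,"
             + "\"options\": {\"temperature\": " + temperature
             + ", \"repeat_penalty\": 1.1}"
             + "}";
    }

    public static void main(String[] args) throws Exception {
        String body = buildBody("codellama", "Write a Java hello world.", 0.2);
        System.out.println(body);

        // Sending the request requires a running Ollama service with the
        // model already pulled, so it is only attempted on demand here.
        if (args.length > 0 && args[0].equals("send")) {
            HttpRequest request = HttpRequest.newBuilder()
                    .uri(URI.create("http://localhost:11434/api/generate"))
                    .header("Content-Type", "application/json")
                    .POST(HttpRequest.BodyPublishers.ofString(body))
                    .build();
            HttpResponse<String> response = HttpClient.newHttpClient()
                    .send(request, HttpResponse.BodyHandlers.ofString());
            System.out.println(response.body());
        }
    }
}
```

Lower temperatures make completions more deterministic, which is usually what you want for code.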
To use Ollama you need:
- a fairly modern and fast CPU.
- a modern and powerful GPU with at least 6 GB of memory. For example, a relatively cheap Nvidia GeForce RTX 3060 with 12 GB of memory can perform well.
- quite a few gigabytes of free, fast disk space.
Otherwise, it can be frustratingly slow to use.
Installing Ollama
You must first download and run the Ollama installer. Note that after installation, an Ollama Windows service will start automatically every time you start Windows.
Installing Ollama Models
Ollama provides access to a large number of LLMs, such as Codegemma from Google and Codellama from Meta. To use a specific model, you must install it locally. You can do this from a command prompt by typing the command:
ollama pull model_name
After that, you can use the local model with Java-Editor.
The assistant currently works with the codellama:code model and its variants. With the chat, you can use models like codellama, codegemma, starcoder2, and deepseek-coder-v2. For each model, choose the largest variant that fits comfortably in your GPU memory. For example, codellama:13b-code-q3_K_M is a codellama:code variant with a size of 6.3 GB that can be used with a GPU with 8 GB of memory or more. Keep in mind that the larger the model, the slower the response, but the higher the quality of the answers.
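As a rough sanity check when picking a variant (a heuristic assumption on our part, not a rule from Ollama), you can compare the quantized model size against your GPU memory and leave some headroom for the context and runtime:

```java
public class ModelFit {
    // Assumed safety factor: the model file needs VRAM plus roughly 20%
    // extra for the context (KV cache) and runtime overhead.
    static final double OVERHEAD = 1.2;

    static boolean fitsInGpu(double modelSizeGb, double gpuMemoryGb) {
        return modelSizeGb * OVERHEAD <= gpuMemoryGb;
    }

    public static void main(String[] args) {
        // codellama:13b-code-q3_K_M is about 6.3 GB
        System.out.println(fitsInGpu(6.3, 8.0)); // fits on an 8 GB GPU
        System.out.println(fitsInGpu(6.3, 6.0)); // too tight on a 6 GB GPU
    }
}
```

If a model does not fit, Ollama falls back to system RAM, which is much slower.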
The LLM Assistant
You can access the LLM Assistant from the editor's context menu.
Suggest
Request a code-completion suggestion from the assistant. This is only available when there is no selection in the editor.
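For example, with the cursor at the end of an unfinished method, the assistant might suggest a completion like the following (illustrative output; actual suggestions vary by model):

```java
public class SuggestExample {
    // Typed so far:
    //   static int sum(int[] values) {
    //
    // A typical suggested completion:
    static int sum(int[] values) {
        int total = 0;
        for (int v : values) {
            total += v;
        }
        return total;
    }
}
```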
The following three commands are only available when a portion of code is selected.
Explain
Ask the assistant to explain the selected section of code. The explanation is in the form of comments.
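For example, selecting a compact method and choosing Explain might produce something like this (the exact wording varies by model):

```java
public class ExplainExample {
    // Selected code before "Explain":
    //   static int gcd(int a, int b) { return b == 0 ? a : gcd(b, a % b); }
    //
    // A possible annotated result:

    // Computes the greatest common divisor using Euclid's algorithm:
    // when b is 0, a is the GCD; otherwise recurse with b and a mod b.
    static int gcd(int a, int b) {
        return b == 0 ? a : gcd(b, a % b);
    }
}
```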
Fix Bugs
Ask the assistant to fix the bugs in the selected section of code. The assistant should also explain the nature of the original problems and their solution using comments in the code.
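For example, for a method with an integer-division bug, the result might look like this (illustrative; the assistant's comments and fixes depend on the model):

```java
public class FixBugsExample {
    // Original selection (buggy): returned (sum / values.length) as an
    // int division, so average(new int[]{1, 2}) gave 1.0 instead of 1.5.
    //
    // A possible fixed result, with the explanation as comments:

    // Bug: sum / values.length divided two ints, truncating the result.
    // Fix: cast sum to double before dividing.
    static double average(int[] values) {
        int sum = 0;
        for (int v : values) {
            sum += v;
        }
        return (double) sum / values.length;
    }
}
```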
Optimize
Ask the assistant to optimize the selected section of code. The optimizations should retain the original functionality of the code.
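A typical optimization is replacing repeated string concatenation with a StringBuilder; the result might look like this (illustrative output with unchanged behavior):

```java
public class OptimizeExample {
    // Selected code before "Optimize": built the result with += in a loop,
    // copying the whole string on every iteration (quadratic time).
    //
    // A possible optimized result (same output, linear time):
    static String joinLines(String[] lines) {
        StringBuilder sb = new StringBuilder();
        for (String line : lines) {
            sb.append(line).append('\n');
        }
        return sb.toString();
    }
}
```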
Cancel
Cancel a pending request. In all the above cases, an activity indicator is displayed. You can also cancel a request by clicking on the activity indicator.
The settings of the LLM assistant, in particular the API keys, are entered in the Java-Editor configuration.
The LLM Assistant Suggestion Window
All LLM Assistant answers are displayed in a pop-up suggestion window:
You can edit the answer before executing any of the following commands:
Accept (Tab)
Inserts the contents of the suggestion at the cursor position.
Accept Word (Ctrl+Right)
Accepts only the first word of the suggestion.
Accept Line (Ctrl+Enter)
Accepts only the first line of the suggestion.
Cancel (Esc)
Ignores the suggestion and returns to the editor.
The LLM Chat Window
The LLM Chat Window is designed for interacting with Large Language Models (LLMs) without leaving Java-Editor. You open it via the Window menu.
It offers:
- The chat is organized into multiple topics
- Each topic can have its own title
- The chat history can be saved and restored
- Java code is syntactically highlighted
- Java code can be easily copied into a code editor
- Within each topic, the conversation context is preserved
Toolbar commands:
New Topic
Adds a new chat topic.
Remove Current Topic
Removes the current chat topic.
Save Chat History
Saves the chat topics in a JSON file called "Chat history.json" in the same directory as JavaEditor.ini. The chat history is automatically restored when Java-Editor is restarted.
Title
Gives the current topic a title. The topic title is displayed in the window title.
Next/Previous Topic
Displays the next/previous topic.
Settings for the LLM chat window are made in the configuration.
The Chat context menu
Copy
Copies the question or answer under the cursor to the clipboard.
Copy Code
Copies the Java code under the cursor to the clipboard.
Copy Code to New Editor
Copies the Java code under the cursor to a new editor so you can easily test it.
Entering Prompts
To send your request, press Ctrl+Enter or click the chat icon to the right of the question.