# @janhq/core

This module includes functions for communicating with core APIs, registering app extensions, and exporting type definitions.

## Usage

### Import the package

```ts
// Web / extension runtime
import * as core from '@janhq/core'
```
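To give a feel for communicating with core APIs, the snippet below subscribes to an app event right after importing the package. It is a minimal sketch that reuses only identifiers appearing in the extension example later in this README (`events`, `MessageEvent`, `MessageRequestData`); treat their exact export shape as an assumption to verify against the current typings.

```ts
import * as core from '@janhq/core'

// Minimal sketch: react to a message being sent from the app.
// `MessageEvent` and `MessageRequestData` are assumed to be reachable
// through the namespace import, as they are used in the example below.
core.events.on(core.MessageEvent.OnMessageSent, (data: core.MessageRequestData) => {
  console.log('A message was sent:', data)
})
```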
### Build an Extension

- Download an extension template, for example, https://github.com/menloresearch/extension-template.

- Update the source code:

  - Open `index.ts` in your code editor.

  - Rename the extension class from `SampleExtension` to your preferred extension name.

  - Import modules from the core package.

    ```ts
    import * as core from '@janhq/core'
    ```

  - In the `onLoad()` method, add your code (a sketch of emitting the modified message back to the app follows the build steps below):

    ```ts
    // Example of listening to app events and providing customized inference logic:
    import * as core from '@janhq/core'
    import {
      BaseExtension,
      ContentType,
      MessageEvent,
      type MessageRequestData,
      type ThreadContent,
      type ThreadMessage,
    } from '@janhq/core'

    export default class MyExtension extends BaseExtension {
      // On extension load
      onLoad() {
        core.events.on(MessageEvent.OnMessageSent, (data: MessageRequestData) =>
          MyExtension.inference(data)
        )
      }

      // Customized inference logic
      private static inference(incomingMessage: MessageRequestData) {
        // Prepare customized message content
        const content: ThreadContent = {
          type: ContentType.Text,
          text: {
            value: "I'm Jan Assistant!",
            annotations: [],
          },
        }

        // Modify message and send out
        const outGoingMessage: ThreadMessage = {
          ...incomingMessage,
          content,
        }
      }
    }
    ```
- Build the extension:

  - Navigate to the extension directory.

  - Install dependencies.

    ```bash
    yarn install
    ```

  - Compile the source code. The following command keeps running in the terminal and rebuilds the extension when you modify the source code.

    ```bash
    yarn build
    ```

  - Select the generated `.tgz` file from Jan > Settings > Extension > Manual Installation.
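The `inference` example above builds `outGoingMessage` but stops before sending it anywhere. Below is a minimal sketch of that final step. It assumes the `events` module exposes an `emit` function and that `MessageEvent` includes an `OnMessageUpdate` member; both names are assumptions to verify against the current `@janhq/core` typings.

```ts
import { events, MessageEvent, type ThreadMessage } from '@janhq/core'

// Sketch only: push the customized message back to the app.
// `events.emit` and `MessageEvent.OnMessageUpdate` are assumed names;
// check the @janhq/core typings for the exact function and event to use.
function sendMessage(outGoingMessage: ThreadMessage) {
  events.emit(MessageEvent.OnMessageUpdate, outGoingMessage)
}
```

In the extension above, `sendMessage(outGoingMessage)` would be called at the end of `inference` once the customized content has been attached.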