* feat: normalize LaTeX fragments in markdown rendering
Added a preprocessing step that converts LaTeX delimiters `\[...\]` to `$$...$$` and `\(...\)` to `$...$` before rendering. The function skips code blocks, inline code, and HTML tags to avoid unintended transformations. This improves authoring experience by supporting common LaTeX syntax without requiring explicit `$` delimiters.
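A minimal sketch of the delimiter conversion (the skipping of code blocks, inline code, and HTML tags is omitted, and the regexes here are assumptions, not the extension's actual source):

```ts
// Minimal sketch of the delimiter conversion only; the real normalizeLatex
// also skips fenced code blocks, inline code, and HTML tags.
function normalizeLatexSketch(content: string): string {
  return content
    // \[ ... \]  ->  $$ ... $$   (display math)
    .replace(/\\\[([\s\S]*?)\\\]/g, (_m, inner) => `$$${inner}$$`)
    // \( ... \)  ->  $ ... $     (inline math)
    .replace(/\\\(([\s\S]*?)\\\)/g, (_m, inner) => `$${inner}$`)
}
```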
* fix: correct inline LaTeX normalization replacement
The replacement function for inline math (`\(...\)`) incorrectly accepted a fourth parameter (`post`) and appended it to the result, which could introduce stray characters or `undefined` into the rendered output. Updated the function to use only the captured prefix and inner content and removed the extraneous `${post}` interpolation, ensuring clean LaTeX conversion.
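For illustration, the corrected callback interpolates only the captured groups (the regex and variable names below are assumed, not the actual source):

```ts
// Hypothetical regex/variable names, shown only to illustrate the fix.
const inlineLatexRegex = /(^|[^\\])\\\((.+?)\\\)/g

const normalizeInline = (content: string): string =>
  // The buggy version also declared a fourth `post` parameter and appended
  // it, which leaked stray characters or `undefined` into the output.
  content.replace(inlineLatexRegex, (_m, prefix, inner) => `${prefix}$${inner}$`)
```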
* feat: optimize markdown rendering with LaTeX caching and memoized code blocks
- Added cache to normalizeLatex to avoid reprocessing repeated content (sketched after this list)
- Introduced CodeComponent with stable IDs and memoization to reduce re-renders
- Replaced per-render code block ID mapping with hash-based IDs
- Memoized copy handler and normalized markdown content
- Simplified plugin/component setup with stable references
- Added custom comparison for RenderMarkdown memoization to prevent unnecessary updates
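The caching and custom-comparison pieces might look roughly like this; the prop shape, component names, and comparison function are assumptions based on the list above:

```ts
import { memo } from 'react'
import type { FC } from 'react'

// Sketch only: prop shapes and component names are assumptions.
declare function normalizeLatex(content: string): string // existing normalizer
declare const RenderMarkdownBase: FC<{ content: string }> // existing component body

// Cache normalized output so repeated content is not reprocessed.
const latexCache = new Map<string, string>()
function cachedNormalizeLatex(content: string): string {
  const hit = latexCache.get(content)
  if (hit !== undefined) return hit
  const normalized = normalizeLatex(content)
  latexCache.set(content, normalized)
  return normalized
}

// Custom comparison: skip re-rendering unless the markdown content changed.
export const RenderMarkdown = memo(
  RenderMarkdownBase,
  (prev, next) => prev.content === next.content
)
```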
* refactor: memoize content only
---------
Co-authored-by: Louis <louis@jan.ai>
* feat: implement conversation endpoint
* use conversation-aware endpoint
* fetch message correctly
* preserve first message
* fix logout
* fix local broadcast issue + auth not refreshing profile on other tabs + clean up and sync messages
* add is dev tag
* feat: add getTokensCount method to compute token usage
Implemented a new async `getTokensCount` function in the LLaMA.cpp extension.
The method validates the model session, checks process health, applies the request template, and tokenizes the resulting prompt to return the token count. Includes detailed error handling for crashed models and API failures, enabling callers to assess token usage before sending completions.
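A rough sketch of that flow; every helper and type below is an assumption about the control flow, not the extension's actual internals:

```ts
// Sketch only: helpers and types are assumed, not taken from the extension.
interface SessionInfo { pid: number; modelPath: string }
declare function findSession(sessionId: string): SessionInfo | undefined
declare function isProcessAlive(session: SessionInfo): Promise<boolean>
declare function applyRequestTemplate(session: SessionInfo, messages: unknown[]): Promise<string>
declare function tokenize(session: SessionInfo, prompt: string): Promise<number[]>

async function getTokensCount(sessionId: string, messages: unknown[]): Promise<number> {
  const session = findSession(sessionId)
  if (!session) throw new Error(`No session found for ${sessionId}`)

  // Fail fast if the backing llama.cpp process has crashed.
  if (!(await isProcessAlive(session))) {
    throw new Error('Model process is not running')
  }

  // Apply the request template, then tokenize the resulting prompt.
  const prompt = await applyRequestTemplate(session, messages)
  const tokens = await tokenize(session, prompt)
  return tokens.length
}
```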
* fix: typos
* chore: update ui token usage
* chore: remove unused code
* feat: add image token handling for multimodal LlamaCPP models
Implemented support for counting image tokens when using vision-enabled models:
- Extended `SessionInfo` with optional `mmprojPath` to store the multimodal project file.
- Propagated `mmproj_path` from the Tauri plugin into the session info.
- Added import of `chatCompletionRequestMessage` and enhanced token calculation logic in the LlamaCPP extension:
- Detects image content in messages.
- Reads GGUF metadata from `mmprojPath` to compute accurate image token counts.
- Provides a fallback estimation if metadata reading fails.
- Returns the sum of text and image tokens.
- Introduced helper methods `calculateImageTokens` and `estimateImageTokensFallback` (sketched after this list).
- Minor clean-ups such as comment capitalization and debug logging.
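A structural sketch of the multimodal token counting described above; the message shape, helpers, and metadata access are assumptions, only the identifiers named in the list (e.g. `mmprojPath`) come from the change itself:

```ts
// Structural sketch only; helpers and shapes here are assumptions.
type chatCompletionRequestMessage = {
  role: string
  content: string | Array<{ type: 'text' | 'image_url'; [key: string]: unknown }>
}
declare function countTextTokens(messages: chatCompletionRequestMessage[]): Promise<number>
declare function readGgufMetadata(path: string): Promise<Record<string, unknown>>
declare function calculateImageTokens(meta: Record<string, unknown>): number
declare function estimateImageTokensFallback(): number

async function countTokensWithImages(
  messages: chatCompletionRequestMessage[],
  mmprojPath?: string
): Promise<number> {
  const textTokens = await countTextTokens(messages)

  // Detect image content across all message parts.
  const imageCount = messages
    .flatMap((m) => (Array.isArray(m.content) ? m.content : []))
    .filter((part) => part.type === 'image_url').length
  if (imageCount === 0 || !mmprojPath) return textTokens

  let perImageTokens: number
  try {
    // Read the mmproj GGUF metadata to derive the per-image token cost.
    const meta = await readGgufMetadata(mmprojPath)
    perImageTokens = calculateImageTokens(meta)
  } catch {
    // Fallback estimation if the metadata cannot be read.
    perImageTokens = estimateImageTokensFallback()
  }

  // Return the sum of text and image tokens.
  return textTokens + imageCount * perImageTokens
}
```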
* chore: update FE send params so message content includes the `image_url` content type
* fix mmproj path lookup from session info and token count calculation
* fix: Correct image token estimation calculation in llamacpp extension
This commit addresses an inaccurate token count for images in the llama.cpp extension.
The previous logic incorrectly calculated the token count based on image patch size and dimensions. This has been replaced with a more precise method that uses the `clip.vision.projection_dim` value from the model metadata.
Additionally, unnecessary debug logging was removed, and a new log was added to show the mmproj metadata for improved visibility.
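As a sketch only, and assuming `clip.vision.projection_dim` maps directly to the per-image token count (the extension's exact formula and fallback value may differ):

```ts
// Sketch only: assumes clip.vision.projection_dim maps directly to the
// per-image token count; the actual formula may apply further scaling.
function calculateImageTokens(meta: Record<string, unknown>): number {
  const projectionDim = Number(meta['clip.vision.projection_dim'])
  if (!Number.isFinite(projectionDim) || projectionDim <= 0) {
    return estimateImageTokensFallback()
  }
  return projectionDim
}

// Conservative constant used when the mmproj metadata cannot be read.
function estimateImageTokensFallback(): number {
  return 1024 // assumed value, not taken from the extension
}
```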
* fix per-image calculation
* fix: crash due to force unwrap
---------
Co-authored-by: Faisal Amir <urmauur@gmail.com>
Co-authored-by: Louis <louis@jan.ai>
* feat: Prompt progress when streaming
- BE changes:
- Add a `return_progress` flag to `chatCompletionRequest` and a corresponding `prompt_progress` payload in `chatCompletionChunk`. Introduce `chatCompletionPromptProgress` interface to capture cache, processed, time, and total token counts (sketched after this list).
- Update the Llamacpp extension to always request progress data when streaming, enabling UI components to display real-time generation progress and leverage llama.cpp's built-in progress reporting.
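The new shapes might look roughly like this; only the names come from the description above, the field types and comments are assumptions:

```ts
// Sketch of the new payload shapes; field types are assumptions.
interface chatCompletionPromptProgress {
  cache: number      // tokens reused from the prompt cache
  processed: number  // prompt tokens processed so far
  time: number       // elapsed processing time
  total: number      // total prompt tokens to process
}

interface chatCompletionRequest {
  // ...existing request fields...
  return_progress: boolean // ask llama.cpp to report prompt progress
}

interface chatCompletionChunk {
  // ...existing chunk fields...
  prompt_progress?: chatCompletionPromptProgress
}
```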
* Make return_progress optional
* chore: update ui prompt progress before streaming content
* chore: remove log
* chore: remove progress when percentage >= 100
* chore: set timeout for prompt progress
* chore: move prompt progress outside streaming content
* fix: tests
---------
Co-authored-by: Faisal Amir <urmauur@gmail.com>
Co-authored-by: Louis <louis@jan.ai>