44 Commits

Author SHA1 Message Date
Akarshan Biswas
885da29f28
feat: add getTokensCount method to compute token usage (#6467)
* feat: add getTokensCount method to compute token usage

Implemented a new async `getTokensCount` function in the LLaMA.cpp extension.
The method validates the model session, checks process health, applies the request template, and tokenizes the resulting prompt to return the token count. Includes detailed error handling for crashed models and API failures, enabling callers to assess token usage before sending completions.
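
As an illustration only (not the extension's actual code), a minimal sketch of that flow might look like the following, assuming a llama.cpp server `/tokenize` endpoint and simplified session/health helpers; `SessionInfo`, `isProcessAlive`, and the naive prompt assembly are stand-ins for the real implementation:

```typescript
// Illustrative types; the real extension defines richer versions of these.
interface SessionInfo {
  port: number
  apiKey: string
}

interface ChatMessage {
  role: 'system' | 'user' | 'assistant'
  content: string
}

// Sketch of a token-count helper: validate the session, check process
// health, build the prompt from the request messages, then ask the
// server's /tokenize endpoint for the token array and return its length.
async function getTokensCount(
  session: SessionInfo | undefined,
  isProcessAlive: (s: SessionInfo) => Promise<boolean>,
  messages: ChatMessage[]
): Promise<number> {
  if (!session) throw new Error('Model is not loaded')
  if (!(await isProcessAlive(session))) {
    throw new Error('Model process appears to have crashed')
  }

  // Naive prompt assembly; the real code applies the model's chat template.
  const prompt = messages.map((m) => `${m.role}: ${m.content}`).join('\n')

  const res = await fetch(`http://127.0.0.1:${session.port}/tokenize`, {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      Authorization: `Bearer ${session.apiKey}`,
    },
    body: JSON.stringify({ content: prompt }),
  })
  if (!res.ok) throw new Error(`Tokenize request failed: ${res.status}`)

  const data = (await res.json()) as { tokens: number[] }
  return data.tokens.length
}
```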

* Fix: typos

* chore: update ui token usage

* chore: remove unused code

* feat: add image token handling for multimodal LlamaCPP models

Implemented support for counting image tokens when using vision-enabled models:
- Extended `SessionInfo` with optional `mmprojPath` to store the path to the multimodal projector (mmproj) file.
- Propagated `mmproj_path` from the Tauri plugin into the session info.
- Added import of `chatCompletionRequestMessage` and enhanced token calculation logic in the LlamaCPP extension:
- Detects image content in messages.
- Reads GGUF metadata from `mmprojPath` to compute accurate image token counts.
- Provides a fallback estimation if metadata reading fails.
- Returns the sum of text and image tokens.
- Introduced helper methods `calculateImageTokens` and `estimateImageTokensFallback`.
- Minor clean‑ups such as comment capitalization and debug logging.

* chore: update FE send params so message content includes type image_url

* fix mmproj path from session info and num tokens calculation

* fix: Correct image token estimation calculation in llamacpp extension

This commit addresses an inaccurate token count for images in the llama.cpp extension.

The previous logic incorrectly calculated the token count based on image patch size and dimensions. This has been replaced with a more precise method that uses the clip.vision.projection_dim value from the model metadata.

Additionally, unnecessary debug logging was removed, and a new log was added to show the mmproj metadata for improved visibility.
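
A hedged sketch of the idea: read `clip.vision.projection_dim` from the mmproj GGUF metadata and use it to derive a per-image token count, falling back to a rough constant when the metadata cannot be read. The exact mapping from `projection_dim` to tokens, the fallback value, and the helper names here are illustrative assumptions:

```typescript
// Illustrative metadata shape; in the extension the values come from
// parsing the mmproj GGUF file on disk.
interface MmprojMetadata {
  'clip.vision.projection_dim'?: number
}

// Rough estimate used when the metadata is missing (assumed value).
const FALLBACK_IMAGE_TOKENS = 512

// Derive a per-image token count from the mmproj metadata, falling back
// to a constant estimate when the file cannot be read.
function calculateImageTokens(metadata: MmprojMetadata | undefined): number {
  const projectionDim = metadata?.['clip.vision.projection_dim']
  if (typeof projectionDim === 'number' && projectionDim > 0) {
    return projectionDim
  }
  return estimateImageTokensFallback()
}

function estimateImageTokensFallback(): number {
  return FALLBACK_IMAGE_TOKENS
}

// Total usage is then the text token count plus image tokens per image.
function totalTokens(
  textTokens: number,
  imageCount: number,
  meta?: MmprojMetadata
): number {
  return textTokens + imageCount * calculateImageTokens(meta)
}
```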

* fix per image calc

* fix: crash due to force unwrap

---------

Co-authored-by: Faisal Amir <urmauur@gmail.com>
Co-authored-by: Louis <louis@jan.ai>
2025-09-23 07:52:19 +05:30
Akarshan Biswas
bf7f176741
feat: Prompt progress when streaming (#6503)
* feat: Prompt progress when streaming

- BE changes:
    - Add a `return_progress` flag to `chatCompletionRequest` and a corresponding `prompt_progress` payload in `chatCompletionChunk`. Introduce `chatCompletionPromptProgress` interface to capture cache, processed, time, and total token counts.
    - Update the Llamacpp extension to always request progress data when streaming, enabling UI components to display real‑time generation progress and leverage llama.cpp’s built‑in progress reporting.
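
Illustratively, the shapes described above could look like this; the exact field names (especially the time field) are assumptions based on the description rather than the extension's real definitions:

```typescript
// Request flag: ask the server to include prompt-processing progress
// in streamed chunks.
interface chatCompletionRequest {
  stream?: boolean
  return_progress?: boolean
  // ...other completion fields omitted
}

// Progress payload attached to streamed chunks while the prompt is
// being processed.
interface chatCompletionPromptProgress {
  cache: number      // tokens reused from the KV cache
  processed: number  // prompt tokens processed so far
  time_ms: number    // time spent processing the prompt
  total: number      // total prompt tokens to process
}

interface chatCompletionChunk {
  choices: unknown[]
  prompt_progress?: chatCompletionPromptProgress
}

// A UI can derive a percentage from the payload:
function promptProgressPercent(p: chatCompletionPromptProgress): number {
  return p.total > 0 ? Math.min(100, (p.processed / p.total) * 100) : 100
}
```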

* Make return_progress optional

* chore: update ui prompt progress before streaming content

* chore: remove log

* chore: remove progress when percentage >= 100

* chore: set timeout prompt progress

* chore: move prompt progress outside streaming content

* fix: tests

---------

Co-authored-by: Faisal Amir <urmauur@gmail.com>
Co-authored-by: Louis <louis@jan.ai>
2025-09-22 20:37:27 +05:30
Dinh Long Nguyen
32a2ca95b6
feat: gguf file size + hash validation (#5266) (#6259)
* feat: gguf file size + hash validation

* fix tests fe

* update cargo tests

* handle async download for both models and mmproj

* move progress tracker to models

* handle file download cancelled

* add cancellation mid hash run
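
The validation itself lives on the Rust side (hence the cargo tests), but the check it performs can be sketched as follows; the helper name, the expected-size/digest inputs, and the cancellation callback are illustrative assumptions:

```typescript
import { createHash } from 'node:crypto'
import { createReadStream, promises as fs } from 'node:fs'

// Verify a downloaded GGUF file against an expected byte size and
// SHA-256 digest, allowing cancellation between streamed chunks.
async function verifyGgufDownload(
  filePath: string,
  expectedSize: number,
  expectedSha256: string,
  isCancelled: () => boolean
): Promise<boolean> {
  const stat = await fs.stat(filePath)
  if (stat.size !== expectedSize) return false

  const hash = createHash('sha256')
  const stream = createReadStream(filePath)
  for await (const chunk of stream) {
    if (isCancelled()) {
      stream.destroy()
      throw new Error('Hash verification cancelled')
    }
    hash.update(chunk as Buffer)
  }
  return hash.digest('hex') === expectedSha256
}
```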
2025-08-21 16:17:58 +07:00
Akarshan Biswas
906b87022d
chore: re-enable reasoning_content in backend (#6228)
* chore: re-enable reasoning_content in backend

* chore: handle reasoning_content

* chore: refactor get reasoning content

* chore: update PR review

---------

Co-authored-by: Faisal Amir <urmauur@gmail.com>
2025-08-20 13:06:21 +05:30
Louis
55390de070
Merge pull request #6222 from menloresearch/feat/model-tool-use-detection
feat: #5917 - model tool use capability should be auto detected
2025-08-19 13:55:08 +07:00
Louis
bfe671d7b4
feat: #5917 - model tool use capability should be auto detected 2025-08-19 09:51:36 +07:00
Dinh Long Nguyen
2d486d7b3a
feat: add support for reasoning fields (OpenRouter) (#6206)
* add support for reasoning fields (OpenRouter)

* reformat

* fix linter

* Update web-app/src/utils/reasoning.ts

Co-authored-by: ellipsis-dev[bot] <65095814+ellipsis-dev[bot]@users.noreply.github.com>

---------

Co-authored-by: ellipsis-dev[bot] <65095814+ellipsis-dev[bot]@users.noreply.github.com>
2025-08-18 21:59:14 +07:00
Faisal Amir
1d443e1f7d
fix: support load model configurations (#5843)
* fix: support load model configurations

* chore: remove log

* chore: sampling params add from send completion

* chore: remove comment

* chore: remove comment on predefined file

* chore: update test model service
2025-07-22 19:52:12 +07:00
Louis
bc4fe52f8d
fix: llama.cpp integration model load and chat experience (#5823)
* fix: stop generating should not stop running models

* fix: ensure backend ready before loading model

* fix: backend setting should not block onLoad
2025-07-21 09:29:26 +07:00
Akarshan Biswas
92703bceb2
refactor: move thinking toggle to runtime settings for dynamic control (#5800)
* refactor: move thinking toggle to runtime settings for per-message control

Replaces the static `reasoning_budget` config with a dynamic `enable_thinking` flag under `chat_template_kwargs`, allowing models like Jan-nano and Qwen3 to enable/disable thinking behavior at runtime, even mid-conversation.
Requires UI update
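
Illustratively, the flag now travels with each request under `chat_template_kwargs`, so it can change per message; the field placement follows the description above, and the model name is just an example:

```typescript
// Per-request override: disable thinking for this completion only.
const request = {
  model: 'jan-nano',
  messages: [{ role: 'user', content: 'Summarize this thread.' }],
  chat_template_kwargs: {
    enable_thinking: false, // replaces the old static reasoning_budget setting
  },
  stream: true,
}
```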

* remove engine argument
2025-07-17 20:18:24 +05:30
Akarshan
d4a3d6a0d6
Refactor session PID types from string to number across backend and extension
- Changed `pid` field in `SessionInfo` from `string` to `number`/`i32` in TypeScript and Rust.
- Updated `activeSessions` map key from `string` to `number` to align with new PID type.
- Adjusted process monitoring logic to correctly handle numeric PIDs.
- Removed fallback UUID-based PID generation in favor of numeric fallback (-1).
- Added PID cleanup logic in `is_process_running` when the process is no longer alive.
- Bumped application version from 0.5.16 to 0.6.900 in `tauri.conf.json`.
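
A sketch of the resulting TypeScript shapes (field subset and names are illustrative):

```typescript
interface SessionInfo {
  pid: number // was string; now mirrors the Rust i32
  modelId: string
  port: number
}

// Keyed by numeric PID to match the new type.
const activeSessions = new Map<number, SessionInfo>()

// Numeric fallback replaces the old UUID-based placeholder PID.
const FALLBACK_PID = -1
```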
2025-07-04 21:40:54 +05:30
Akarshan
6b86baaa2f
Add tool choice type 2025-07-02 12:28:24 +07:00
Akarshan
6d5251d1c6
Fixup tool type definition 2025-07-02 12:28:24 +07:00
Akarshan
7f25311d26
Add tool type to chat completion requests 2025-07-02 12:28:24 +07:00
Louis
8bd4a3389f
refactor: frontend uses new engine extension
# Conflicts:
#	extensions/model-extension/resources/default.json
#	web-app/src/containers/dialogs/DeleteProvider.tsx
#	web-app/src/routes/hub.tsx
2025-07-02 12:28:24 +07:00
Akarshan
48d1164858
feat: add embedding support to llamacpp extension
This commit introduces embedding functionality to the llamacpp extension. It allows users to generate embeddings for text inputs using the 'sentence-transformer-mini' model.  The changes include:

- Adding a new `embed` method to the `llamacpp_extension` class.
- Implementing model loading and API interaction for embeddings.
- Handling potential errors during API requests.
- Adding necessary types for embedding responses and data.
- The load method now accepts a boolean parameter to determine whether it should load an embedding model.
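
A hedged sketch of such an embedding call against the server's OpenAI-compatible endpoint; the endpoint path, response types, and function signature are assumptions based on the description:

```typescript
interface EmbeddingData {
  embedding: number[]
  index: number
}

interface EmbeddingResponse {
  data: EmbeddingData[]
  model: string
}

// Send text inputs to a loaded embedding model and return one vector
// per input, surfacing API failures as errors.
async function embed(
  baseUrl: string,
  apiKey: string,
  input: string[]
): Promise<number[][]> {
  const res = await fetch(`${baseUrl}/v1/embeddings`, {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      Authorization: `Bearer ${apiKey}`,
    },
    body: JSON.stringify({ input, model: 'sentence-transformer-mini' }),
  })
  if (!res.ok) throw new Error(`Embedding request failed: ${res.status}`)
  const body = (await res.json()) as EmbeddingResponse
  return body.data.map((d) => d.embedding)
}
```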
2025-07-02 12:27:36 +07:00
Akarshan
dbcce86bb8
refactor: rename interfaces and add getLoadedModels
The changes include:
- Renaming interfaces (sessionInfo -> SessionInfo, unloadResult -> UnloadResult) for consistency
- Adding getLoadedModels() method to retrieve active model IDs
- Updating variable names from modelId to model_id for alignment
- Updating cleanup paths to use XDG-standard locations
- Improving type consistency across extension implementation
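
For example, the new accessor can simply report the model IDs of the active sessions (a minimal sketch, assuming the session map described above):

```typescript
// Sessions currently running, keyed by process ID.
const activeSessions = new Map<number, { model_id: string }>()

// Returns the IDs of models that currently have an active session.
async function getLoadedModels(): Promise<string[]> {
  return Array.from(activeSessions.values()).map((s) => s.model_id)
}
```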
2025-07-02 12:27:35 +07:00
Akarshan
c2b606a3fc
feat: enhance chatCompletionRequest with advanced sampling parameters
Add comprehensive sampling parameters for fine-grained control over AI output generation, including dynamic temperature, Mirostat sampling, repetition penalties, and advanced prompt handling. These parameters enable more precise tuning of model behavior and output quality.
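
An illustrative subset of the added request fields; the names follow llama.cpp's common sampling options, but their exact spelling and optionality in the extension may differ:

```typescript
interface chatCompletionRequest {
  model: string
  messages: { role: string; content: string }[]
  // Advanced sampling controls (illustrative subset)
  temperature?: number
  dynatemp_range?: number     // dynamic temperature spread around `temperature`
  dynatemp_exponent?: number
  mirostat?: 0 | 1 | 2        // Mirostat sampling mode
  mirostat_tau?: number
  mirostat_eta?: number
  repeat_penalty?: number
  repeat_last_n?: number
  presence_penalty?: number
  frequency_penalty?: number
}
```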
2025-07-02 12:27:34 +07:00
Akarshan Biswas
4dfdcd68d5
refactor: rename session identifiers to pid and modelId
The changes standardize identifier names across the codebase for clarity:
- Replaced `sessionId` with `pid` to reflect process ID usage
- Changed `modelName` to `modelId` for consistency with identifier naming
- Renamed `api_key` to `apiKey` for camelCase consistency
- Updated corresponding methods to use these new identifiers
- Improved type safety and readability by aligning variable names with their semantic meaning
2025-07-02 12:27:16 +07:00
Akarshan Biswas
fd9e034461
feat: update AIEngine load method and backend path handling
- Changed load method to accept modelId instead of loadOptions for better clarity and simplicity
- Renamed engineBasePath parameter to backendPath for consistency with the backend's directory structure
- Added getRandomPort method to ensure unique ports for each session to prevent conflicts
- Refactored configuration and model loading logic to improve maintainability and reduce redundancy
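
For instance, port selection might retry random high ports until one is not already claimed by another session (a sketch; the real check may also probe the OS, and the port range is an assumption):

```typescript
// Ports already claimed by running sessions.
const usedPorts = new Set<number>()

// Pick a high-range port not used by another session.
function getRandomPort(): number {
  for (let attempt = 0; attempt < 100; attempt++) {
    const port = 30000 + Math.floor(Math.random() * 20000)
    if (!usedPorts.has(port)) {
      usedPorts.add(port)
      return port
    }
  }
  throw new Error('Could not find a free port for the session')
}
```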
2025-07-02 12:27:15 +07:00
Akarshan Biswas
267bbbf77b
feat: add model and mmproj paths to ImportOptions
The `ImportOptions` interface was updated to include `modelPath` and `mmprojPath`. These options are required for importing models and multimodal projector (mmproj) files.
2025-07-02 12:27:13 +07:00
Akarshan Biswas
07d76dc871
feat: Allow specifying mmproj path during model loading
The `loadOptions` interface in `AIEngine.ts` now includes an optional `mmprojPath` property. This allows users to provide a path to their multimodal projector (mmproj) file when loading a model, which is required for certain model types. `llamacpp-extension/src/index.ts` has been updated to pass this option to the llama.cpp server if provided.
2025-07-02 12:27:13 +07:00
Akarshan Biswas
da23673a44
feat: Add API key generation for Llama.cpp
This commit introduces API key generation for the Llama.cpp extension.  The API key is now generated on the server side using HMAC-SHA256 and a secret key to ensure security and uniqueness.  The frontend now passes the model ID and API secret to the server to generate the key. This addresses the requirement for secure model access and authorization.
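
A sketch of the described derivation using Node's built-in crypto module; the function name, arguments, and example values are illustrative, and in the real flow the model ID and API secret are handed to the server side to perform this step:

```typescript
import { createHmac } from 'node:crypto'

// Derive a per-model API key as HMAC-SHA256 over the model ID, using a
// secret known to the server side.
function generateApiKey(modelId: string, apiSecret: string): string {
  return createHmac('sha256', apiSecret).update(modelId).digest('hex')
}

// The llama.cpp server is then launched with this key, and the client
// sends it as a Bearer token on every request.
const apiKey = generateApiKey('qwen3-4b', 'per-install-secret')
```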
2025-07-02 12:27:12 +07:00
Akarshan Biswas
31971e7821
(WIP) randomly generate api-key hash each session 2025-07-02 12:27:12 +07:00
Akarshan Biswas
7481fae0df
remove unused imports and remove n_ctx key from loadOptions 2025-07-02 12:27:11 +07:00
Thien Tran
d5c07acdb5
feat: add LlamacppConfig for llama.cpp extension to improve settings (#5121)
* add engine settings

* update load options

* rename variable
2025-07-02 12:27:11 +07:00
Akarshan Biswas
742e731e96
Add --reasoning_budget option 2025-07-02 12:27:10 +07:00
Thien Tran
cd36b423b6
add basic model list 2025-07-02 12:27:10 +07:00
Thien Tran
d523166b61
implement delete 2025-07-02 12:27:09 +07:00
Akarshan Biswas
587ed3c83c
refactor OAI request payload type to support image and audio 2025-07-02 12:27:09 +07:00
Thien Tran
ded9ae733a
feat: Model import (download + local import) for llama.cpp extension (#5087)
* add pull and abortPull

* add model import (download only)

* write model.yaml. support local model import

* remove cortex-related command

* add TODO

* remove cortex-related command
2025-07-02 12:27:09 +07:00
Akarshan Biswas
a7a2dcc8d8
refactor load/unload again; move types to core and refactor AIEngine abstract class 2025-07-02 12:27:09 +07:00
Akarshan Biswas
bbbf4779df
refactor load/unload 2025-07-02 12:27:08 +07:00
Louis
942f2f51b7
chore: send chat completion with messages history (#5070)
* chore: send chat completion with messages history

* chore: handle abort controllers

* chore: change max attempts setting

* chore: handle stop running models in system monitor screen

* Update web-app/src/services/models.ts

Co-authored-by: ellipsis-dev[bot] <65095814+ellipsis-dev[bot]@users.noreply.github.com>

* chore: format time

* chore: handle stop model load action

---------

Co-authored-by: ellipsis-dev[bot] <65095814+ellipsis-dev[bot]@users.noreply.github.com>
2025-05-22 20:13:50 +07:00
Louis
0627f29059
chore: enable / disable proxy configurations (#5050)
* chore: enable / disable proxy configurations

* Update web-app/src/routes/settings/https-proxy.tsx

Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>

* Update web-app/src/lib/completion.ts

Co-authored-by: ellipsis-dev[bot] <65095814+ellipsis-dev[bot]@users.noreply.github.com>

---------

Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
Co-authored-by: ellipsis-dev[bot] <65095814+ellipsis-dev[bot]@users.noreply.github.com>
2025-05-21 14:18:25 +07:00
Louis
72a7157509
feat: Jan Tool Use - MCP frontend implementation 2025-05-15 17:10:20 +07:00
Louis
174f1c7dcb
feat: reroute threads and messages requests to the backend 2024-12-12 16:38:55 +07:00
Louis
4080dc4b65
feat: model and cortex extensions update 2024-11-04 15:37:12 +07:00
Louis
8e603bd5db
fix: #3476 - Mismatch id between model json and path (#3645)
* fix: mismatch between model json and path

* chore: revert preserve model settings

* test: add tests
2024-09-17 16:43:47 +07:00
Louis
a699f8f32f
Revert "Jan integrates Cortex"
This reverts commit ad6fbea22df6deaba31e146dddb456e4a5d5dd75

Revert "chore: add engine logo from local instead of metadata logo (#3363)"

This reverts commit ad6fbea22df6deaba31e146dddb456e4a5d5dd75.

Revert "fix: LaTex formula render issue (#3353)"

This reverts commit 3b2c84c4fee61b886c883c68801be3bc5a8584ad.

Revert "chore: minor ui improvement (#3352)"

This reverts commit 6dd387db2b5b9890f19d0c3505cf9cb770fd492f.

Revert "fix: failed to relaunch app to update (#3351)"

This reverts commit fcaf98a2fa4e674799602e8093914bcc04ced153.

Revert "chore: add back GPU information to system monitoring bar (#3350)"

This reverts commit 03455a91807c7af6c6325901997c6d7231d2cd0d.

Revert "fix: empty model page not shown when delete all threads and models (#3343)"

This reverts commit 9e29fcd69eb9085843896686806fd453a1285723.

Revert "feat: allow user configure remote model from my model (#3348)"

This reverts commit fdab8af057f80cf1ccaae0dc42c4e5161925f51e.

Revert "chore: ui fix button outline for configure cloud model (#3347)"

This reverts commit fe8ed1f26dc86ead92ffea4f36e2989caf7dad88.

Revert "feat: move icon create new thread into top panel (#3346)"

This reverts commit 46cb1b45b997181e2188f8dafb2fc0d0cc12ddcd.

Revert "chore(UI): update experience model dropdown (#3342)"

This reverts commit 8b44613015a907dc491113aeb99c963080424892.

Revert "Chore/simple bug template and correct a copy (#3344)"

This reverts commit 23cd5fd3979e7529811045da5c4912369bcc7532.

Revert "chore(ui): fix alignment loader starter screen (#3338)"

This reverts commit e9f5d2f837ce323b0851ea04cded913ab433388c.

Revert "Increase retry upload to R2 to 5 times (#3337)"

This reverts commit dcfb497934edc795955d971b6d391ee1e6309a03.

Revert "fix: broken jan build - add log trace (jan.log) (#3336)"

This reverts commit 77422c3a7ed240909942ac0d8c4b259af8d87a28.

Revert "chore: disable quick ask (#3334)"

This reverts commit 6e4b6b09ae009149f262d86d5b19bb8096267c19.

Revert "fix: update legacy path (#3328)"

This reverts commit 5eb112142c6431cfe0cdf11ce28810ca650a5427.

Revert "chore: add cortex version (#3318)"

This reverts commit 60587649c56a1f24272e763f25aa5b4042f7719a.

Revert "fix: broken app due to incorrect api path (#3316)"

This reverts commit 3de4eab2a0dfbf9f593d73b9dde6bca1d9df2279.

Revert "feat: modal waiting cortex (#3306)"

This reverts commit 1f5168d4af9080b867c19d334c398bf32e4f54b8.

Revert "fix: refresh should not create new thread (#3314)"

This reverts commit 624d07703c50ea332ed4eeac9dc3a26bc8190d08.

Revert "fix: avoid lose title threads (#3307)"

This reverts commit a4f5fda104c2d1e01ea72798f055e5b4e3cfd616.

Revert "feat: change data folder (#3309)"

This reverts commit b43242b9b24352c7f90995eccab753dede679616.

Revert "feat: embed cortex into jan as a js module (#3305)"

This reverts commit b348110fb73bd5f13c69f1b915168687dea776d0.

Revert "fix: migration item in setting detail omit buttons (#3298)"

This reverts commit 709204b2bc9d9ed08e2245cbb084482f5908ab3a.

Revert "fix: merge gpu arch and os tensorrt models (#3299)"

This reverts commit aa7dbdc9fa701debeee28d9c7eb4af6258685321.

Revert "chore: update cortex new version (#3300)"

This reverts commit 602097909d38b4874db8b9f19a729c65a0ac9619.

Revert "fix: engine logo on model dropdown (#3291)"

This reverts commit 8eb8611c28f6c4cdf1ab142a6e18c82bcc4c2073.

Revert "fix: icon setting can close and open right panel (#3295)"

This reverts commit be31e9315e2df5c483de3f46bd37740d277cfccd.

Revert "fix: error while importing local model is not shown (#3294)"

This reverts commit 26be941e8426462e1e3a28e5b9bf1f834f462f82.

Revert "fix: add lower case quantization support (#3293)"

This reverts commit 3135ccc27e894a4056f882cd25f0bf7e10e56f49.

Revert "fix: onnx can't be selected in download model modal (#3283)"

This reverts commit 2521e1db518e9e01493e89dcc98c181ccd2b48a2.

Revert "feat: add chunk count (#3290)"

This reverts commit bad481bf05aa38edcf553e1273f5d692a65c9225.

Revert "fix: RAM always show 0% (#3287)"

This reverts commit 2201e6c5f87538b953503937fe6b135fe1aa2d94.

Revert "fix: remote engine should not allow reinit (#3284)"

This reverts commit 98abff0da3467c090618233db12a25bfa4c1db69.

Revert "chore": update minor UI (#3281)"

This reverts commit 105a9aa1a1830648a32ae285f751b4078c8ac2b2.

Revert "chore: update z-index tooltip (#3280)"

This reverts commit 5a81865508c205ed8c54df209092553a0c40054f.

Revert "feat: add nvidia engine (#3279)"

This reverts commit 8372f30f0ee99606b123351e7bb62636c62c8b23.

Revert "fix: migration wrong directory (#3278)"

This reverts commit 7fb1354287677f577070ccb065ed3a5f9e5b9882.

Revert "fix: clearer app loading prompt (#3275)"

This reverts commit 44a6401000334b79b225ab6fd6afb79f9da4bd51.

Revert "fix: allow user to reinit engine from settings page (#3277)"

This reverts commit 57cf3c7b3d5bface785763d06813906ba6eab7c9.

Revert "feat: enable copy over instructions (#3266)"

This reverts commit 2074511067201f0addb9d274cc90d1e782f2bc1d.

Revert "chore: toast message on model import fail with reason (#3276)"

This reverts commit 3bebdfe67e1571c7414065a36d16eb5941115ee0.

Revert "fix: should not let second instance terminate cortex (#3274)"

This reverts commit d074a5a445b73ca195a49814a935300f9e895aaa.

Revert "chore: remnove focus button (#3272)"

This reverts commit 07fa79e71a401becdbc0f474c27b860654a8bd62.

Revert "chore: update hub search result (#3273)"

This reverts commit 10b4a9087af709d147b34f6c3ee63d2d3b75c77a.

Revert "chore: temporary hidden import model (#3270)"

This reverts commit db5d8aba454fd4cc1e07253ca4805d4b1b3e7fb2.

Revert "fix: set cortex data folder path when starting jan (#3252)"

This reverts commit 91c77eda78ecd251d480e58b853fe7b261f6de50.

Revert "fix: remote model added manually does not shown in model drop down (#3261)"

This reverts commit 224ca3f7cc25b2577ab123829907964b78b78aa8.

Revert "feat: add more options for cortex popup (#3236)"

This reverts commit 5e06ed8a122aaed9d68fbd04ce42b65bf8987e58.

Revert "feat: manage cloud models from threads screen (#3223)"

This reverts commit 37a3c4f844419e66cfe3f2a9ff79ba688538241f.

Revert "chore: check the legacy incompatible message type (#3248)"

This reverts commit c10caf8d7f1f9cf68551e41de5d54cd4450cf44a.

Revert "chore: minor copy for grammar (#3235)"

This reverts commit f0f23078f31f58e01ba27787d6926f5c1eb2ff0b.

Revert "fix: add back normalize message function (#3234)"

This reverts commit 83579df3a40ff61eac25975da8295fceaec679dc.

Revert "chore: update conditional starter screen after cortex load (#3227)"

This reverts commit 4d3a97f1dca9e6c3ea746586e8607541f2d1c0b3.

Revert "fix: broken status parse due to empty category (#3233)"

This reverts commit 68714eeaf9212a6fdacd5c6a48d8691db9cc99eb.

Revert "feat: make scroll area type auto for make default visible scrollbar (#3220)"

This reverts commit 13428d60e7d3ea6a24c0df8871ea13e2dec0d5fd.

Revert "fix: update new api from cortex to support 0.5.0 (#3221)"

This reverts commit ec9b5bf682a8676e132a08075b6ae03cf9e23132.

Revert "feat: new starter screen (#3217)"

This reverts commit e8ee694abd33b34112d2c7d09f8c03370c2d22cc.

Revert "bump-cortex-0.5.0-1 (#3218)"

This reverts commit 5369da78f5b83b1c8761cb48820ccf3111728a90.

Revert "Deprecate Docker and K8s (#3219)"

This reverts commit 7611a05c44982d07465bec57658d5bf965f30ad5.

Revert "chore: set container max width for chat message and new hub screen (#3213)"

This reverts commit 007daa71616268b0e741e7a890b319401e49a81e.

Revert "feat: integrating cortex (#3001)"

This reverts commit 101268f6f36df96b62982a9eeb8581ebe103a909.
2024-08-15 10:44:47 +07:00
NamH
101268f6f3
feat: integrating cortex (#3001)
* feat: integrating cortex

* Temporary prevent crash

Signed-off-by: James <namnh0122@gmail.com>

* fix yarn lint

Signed-off-by: James <namnh0122@gmail.com>

* refactor: remove core node module - fs - extensions and so on (#3151)

* add migration script for threads, messages and models

Signed-off-by: James <namnh0122@gmail.com>

* remove freq_penalty and presence_penalty if model not supported

Signed-off-by: James <namnh0122@gmail.com>

* add back models in my models

Signed-off-by: James <namnh0122@gmail.com>

* fix api-url for setup API key popup

Signed-off-by: James <namnh0122@gmail.com>

* fix using model name for dropdown model

Signed-off-by: James <namnh0122@gmail.com>

* fix can't click to hotkey

Signed-off-by: James <namnh0122@gmail.com>

* fix: disable some UIs

Signed-off-by: James <namnh0122@gmail.com>

* fix build

Signed-off-by: James <namnh0122@gmail.com>

* reduce calling HF api

Signed-off-by: James <namnh0122@gmail.com>

* some ui update

Signed-off-by: James <namnh0122@gmail.com>

* feat: modal migration UI  (#3153)

* feat: handle popup migration

* chore: update loader

* chore: integrate script migration

* chore: cleanup import

* chore: moving out spinner loader

* chore: update check thread message success migrate

* chore: add handle script into retry button

* remove warning from joi

Signed-off-by: James <namnh0122@gmail.com>

* chore: fix duplicate children

* fix: path after migrating model

Signed-off-by: James <namnh0122@gmail.com>

* chore: apply mutation for config

* chore: prevent calling too many create assistant api

Signed-off-by: James <namnh0122@gmail.com>

* using cortexso

Signed-off-by: James <namnh0122@gmail.com>

* update download api

Signed-off-by: James <namnh0122@gmail.com>

* fix use on slider item

Signed-off-by: James <namnh0122@gmail.com>

* fix: ui no download model or simple onboarding (#3166)

* fix download huggingface model match with slider item

Signed-off-by: James <namnh0122@gmail.com>

* update owner_logo to logo and author

Signed-off-by: James <namnh0122@gmail.com>

* update new cortexso

Signed-off-by: James <namnh0122@gmail.com>

* Add install python step for macos

* add engine table

Signed-off-by: James <namnh0122@gmail.com>

* fix local icons

Signed-off-by: James <namnh0122@gmail.com>

* feat: add search feature for model hub

Signed-off-by: James <namnh0122@gmail.com>

* fix misalign switch

Signed-off-by: James <namnh0122@gmail.com>

* fix: delete thread not focus on other thread

Signed-off-by: James <namnh0122@gmail.com>

* add get model from hugging face

Signed-off-by: James <namnh0122@gmail.com>

* fix download from hugging face

Signed-off-by: James <namnh0122@gmail.com>

* small update

Signed-off-by: James <namnh0122@gmail.com>

* update

Signed-off-by: James <namnh0122@gmail.com>

* fix system monitor rounded only on the left

Signed-off-by: James <namnh0122@gmail.com>

* chore: update ui new hub screen (#3174)

* chore: update ui new hub screen

* chore: update layout centerpanel thread and hub screen

* chore: update detail model by group

* update cortexso 0.1.13

Signed-off-by: James <namnh0122@gmail.com>

* chore: add file size

Signed-off-by: James <namnh0122@gmail.com>

* chore: put engine to experimental feature

Signed-off-by: James <namnh0122@gmail.com>

* chore: open cortex folder

Signed-off-by: James <namnh0122@gmail.com>

* chore: add back user avatar

Signed-off-by: James <namnh0122@gmail.com>

* chore: minor UI hub (#3182)

* chore: add back right click thread list and update 3 dots are overlapping with the text

* chore: update position dropdown list my models

* chore: make on-device tab showing 6 items instead of 4

* chore: update style description modals detail model

* chore: update isGeneration loader and author name on modal

* feat: integrate cortex single executable

Signed-off-by: James <namnh0122@gmail.com>

* fix build

Signed-off-by: James <namnh0122@gmail.com>

* chore: added blank state

* chore: update ui component blank state

* bump cortex binary version

* fix: logic show modal migration (#3165)

* fix: logic show modal migration

* chore: fixed logic

* chore: read contain format gguf local models

* chore: change return hasLocalModel

* chore: initial skipmigration state

* chore: filter embedding model

* fix: delete top thread not focus on any other thread

* chore: added UI no result component search models group (#3188)

* fix: remote model should show all when user config that engine

Signed-off-by: James <namnh0122@gmail.com>

* chore: set state thread and models migration using getOnInit (#3189)

* chore: set state thread and models migration using getOnInit

* chore: add state as dependencies in hooks

* chore: system monitor panel show engine model (#3192)

* fix: remove config api, replace with engine

Signed-off-by: James <namnh0122@gmail.com>

* update

Signed-off-by: James <namnh0122@gmail.com>

* update reactquery

Signed-off-by: James <namnh0122@gmail.com>

* bump cortex 0.4.35

* feat: add waiting for cortex popup

Signed-off-by: James <namnh0122@gmail.com>

* chore: add loader detail model popup (#3195)

* chore: model start loader (#3197)

* chore: added model loader when user starting chat without model active

* chore: update copies loader

* fix: select min file size if recommended quant does not exist

Signed-off-by: James <namnh0122@gmail.com>

* chore: temporary hide gpu config

* fix: tensorrt not shown

Signed-off-by: James <namnh0122@gmail.com>

* fix lint

Signed-off-by: James <namnh0122@gmail.com>

* fix tests

Signed-off-by: James <namnh0122@gmail.com>

* fix e2e tests (wip)

Signed-off-by: James <namnh0122@gmail.com>

* update

Signed-off-by: James <namnh0122@gmail.com>

* fix: adding element and correct test to adapt new UI

* fix: temp skip unstable part

* fix: only show models which can be supported

Signed-off-by: James <namnh0122@gmail.com>

* Update version.txt

* update send message

Signed-off-by: James <namnh0122@gmail.com>

* fix: not allow user send message when is generating

Signed-off-by: James <namnh0122@gmail.com>

* chore: temp skip Playwright test due to env issue

* chore: temp skip Playwright test due to env issue

* update

Signed-off-by: James <namnh0122@gmail.com>

* chore: minor-ui-feedback (#3202)

---------

Signed-off-by: James <namnh0122@gmail.com>
Co-authored-by: Louis <louis@jan.ai>
Co-authored-by: Faisal Amir <urmauur@gmail.com>
Co-authored-by: Hien To <tominhhien97@gmail.com>
Co-authored-by: Van Pham <64197333+Van-QA@users.noreply.github.com>
Co-authored-by: Van-QA <van@jan.ai>
2024-07-26 17:52:43 +07:00
NamH
e0d6049d66
chore: extension should register its own models (#2601)
* chore: extension should register its own models

Signed-off-by: James <james@jan.ai>

---------

Signed-off-by: James <james@jan.ai>
Co-authored-by: James <james@jan.ai>
2024-04-05 14:18:58 +07:00
Louis
8e8dfd4b37
refactor: introduce inference tools (#2493) 2024-03-25 23:26:05 +07:00
Louis
14a67463dc
chore: refactor core folder structure - module based 2024-03-25 16:20:06 +07:00