278 Commits

Minh141120
0fd346181c chore: enable active installation for Windows installer 2025-07-10 14:57:55 +07:00
Louis
10bb8527bd
feat: bump llama.cpp b5857 2025-07-10 11:51:27 +07:00
D. Rect.
a668204cdc refactor: pin linuxdeploy in make/yarn build process instead of github workflow
- pulls fix for #5463 out of the github release workflow and into
  the make/yarn build process
- implements a wrapper script that pins linuxdeploy and injects
  a new location for XDG_CACHE_HOME into the build pipeline,
  allowing manipulation of .cache/tauri without tainting the host's
  .cache
- adds ./.cache (project_root/.cache) to the make clean and mise clean
  tasks
- remove .devcontainer/buildAppImage.sh, obsolete now that the extra
  build steps have been removed from the github workflow and
  incorporated into the normal build process
- remove appimagetool from .devcontainer/postCreateCommand.sh,
  as it was only used by .devcontainer/buildAppImage.sh
2025-07-10 04:50:12 +00:00
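The wrapper itself is a script in the repo; as a rough Rust sketch of the env-injection idea it describes (the binary path and arguments below are hypothetical, not the repo's actual values):

```rust
use std::process::Command;

fn main() -> std::io::Result<()> {
    // Run a pinned linuxdeploy with XDG_CACHE_HOME redirected into the
    // project tree, so .cache/tauri is manipulated without tainting the
    // host's ~/.cache. Paths here are illustrative only.
    let status = Command::new("./.cache/linuxdeploy-x86_64.AppImage")
        .env("XDG_CACHE_HOME", "./.cache") // project_root/.cache, per the commit
        .args(["--appdir", "AppDir"])
        .status()?;
    std::process::exit(status.code().unwrap_or(1));
}
```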
D. Rect.
7d04d66a0b refactor: pull appimage packaging steps out of github linux release workflow
- pulled appimage packaging steps out of release workflow into new
  src-tauri/build-utils/buildAppImage.sh
- cleaned up yarn scripts:
  - moved multi-platform yarn scripts out of yarn build:tauri:<platform>
    into a generic yarn build:tauri
  - split yarn build:tauri:linux:win32 into separate yarn scripts so it's
    clearer what is specific to which platform
- added src-tauri/build-utils/buildAppImage.sh to new yarn build:tauri:linux
  yarn script

This is also a good entry point to add flatpak builds in the future.

Part of #5641
2025-07-10 04:50:12 +00:00
D. Rect.
4134917a45 refactor: split platform specific config out of tauri.conf.json
Allows for better per-platform default config. Currently the
default serves Windows/macOS fine, while it has to be tweaked
in order to build for Linux.

make build-tauri now runs successfully where it errored out before.
AppImages made with make alone are still incomplete, however, as
there are post-processing steps in the github release workflow that
bundle additional resources.

- split platform specific config out of tauri.conf.json into auxiliary
  platform specific config files, natively supported by tauri

- pull improved defaults out of template-tauri-build-linux-x64.yml
  into new tauri.linux.conf.json

- fix tauri-build-linux-x64.yml to utilize the new tauri.linux.conf.json
2025-07-10 04:50:12 +00:00
Louis
a8ed759a06 fix: model download - windows path issue 2025-07-10 09:42:36 +07:00
Louis
46c95ebb97
feat: bump version of llama.cpp - b5833 2025-07-10 08:29:56 +07:00
Louis
2f02a228cc
fix: download on windows 2025-07-08 15:41:17 +07:00
Louis
b26ae7d0a4
ci: remove cortex build steps 2025-07-07 22:39:04 +07:00
Akarshan
d5ffc6a476
feat: Migrate Jan's API server to llamacpp-extension
Things to ponder:
- Now, the v1/models endpoint of the API server will return an empty
  list if no models are loaded
- Streaming v1/chat/completions routing works, as does v1/models; needs
  further testing
2025-07-07 20:52:00 +05:30
Louis
e3faf09ab2
chore: try fixing CI 2025-07-07 21:27:37 +07:00
Louis
6b496ae413
fix: build issues 2025-07-07 18:27:45 +07:00
Akarshan
d4a3d6a0d6
Refactor session PID types from string to number across backend and extension
- Changed `pid` field in `SessionInfo` from `string` to `number`/`i32` in TypeScript and Rust.
- Updated `activeSessions` map key from `string` to `number` to align with new PID type.
- Adjusted process monitoring logic to correctly handle numeric PIDs.
- Removed fallback UUID-based PID generation in favor of numeric fallback (-1).
- Added PID cleanup logic in `is_process_running` when the process is no longer alive.
- Bumped application version from 0.5.16 to 0.6.900 in `tauri.conf.json`.
2025-07-04 21:40:54 +05:30
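A hedged sketch of the new shape (names follow the commit message; the actual implementation may differ):

```rust
use std::collections::HashMap;
use std::process::Child;

/// PIDs are numeric after this change; -1 is the fallback when no
/// real PID exists (replacing the old UUID-based fallback).
struct SessionInfo {
    pid: i32,
}

/// Returns whether the tracked child behind `pid` is still alive,
/// pruning dead entries (mirrors the cleanup added to is_process_running).
fn is_process_running(active_sessions: &mut HashMap<i32, Child>, pid: i32) -> bool {
    let alive = match active_sessions.get_mut(&pid) {
        // try_wait() -> Ok(None) means the process has not exited yet
        Some(child) => matches!(child.try_wait(), Ok(None)),
        None => false,
    };
    if !alive {
        // exited or never tracked: drop any stale entry so the map stays accurate
        active_sessions.remove(&pid);
    }
    alive
}
```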
Akarshan
dbdc031583
chore: store session_info in backend as well for API server(WIP) 2025-07-04 20:31:30 +05:30
Akarshan
03f0c5aad6
fix: remove unsupported BOOL for windows_sys in cleanup to fix windows build (attempt 3) 2025-07-03 18:35:13 +05:30
Akarshan
11db1ecaed
fix: server-side Ctrl-C handling for Windows x86_64 targets (attempt 2)
The current implementation of Ctrl-C handling was not properly tested on Windows x86_64 architectures. To address this, the code has been modified to use `i32` instead of `BOOL` to handle the result of the `GenerateConsoleCtrlEvent` function, ensuring that the return value is correctly checked across different platforms.
2025-07-03 14:13:56 +05:30
Akarshan
6ab7d37a08
fix: Update Cargo.toml dependencies on Windows & fix Ctrl+C handling on Windows
This change updates the Cargo.toml dependencies on Windows to include additional features from the `windows-sys` crate. Process-creation flags such as `CREATE_NEW_PROCESS_GROUP` are now enabled to allow for proper process management.
The code now properly sends Ctrl+C to the llama process on Windows, and includes error handling for when the Ctrl+C command fails. Additionally, it now uses the Windows API to kill the process when it times out, and properly waits for the process to exit.
2025-07-03 13:51:59 +05:30
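A minimal sketch of the mechanism these Windows commits describe, assuming the `windows-sys` features `Win32_System_Console` and `Win32_System_Threading` are enabled in Cargo.toml; this is not the repo's exact code:

```rust
// Spawn the server in its own process group so CTRL_C_EVENT can be
// delivered to it (and only it) via its process-group id.
#[cfg(all(windows, target_arch = "x86_64"))]
fn spawn_llama_server(exe: &str) -> std::io::Result<std::process::Child> {
    use std::os::windows::process::CommandExt;
    use windows_sys::Win32::System::Threading::CREATE_NEW_PROCESS_GROUP;
    std::process::Command::new(exe)
        .creation_flags(CREATE_NEW_PROCESS_GROUP)
        .spawn()
}

#[cfg(all(windows, target_arch = "x86_64"))]
fn interrupt_llama(pid: u32) -> Result<(), String> {
    use windows_sys::Win32::System::Console::{GenerateConsoleCtrlEvent, CTRL_C_EVENT};
    // In windows-sys, BOOL is a plain i32 — which is what the
    // "attempt 2" fix above checks against.
    let ok: i32 = unsafe { GenerateConsoleCtrlEvent(CTRL_C_EVENT, pid) };
    if ok == 0 {
        return Err("GenerateConsoleCtrlEvent failed".to_string());
    }
    Ok(())
}
```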
Louis
e123d22b8d
fix: deprecate sidecar run 2025-07-02 12:48:50 +07:00
Akarshan
663c720f2a
Add windows-sys to cargo.toml 2025-07-02 12:29:03 +07:00
Akarshan
449bf17692
Add process aliveness check 2025-07-02 12:29:03 +07:00
Louis
9b730058b4
feat: use hardware information api 2025-07-02 12:29:02 +07:00
Louis
d264220245
fix: restrict Windows-specific code to x86_64 and update scripts
Updated Rust code to apply Windows-specific logic only on x86_64 targets using #[cfg(all(windows, target_arch = "x86_64"))]. Modified the dev:tauri script in package.json to remove CLEAN=true and added CLEAN=true to the beforeDevCommand in tauri.conf.json for consistency. Minor formatting changes in tauri.conf.json.
2025-07-02 12:29:02 +07:00
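The gating pattern, sketched with a hypothetical function name: the Windows-specific path only compiles for x86_64 Windows, and every other target gets a no-op stub.

```rust
#[cfg(all(windows, target_arch = "x86_64"))]
fn platform_cleanup() {
    // Windows x86_64-specific process handling lives here.
}

#[cfg(not(all(windows, target_arch = "x86_64")))]
fn platform_cleanup() {
    // No-op elsewhere (e.g. Linux, macOS, Windows on ARM).
}
```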
Akarshan
ad06b2a903
Move llama-server cleanup code to a separate file 2025-07-02 12:27:42 +07:00
Akarshan
7de694c0cd
add missing import during rebase 2025-07-02 12:27:42 +07:00
Akarshan
62ba503b86
chore: cleanup llama-server processes upon app exit 2025-07-02 12:27:42 +07:00
Akarshan
01d49a4b28
fix: Update server process handling for Windows and Unix systems 2025-07-02 12:27:42 +07:00
Akarshan
2eeabf8ae6
fix: ensure server process is properly terminated and reaped 2025-07-02 12:27:35 +07:00
Akarshan
4ffc504150
style: Rename camelCase to snake_case in llamacpp extension code
Rename variable, struct, and enum names from camelCase to snake_case throughout the llamacpp extension codebase to align with Rust naming conventions. This change improves readability and consistency without altering functionality.
2025-07-02 12:27:34 +07:00
Akarshan
6c769c5db9
feat: refactor llama server process storage to use HashMap
Change the llama_server_process state from an Option<Child> to a HashMap<String, Child> to support managing multiple server instances by PID. This allows precise process tracking and termination, replacing the previous single-process limitation.

Previously, only one server process could be tracked at a time. Now, each process is stored with its PID as the key, enabling:
- Accurate session matching during unloading
- Proper termination of specific processes
- Better error handling for mismatched PIDs

The load_llama_model function now inserts processes into the map, and unload_llama_model removes them by PID.
2025-07-02 12:27:34 +07:00
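A hedged sketch of the map-based state (function and field names are illustrative; keys are still strings at this point, which the later PID-type commit above changes to numbers):

```rust
use std::collections::HashMap;
use std::process::Child;
use std::sync::Mutex;

// The single Option<Child> becomes a PID-keyed map of server processes.
struct ServerState {
    llama_server_process: Mutex<HashMap<String, Child>>,
}

impl ServerState {
    // load_llama_model inserts the freshly spawned server here.
    fn track(&self, pid: String, child: Child) {
        self.llama_server_process.lock().unwrap().insert(pid, child);
    }

    // unload_llama_model removes exactly the requested process by PID...
    fn unload(&self, pid: &str) -> Result<(), String> {
        let mut map = self.llama_server_process.lock().unwrap();
        let mut child = map
            .remove(pid)
            .ok_or_else(|| "no session with this PID".to_string())?;
        // ...then terminates it and waits so the OS can reap it.
        child.kill().map_err(|e| e.to_string())?;
        child.wait().map_err(|e| e.to_string())?;
        Ok(())
    }
}
```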
Thien Tran
8bf4a5eb7d
remove migration 2025-07-02 12:27:34 +07:00
Thien Tran
ae349159ce
remove yarn install:cortex 2025-07-02 12:27:33 +07:00
Thien Tran
95944fa081
add Jan's library path to PATH 2025-07-02 12:27:17 +07:00
Thien Tran
1eb49350e9
add is_library_available command 2025-07-02 12:27:17 +07:00
Akarshan Biswas
4dfdcd68d5
refactor: rename session identifiers to pid and modelId
The changes standardize identifier names across the codebase for clarity:
- Replaced `sessionId` with `pid` to reflect process ID usage
- Changed `modelName` to `modelId` for consistency with identifier naming
- Renamed `api_key` to `apiKey` for camelCase consistency
- Updated corresponding methods to use these new identifiers
- Improved type safety and readability by aligning variable names with their semantic meaning
2025-07-02 12:27:16 +07:00
Akarshan Biswas
f9d3935269
feat: allow specifying port via command line argument
This change allows the port to be specified via command line arguments, providing flexibility. The port is parsed from the arguments, defaulting to 8080 if not provided.
2025-07-02 12:27:16 +07:00
Akarshan Biswas
5d61062b0e
feat: enhance argument parsing and add API key generation
The changes improve the robustness of command-line argument parsing in the Llama model server by replacing direct index access with safe iteration methods. A new generate_api_key function was added to handle API key generation securely. The sessionId parameter was standardized to match the renamed property in the client code.
2025-07-02 12:27:15 +07:00
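A sketch combining the two parsing commits above: walk the args with an iterator instead of indexing, so a trailing flag with no value cannot panic, and fall back to 8080 as the port commit specifies (the `--port` flag name is an assumption):

```rust
use std::env;

fn parse_port() -> u16 {
    let mut args = env::args().skip(1);
    while let Some(arg) = args.next() {
        if arg == "--port" {
            // Safe iteration: a missing or malformed value just falls
            // through to the default instead of indexing out of bounds.
            if let Some(value) = args.next() {
                if let Ok(port) = value.parse::<u16>() {
                    return port;
                }
            }
        }
    }
    8080 // default when no valid --port is provided
}
```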
Thien Tran
1ae7c0b59a
update version/backend format. fix bugs around load() 2025-07-02 12:27:15 +07:00
Akarshan Biswas
fd9e034461
feat: update AIEngine load method and backend path handling
- Changed load method to accept modelId instead of loadOptions for better clarity and simplicity
- Renamed engineBasePath parameter to backendPath for consistency with the backend's directory structure
- Added getRandomPort method to ensure unique ports for each session to prevent conflicts
- Refactored configuration and model loading logic to improve maintainability and reduce redundancy
2025-07-02 12:27:15 +07:00
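getRandomPort itself lives in the TypeScript extension; the usual trick for an OS-assigned free port, sketched here in Rust for consistency with the other examples (note a small race remains between releasing the port and the llama server binding it):

```rust
use std::net::TcpListener;

fn get_random_port() -> std::io::Result<u16> {
    // Binding port 0 asks the OS for any free port.
    let listener = TcpListener::bind("127.0.0.1:0")?;
    let port = listener.local_addr()?.port();
    drop(listener); // release it so the server session can bind it
    Ok(port)
}
```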
Thien Tran
40cd7e962a
feat: download backend for llama.cpp extension (#5123)
* wip

* update

* add download logic

* add decompress. support delete file

* download backend upon selecting setting

* add some logging and notes

* add note on race condition

* remove then catch

* default to none backend. only download if it's not installed

* merge version and backend. fetch version from GH

* restrict scope of output_dir

* add note on unpack
2025-07-02 12:27:13 +07:00
Akarshan Biswas
da23673a44
feat: Add API key generation for Llama.cpp
This commit introduces API key generation for the Llama.cpp extension.  The API key is now generated on the server side using HMAC-SHA256 and a secret key to ensure security and uniqueness.  The frontend now passes the model ID and API secret to the server to generate the key. This addresses the requirement for secure model access and authorization.
2025-07-02 12:27:12 +07:00
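A sketch of the described scheme using the `hmac` and `sha2` crates (the commit does not name the library actually used): the key is the hex-encoded HMAC-SHA256 of the model ID under the shared secret, so it is deterministic per model/secret pair.

```rust
use hmac::{Hmac, Mac};
use sha2::Sha256;

fn generate_api_key(model_id: &str, api_secret: &[u8]) -> String {
    let mut mac = Hmac::<Sha256>::new_from_slice(api_secret)
        .expect("HMAC accepts keys of any length");
    mac.update(model_id.as_bytes());
    // Hex-encode the 32-byte tag to produce the key string.
    mac.finalize()
        .into_bytes()
        .iter()
        .map(|b| format!("{:02x}", b))
        .collect()
}
```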
Thien Tran
39bb3f34d6
patch failing calls to cortex 2025-07-02 12:27:12 +07:00
Akarshan Biswas
31971e7821
(WIP) randomly generate api-key hash each session 2025-07-02 12:27:12 +07:00
Thien Tran
5803fcdb99
add read_yaml. use buffered reader/writer 2025-07-02 12:27:11 +07:00
Thien Tran
d01cbe44ae
use PathBuf to check exists() 2025-07-02 12:27:11 +07:00
Akarshan Biswas
c5a0ee7f6e
refactor unload and implement a destructor to clean up sessions 2025-07-02 12:27:10 +07:00
Thien Tran
ded9ae733a
feat: Model import (download + local import) for llama.cpp extension (#5087)
* add pull and abortPull

* add model import (download only)

* write model.yaml. support local model import

* remove cortex-related command

* add TODO

* remove cortex-related command
2025-07-02 12:27:09 +07:00
Akarshan Biswas
a7a2dcc8d8
refactor load/unload again; move types to core and refactor AIEngine abstract class 2025-07-02 12:27:09 +07:00
Akarshan Biswas
bbbf4779df
refactor load/unload 2025-07-02 12:27:08 +07:00
Akarshan Biswas
b4670b5526
remove cortex engine dirs 2025-07-02 12:27:08 +07:00
Akarshan Biswas
47881db696
remove cortex from tauri.conf.json 2025-07-02 12:27:08 +07:00