* fix: Prevent spamming the /health endpoint, improve startup, and resolve compiler warnings
This commit introduces a delay and improved logic to the /health endpoint checks in the llamacpp extension, preventing excessive requests during model loading.
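Roughly, the fix amounts to polling with a fixed delay instead of a tight loop. A minimal sketch of the idea, assuming a tokio/reqwest setup; the port handling, interval, and deadline values are illustrative, not the extension's actual constants:

```rust
use std::time::{Duration, Instant};

/// Poll the llama.cpp server's /health endpoint with a fixed delay between
/// attempts instead of a tight loop, so the server is not flooded with
/// requests while the model is still loading.
async fn wait_until_healthy(port: u16, deadline: Duration) -> Result<(), String> {
    let url = format!("http://127.0.0.1:{port}/health");
    let client = reqwest::Client::new();
    let start = Instant::now();

    while start.elapsed() < deadline {
        // Any connection error just means the server is not up yet.
        if let Ok(resp) = client.get(&url).send().await {
            if resp.status().is_success() {
                return Ok(());
            }
        }
        // The delay is the key part of the fix: back off between probes.
        tokio::time::sleep(Duration::from_millis(500)).await;
    }
    Err("timed out waiting for /health".to_string())
}
```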
Additionally, it addresses several Rust compiler warnings by:
- Commenting out an unused `handle_app_quit` function in `src/core/mcp.rs`.
- Explicitly declaring `target_port`, `session_api_key`, and `buffered_body` as mutable in `src/core/server.rs`.
- Commenting out unused `tokio` imports in `src/core/setup.rs`.
- Enhancing the `load_llama_model` function in `src/core/utils/extensions/inference_llamacpp_extension/server.rs` to better monitor stdout/stderr for readiness and errors, and to handle timeouts (see the sketch after this list).
- Commenting out an unused `std::path::Prefix` import and adjusting `normalize_path` in `src/core/utils/mod.rs`.
- Updating the application version to 0.6.904 in `tauri.conf.json`.
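The `load_llama_model` monitoring change works roughly as sketched below, assuming a tokio-based process setup; the binary name, the readiness and error markers, and the timeout value are illustrative rather than the exact ones used in the extension:

```rust
use std::process::Stdio;
use std::time::Duration;
use tokio::io::{AsyncBufReadExt, BufReader};
use tokio::process::Command;

async fn load_llama_model(model_path: &str) -> Result<tokio::process::Child, String> {
    let mut child = Command::new("llama-server")
        .arg("--model")
        .arg(model_path)
        .stdout(Stdio::piped())
        .stderr(Stdio::piped())
        .spawn()
        .map_err(|e| e.to_string())?;

    // The real code drains both streams; only stderr is watched here for brevity.
    let stderr = child.stderr.take().expect("stderr was piped");
    let mut lines = BufReader::new(stderr).lines();

    // Watch the server's own log output for a readiness marker, failing fast
    // on error lines instead of blindly polling the HTTP endpoint.
    let wait_ready = async {
        while let Ok(Some(line)) = lines.next_line().await {
            if line.contains("server is listening") {
                return Ok(());
            }
            if line.contains("error") {
                return Err(line);
            }
        }
        Err("process exited before becoming ready".to_string())
    };

    match tokio::time::timeout(Duration::from_secs(240), wait_ready).await {
        Ok(Ok(())) => Ok(child),
        Ok(Err(e)) => Err(e),
        Err(_) => Err("timed out waiting for model load".to_string()),
    }
}
```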
* fix grammar!
Co-authored-by: ellipsis-dev[bot] <65095814+ellipsis-dev[bot]@users.noreply.github.com>
* fix grammar 2
Co-authored-by: ellipsis-dev[bot] <65095814+ellipsis-dev[bot]@users.noreply.github.com>
* reimport prefix but only on Windows
* remove instead of commenting
* remove redundant check
* sync app version in cargo.toml with tauri.conf
---------
Co-authored-by: ellipsis-dev[bot] <65095814+ellipsis-dev[bot]@users.noreply.github.com>
* feat: Improve llamacpp server error reporting and model load stability
This commit introduces significant improvements to how the llamacpp server
process is managed and how its errors are reported.
Key changes:
- **Enhanced Error Reporting:** The llamacpp server's stdout and stderr
are now piped and captured. If the llamacpp process exits prematurely
or fails to start, its stderr output is captured and returned as a
`LlamacppError`. This provides much more specific and actionable
diagnostic information for users and developers.
- **Increased Model Load Timeout:** The `waitForModelLoad` timeout has
been increased from 30 seconds to 240 seconds (4 minutes). This
addresses issues where larger models or slower systems would
prematurely time out during the model loading phase.
- **API Secret Update:** The internal API secret for the llamacpp
extension has been updated from 'Jan' to 'JustAskNow'.
- **Version Bump:** The application version in `tauri.conf.json` has
been incremented to `0.6.901`.
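The stderr-capture idea, as a minimal sketch over a `std::process::Child`, with a simplified stand-in for the real `LlamacppError` type:

```rust
use std::io::Read;
use std::process::Child;

/// Simplified stand-in; the extension's actual `LlamacppError` carries more context.
#[derive(Debug)]
struct LlamacppError(String);

/// If the server process has already exited, drain its captured stderr and
/// surface it in the error so users get actionable diagnostics.
fn check_for_premature_exit(child: &mut Child) -> Result<(), LlamacppError> {
    // `try_wait` returns Some(status) only if the process has already exited.
    if let Some(status) = child.try_wait().map_err(|e| LlamacppError(e.to_string()))? {
        let mut stderr_out = String::new();
        if let Some(mut stderr) = child.stderr.take() {
            let _ = stderr.read_to_string(&mut stderr_out);
        }
        return Err(LlamacppError(format!(
            "llamacpp exited early with {status}: {stderr_out}"
        )));
    }
    Ok(())
}
```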
* fix: should not spam load requests
* test: add test to cover the fix
* refactor: clean up
* test: add more test case
---------
Co-authored-by: Louis <louis@jan.ai>
- pulls the fix for #5463 out of the GitHub release workflow and into the make/yarn build process
- implements a wrapper script that pins linuxdeploy and injects a new location for XDG_CACHE_HOME into the build pipeline, allowing manipulation of .cache/tauri without tainting the host's .cache
- adds ./.cache (project_root/.cache) to the make clean and mise clean tasks
- removes .devcontainer/buildAppImage.sh, obsolete now that the extra build steps have been removed from the GitHub workflow and incorporated into the normal build process
- removes appimagetool from .devcontainer/postCreateCommand.sh, as it was only used by .devcontainer/buildAppImage.sh
- pulled AppImage packaging steps out of the release workflow into the new src-tauri/build-utils/buildAppImage.sh
- cleaned up yarn scripts:
  - moved multi-platform yarn scripts out of yarn build:tauri:<platform> into a generic yarn build:tauri
  - split yarn build:tauri:linux:win32 into separate yarn scripts so it's clearer what is specific to which platform
  - added src-tauri/build-utils/buildAppImage.sh to the new yarn build:tauri:linux script
This is also a good entry point to add flatpak builds in the future.
Part of #5641
Allows for better per-platform default config. Currently the defaults serve Windows/macOS fine, while they have to be tweaked in order to build for Linux.
make build-tauri now runs successfully where it previously errored out. AppImages made with make alone are, however, incomplete, as there are still post-processing steps in the GitHub release workflow that bundle additional resources.
- split platform-specific config out of tauri.conf.json into auxiliary platform-specific config files, natively supported by tauri
- pull improved defaults out of template-tauri-build-linux-x64.yml into the new tauri.linux.conf.json
- fix tauri-build-linux-x64.yml to utilize the new tauri.linux.conf.json
Things to ponder:
- Now, the v1/models endpoint of the API server will return an empty
list if no models are loaded
- Streaming v1/chat/completions routing works, as does v1/models; both need further testing
- Changed `pid` field in `SessionInfo` from `string` to `number`/`i32` in TypeScript and Rust.
- Updated `activeSessions` map key from `string` to `number` to align with the new PID type.
- Adjusted process monitoring logic to correctly handle numeric PIDs.
- Removed fallback UUID-based PID generation in favor of numeric fallback (-1).
- Added PID cleanup logic in `is_process_running` when the process is no longer alive.
- Bumped application version from 0.5.16 to 0.6.900 in `tauri.conf.json`.
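An illustrative Rust rendering of the new PID handling; the struct shape and the liveness probe are stand-ins (the real code queries the OS and the real `SessionInfo` has more fields):

```rust
use std::collections::HashMap;

/// Illustrative shape of the session record.
struct SessionInfo {
    pid: i32, // was a string; -1 is now the numeric fallback instead of a UUID
    model_id: String,
}

/// Placeholder liveness probe; the real code asks the OS whether the PID exists.
fn probe_os_for_pid(_pid: i32) -> bool {
    false
}

/// Sessions are now keyed by the numeric PID, and dead PIDs are cleaned up here.
fn is_process_running(pid: i32, sessions: &mut HashMap<i32, SessionInfo>) -> bool {
    let alive = pid > 0 && probe_os_for_pid(pid);
    if !alive {
        sessions.remove(&pid); // drop the stale entry so the map does not leak
    }
    alive
}
```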
The current implementation of Ctrl-C handling was not properly tested on Windows x86_64 architectures. To address this, the code has been modified to use `i32` instead of `BOOL` when handling the result of the `GenerateConsoleCtrlEvent` function, ensuring that the return value is checked correctly across platforms.
This change updates the Windows dependencies in Cargo.toml to include additional features from the `windows-sys` crate. The features covering `CreateProcess` flags such as `CREATE_NEW_PROCESS_GROUP` are now enabled to allow for proper process management.
The code now properly sends Ctrl+C to the llama process on Windows and includes error handling for when the Ctrl+C command fails. Additionally, it now uses the Windows API to kill the process when it times out, and it properly waits for the process to exit.
Updated the Rust code to apply Windows-specific logic only on x86_64 targets using `#[cfg(all(windows, target_arch = "x86_64"))]`. Modified the dev:tauri script in package.json to remove CLEAN=true, and added CLEAN=true to the beforeDevCommand in tauri.conf.json for consistency. Minor formatting changes in tauri.conf.json.
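A minimal sketch of the Windows path described above, assuming the `windows-sys` crate with the `Win32_System_Console` and `Win32_System_Threading` features enabled; the group id passed to `send_ctrl_c` is the child's PID when the child was spawned into its own group:

```rust
#[cfg(all(windows, target_arch = "x86_64"))]
fn spawn_in_new_group(cmd: &mut std::process::Command) -> std::io::Result<std::process::Child> {
    use std::os::windows::process::CommandExt;
    use windows_sys::Win32::System::Threading::CREATE_NEW_PROCESS_GROUP;
    // A new process group lets us target this child with Ctrl+C without
    // signalling our own console's group.
    cmd.creation_flags(CREATE_NEW_PROCESS_GROUP).spawn()
}

#[cfg(all(windows, target_arch = "x86_64"))]
fn send_ctrl_c(process_group_id: u32) -> Result<(), String> {
    use windows_sys::Win32::System::Console::{GenerateConsoleCtrlEvent, CTRL_C_EVENT};
    // BOOL from windows-sys is a plain i32: nonzero means success, so the
    // result can be checked the same way on every Windows target.
    let ok: i32 = unsafe { GenerateConsoleCtrlEvent(CTRL_C_EVENT, process_group_id) };
    if ok == 0 {
        // The real code falls back to a hard kill via the Windows API here.
        return Err("GenerateConsoleCtrlEvent failed".to_string());
    }
    Ok(())
}
```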
Rename variable, struct, and enum names from camelCase to snake_case throughout the llamacpp extension codebase to align with Rust naming conventions. This change improves readability and consistency without altering functionality.
Change the llama_server_process state from an Option<Child> to a HashMap<String, Child> to support managing multiple server instances by PID. This allows precise process tracking and termination, replacing the previous single-process limitation.
Previously, only one server process could be tracked at a time. Now, each process is stored with its PID as the key, enabling:
- Accurate session matching during unloading
- Proper termination of specific processes
- Better error handling for mismatched PIDs
The load_llama_model function now inserts processes into the map, and unload_llama_model removes them by PID.
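A sketch of the state change, assuming a `Mutex`-guarded map inside the extension's state (names besides `llama_server_process` are illustrative):

```rust
use std::collections::HashMap;
use std::process::Child;
use std::sync::Mutex;

/// Instead of Option<Child> (a single tracked server), keep every spawned
/// server keyed by its PID.
struct LlamaState {
    llama_server_process: Mutex<HashMap<String, Child>>,
}

impl LlamaState {
    /// Called from load_llama_model: insert the new process under its PID.
    fn track(&self, child: Child) -> String {
        let pid = child.id().to_string();
        self.llama_server_process.lock().unwrap().insert(pid.clone(), child);
        pid
    }

    /// Called from unload_llama_model: remove and terminate exactly one process.
    fn unload(&self, pid: &str) -> Result<(), String> {
        let mut map = self.llama_server_process.lock().unwrap();
        match map.remove(pid) {
            Some(mut child) => child.kill().map_err(|e| e.to_string()),
            // A mismatched or stale PID now yields a clear error instead of
            // silently killing the wrong (single) process.
            None => Err(format!("no server process with pid {pid}")),
        }
    }
}
```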