The loop now extracts session info to retrieve the model ID, ensuring correct unloading of sessions by their associated model identifiers rather than session IDs. This aligns the cleanup process with the actual model resources being managed.
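The cleanup described above could look roughly like the following sketch. The types and helper name are hypothetical; only the idea (deriving model IDs from session info so unloading targets model resources rather than session handles) comes from the commit.

```typescript
// Hypothetical session shape; field names are assumptions.
interface SessionInfo {
  pid: number;
  modelId: string;
}

// Collect the unique model IDs backing the active sessions so the
// unload loop can release model resources, not session handles.
function collectModelIdsToUnload(sessions: SessionInfo[]): string[] {
  const modelIds: string[] = [];
  for (const session of sessions) {
    if (!modelIds.includes(session.modelId)) {
      modelIds.push(session.modelId);
    }
  }
  return modelIds;
}
```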
The generateApiKey method now incorporates the model's port to create a unique,
port-specific API key, enhancing security by ensuring keys are tied to both
model ID and port. This change supports better isolation between models
running on different ports. Code formatting improvements were also made
for consistency and readability.
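A minimal sketch of the port-specific key derivation described above, assuming HMAC-SHA256 over a `modelId:port` message; the exact message format and parameter names are assumptions.

```typescript
import { createHmac } from "node:crypto";

// Derive an API key tied to both the model ID and the port, so keys
// for the same model on different ports do not collide.
// The "modelId:port" message format is an assumption.
function generateApiKey(modelId: string, port: number, apiSecret: string): string {
  return createHmac("sha256", apiSecret)
    .update(`${modelId}:${port}`)
    .digest("hex");
}
```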
The changes standardize identifier names across the codebase for clarity:
- Replaced `sessionId` with `pid` to reflect process ID usage
- Changed `modelName` to `modelId` for consistency with identifier naming
- Renamed `api_key` to `apiKey` for camelCase consistency
- Updated corresponding methods to use these new identifiers
- Improved type safety and readability by aligning variable names with their semantic meaning
This change allows the port to be specified via command-line arguments. The port is parsed from the arguments, defaulting to 8080 if none is provided.
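The parsing described above could be sketched as follows; the `--port` flag name and helper are assumptions, while the 8080 default comes from the commit.

```typescript
// Parse --port from the argument list with bounds checking,
// falling back to 8080 when the flag is absent or not a number.
function parsePort(args: string[]): number {
  const idx = args.indexOf("--port");
  if (idx !== -1 && idx + 1 < args.length) {
    const parsed = Number.parseInt(args[idx + 1], 10);
    if (!Number.isNaN(parsed)) return parsed;
  }
  return 8080; // default when no valid port is provided
}
```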
The changes improve the robustness of command-line argument parsing in the Llama model server by replacing direct index access with safe iteration methods. A new generate_api_key function was added to handle API key generation securely. The sessionId parameter was standardized to match the renamed property in the client code.
- Changed load method to accept modelId instead of loadOptions for better clarity and simplicity
- Renamed engineBasePath parameter to backendPath for consistency with the backend's directory structure
- Added getRandomPort method to ensure unique ports for each session to prevent conflicts
- Refactored configuration and model loading logic to improve maintainability and reduce redundancy
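The `getRandomPort` addition above could be sketched like this; the implementation is an assumption (asking the OS for an ephemeral port is one common way to avoid conflicts), not necessarily how the extension does it.

```typescript
import { createServer } from "node:net";

// Obtain a free port for a new session by listening on port 0,
// which makes the OS assign an unused ephemeral port.
function getRandomPort(): Promise<number> {
  return new Promise((resolve, reject) => {
    const server = createServer();
    server.listen(0, () => {
      const address = server.address();
      if (address && typeof address === "object") {
        const port = address.port;
        // Release the port immediately; the session server binds it next.
        server.close(() => resolve(port));
      } else {
        server.close(() => reject(new Error("could not determine port")));
      }
    });
  });
}
```

Note that a port obtained this way can, in principle, be taken by another process before the session binds it, so callers may still want a retry on bind failure.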
The `ImportOptions` interface was updated to include `modelPath` and `mmprojPath`. These options are required for importing models and multi-modal projects.
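The updated interface might look like the following sketch; field names come from the commit, but the exact shape and the example paths are assumptions.

```typescript
// Sketch of the updated ImportOptions with the two new fields.
interface ImportOptions {
  modelPath: string;  // path to the model file to import
  mmprojPath: string; // path to the multi-modal projector file
}

// Hypothetical usage with made-up paths.
const options: ImportOptions = {
  modelPath: "/models/llava.gguf",
  mmprojPath: "/models/llava-mmproj.gguf",
};
```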
The `loadOptions` interface in `AIEngine.ts` now includes an optional `mmprojPath` property. This allows users to provide a path to their MMProject file when loading a model, which is required for certain model types. The `llamacpp-extension/src/index.ts` has been updated to pass this option to the llamacpp server if provided.
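How the optional `mmprojPath` might flow into the server invocation, as a sketch: the `--mmproj` flag matches llama.cpp's server CLI, but the interface shape and the `buildServerArgs` helper are assumptions.

```typescript
// Sketch of loadOptions with the optional mmprojPath field.
interface LoadOptions {
  modelPath: string;
  mmprojPath?: string;
}

// Forward mmprojPath to the llamacpp server only when provided.
function buildServerArgs(opts: LoadOptions): string[] {
  const args = ["--model", opts.modelPath];
  if (opts.mmprojPath) {
    args.push("--mmproj", opts.mmprojPath);
  }
  return args;
}
```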
* wip
* update
* add download logic
* add decompress. support delete file
* download backend upon selecting setting
* add some logging and notes
* add note on race condition
* remove then catch
* default to no backend; only download if it's not installed
* merge version and backend. fetch version from GH
* restrict scope of output_dir
* add note on unpack
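The "fetch version from GH" step above could be sketched as a pure helper that, given a GitHub release's assets, picks the archive matching the selected backend. The asset-naming convention and backend strings here are assumptions.

```typescript
// Minimal shape of a GitHub release asset (fields as returned by the
// GitHub releases API).
interface ReleaseAsset {
  name: string;
  browser_download_url: string;
}

// Pick the release asset whose name contains the selected backend
// string, e.g. "win-avx2-x64" (naming convention assumed).
function pickBackendAsset(
  assets: ReleaseAsset[],
  backend: string,
): ReleaseAsset | undefined {
  return assets.find((a) => a.name.includes(backend));
}
```

The caller would then download and unpack the chosen asset only if that backend is not already installed, per the commits above.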
This commit introduces API key generation for the Llama.cpp extension. The API key is now generated on the server side using HMAC-SHA256 and a secret key to ensure security and uniqueness. The frontend now passes the model ID and API secret to the server to generate the key. This addresses the requirement for secure model access and authorization.
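The scheme described above can be sketched as follows. Only the algorithm (HMAC-SHA256 over the model ID with a secret key) comes from the commit; the authorization check, message format, and function names are assumptions, and the server side is shown here in TypeScript for illustration.

```typescript
import { createHmac, timingSafeEqual } from "node:crypto";

// Generate the API key for a model: HMAC-SHA256 of the model ID,
// keyed with the API secret.
function generateApiKey(modelId: string, apiSecret: string): string {
  return createHmac("sha256", apiSecret).update(modelId).digest("hex");
}

// Hypothetical authorization check: recompute the expected key and
// compare in constant time to avoid timing leaks.
function isAuthorized(presentedKey: string, modelId: string, apiSecret: string): boolean {
  const expected = generateApiKey(modelId, apiSecret);
  if (presentedKey.length !== expected.length) return false;
  return timingSafeEqual(Buffer.from(presentedKey), Buffer.from(expected));
}
```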
* add pull and abortPull
* add model import (download only)
* write model.yaml. support local model import
* remove cortex-related command
* add TODO
* remove cortex-related command
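The `pull`/`abortPull` pair above could be structured around an `AbortController`, as in this sketch; the method names match the commits, but the class, fields, and download details are assumptions.

```typescript
// Track one AbortController per in-flight pull so abortPull can
// cancel a specific model download.
class ModelPuller {
  private controllers = new Map<string, AbortController>();

  async pull(modelId: string, url: string): Promise<void> {
    const controller = new AbortController();
    this.controllers.set(modelId, controller);
    try {
      const res = await fetch(url, { signal: controller.signal });
      if (!res.ok) throw new Error(`download failed: ${res.status}`);
      // ...stream res.body to disk and write model.yaml here...
    } finally {
      this.controllers.delete(modelId);
    }
  }

  abortPull(modelId: string): void {
    // No-op if the pull already finished or never started.
    this.controllers.get(modelId)?.abort();
  }
}
```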