* feat: Add model compatibility check and memory estimation

  This commit introduces a new feature that checks whether a given model is supported by the available device memory. The change includes:

  - A new `estimateKVCache` method that calculates the memory required for the model's KV cache, using GGUF metadata such as `block_count`, `head_count`, `key_length`, and `value_length` (see the TypeScript sketch after this commit list).
  - An `isModelSupported` method that combines the model file size and the estimated KV cache size into the total memory required, then checks whether any available device has enough free memory to load the model.
  - A friendlier error message for the `version_backend` check, suggesting a stable internet connection as a potential fix for backend setup failures.

  This prevents the application from attempting to load models that would exceed the device's memory capacity, leading to more stable and predictable behavior.

  fixes: #5505

* Update extensions/llamacpp-extension/src/index.ts

  Co-authored-by: ellipsis-dev[bot] <65095814+ellipsis-dev[bot]@users.noreply.github.com>

* Update extensions/llamacpp-extension/src/index.ts

  Co-authored-by: ellipsis-dev[bot] <65095814+ellipsis-dev[bot]@users.noreply.github.com>

* Extend this to available system RAM if no GGML device is available

* fix: Improve model metadata and memory checks

  This commit refactors the logic for checking whether a model fits within the system's available memory.

  **Key changes:**

  - **Remote model support**: the `read_gguf_metadata` function can now fetch metadata from a remote URL by reading the file in chunks.
  - **Improved KV cache size calculation**: the KV cache size is now estimated more accurately using `attention.key_length` and `attention.value_length` from the GGUF metadata, with a fallback to `embedding_length`.
  - **Granular memory check statuses**: `isModelSupported` now returns a more specific status (`'RED'`, `'YELLOW'`, or `'GREEN'`) indicating whether the model weights or the KV cache exceed the available memory.
  - **Consolidated logic**: the checks for local and remote models are merged into a single `isModelSupported` function, improving code clarity and maintainability.

  These changes provide more robust and informative model compatibility checks, especially for models hosted on remote servers.

* Update extensions/llamacpp-extension/src/index.ts

  Co-authored-by: ellipsis-dev[bot] <65095814+ellipsis-dev[bot]@users.noreply.github.com>

* Make ctx_size optional and sum free memory across GGML devices

* feat: hub and dropdown model selection handle model compatibility

* feat: update badge model info color

* chore: enable detail page to get model compatibility

* chore: update copy

* chore: update shrink indicator UI

---------

Co-authored-by: ellipsis-dev[bot] <65095814+ellipsis-dev[bot]@users.noreply.github.com>
Co-authored-by: Faisal Amir <urmauur@gmail.com>
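To make the memory math concrete, here is a minimal TypeScript sketch of the two methods the commits describe. It is illustrative only: the GGUF key names follow the usual `<arch>.`-prefixed convention, but the `GgufMetadata` and `DeviceInfo` shapes, the f16-cache assumption, and the default context size are assumptions, not the extension's actual code.

```typescript
// Sketch only: metadata lookup and device shapes are hypothetical.
interface GgufMetadata {
  [key: string]: string | number | undefined
}

interface DeviceInfo {
  name: string
  free: number // free memory, in bytes
}

const BYTES_PER_ELEMENT = 2 // assume an f16 KV cache

function estimateKVCache(meta: GgufMetadata, arch: string, ctxSize = 8192): number {
  const num = (key: string): number | undefined => {
    const v = meta[`${arch}.${key}`]
    return typeof v === 'number' ? v : undefined
  }

  const blockCount = num('block_count') ?? 0
  // Prefer the grouped-query KV head count; fall back to the full head count.
  const headCountKV = num('attention.head_count_kv') ?? num('attention.head_count') ?? 0

  // Per-token, per-layer element count: use attention.key_length /
  // attention.value_length when present, else fall back to embedding_length
  // for each of K and V, as the fix commit describes.
  const keyLen = num('attention.key_length')
  const valueLen = num('attention.value_length')
  const perTokenPerLayer =
    keyLen !== undefined && valueLen !== undefined
      ? headCountKV * (keyLen + valueLen)
      : 2 * (num('embedding_length') ?? 0)

  return blockCount * ctxSize * perTokenPerLayer * BYTES_PER_ELEMENT
}

type SupportStatus = 'RED' | 'YELLOW' | 'GREEN'

function isModelSupported(
  modelSizeBytes: number,
  kvCacheBytes: number,
  devices: DeviceInfo[],
  systemFreeRAM: number
): SupportStatus {
  // Sum free memory across GGML devices; fall back to system RAM if none report.
  const free =
    devices.length > 0 ? devices.reduce((acc, d) => acc + d.free, 0) : systemFreeRAM

  if (modelSizeBytes > free) return 'RED' // the weights alone don't fit
  if (modelSizeBytes + kvCacheBytes > free) return 'YELLOW' // weights fit, full KV cache doesn't
  return 'GREEN'
}
```

The `'YELLOW'` tier reflects the distinction the fix commit draws between weights that fit on their own and weights plus a full KV cache; a caller can still retry with a smaller context in that case, which is presumably why a later commit makes `ctx_size` optional.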
```rust
use super::helpers;
use super::types::GgufMetadata;
use reqwest;
use std::fs::File;
use std::io::BufReader;

/// Read GGUF metadata from a model file, given either a local path or a remote URL.
#[tauri::command]
pub async fn read_gguf_metadata(path: String) -> Result<GgufMetadata, String> {
    if path.starts_with("http://") || path.starts_with("https://") {
        // Remote: fetch 2MB ranges until the accumulated prefix parses.
        // GGUF stores its header and metadata at the start of the file,
        // so a short prefix is usually enough.
        let client = reqwest::Client::new();
        let chunk_size = 2 * 1024 * 1024; // fixed 2MB chunks
        let max_total_size = 120 * 1024 * 1024; // don't exceed 120MB total
        let mut total_downloaded = 0;
        let mut accumulated_data = Vec::new();

        while total_downloaded < max_total_size {
            let start = total_downloaded;
            let end = std::cmp::min(start + chunk_size - 1, max_total_size - 1);

            let resp = client
                .get(&path)
                .header("Range", format!("bytes={}-{}", start, end))
                .send()
                .await
                .map_err(|e| format!("Failed to fetch chunk {}-{}: {}", start, end, e))?;

            let chunk_data = resp
                .bytes()
                .await
                .map_err(|e| format!("Failed to read chunk response: {}", e))?;

            accumulated_data.extend_from_slice(&chunk_data);
            total_downloaded += chunk_data.len();

            // Try parsing after each chunk.
            let cursor = std::io::Cursor::new(&accumulated_data);
            if let Ok(metadata) = helpers::read_gguf_metadata(cursor) {
                return Ok(metadata);
            }

            // A short read means we've reached EOF.
            if chunk_data.len() < chunk_size {
                break;
            }
        }
        Err("Could not parse GGUF metadata from downloaded data".to_string())
    } else {
        // Local: use a streaming file reader.
        let file =
            File::open(&path).map_err(|e| format!("Failed to open local file {}: {}", path, e))?;
        let reader = BufReader::new(file);

        helpers::read_gguf_metadata(reader)
            .map_err(|e| format!("Failed to parse GGUF metadata: {}", e))
    }
}
```
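On the TypeScript side, this command would be reached through Tauri's `invoke`. A hypothetical caller is sketched below, assuming Tauri v2's `@tauri-apps/api/core` import path (on v1 it is `@tauri-apps/api/tauri`) and that the command is registered in the app's invoke handler; the result shape depends on how the Rust `GgufMetadata` type serializes.

```typescript
import { invoke } from '@tauri-apps/api/core'

// Works for both branches of the command: a local file path or an http(s) URL.
const metadata = await invoke<Record<string, unknown>>('read_gguf_metadata', {
  path: 'https://example.com/models/model.gguf', // placeholder URL
})
```

Note the design trade-off in the remote branch: parsing is retried after every 2MB range, so well-formed models with small metadata cost one or two requests, while the 120MB cap bounds the worst case for models whose metadata section (e.g. a large tokenizer vocabulary) runs long.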