6287 Commits

Author SHA1 Message Date
Nguyen Ngoc Minh
4d5cc0033a
Merge pull request #6634 from menloresearch/docs/update-missing-redirects
docs: update missing redirect links
2025-09-28 23:28:17 -07:00
Minh141120
a3153ee4cd docs: update missing redirect links 2025-09-29 13:23:48 +07:00
Nguyen Ngoc Minh
5403e58681
Merge pull request #6618 from github-roushan/show-supported-files
Show supported files
2025-09-29 03:17:39 +00:00
Roushan Kumar Singh
86c7496a70
Merge branch 'dev' into show-supported-files 2025-09-27 11:20:37 +05:30
Faisal Amir
abb0da491b
Merge pull request #6562 from menloresearch/emre/docsv2
Update handbook content with Nextra callout and content improvements
2025-09-26 22:13:28 +07:00
eckartal
97cb7c45a5 trigger PR banner 2025-09-26 21:23:20 +07:00
Emre Can Kartal
183645c637 Update README.md 2025-09-26 21:23:20 +07:00
Emre Can Kartal
12012232ac Update README.md 2025-09-26 21:23:20 +07:00
eckartal
03a53cbed3 Clean up installation page titles and descriptions
- Revert titles to clean sidebar navigation (Mac, Linux, Windows)
- Improve meta descriptions to be concise but SEO-friendly
- Keep key terms: local AI, offline, GPU acceleration, platform details
2025-09-26 21:23:20 +07:00
eckartal
bfbd198202 Optimize installation pages SEO meta titles and descriptions
 SEO Improvements:
- Mac: 'Run AI models locally on your Mac - Jan'
- Linux: 'Run AI models locally on Linux - Jan'
- Windows: 'Run AI models locally on Windows - Jan'

🎯 Meta descriptions now include:
- Target keywords (local AI, LLM, offline, ChatGPT-like)
- Platform-specific details (Apple Silicon, Ubuntu/Debian, Windows 10/11)
- Key benefits (GPU acceleration, privacy, no internet required)

📍 Sidebar navigation titles unchanged - only SEO meta data optimized
2025-09-26 21:23:20 +07:00
eckartal
ae171574e8 docs: fix broken internal links and remove privacy page
- Fix broken links in troubleshooting.mdx pointing to install pages
- Remove privacy.mdx page and update _meta.json navigation
- Update various documentation links for consistency
- Ensure all internal links use proper absolute paths
2025-09-26 21:23:17 +07:00
eckartal
da38384be2 Update handbook navigation structure and meta.json files
- Updated handbook/_meta.json to properly organize navigation
- Fixed duplicate entries by removing files that belong in subfolders
- Updated why folder title to 'Why does Jan exist?'
- Cleaned up why/_meta.json with proper titles for Open Superintelligence and Open-Source sections
2025-09-26 21:22:52 +07:00
eckartal
220cb3ae0a docs: enhance overview page with improved structure and internal linking
- Restructured main content with cleaner formatting
- Added comprehensive internal linking for better navigation
- Improved visual hierarchy and readability
- Enhanced acknowledgements section with better organization
- Updated product suite section with consistent formatting
2025-09-26 21:22:49 +07:00
eckartal
d47a3efe89 Update handbook content with Nextra callout and content improvements
- Convert blockquote to Nextra callout in open-superintelligence.mdx
- Add Edison link and improve content flow
- Refine language for better clarity
2025-09-26 21:22:35 +07:00
Roushan Singh
0c5ccea9d4 chore: add logging for TauriDialog Service 2025-09-26 16:04:34 +05:30
Roushan Singh
c091b8cd77 refactor: safely strip prefix and extensions from filename 2025-09-26 15:02:23 +05:30
Roushan Singh
7d6e0c22ac chore: fix Encoded logging 2025-09-26 15:02:23 +05:30
Roushan Singh
c6be66e595 refactor(utils): add helper to remove extensions from file paths 2025-09-26 15:02:23 +05:30
Dinh Long Nguyen
b422970369
feat: scrolling behaves like chatgpt with padding (#6598)
* scroll like chatgpt with padding

* minor refactor
2025-09-26 15:53:05 +07:00
Faisal Amir
580bdc511a
Merge pull request #6616 from menloresearch/chore/projects
chore: update project page title that is too long
2025-09-26 15:47:35 +07:00
Nguyen Ngoc Minh
191c0eec83
Merge pull request #6617 from menloresearch/docs/update-redirect-links
docs: update redirect links
2025-09-26 08:39:42 +00:00
Minh141120
eeaaf5ce0d chore: remove duplicate links 2025-09-26 15:29:08 +07:00
Minh141120
1dd9adf8d4 docs: update redirect links 2025-09-26 15:25:59 +07:00
Faisal Amir
b7dae19756
feat: custom downloaded model name (#6588)
* feat: add field edit model name

* fix: update model

* chore: update UI form with save button, and handle edit capabilities; renaming a folder will need the save button

* fix: relocate model

* chore: update and refresh model provider list; also update test case

* chore: state loader

* fix: model path

* fix: model config update

* chore: fix remove dependencies provider on edit model dialog

* chore: avoid shifted model name or id

---------

Co-authored-by: Louis <louis@jan.ai>
2025-09-26 15:25:44 +07:00
Faisal Amir
3d224f8cff chore: update project page title that is too long 2025-09-26 14:59:38 +07:00
Nguyen Ngoc Minh
453df559b5
Merge pull request #6615 from menloresearch/docs/update-redirect-list
docs: update redirect list
2025-09-26 07:28:56 +00:00
Minh141120
39e3a02b3e docs: update redirect list 2025-09-26 14:24:06 +07:00
Faisal Amir
ea124f7fd4
Merge pull request #6614 from menloresearch/enhancement/jan-web
enhancement: update statistics numbers on Jan web
2025-09-26 14:23:20 +07:00
Faisal Amir
d1c8cd2dc9 chore: update link target banner announcement 2025-09-26 14:11:47 +07:00
Faisal Amir
c12bef55e9 enhancement: update statistics numbers on Jan web 2025-09-26 14:00:31 +07:00
Dinh Long Nguyen
1bbea4b30f
models and cookies invalidation (#6613) 2025-09-26 13:50:10 +07:00
Faisal Amir
39aa1c4f7e
Merge pull request #6607 from menloresearch/fix/projects-left-panel-title
fix: projects title long name
2025-09-26 13:20:54 +07:00
Nguyen Ngoc Minh
32df9fda0f
Merge pull request #6611 from menloresearch/refactor/remove-mise
refactor: remove mise
2025-09-26 06:17:28 +00:00
Louis
55c42ba526
fix: lock all of the dependencies (#6561)
* fix: pin web app dependencies

* fix: pin extension versions

* fix: pin extensions-web dependencies

* fix: pin extensions lockfile

* fix: remove unnecessary semicolon
2025-09-26 13:07:29 +07:00
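The pinning described in #6561 amounts to removing semver range operators so installs are fully reproducible. A minimal, hypothetical `package.json` fragment illustrating the change (the package names and versions here are placeholders, not the repo's actual dependencies):

```json
{
  "dependencies": {
    "react": "18.3.1",
    "zustand": "4.5.2"
  }
}
```

Before pinning, the same entries would typically read `"^18.3.1"` and `"^4.5.2"`, allowing npm/yarn to resolve newer minor or patch releases; dropping the caret locks each install to the exact version.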
Minh141120
cc218b1b40 chore: revert cargo lock file 2025-09-26 12:49:26 +07:00
Minh141120
fb9bbb66b0 refactor: remove mise 2025-09-26 12:42:01 +07:00
Faisal Amir
0a82fdd784 fix: projects title long name 2025-09-26 10:06:42 +07:00
Nguyen Ngoc Minh
75396dbd06
Merge pull request #6600 from menloresearch/docs/update-redirects
docs: update redirects
2025-09-25 13:57:09 +00:00
Minh141120
20c8991f55 docs: update redirects 2025-09-25 20:53:43 +07:00
Faisal Amir
d806c4719e
Merge pull request #6586 from menloresearch/feat/thread-project-org
feat: thread organization folder
2025-09-25 20:21:02 +07:00
Nguyen Ngoc Minh
978565e7f0
Merge pull request #6599 from menloresearch/docs/add-redirect
docs: add redirects
2025-09-25 11:48:10 +00:00
Minh141120
9fb3171d82 docs: remove duplicate redirect url 2025-09-25 18:21:31 +07:00
Minh141120
9923c6028e docs: add redirects 2025-09-25 18:10:58 +07:00
Faisal Amir
d690e0fa87 chore: max height project list on left panel 2025-09-25 17:29:38 +07:00
Low Keng Hoong, Warren
0fb6413368
Merge pull request #6596 from menloresearch/feat/kernel-benchmarking
feat: Add kernel benchmarking blogpost
2025-09-25 18:28:54 +08:00
Faisal Amir
a8b9e1f147 chore: fix navigation thread from project 2025-09-25 16:24:55 +07:00
DESU CLUB
374a1f9771 feat: Fix image links 2025-09-25 17:10:59 +08:00
DESU CLUB
97af43cadb feat: Fix image links 2025-09-25 17:05:40 +08:00
DESU CLUB
2aead28c9b feat: Add kernel benchmarking blogpost 2025-09-25 16:58:27 +08:00
Akarshan Biswas
11b3a60675
fix: refactor, fix and move gguf support utilities to backend (#6584)
* feat: move estimateKVCacheSize to BE

* feat: Migrate model planning to backend

This commit migrates the model load planning logic from the frontend to the Tauri backend. This refactors the `planModelLoad` and `isModelSupported` methods into the `tauri-plugin-llamacpp` plugin, making them directly callable from the Rust core.

The model planning now incorporates a more robust and accurate memory estimation, considering both VRAM and system RAM, and introduces a `batch_size` parameter to the model plan.

**Key changes:**

- **Moved `planModelLoad` to `tauri-plugin-llamacpp`:** The core logic for determining GPU layers, context length, and memory offloading is now in Rust for better performance and accuracy.
- **Moved `isModelSupported` to `tauri-plugin-llamacpp`:** The model support check is also now handled by the backend.
- **Removed `getChatClient` from `AIEngine`:** This optional method was not implemented and has been removed from the abstract class.
- **Improved KV Cache estimation:** The `estimate_kv_cache_internal` function in Rust now accounts for `attention.key_length` and `attention.value_length` if available, and considers sliding window attention for more precise estimates.
- **Introduced `batch_size` in ModelPlan:** The model plan now includes a `batch_size` property, which will be automatically adjusted based on the determined `ModelMode` (e.g., lower for CPU/Hybrid modes).
- **Updated `llamacpp-extension`:** The frontend extension now calls the new Tauri commands for model planning and support checks.
- **Removed `batch_size` from `llamacpp-extension/settings.json`:** The batch size is now dynamically determined by the planning logic and will be set as a model setting directly.
- **Updated `ModelSetting` and `useModelProvider` hooks:** These now handle the new `batch_size` property in model settings.
- **Added new Tauri commands and permissions:** `get_model_size`, `is_model_supported`, and `plan_model_load` are new commands with corresponding permissions.
- **Consolidated `ModelSupportStatus` and `KVCacheEstimate`:** These types are now defined in `src/tauri/plugins/tauri-plugin-llamacpp/src/gguf/types.rs`.

This refactoring centralizes critical model resource management logic, improving consistency and maintainability, and lays the groundwork for more sophisticated model loading strategies.

* feat: refine model planner to handle more memory scenarios

This commit introduces several improvements to the `plan_model_load` function, enhancing its ability to determine a suitable model loading strategy based on system memory constraints. Specifically, it includes:

-   **VRAM calculation improvements:**  Corrects the calculation of total VRAM by iterating over GPUs and multiplying by 1024*1024, improving accuracy.
-   **Hybrid plan optimization:**  Implements a more robust hybrid plan strategy, iterating through GPU layer configurations to find the highest possible GPU usage while remaining within VRAM limits.
-   **Minimum context length enforcement:** Enforces a minimum context length for the model, ensuring that the model can be loaded and used effectively.
-   **Fallback to CPU mode:** If a hybrid plan isn't feasible, it now correctly falls back to a CPU-only mode.
-   **Improved logging:** Enhanced logging to provide more detailed information about the memory planning process, including VRAM, RAM, and GPU layers.
-   **Batch size adjustment:** Updated batch size based on the selected mode, ensuring efficient utilization of available resources.
-   **Error handling and edge cases:**  Improved error handling and edge case management to prevent unexpected failures.
-   **Constants:** Added constants for easier maintenance and understanding.
-   **Power-of-2 adjustment:** Added power of 2 adjustment for max context length to ensure correct sizing for the LLM.

These changes improve the reliability and robustness of the model planning process, allowing it to handle a wider range of hardware configurations and model sizes.

* Add log for raw GPU info from tauri-plugin-hardware

* chore: update linux runner for tauri build

* feat: Improve GPU memory calculation for unified memory

This commit improves the logic for calculating usable VRAM, particularly for systems with **unified memory** like Apple Silicon. Previously, the application would report 0 total VRAM if no dedicated GPUs were found, leading to incorrect calculations and failed model loads.

This change modifies the VRAM calculation to fall back to the total system RAM if no discrete GPUs are detected. This is a common and correct approach for unified memory architectures, where the CPU and GPU share the same memory pool.

Additionally, this commit refactors the logic for calculating usable VRAM and RAM to prevent potential underflow by checking if the total memory is greater than the reserved bytes before subtracting. This ensures the calculation remains safe and correct.

* chore: fix update migration version

* fix: enable unified memory support on model support indicator

* Use total_system_memory in bytes

---------

Co-authored-by: Minh141120 <minh.itptit@gmail.com>
Co-authored-by: Faisal Amir <urmauur@gmail.com>
2025-09-25 12:17:57 +05:30
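The unified-memory fallback and underflow-safe subtraction described in the commits above can be sketched as follows. This is a minimal illustration, not the actual `tauri-plugin-llamacpp` code; the function names and the MiB-based GPU reporting are assumptions for the example.

```rust
/// Underflow-safe usable-memory calculation: returns 0 instead of
/// wrapping when the reserved bytes exceed the total (the check the
/// commit adds before subtracting). Hypothetical helper, not the
/// plugin's real API.
fn usable_bytes(total: u64, reserved: u64) -> u64 {
    total.saturating_sub(reserved)
}

/// Sum VRAM across GPUs (reported in MiB, so multiply by 1024*1024),
/// and fall back to total system RAM when no discrete GPU reports
/// VRAM -- the unified-memory case (e.g. Apple Silicon), where CPU
/// and GPU share one memory pool.
fn total_vram_bytes(gpu_vram_mib: &[u64], system_ram_bytes: u64) -> u64 {
    let vram: u64 = gpu_vram_mib.iter().map(|mib| mib * 1024 * 1024).sum();
    if vram == 0 {
        system_ram_bytes
    } else {
        vram
    }
}

fn main() {
    let ram = 16u64 * 1024 * 1024 * 1024;
    // No discrete GPU: unified-memory fallback to system RAM.
    assert_eq!(total_vram_bytes(&[], ram), ram);
    // One 8 GiB GPU reported in MiB.
    assert_eq!(total_vram_bytes(&[8192], ram), 8192 * 1024 * 1024);
    // Reserved exceeds total: clamps to 0 instead of underflowing.
    assert_eq!(usable_bytes(100, 150), 0);
    println!("ok");
}
```

The `saturating_sub` clamp is what prevents the wrap-around that an unchecked `total - reserved` would produce on `u64` when a small unified-memory pool is paired with a large reservation.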