docs: update slugs again

Nicole Zhu 2024-03-14 21:08:50 +08:00
parent b4eff9a108
commit 1fe3dff875
6 changed files with 18 additions and 20 deletions

View File

@ -1,12 +0,0 @@
---
title: Llama-CPP Extension
slug: /guides/engines/llama-cpp
---
## Overview
[LlamaCPP](https://github.com/ggerganov/llama.cpp) is the default AI engine downloaded with Jan. It is served through Nitro, a C++ inference server that handles additional UX and hardware optimizations.
The source code for Nitro-llama-cpp is [here](https://github.com/janhq/nitro).
There is no additional setup needed.

View File

@ -1,6 +1,6 @@
 ---
-title: Extensions
-slug: /guides/engines
+title: Inference Providers
+slug: /guides/providers
 ---
 import DocCardList from "@theme/DocCardList";

View File

Binary image changed (before: 27 KiB, after: 27 KiB).

View File

@ -0,0 +1,10 @@
---
title: llama.cpp
slug: /guides/providers/llama-cpp
---
## Overview
[Nitro](https://github.com/janhq/nitro) is an inference server built on top of [llama.cpp](https://github.com/ggerganov/llama.cpp). It provides an OpenAI-compatible API, request queueing, and scaling.
Nitro is the default AI engine downloaded with Jan; no additional setup is needed.
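
Since the server speaks the OpenAI API format, any OpenAI-style HTTP client can talk to the local instance. The sketch below is a minimal TypeScript example; the base URL, port, and model id are illustrative assumptions rather than documented defaults, so check the Nitro README for the values that apply to your install.

```typescript
// Minimal sketch of calling a local OpenAI-compatible endpoint.
// The base URL, port, and model id are illustrative assumptions;
// consult the Nitro docs for the actual defaults on your machine.
const BASE_URL = "http://localhost:3928/v1"; // assumed local Nitro port

async function chat(prompt: string): Promise<string> {
  const res = await fetch(`${BASE_URL}/chat/completions`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      model: "llama-2-7b-chat", // placeholder model id
      messages: [{ role: "user", content: prompt }],
    }),
  });
  if (!res.ok) throw new Error(`Request failed: ${res.status}`);
  const data = await res.json();
  return data.choices[0].message.content;
}

chat("Hello from Jan!").then(console.log).catch(console.error);
```

The official `openai` Node client can be pointed at the same server through its `baseURL` option.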

View File

@ -1,6 +1,6 @@
 ---
-title: TensorRT-LLM Extension
-slug: /guides/engines/tensorrt-llm
+title: TensorRT-LLM
+slug: /guides/providers/tensorrt-llm
 ---
 Users with Nvidia GPUs can get 20-40% faster* token speeds on their laptop or desktops by using [TensorRT-LLM](https://github.com/NVIDIA/TensorRT-LLM).

View File

@ -201,15 +201,15 @@ const sidebars = {
     },
     {
       type: "category",
-      label: "AI Engines",
+      label: "Inference Providers",
       className: "head_SubMenu",
       link: {
         type: 'doc',
-        id: "guides/engines/README",
+        id: "guides/providers/README",
       },
       items: [
-        "guides/engines/llama-cpp",
-        "guides/engines/tensorrt-llm",
+        "guides/providers/llama-cpp",
+        "guides/providers/tensorrt-llm",
       ]
     },
     {
{ {