---
title: Quickstart
description: Cortex Quickstart.
keywords:
  [
    Jan,
    Customizable Intelligence,
    LLM,
    local AI,
    privacy focus,
    free and open source,
    private and offline,
    conversational AI,
    no-subscription fee,
    large language models,
    Cortex,
    LLMs,
  ]
---
import { Callout, Steps } from 'nextra/components'
import { Cards, Card } from 'nextra/components'

# Quickstart

<Callout type="warning">
🚧 Cortex is under construction.
</Callout>
To get started, confirm that your system meets the [hardware requirements](/cortex/hardware), and follow the steps below:
```bash
# 1. Install Cortex using NPM
npm i -g @janhq/cortex

# 2. Download a GGUF model
cortex models pull llama3

# 3. Run the model to start chatting
cortex models run llama3

# 4. (Optional) Run Cortex in OpenAI-compatible server mode
cortex serve
```
<Callout type="info">
For more details on the Cortex server mode, see:
- [Server Endpoint](/cortex/server)
- [`cortex serve` command](/cortex/cli/serve)
</Callout>
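
Once `cortex serve` is running, you can talk to it with any OpenAI-compatible client. The sketch below is a minimal example, assuming the server listens on `http://localhost:1337` and exposes the standard `/v1/chat/completions` route; confirm the actual host, port, and paths on the [Server Endpoint](/cortex/server) page.

```bash
# Minimal chat request against the OpenAI-compatible endpoint.
# Host, port, and route are assumptions for illustration; adjust to your setup.
curl http://localhost:1337/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "llama3",
    "messages": [
      { "role": "user", "content": "Hello, what can you do?" }
    ]
  }'
```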
## What's Next?
With Cortex now fully operational, you're ready to delve deeper:
- Explore how to [install Cortex](/cortex/installation) across various hardware environments.
- Familiarize yourself with the full set of [Cortex CLI commands](/cortex/cli).
- Gain insight into the system design by examining Cortex's [architecture](/cortex/architecture).