Hoang Ha 1e0d4f3753
Feat: Adjust model hub v0.4.13 (#2879)
* fix: correct phi3

* redundant phi2 dolphin

* add: hermes llama3

* add: ngl settings

* correct ctx len

* correct ngl

* correct maxlen + ngl

* disable phi3

* add ngl

* add ngl

* add ngl

* add ngl

* add ngl

* add ngl

* add ngl

* remove redundant hermes pro

* add ngl

* add ngl

* add ngl

* remove miqu

* add ngl

* add ngl

* add ngl

* add ngl

* remove redundant

* add ngl

* add ngl

* add ngl

* add ngl

* add ngl

* add ngl

* add ngl

* add ngl

* add ngl

* version package bump

* feat: resolve issue of model not found in the extensions due to the removal

* feat: completely remove hermes-pro-7b

* feat: completely remove openhermes-neural-7b and miqu-70b, and add llama3-hermes-8b via renaming from Rex

* fix: correct description

---------

Co-authored-by: Van-QA <van@jan.ai>
2024-05-13 11:48:03 +07:00

37 lines
1.0 KiB
JSON

{
  "sources": [
    {
      "filename": "mistral-7b-instruct-v0.2.Q4_K_M.gguf",
      "url": "https://huggingface.co/TheBloke/Mistral-7B-Instruct-v0.2-GGUF/resolve/main/mistral-7b-instruct-v0.2.Q4_K_M.gguf"
    }
  ],
  "id": "mistral-ins-7b-q4",
  "object": "model",
  "name": "Mistral Instruct 7B Q4",
  "version": "1.1",
  "description": "Mistral Instruct 7B model, specifically designed for a comprehensive understanding of the world.",
  "format": "gguf",
  "settings": {
    "ctx_len": 32768,
    "prompt_template": "[INST] {prompt} [/INST]",
    "llama_model_path": "mistral-7b-instruct-v0.2.Q4_K_M.gguf",
    "ngl": 32
  },
  "parameters": {
    "temperature": 0.7,
    "top_p": 0.95,
    "stream": true,
    "max_tokens": 32768,
    "stop": ["[/INST]"],
    "frequency_penalty": 0,
    "presence_penalty": 0
  },
  "metadata": {
    "author": "MistralAI",
    "tags": ["Featured", "7B", "Foundational Model"],
    "size": 4370000000,
    "cover": "https://raw.githubusercontent.com/janhq/jan/dev/models/mistral-ins-7b-q4/cover.png"
  },
  "engine": "nitro"
}
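
For context on how fields such as "ctx_len", "ngl", and "prompt_template" come into play, below is a minimal TypeScript sketch of reading a model.json like the one above and applying its settings. The ModelDefinition type, loadModelDefinition, and applyPromptTemplate names are hypothetical illustrations for this sketch, not the actual Jan or Nitro extension API.

// Hypothetical sketch: parse a model.json and use its settings/prompt template.
// Type and function names here are illustrative, not the real Jan API.
import { readFile } from "node:fs/promises";

interface ModelDefinition {
  id: string;
  settings: {
    ctx_len: number;           // context window requested from the engine
    ngl: number;               // number of layers to offload to the GPU
    prompt_template: string;   // e.g. "[INST] {prompt} [/INST]"
    llama_model_path: string;  // GGUF file named in "sources"
  };
  parameters: {
    temperature: number;
    top_p: number;
    max_tokens: number;
    stop: string[];
  };
}

async function loadModelDefinition(path: string): Promise<ModelDefinition> {
  const raw = await readFile(path, "utf-8");
  return JSON.parse(raw) as ModelDefinition;
}

// Substitute the user's input into the model's prompt template.
function applyPromptTemplate(template: string, prompt: string): string {
  return template.replace("{prompt}", prompt);
}

async function main() {
  const model = await loadModelDefinition("models/mistral-ins-7b-q4/model.json");
  console.log(
    `Loading ${model.id} with ctx_len=${model.settings.ctx_len}, ngl=${model.settings.ngl}`
  );
  console.log(applyPromptTemplate(model.settings.prompt_template, "Hello, world"));
}

main().catch(console.error);

The sketch only shows how the settings would typically be read and the prompt assembled; the actual engine launch (loading the GGUF file with the given ctx_len and ngl) is handled by the "nitro" engine referenced in the config.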