Hoang Ha 84a09ae03f
Chore/update model hub (#1342)
* fix(mistral-ins): clean redundant parameters

* add(yarn-mistral): add newly requested model

* fix(trinity-v1): delete trinity v1 from the hub

* add(tulu-2-70b): Llama 70B alternative

* fix(lzlv-70b): delete lzlv-70b and change to tulu-2

* fix(mistral-ins): upgrade model version to v0.2

* fix(model-extension): bump version to 1.0.18

* add(dolphin-8x7b): add the current best MoE fine-tuned model

* add(openchat): add the best 7B model

* fix(tinyllama): bump model version to v1

* fix(stealth): upgrade stealth to v1.3

* Revert "fix(stealth): upgrade stealth to v1.3"

This reverts commit da24df3fb5d69f93d92cc4dd45f991d548aff6aa.

* fix(stealth): upgrade version to v1.3
2024-01-05 13:50:35 +07:00

{
  "source_url": "https://huggingface.co/TheBloke/Mistral-7B-Instruct-v0.2-GGUF/resolve/main/mistral-7b-instruct-v0.2.Q4_K_M.gguf",
  "id": "mistral-ins-7b-q4",
  "object": "model",
  "name": "Mistral Instruct 7B Q4",
  "version": "1.0",
  "description": "This is a 4-bit quantized iteration of MistralAI's Mistral Instruct 7b model, specifically designed for a comprehensive understanding through training on extensive internet data.",
  "format": "gguf",
  "settings": {
    "ctx_len": 4096,
    "prompt_template": "<s>[INST]{prompt}\n[/INST]"
  },
  "parameters": {
    "max_tokens": 4096
  },
  "metadata": {
    "author": "MistralAI, The Bloke",
    "tags": ["Featured", "7B", "Foundational Model"],
    "size": 4370000000,
    "cover": "https://raw.githubusercontent.com/janhq/jan/main/models/mistral-ins-7b-q4/cover.png"
  },
  "engine": "nitro"
}
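
For context, a minimal sketch of how a manifest like the one above could be parsed and its prompt_template applied in client code. The ModelManifest type and applyPromptTemplate helper are illustrative names chosen here, not Jan's actual API.

// Illustrative types mirroring the fields of model.json above.
// These names are assumptions for the sketch, not Jan's real interfaces.
interface ModelManifest {
  source_url: string;
  id: string;
  object: string;
  name: string;
  version: string;
  description: string;
  format: string;
  settings: { ctx_len: number; prompt_template: string };
  parameters: { max_tokens: number };
  metadata: { author: string; tags: string[]; size: number; cover?: string };
  engine: string;
}

// Fill the {prompt} placeholder in the manifest's prompt template.
function applyPromptTemplate(manifest: ModelManifest, prompt: string): string {
  return manifest.settings.prompt_template.replace("{prompt}", prompt);
}

// Example usage (path is hypothetical):
// import { readFileSync } from "node:fs";
// const manifest: ModelManifest = JSON.parse(readFileSync("models/mistral-ins-7b-q4/model.json", "utf8"));
// applyPromptTemplate(manifest, "Hello!");  // "<s>[INST]Hello!\n[/INST]"

In this layout, settings.ctx_len caps the context window and parameters.max_tokens caps generation length, while the engine field ("nitro") indicates which runtime loads the GGUF file referenced by source_url.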