Hoang Ha ecc866427b
Update Model.json (#1005)
* add(mixtral): add model.json for mixtral

* archived some models + update the model.json

* add(model): add pandora 10.7b

* fix(model): update description

* fix(model): bump version and change the featured model to trinity

* fix(model): archive neuralchat

* fix(model): deprecate all old models

* fix(trinity): add cover image and change description

* fix(trinity): update cover png

* add(pandora): cover image

* fix(pandora): cover image

* chore: model desc nits

* fix(models): adjust the size for solars and pandoras

* add(mixtral): description

---------

Co-authored-by: 0xSage <n@pragmatic.vc>
2023-12-15 14:19:49 +07:00

{
  "source_url": "https://huggingface.co/TheBloke/Mixtral-8x7B-Instruct-v0.1-GGUF/resolve/main/mixtral-8x7b-instruct-v0.1.Q4_K_M.gguf",
  "id": "mixtral-8x7b-instruct",
  "object": "model",
  "name": "Mixtral 8x7B Instruct Q4",
  "version": "1.0",
  "description": "The Mixtral-8x7B Large Language Model (LLM) is a pretrained generative Sparse Mixture of Experts. The Mixtral-8x7B outperforms Llama 2 70B on most benchmarks we tested.",
  "format": "gguf",
  "settings": {
    "ctx_len": 2048,
    "prompt_template": "[INST] {prompt} [/INST]"
  },
  "parameters": {
    "max_tokens": 2048
  },
  "metadata": {
    "author": "MistralAI, TheBloke",
    "tags": ["MOE", "Foundational Model"],
    "size": 26440000000
  },
  "engine": "nitro"
}
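
For reference, the "prompt_template" field contains a {prompt} placeholder that the runtime fills with the user's message before the text is sent to the inference engine. A minimal sketch of that substitution in TypeScript, assuming a simple string replacement; the applyTemplate helper below is illustrative only and is not part of the Jan or Nitro codebase:

// Fill the {prompt} placeholder in a model.json prompt_template.
// Hypothetical helper for illustration; not an actual Jan/Nitro API.
function applyTemplate(template: string, prompt: string): string {
  return template.replace("{prompt}", prompt);
}

const promptTemplate = "[INST] {prompt} [/INST]"; // from settings.prompt_template above
console.log(applyTemplate(promptTemplate, "Why is the sky blue?"));
// Output: [INST] Why is the sky blue? [/INST]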