Hoang Ha e6812b1247
chore: pre-populate Jan's /models folder with model.jsons (#775)
* draft model.json

* islm3b update

* capybara 34b update

* deepseek coder update

* dolphin yi update

* fix the max_tokens of islm

* lzlv 70b update

* marx3b update

* mythomax 13b update

* update neural chat 7b

* noromaid 20b update

* update openchat 7b

* openhermes7b update

* openorca 7b

* orca 13b update

* phind 34b update

* rocket 3b update

* starling 7b update

* storytelling 70b update

* tiefighter 13B

* update tiefighter tags

* tinyllama update

* wizard coder 13b

* update wizard coder 13b description

* wizard coder 34b update

* wizard coder minor fix

* xwin 70b update

* yarn 70b

* yi 34b

* zephyr beta 7b

* neuralhermes-7b update

* change path + ctxlen

* update id

* fix starling
2023-12-01 17:20:58 +07:00


{
  "source_url": "https://huggingface.co/TheBloke/neural-chat-7B-v3-1-GGUF/resolve/main/neural-chat-7b-v3-1.Q4_K_M.gguf",
  "id": "neural-chat-7b",
  "object": "model",
  "name": "Neural Chat 7B",
  "version": "1.0",
  "description": "The Neural Chat 7B model, built on mistralai/Mistral-7B-v0.1, was fine-tuned on the Open-Orca/SlimOrca dataset and aligned with the Direct Preference Optimization (DPO) algorithm. It has demonstrated substantial improvements in various AI tasks and performs well on the open_llm_leaderboard.",
  "format": "gguf",
  "settings": {
    "ctx_len": 4096,
    "system_prompt": "### System: ",
    "user_prompt": "### User: ",
    "ai_prompt": "### Assistant: "
  },
  "parameters": {
    "max_tokens": 4096
  },
  "metadata": {
    "author": "Intel, The Bloke",
    "tags": ["General Use", "Role-playing", "Big Context Length"],
    "size": 4370000000
  }
}
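
For context, here is a minimal sketch of how a pre-populated /models folder of such files could be scanned and parsed. This is an illustrative assumption, not Jan's actual loader: the ModelJson interface is inferred from the fields shown above, and loadModels is a hypothetical helper.

import { promises as fs } from "fs";
import * as path from "path";

// Shape inferred from the model.json above (assumed; not Jan's official schema).
interface ModelJson {
  source_url: string;
  id: string;
  object: string;
  name: string;
  version: string;
  description: string;
  format: string;
  settings: {
    ctx_len: number;
    system_prompt: string;
    user_prompt: string;
    ai_prompt: string;
  };
  parameters: { max_tokens: number };
  metadata: { author: string; tags: string[]; size: number };
}

// Hypothetical helper: walk each subfolder of modelsDir and parse its model.json.
async function loadModels(modelsDir: string): Promise<ModelJson[]> {
  const models: ModelJson[] = [];
  for (const entry of await fs.readdir(modelsDir, { withFileTypes: true })) {
    if (!entry.isDirectory()) continue;
    const file = path.join(modelsDir, entry.name, "model.json");
    try {
      const raw = await fs.readFile(file, "utf8");
      models.push(JSON.parse(raw) as ModelJson);
    } catch {
      // Skip folders without a readable, parseable model.json.
    }
  }
  return models;
}

// Usage: list each pre-populated model by id and context length.
loadModels("./models").then((models) => {
  for (const m of models) {
    console.log(`${m.id}: ctx_len=${m.settings.ctx_len}`);
  }
});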