Hoang Ha e6812b1247
chore: pre-populate Jan's /models folder with model.jsons (#775)
* draft model.json

* islm3b update

* capybara 34b update

* deepseek coder update

* dolphin yi update

* fix the maxtokens of islm

* lzlv 70b update

* marx3b update

* mythomax 13b update

* update neural chat 7b

* noromaid 20b update

* update openchat 7b

* openhermes7b update

* openorca 7b

* orca 13b update

* phind 34b update

* rocket 3b update

* starling 7b update

* storytelling 70b update

* tiefighter 13B

* update tiefighter tags

* tinyllama update

* wizard coder 13b

* update wizard coder 13b description

* wizard coder 34b update

* wizard coder minor fix

* xwin 70b update

* yarn 70b

* yi 34b

* zephyr beta 7b

* neuralhermes-7b update

* change path + ctxlen

* update id

* fix starling
2023-12-01 17:20:58 +07:00

{
  "source_url": "https://huggingface.co/TinyLlama/TinyLlama-1.1B-Chat-v0.6/resolve/main/ggml-model-q4_0.gguf",
  "id": "tinyllama-1.1b",
  "object": "model",
  "name": "TinyLlama Chat 1.1B",
  "version": "1.0",
  "description": "The TinyLlama project, featuring a 1.1B parameter Llama model, is pretrained on an expansive 3 trillion token dataset. Its design ensures easy integration with various Llama-based open-source projects. Despite its smaller size, it efficiently utilizes lower computational and memory resources, drawing on GPT-4's analytical prowess to enhance its conversational abilities and versatility.",
  "format": "gguf",
  "settings": {
    "ctx_len": 2048,
    "system_prompt": "<|system|>\n",
    "user_prompt": "<|user|>\n",
    "ai_prompt": "<|assistant|>\n"
  },
  "parameters": {
    "max_tokens": 2048
  },
  "metadata": {
    "author": "TinyLlama",
    "tags": ["General Use"],
    "size": 637000000
  }
}
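
For illustration, here is a minimal TypeScript sketch of how an app could consume one of these pre-populated files. The interface mirrors the fields in the file above; the loader, the validation rules, and the models/tinyllama-1.1b/model.json path are assumptions made for this example, not Jan's actual implementation.

import { readFileSync } from "fs";

interface ModelJson {
  source_url: string;
  id: string;
  object: string;
  name: string;
  version: string;
  description: string;
  format: string;
  settings: {
    ctx_len: number;
    system_prompt: string;
    user_prompt: string;
    ai_prompt: string;
  };
  parameters: { max_tokens: number };
  metadata: { author: string; tags: string[]; size: number };
}

// Parse one model.json and sanity-check the fields a runtime would need
// before downloading the GGUF file or starting an inference session.
function loadModelJson(path: string): ModelJson {
  const model = JSON.parse(readFileSync(path, "utf8")) as ModelJson;
  if (!model.id || !model.source_url) {
    throw new Error(`${path}: missing id or source_url`);
  }
  if (model.parameters.max_tokens > model.settings.ctx_len) {
    // A max_tokens larger than the context window would be silently truncated.
    console.warn(`${model.id}: max_tokens exceeds ctx_len`);
  }
  return model;
}

// Assemble a single-turn prompt from the template fragments in settings.
// The file stores only the role prefixes, so turn terminators (e.g. </s>)
// are assumed to be handled elsewhere by the inference backend.
function buildPrompt(m: ModelJson, system: string, user: string): string {
  return (
    m.settings.system_prompt + system + "\n" +
    m.settings.user_prompt + user + "\n" +
    m.settings.ai_prompt
  );
}

// Hypothetical layout: one model.json per folder under /models, as this PR sets up.
const tinyllama = loadModelJson("models/tinyllama-1.1b/model.json");
console.log(buildPrompt(tinyllama, "You are a helpful assistant.", "Hello!"));

Note that several of the commits above (e.g. "fix the maxtokens of islm", "change path + ctxlen") are exactly the kind of consistency fixes the max_tokens/ctx_len check sketched here would catch at load time.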