* feat: jan can see

feat: Add GPT-4 Vision model (Preview)
fix: Add visionModel as property in ModelInfo
fix: Fix condition to load local messages in useSetActiveThread hook
feat: Enable Image as input for chat
fix: Update model parameters in JSON files for remote GPT models
fix: Add thread as optional
fix: Add support for message as image
fix: Linter
fix: Update proxyModel to proxy_model and add textModel
chore: Change proxyModel to proxy_model
fix: Update settings with visionModel and textModel
fix: vision model passed through the retrieval tool
fix: linter

* fix: could not load image and request is not able to be sent

---------

Co-authored-by: Louis <louis@jan.ai>
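The change surfaces a vision flag on the model descriptor (visionModel, later renamed to the snake_case vision_model seen in the settings below) so the client can tell whether a model accepts image input. A minimal sketch of the shape this implies, using hypothetical TypeScript interfaces for illustration rather than the actual Jan source:

// Hypothetical sketch, not copied from the Jan codebase: the field names mirror
// the "settings" object in the model.json below.
interface ModelSettingParams {
  ctx_len?: number
  prompt_template?: string
  llama_model_path?: string   // quantized language model weights
  mmproj?: string             // projector weights used by vision models
  vision_model?: boolean      // true when the model accepts image input
  text_model?: boolean        // false here: BakLlava is registered for vision use
}

interface ModelInfo {
  id: string
  settings: ModelSettingParams
  // ...other descriptor fields such as name, version, engine
}

// Example: deciding whether the chat UI should offer an image-upload control.
const canAttachImages = (model: ModelInfo): boolean =>
  model.settings.vision_model === true

The model.json that registers BakLlava 1 against the nitro engine follows.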
{
  "sources": [
    {
      "filename": "ggml-model-q5_k.gguf",
      "url": "https://huggingface.co/mys/ggml_bakllava-1/resolve/main/ggml-model-q5_k.gguf"
    },
    {
      "filename": "mmproj-model-f16.gguf",
      "url": "https://huggingface.co/mys/ggml_bakllava-1/resolve/main/mmproj-model-f16.gguf"
    }
  ],
  "id": "bakllava-1",
  "object": "model",
  "name": "BakLlava 1",
  "version": "1.0",
  "description": "BakLlava 1 can bring vision understanding to Jan",
  "format": "gguf",
  "settings": {
    "vision_model": true,
    "text_model": false,
    "ctx_len": 4096,
    "prompt_template": "\n### Instruction:\n{prompt}\n### Response:\n",
    "llama_model_path": "ggml-model-q5_k.gguf",
    "mmproj": "mmproj-model-f16.gguf"
  },
  "parameters": {
    "max_tokens": 4096
  },
  "metadata": {
    "author": "Mys",
    "tags": ["Vision"],
    "size": 5750000000
  },
  "engine": "nitro"
}
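The settings pair the quantized language model (llama_model_path) with the multimodal projector file (mmproj); both downloads in "sources" are needed before the model can describe images. With vision_model enabled, a chat request then carries image content alongside text. A hedged sketch of such a request, assuming an OpenAI-style content-parts format; the field names are illustrative and not taken from the Jan source:

// Assumed OpenAI-compatible message shape with "text" and "image_url" parts.
type ContentPart =
  | { type: 'text'; text: string }
  | { type: 'image_url'; image_url: { url: string } }

interface ChatMessage {
  role: 'system' | 'user' | 'assistant'
  content: string | ContentPart[]
}

// A user turn that attaches a base64-encoded image next to the question.
const messages: ChatMessage[] = [
  {
    role: 'user',
    content: [
      { type: 'text', text: 'What is shown in this image?' },
      { type: 'image_url', image_url: { url: 'data:image/png;base64,...' } },
    ],
  },
]

// Request body for the chat endpoint; max_tokens mirrors
// "parameters.max_tokens" in the model.json above.
const body = JSON.stringify({ model: 'bakllava-1', messages, max_tokens: 4096 })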