From 882996a87530f34244bf3412162f4cc860e60aa7 Mon Sep 17 00:00:00 2001
From: Ramon Perez
Date: Mon, 11 Aug 2025 15:45:01 +1000
Subject: [PATCH] Update website/src/content/blog/rag-is-not-enough.mdx

Co-authored-by: ellipsis-dev[bot] <65095814+ellipsis-dev[bot]@users.noreply.github.com>
---
 website/src/content/blog/rag-is-not-enough.mdx | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/website/src/content/blog/rag-is-not-enough.mdx b/website/src/content/blog/rag-is-not-enough.mdx
index 8d86c3238..416c5b2f9 100644
--- a/website/src/content/blog/rag-is-not-enough.mdx
+++ b/website/src/content/blog/rag-is-not-enough.mdx
@@ -18,7 +18,7 @@ We present a straightforward approach to customizing small, open-source models u
 
 In short, (1) extending a general foundation model like [Mistral](https://huggingface.co/mistralai/Mistral-7B-v0.1) with strong math and coding, and (2) training it over a high-quality, synthetic dataset generated from the intended corpus, and (3) adding RAG capabilities, can lead to significant accuracy improvements.
 
-Problems still arise with catastrophic forgetting in general tasks, commonly observed during specilizied domain fine-tuning. In our case, this is likely exacerbated by our lack of access to Mistral’s original training dataset and various compression techniques used in our approach to keep the model small.
+Problems still arise with catastrophic forgetting in general tasks, commonly observed during specialized domain fine-tuning. In our case, this is likely exacerbated by our lack of access to Mistral’s original training dataset and various compression techniques used in our approach to keep the model small.
 
 ## Selecting a strong foundation model