diff --git a/website/src/content/blog/rag-is-not-enough.mdx b/website/src/content/blog/rag-is-not-enough.mdx
index 8d86c3238..416c5b2f9 100644
--- a/website/src/content/blog/rag-is-not-enough.mdx
+++ b/website/src/content/blog/rag-is-not-enough.mdx
@@ -18,7 +18,7 @@ We present a straightforward approach to customizing small, open-source models u
 In short, (1) extending a general foundation model like [Mistral](https://huggingface.co/mistralai/Mistral-7B-v0.1) with strong math and coding, and (2) training it over a high-quality, synthetic dataset generated from the intended corpus, and (3) adding RAG capabilities, can lead to significant accuracy improvements.
 
-Problems still arise with catastrophic forgetting in general tasks, commonly observed during specilizied domain fine-tuning. In our case, this is likely exacerbated by our lack of access to Mistral’s original training dataset and various compression techniques used in our approach to keep the model small.
+Problems still arise with catastrophic forgetting in general tasks, commonly observed during specialized domain fine-tuning. In our case, this is likely exacerbated by our lack of access to Mistral’s original training dataset and various compression techniques used in our approach to keep the model small.
 
 ## Selecting a strong foundation model