From fc56c418d6a8799dec82ab515ce9010fcdcd96bc Mon Sep 17 00:00:00 2001
From: Ramon Perez
Date: Mon, 11 Aug 2025 15:44:43 +1000
Subject: [PATCH] Update website/src/content/blog/data-is-moat.mdx

Co-authored-by: ellipsis-dev[bot] <65095814+ellipsis-dev[bot]@users.noreply.github.com>
---
 website/src/content/blog/data-is-moat.mdx | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/website/src/content/blog/data-is-moat.mdx b/website/src/content/blog/data-is-moat.mdx
index 2172ef0e0..5e238103a 100644
--- a/website/src/content/blog/data-is-moat.mdx
+++ b/website/src/content/blog/data-is-moat.mdx
@@ -102,7 +102,7 @@ In the open-source community, 2 notable examples of fine-tuning with Mistral as
 
 ## Conclusion
 
-The ownership and strategic use of pre-trained data serve as an invisible moat. It not only enables the tackling of complex challenges like catastrophic forgetting but also provides a baseline for continuous, targeted improvements. Although there is a solution to decomotralize, the cost remains reasonably high.
+The ownership and strategic use of pre-trained data serve as an invisible moat. It not only enables the tackling of complex challenges like catastrophic forgetting but also provides a baseline for continuous, targeted improvements. Although there is a solution to decentralize, the cost remains reasonably high.
 
 Fully open pretrained + open weight