From d1cb1e44ea9d6a4389e18746ba81e0895d96cb71 Mon Sep 17 00:00:00 2001
From: hieu-jan <150573299+hieu-jan@users.noreply.github.com>
Date: Fri, 1 Mar 2024 23:30:44 +0900
Subject: [PATCH] docs: standardize reference

---
 ...-chatgpt-with-open-source-alternatives.mdx | 22 ++++++++++++++--------
 1 file changed, 14 insertions(+), 8 deletions(-)

diff --git a/docs/blog/02-surpassing-chatgpt-with-open-source-alternatives.mdx b/docs/blog/02-surpassing-chatgpt-with-open-source-alternatives.mdx
index 3eb0138fe..01103c767 100644
--- a/docs/blog/02-surpassing-chatgpt-with-open-source-alternatives.mdx
+++ b/docs/blog/02-surpassing-chatgpt-with-open-source-alternatives.mdx
@@ -130,12 +130,18 @@ Anecdotally, we’ve had some success using this model in practice to onboard ne
 
 A full research report with more statistics can be found [here](https://github.com/janhq/open-foundry/blob/main/rag-is-not-enough/README.md).
 
-# References
+## References
 
-- [Catastrophic forgetting](https://arxiv.org/abs/2308.08747)
-- [Math specialization](https://arxiv.org/abs/2308.09583)
-- [Code specialization](https://arxiv.org/abs/2306.08568)
-- [Search specialization](https://github.com/SciPhi-AI/agent-search)
-- [Evol Instruct](https://github.com/nlpxucan/WizardLM)
-- [Lost in the middle](https://arxiv.org/abs/2307.03172)
-- [Instruction tuning](https://arxiv.org/pdf/2109.01652.pdf)
\ No newline at end of file
+[1] Jason Wei, Maarten Bosma, Vincent Y. Zhao, Kelvin Guu, Adams Wei Yu, Brian Lester, Nan Du, Andrew M. Dai, Quoc V. Le. "Finetuned Language Models Are Zero-Shot Learners." arXiv preprint arXiv:2109.01652 (2021). URL: https://arxiv.org/abs/2109.01652
+
+[2] Haipeng Luo, Qingfeng Sun, Can Xu, Pu Zhao, Jianguang Lou, Chongyang Tao, Xiubo Geng, Qingwei Lin, Shifeng Chen, Dongmei Zhang. "WizardMath: Empowering Mathematical Reasoning for Large Language Models via Reinforced Evol-Instruct." arXiv preprint arXiv:2308.09583 (2023). URL: https://arxiv.org/abs/2308.09583
+
+[3] Yun Luo, Zhen Yang, Fandong Meng, Yafu Li, Jie Zhou, Yue Zhang. "An Empirical Study of Catastrophic Forgetting in Large Language Models During Continual Fine-tuning." arXiv preprint arXiv:2308.08747 (2023). URL: https://arxiv.org/abs/2308.08747
+
+[4] Ziyang Luo, Can Xu, Pu Zhao, Qingfeng Sun, Xiubo Geng, Wenxiang Hu, Chongyang Tao, Jing Ma, Qingwei Lin, Daxin Jiang. "WizardCoder: Empowering Code Large Language Models with Evol-Instruct." arXiv preprint arXiv:2306.08568 (2023). URL: https://arxiv.org/abs/2306.08568
+
+[5] SciPhi-AI. "agent-search." GitHub repository. URL: https://github.com/SciPhi-AI/agent-search
+
+[6] Nelson F. Liu, Kevin Lin, John Hewitt, Ashwin Paranjape, Michele Bevilacqua, Fabio Petroni, Percy Liang. "Lost in the Middle: How Language Models Use Long Contexts." arXiv preprint arXiv:2307.03172 (2023). URL: https://arxiv.org/abs/2307.03172
+
+[7] nlpxucan et al. "WizardLM." GitHub repository. URL: https://github.com/nlpxucan/WizardLM
\ No newline at end of file