docs: update references

hieu-jan 2024-03-01 23:44:47 +09:00
parent d1cb1e44ea
commit bf568bc11f

@@ -128,22 +128,22 @@ We conclude that this combination of model merging + finetuning + RAG yields pro
Anecdotally, we've had some success using this model in practice to onboard new team members to the Nitro codebase.
A full research report with more statistics can be found at https://github.com/janhq/open-foundry/blob/main/rag-is-not-enough/README.md.
## References
[1] Jason Wei, Maarten Bosma, Vincent Y. Zhao, Kelvin Guu, Adams Wei Yu, Brian Lester, Nan Du, Andrew M. Dai, Quoc V. Le. Finetuned Language Models Are Zero-Shot Learners. *arXiv preprint arXiv:2109.01652*, 2021. URL: https://arxiv.org/abs/2109.01652
[2] Haipeng Luo, Qingfeng Sun, Can Xu, Pu Zhao, Jianguang Lou, Chongyang Tao, Xiubo Geng, Qingwei Lin, Shifeng Chen, Dongmei Zhang. WizardMath: Empowering Mathematical Reasoning for Large Language Models via Reinforced Evol-Instruct. *arXiv preprint arXiv:2308.09583*, 2023. URL: https://arxiv.org/abs/2308.09583
[3] Luo, Y., Yang, Z., Meng, F., Li, Y., Zhou, J., & Zhang, Y. An Empirical Study of Catastrophic Forgetting in Large Language Models During Continual Fine-tuning. *arXiv preprint arXiv:2308.08747*, 2023. URL: https://arxiv.org/abs/2308.08747
[4] Ziyang Luo, Can Xu, Pu Zhao, Qingfeng Sun, Xiubo Geng, Wenxiang Hu, Chongyang Tao, Jing Ma, Qingwei Lin, Daxin Jiang. WizardCoder: Empowering Code Large Language Models with Evol-Instruct. *arXiv preprint arXiv:2306.08568*, 2023. URL: https://arxiv.org/abs/2306.08568
[5] SciPhi-AI. Agent Search Repository. GitHub. URL: https://github.com/SciPhi-AI/agent-search
[6] Nelson F. Liu, Kevin Lin, John Hewitt, Ashwin Paranjape, Michele Bevilacqua, Fabio Petroni, Percy Liang. Lost in the Middle: How Language Models Use Long Contexts. *arXiv preprint arXiv:2307.03172*, 2023. URL: https://arxiv.org/abs/2307.03172
[7] Luo, H., Sun, Q., Xu, C., Zhao, P., Lou, J., Tao, C., Geng, X., Lin, Q., Chen, S., & Zhang, D. WizardMath: Empowering Mathematical Reasoning for Large Language Models via Reinforced Evol-Instruct. *arXiv preprint arXiv:2308.09583*, 2023. URL: https://arxiv.org/abs/2308.09583
[8] nlpxucan et al. WizardLM Repository. GitHub. URL: https://github.com/nlpxucan/WizardLM