Using LangSmith to Support Fine-tuning
Summary

We created a guide for fine-tuning and evaluating LLMs, using LangSmith for dataset management and evaluation. We did this both with an open source LLM (trained on CoLab with HuggingFace) and with OpenAI's new fine-tuning service. As a test case, we fine-tuned LLaMA2-7b-chat and gpt-3.5-turbo on an extraction task (knowledge graph triple extraction), using training data exported from LangSmith, and then evaluated the results with LangSmith. The CoLab guide is here.

Context

I
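The export-and-fine-tune workflow summarized above can be sketched as follows. This is a minimal sketch, not the guide's exact code: the dataset name `kg-triples`, the `input`/`output` keys, and the system prompt are assumptions, so adjust them to your own LangSmith dataset schema.

```python
import json

def to_chat_example(inputs: dict, outputs: dict) -> dict:
    """Convert one LangSmith example (its inputs/outputs dicts) into the
    chat-format record expected by OpenAI fine-tuning JSONL files.

    Assumes the dataset stores the source text under "input" and the
    extracted triples under "output" -- adapt to your schema.
    """
    return {
        "messages": [
            {"role": "system",
             "content": "Extract knowledge graph triples from the text."},
            {"role": "user", "content": inputs["input"]},
            {"role": "assistant", "content": outputs["output"]},
        ]
    }

# Sketch of the full flow (requires LANGCHAIN_API_KEY and OPENAI_API_KEY):
#
# from langsmith import Client
# client = Client()
# examples = client.list_examples(dataset_name="kg-triples")  # hypothetical name
# with open("train.jsonl", "w") as f:
#     for ex in examples:
#         f.write(json.dumps(to_chat_example(ex.inputs, ex.outputs)) + "\n")
#
# The resulting train.jsonl can then be uploaded to OpenAI's fine-tuning
# service, or converted to a HuggingFace dataset for training LLaMA2-7b-chat.
```

For example, `to_chat_example({"input": "Tesla was founded by Elon Musk."}, {"output": "(Tesla, founded_by, Elon Musk)"})` yields a three-message chat record ready for fine-tuning.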