BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//datacraft - ECPv6.15.20//NONSGML v1.0//EN
CALSCALE:GREGORIAN
METHOD:PUBLISH
X-WR-CALNAME:datacraft
X-ORIGINAL-URL:https://datacraft.paris
X-WR-CALDESC:Events for datacraft
REFRESH-INTERVAL;VALUE=DURATION:PT1H
X-Robots-Tag:noindex
X-PUBLISHED-TTL:PT1H
BEGIN:VTIMEZONE
TZID:Europe/Paris
BEGIN:DAYLIGHT
TZOFFSETFROM:+0100
TZOFFSETTO:+0200
TZNAME:CEST
DTSTART:20230326T010000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:+0200
TZOFFSETTO:+0100
TZNAME:CET
DTSTART:20231029T010000
END:STANDARD
BEGIN:DAYLIGHT
TZOFFSETFROM:+0100
TZOFFSETTO:+0200
TZNAME:CEST
DTSTART:20240331T010000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:+0200
TZOFFSETTO:+0100
TZNAME:CET
DTSTART:20241027T010000
END:STANDARD
BEGIN:DAYLIGHT
TZOFFSETFROM:+0100
TZOFFSETTO:+0200
TZNAME:CEST
DTSTART:20250330T010000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:+0200
TZOFFSETTO:+0100
TZNAME:CET
DTSTART:20251026T010000
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
DTSTART;TZID=Europe/Paris:20240228T180000
DTEND;TZID=Europe/Paris:20240228T190000
DTSTAMP:20260419T102319Z
CREATED:20240131T142206Z
LAST-MODIFIED:20241205T132815Z
UID:9887-1709143200-1709146800@datacraft.paris
SUMMARY:State of the art - Practical insights for LLM fine-tuning and evaluation
DESCRIPTION:Registration\n[The workshop will be delivered in English]\nMachine Learning level:\n**Good knowledge of Machine Learning\nPython level:\n*Basic skills in Python\nWorkshop description\nWhile vendors all but announce AGI-as-a-service sold through APIs to enterprise clients\, the reality after trying to use a proprietary LLM as a service on a specific use case is often different. Irrelevant generations\, out-of-context answers\, misunderstandings of user queries and\, more broadly\, a lack of subject-matter expertise start to erode users’ and stakeholders’ confidence in the transformative power of deploying LLM-enhanced business workflows across your organisation. You’re not alone in this journey.\nIn this talk\, we’ll explore the landscape of fine-tuning solutions for open-source LLMs\, weighing their pros and cons. We’ll delve into the data required and how to design a robust evaluation framework to systematically assess your in-house model’s performance.\nWe’ll take a deep dive into the subtle differences between Parameter-Efficient Fine-Tuning methods (PEFT) and reinforcement learning approaches\, and what to keep in mind when choosing between them.\nThis talk is a synthesis of deploying LLM capabilities at various organisations\, from startups to corporate environments. It’s a blend of insights from research papers and pragmatic experience. We won’t go into the details of the mathematical operations under the hood of each fine-tuning approach. Instead\, our goal is to share the intuition behind those concepts\, equipping you to design an effective roadmap for fine-tuning an LLM for your specific business use case.\nSlides in PDF will be made available for free on the speaker’s Twitter account at the end of the talk: @fpaupier.\nSpeaker:\nFrançois Paupier\, machine learning engineer\, fpaupier engineering services
URL:https://datacraft.paris/event/etat-de-lart-from-agi-promises-to-llm-realities-practical-insights-into-language-model-fine-tuning-and-evaluation/
CATEGORIES:Event in English
END:VEVENT
END:VCALENDAR