
Research Exchange Seminar: Instruction Tuning and Retrieval-Augmented Generation for Large Language Models

Posted: 2024-01-15 / Author: 박세영
Registration period: 2024-01-15 08:58 ~ 2024-01-26 10:00
Event date: 2024-01-26
* This seminar will be streamed live on YouTube: https://youtube.com/live/qJGIfHZLBVQ?feature=share

1. Date and time: Friday, January 26, 2024, 10:00 - 12:00
2. Venue: Seminar room, Innovation Center for Industrial Mathematics, Pangyo Techno Valley
   - National Institute for Mathematical Sciences, Room 231, Business Support Hub, 815 Daewangpangyo-ro, Sujeong-gu, Seongnam-si, Gyeonggi-do
   - Free parking is provided for up to 2 hours.
3. Speaker: Dr. 서상현 (LG Management Development Institute)
4. Abstract: Instruction Tuning and Retrieval-Augmented Generation for Large Language Models

In this seminar, I will present research trends on Large Language Models (LLMs), focusing on instruction tuning and Retrieval-Augmented Generation (RAG). Recently, LLMs built by pretraining Transformer models on large-scale corpora have shown strong capabilities in solving a wide range of natural language processing (NLP) tasks. Instruction tuning is a crucial technique for enhancing the capabilities and controllability of LLMs: it bridges the gap between the next-word prediction objective of pretraining and the users' objective of having LLMs adhere to human instructions. On the other hand, LLMs still face challenges such as hallucination, outdated knowledge, and nontransparent, untraceable reasoning. RAG has emerged as a promising remedy by incorporating knowledge from external databases at inference time. This presentation will help you understand the core technologies and applications of the latest LLMs.
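For readers unfamiliar with the two techniques named in the abstract, here is a minimal sketch of instruction tuning, assuming a Hugging Face causal LM. The model name, the single example pair, and the learning rate are illustrative placeholders, not details from the seminar. The key idea matches the abstract: the objective is still next-word prediction, but the loss is computed only on the response tokens of (instruction, response) pairs.

```python
# A minimal instruction-tuning sketch (illustrative; not the speaker's code).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # hypothetical stand-in for any pretrained causal LM
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# One (instruction, response) pair; real instruction tuning uses many thousands.
instruction = "Translate to French: Hello, world."
response = " Bonjour, le monde."

prompt_ids = tokenizer(instruction, return_tensors="pt").input_ids
full_ids = tokenizer(instruction + response, return_tensors="pt").input_ids

# Same next-word prediction objective as pretraining, but -100 masks the
# instruction tokens so the loss covers only the response. This is what
# steers the model toward adhering to human instructions.
labels = full_ids.clone()
labels[:, : prompt_ids.shape[1]] = -100

optimizer = torch.optim.AdamW(model.parameters(), lr=1e-5)
loss = model(input_ids=full_ids, labels=labels).loss
loss.backward()
optimizer.step()  # one supervised fine-tuning step
```

Likewise, a minimal sketch of the RAG pattern the abstract describes: retrieve relevant documents from an external store and prepend them to the prompt, so generation is grounded in knowledge the model's parameters may lack or have outdated. The toy corpus and bag-of-words scorer below are assumptions for illustration; production systems use dense embedding retrievers and pass the assembled prompt to a real LLM.

```python
# A minimal RAG prompt-assembly sketch (toy retriever; illustrative only).
import math
from collections import Counter

corpus = [
    "NIMS is the National Institute for Mathematical Sciences in Korea.",
    "RAG augments a language model with documents fetched at query time.",
    "Instruction tuning fine-tunes an LLM on instruction-response pairs.",
]

def score(query: str, doc: str) -> float:
    """Bag-of-words cosine similarity; real systems use dense embeddings."""
    q, d = Counter(query.lower().split()), Counter(doc.lower().split())
    overlap = sum(q[w] * d[w] for w in q)
    norm = math.sqrt(sum(v * v for v in q.values())) * \
           math.sqrt(sum(v * v for v in d.values()))
    return overlap / norm if norm else 0.0

def rag_prompt(query: str, k: int = 2) -> str:
    # Retrieve top-k documents and prepend them as grounding context,
    # mitigating hallucination and outdated parametric knowledge.
    top = sorted(corpus, key=lambda doc: score(query, doc), reverse=True)[:k]
    context = "\n".join(f"- {doc}" for doc in top)
    return f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"

print(rag_prompt("What does RAG do for a language model?"))
```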