Stanford CS230 | Autumn 2025 | Lecture 8: Agents, Prompts, and RAG
Reference: https://www.youtube.com/watch?v=k1njvbBmfsw&t=22s
Problem with base models
- Lack domain-specific knowledge
- Training data is old/stale
- Not feasible to retrain the LLM every month
- Breadth-focused training is not great for depth-first results
- Context management
- Attention degrades with very large context windows
- Needle-in-the-haystack retrieval is not an ideal situation for LLMs
Dimensions for improvement
- The base model itself
- Better data
- Different architecture
- Bigger model
- Improve the context and the tools surrounding the LLM
Prompt Design
- "Think like an expert in X" (role/persona prompting)
- Ask it to break the problem down step by step (Chain of Thought)
- Few shot examples
- Chaining
- This is different from chain of thought, where we ask the LLM to break the problem into steps within a single output
- Chaining splits the prompt itself into individual chunks, each sent as a separate call
- The advantage is that we can test which part of the prompt is weaker or stronger
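The first three techniques above can be combined in a single prompt. A minimal sketch, where `build_prompt`, the role string, and the tax examples are hypothetical illustrations (not from the lecture):

```python
# Role/persona prompting: frame the model as a domain expert.
ROLE = "You are an expert tax accountant."
# Chain of thought: ask for step-by-step reasoning in the output.
COT = "Break the problem down and reason step by step."

# Few-shot examples: show the desired input/output format (made-up numbers).
FEW_SHOT = [
    ("Income 50000, rate 20% -> tax?", "50000 * 0.20 = 10000"),
    ("Income 80000, rate 25% -> tax?", "80000 * 0.25 = 20000"),
]

def build_prompt(question: str) -> str:
    """Assemble role, chain-of-thought instruction, and few-shot examples."""
    shots = "\n".join(f"Q: {q}\nA: {a}" for q, a in FEW_SHOT)
    return f"{ROLE}\n{COT}\n\n{shots}\n\nQ: {question}\nA:"

print(build_prompt("Income 60000, rate 22% -> tax?"))
```

The assembled string would be sent to the model as-is; each component (role, CoT instruction, examples) can be toggled to measure its effect.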
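Chaining, by contrast, splits the task across multiple calls. A minimal sketch, where `fake_llm` is a stand-in for a real model call and the profit task is a made-up example; its value is that each link in the chain can be tested on its own:

```python
def fake_llm(prompt: str) -> str:
    # Placeholder for a real LLM call; returns canned answers per step (assumption).
    if prompt.startswith("Extract"):
        return "revenue=120, costs=80"
    if prompt.startswith("Compute"):
        return "profit=40"
    return "Profit was 40."

def chain(document: str) -> str:
    # Each step is a separate prompt; the output of one feeds the next.
    facts = fake_llm(f"Extract the key figures from: {document}")
    result = fake_llm(f"Compute profit from: {facts}")
    summary = fake_llm(f"Summarize for a non-expert: {result}")
    return summary

print(chain("Q3 report: revenue 120, costs 80"))
```

Because each step is isolated, a failing chain can be debugged by inspecting the intermediate outputs, which is exactly the testability advantage noted above.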