#ReadingList #CoT
CoT-Reading-List
Foundational Papers
- Chain-of-Thought Prompting Elicits Reasoning in Large Language Models
- Large Language Models are Zero-Shot Reasoners
- Automatic Chain of Thought Prompting in Large Language Models
Problem Decomposition
- Least-to-Most Prompting Enables Complex Reasoning in Large Language Models
- Measuring and Narrowing the Compositionality Gap in Language Models
Ensemble Prediction
- Self-Consistency Improves Chain of Thought Reasoning in Language Models
- Active Prompting with Chain-of-Thought for Large Language Models
- Rationale-Augmented Ensembles in Language Models
Generate-and-Verify
- STaR: Bootstrapping Reasoning With Reasoning
- On the Advance of Making Language Models Better Reasoners
Multilingual
LLM Background
- PaLM: Scaling Language Modeling with Pathways
- Emergent Abilities of Large Language Models
- Language Model Cascades
Points to know
- [[Sampling Methods]]
    - Top-k Sampling
    - Nucleus Sampling
    - Beam Search
    - Temperature Sampling
- [[Decoding Methods]]
    - Greedy Decoding
    - Self-Consistency
- [[CoT-Decoding]]
- Instruction-tuned
- System I
- System II
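To make the sampling and decoding points above concrete, here is a minimal pure-Python sketch of temperature, top-k, and nucleus (top-p) sampling over a toy logit vector, plus greedy decoding and a majority-vote aggregator in the spirit of Self-Consistency. Function names and the toy setup are my own; real systems operate on full vocabulary logits per decoding step.

```python
import math
import random
from collections import Counter

def softmax(logits):
    # Numerically stable softmax over a list of logits.
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def sample_token(logits, temperature=1.0, top_k=None, top_p=None, rng=random):
    # Temperature scaling: T < 1 sharpens, T > 1 flattens the distribution.
    probs = softmax([x / temperature for x in logits])
    # Token indices sorted by probability, highest first.
    order = sorted(range(len(probs)), key=lambda i: probs[i], reverse=True)
    if top_k is not None:
        # Top-k: keep only the k most probable tokens.
        order = order[:top_k]
    if top_p is not None:
        # Nucleus: keep the smallest prefix whose cumulative mass reaches p.
        kept, mass = [], 0.0
        for i in order:
            kept.append(i)
            mass += probs[i]
            if mass >= top_p:
                break
        order = kept
    # Renormalize over the surviving tokens and sample one index.
    total = sum(probs[i] for i in order)
    r = rng.random() * total
    for i in order:
        r -= probs[i]
        if r <= 0:
            return i
    return order[-1]

def greedy_decode_step(logits):
    # Greedy decoding: always take the argmax token.
    return max(range(len(logits)), key=lambda i: logits[i])

def self_consistency_vote(answers):
    # Self-Consistency: sample several reasoning chains, then
    # return the most frequent final answer.
    return Counter(answers).most_common(1)[0][0]
```

With `top_k=1` (or a very small `top_p`), sampling collapses to greedy decoding, which is one way to see these knobs as points on a diversity/determinism spectrum.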