Mar 22, 2024
We have observed extensive adoption of Large Language Models (LLMs) across various applications. To the untrained observer, it may appear that LLMs have achieved a level of intelligence surpassing that of human beings. However, I have come to understand that this is not the case.
Large Language Models are trained with various optimization objectives, typically by comparing the model's output against human-written text in the training data. This means the quality of an LLM's responses depends heavily on the data it was trained on.
- LLMs can perform well in answering questions if they have encountered similar questions in the training data.
- LLMs might generate incorrect information, i.e., hallucinate, if the questions in the prompt are vastly different from those the LLM has encountered during training.
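The "compare output with training text" objective above is essentially next-token prediction with cross-entropy loss. A minimal sketch, using a toy vocabulary and made-up probabilities (the function and numbers here are illustrative, not any real model's API):

```python
import math

def next_token_loss(predicted_probs, target_token_id):
    """Cross-entropy loss at one position: the penalty the model pays
    based on the probability it assigned to the token that actually
    appeared next in the training text."""
    return -math.log(predicted_probs[target_token_id])

# Toy vocabulary: 0 = "cat", 1 = "dog", 2 = "car".
# Suppose the training text continues with "dog" (token 1).
confident = [0.05, 0.90, 0.05]   # model strongly favors the correct token
uncertain = [0.40, 0.20, 0.40]   # model spreads probability on wrong tokens

print(next_token_loss(confident, 1))  # small loss
print(next_token_loss(uncertain, 1))  # larger loss
```

Training pushes the model toward the `confident` distribution on text it has seen, which is why familiar questions are answered well and unfamiliar ones are not.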
LLMs excel at:
- Summarizing information
- Creative writing
- Translation
- Writing code
However, LLMs may struggle with:
- Accurately recalling facts, a limitation that could be mitigated by retrieving from external databases or data sources (e.g., Retrieval-Augmented Generation, RAG).
- Performing math calculations, a limitation that could be addressed by utilizing external modules.
- Engaging in rigorous logical reasoning, a skill that could be enhanced by incorporating similar chains of reasoning in the prompt or training data.
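As one concrete illustration of the "external modules" idea for math, a minimal routing sketch: arithmetic questions go to an exact calculator, and everything else falls through to the model. All names here (`calculator`, `answer`, `fake_llm`) are hypothetical, not a real library's API:

```python
import re

def calculator(expression):
    """External math module: evaluates simple integer arithmetic exactly."""
    m = re.fullmatch(r"\s*(\d+)\s*([+\-*])\s*(\d+)\s*", expression)
    if m is None:
        raise ValueError("unsupported expression")
    a, op, b = int(m.group(1)), m.group(2), int(m.group(3))
    return {"+": a + b, "-": a - b, "*": a * b}[op]

def answer(question, llm):
    """Route recognizable arithmetic to the calculator; defer the rest to the LLM."""
    try:
        return str(calculator(question))
    except ValueError:
        return llm(question)

# Stand-in for a real LLM call.
fake_llm = lambda q: "a generated answer"

print(answer("123 * 456", fake_llm))    # exact result from the tool: 56088
print(answer("What is RAG?", fake_llm))  # handled by the model
```

Real tool-use systems let the model itself decide when to call a tool, but the division of labor is the same: generation for language, external modules for exact computation.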
(Notes from the course Generative AI with Large Language Models on Coursera)
Ming Dao School uses 1-1 coaching and group events to help high-tech professionals grow their careers and handle career transitions.
If you would like to join our upcoming mock system design interview events or other coaching programs, please contact us on LinkedIn.