Understanding AI: How LLMs and foundation models work — visual and interactive

Understanding the Brain of AI – in 10 Minutes
Large Language Models: How an AI Model Understands Our Language
The Tokenizer: How AI Models Translate Text into Numbers
Embeddings: How AI understands the meaning of words
Context window & positional encoding in the transformer
Self-Attention – The Heart of Modern AI
Self-Attention decoded – how transformers really think

© 2025 Oskar Kohler. All rights reserved.
Note: The text was written manually by the author. Stylistic refinements, translations, and individual tables, diagrams, and figures were produced with the support of AI tools.