Transformer

The neural network architecture underlying virtually all modern LLMs, introduced in the 2017 paper 'Attention Is All You Need.' Uses self-attention to relate every position in a sequence to every other position in parallel rather than step by step, removing the sequential bottleneck of recurrent models and enabling training at massive scale.
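
A minimal sketch of the scaled dot-product self-attention this definition refers to, assuming a single head with no masking or multi-head projection; all variable names and dimensions below are illustrative, not taken from the paper:

```python
import numpy as np

def self_attention(x, w_q, w_k, w_v):
    """Scaled dot-product self-attention over a sequence.

    x: (seq_len, d_model) input embeddings.
    w_q, w_k, w_v: (d_model, d_k) learned projection matrices.
    Every position attends to every other position in one matrix
    product, which is what allows parallel rather than sequential
    processing of the sequence.
    """
    q = x @ w_q                                   # queries
    k = x @ w_k                                   # keys
    v = x @ w_v                                   # values
    scores = q @ k.T / np.sqrt(k.shape[-1])       # pairwise attention scores
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)  # row-wise softmax
    return weights @ v                            # weighted sum of values

# Illustrative usage: a 4-token sequence with 8-dimensional embeddings.
rng = np.random.default_rng(0)
x = rng.normal(size=(4, 8))
w_q, w_k, w_v = (rng.normal(size=(8, 8)) for _ in range(3))
out = self_attention(x, w_q, w_k, w_v)
print(out.shape)  # (4, 8): one output vector per input position
```

Because the attention scores for all positions are computed as one matrix multiplication rather than a loop over time steps, the whole sequence can be processed at once on parallel hardware.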
