Transformers

Encord Computer Vision Glossary

Transformers are a neural network architecture that has gained widespread popularity for its strong performance on natural language processing tasks. Unlike traditional recurrent neural networks, transformers do not process the input sequence one step at a time, which allows them to capture long-range dependencies in the input more efficiently and to parallelize computation across the sequence.

Transformers are built around self-attention: when producing an output, the model assigns a weight to each element of the input sequence based on its relevance to the current prediction, then combines the elements according to those weights. This attention mechanism lets transformers capture contextual information effectively, which makes them particularly well suited to language-related tasks.
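
To make this concrete, here is a minimal NumPy sketch of scaled dot-product self-attention, the core operation described above. The matrix names, dimensions, and random values are illustrative assumptions, not taken from any particular model.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(X, W_q, W_k, W_v):
    """Scaled dot-product self-attention for a single sequence.

    X: (seq_len, d_model) input embeddings.
    W_q, W_k, W_v: (d_model, d_k) projection matrices (random here for illustration).
    Returns: (seq_len, d_k) context vectors.
    """
    Q, K, V = X @ W_q, X @ W_k, X @ W_v
    d_k = Q.shape[-1]
    # Each row of `weights` says how strongly one position attends to every other.
    scores = Q @ K.T / np.sqrt(d_k)   # (seq_len, seq_len)
    weights = softmax(scores, axis=-1)
    return weights @ V                # weighted sum of value vectors

# Toy example: a "sentence" of 4 tokens with 8-dimensional embeddings.
rng = np.random.default_rng(0)
X = rng.normal(size=(4, 8))
W_q, W_k, W_v = [rng.normal(size=(8, 8)) for _ in range(3)]
print(self_attention(X, W_q, W_k, W_v).shape)  # (4, 8)
```

In full transformer models this operation is extended to multiple attention heads and combined with feed-forward layers, residual connections, and layer normalization.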

One of the best-known applications of transformers is machine translation: the model receives a sentence in one language and generates the corresponding sentence in another. The attention mechanism lets it identify and weigh the most relevant parts of the source sentence when producing each word of the translation.
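
As a brief illustration, the sketch below runs a pretrained translation model through the Hugging Face transformers library; this assumes the library is installed, and the model choice is only an example.

```python
# Assumes `pip install transformers torch`; the model name is illustrative.
from transformers import pipeline

translator = pipeline("translation_en_to_fr", model="t5-small")
result = translator("Transformers capture long-range context in a sentence.")
print(result[0]["translation_text"])
```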

Transformers have also achieved state-of-the-art performance on other natural language processing tasks, such as language modeling and question answering. Their ability to capture contextual information and long-range dependencies makes them particularly effective for tasks that involve analyzing sequences of text.

Beyond natural language processing, transformers are also used in computer vision tasks such as image captioning and video recognition. Their ability to model context and dependencies in the input makes them a versatile tool in machine learning, and they are likely to appear in many future applications.
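
As a rough sketch of how a transformer can be applied to images (in the style of the Vision Transformer), the example below splits an image into fixed-size patches that are flattened into a sequence of "tokens" for a standard transformer to process; the image size and patch size are assumptions chosen for illustration.

```python
import numpy as np

def image_to_patch_tokens(image, patch_size=16):
    """Split an image (H, W, C) into flattened patches, ViT-style.

    Each patch becomes one "token", so a standard transformer can treat
    the image as a sequence. Sizes here are illustrative.
    """
    H, W, C = image.shape
    assert H % patch_size == 0 and W % patch_size == 0
    patches = (
        image.reshape(H // patch_size, patch_size, W // patch_size, patch_size, C)
             .transpose(0, 2, 1, 3, 4)
             .reshape(-1, patch_size * patch_size * C)
    )
    return patches  # (num_patches, patch_dim)

img = np.zeros((224, 224, 3))
tokens = image_to_patch_tokens(img)
print(tokens.shape)  # (196, 768)
```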
