Course Modules
INTRODUCTION TO NATURAL LANGUAGE PROCESSING
Fundamentals: brief history, definitions, types of approaches, textual representations, types of textual pattern analysis. Text pre-processing: standardization, tokenization, normalization, filtering, word relevance, morphological tagging. Text mapping: feature extraction, bag of words, vectorization, term frequency, inverse document frequency (TF-IDF). Language models: probabilistic, Markov, unigrams, bigrams, n-grams, evaluation.
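As a taste of this module's topics, here is a minimal sketch (illustrative only, not course material) of a bigram language model with add-one smoothing; the toy corpus and function names are invented for the example.

```python
# Minimal bigram language model with add-one (Laplace) smoothing.
# The toy corpus and names are illustrative, not from the course.
from collections import Counter

corpus = ["the cat sat on the mat", "the dog sat on the log"]

# Tokenization: naive whitespace split plus sentence boundary markers.
sentences = [["<s>"] + text.split() + ["</s>"] for text in corpus]

unigrams = Counter(word for sent in sentences for word in sent)
bigrams = Counter(pair for sent in sentences for pair in zip(sent, sent[1:]))
vocab_size = len(unigrams)

def bigram_prob(prev, word):
    """P(word | prev) estimated with add-one smoothing."""
    return (bigrams[(prev, word)] + 1) / (unigrams[prev] + vocab_size)

print(f"P(cat | the) = {bigram_prob('the', 'cat'):.3f}")  # ~0.154 on this corpus
```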
ATTENTIONAL MODELS
Introduction to attention and attention mechanisms: hard, soft, and self-attention. Classic attention-based deep learning architectures: convolutional (CNNs), recurrent, and generative networks. End-to-end attentional models: NT, GAT. Applications and benefits of attention-based models.
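To make the self-attention idea concrete, a compact NumPy sketch of single-head scaled dot-product attention follows; the matrix sizes and names are assumptions for the example, not the course's implementation.

```python
# Single-head scaled dot-product self-attention over a toy sequence.
# Shapes and weight matrices here are arbitrary example values.
import numpy as np

def self_attention(X, Wq, Wk, Wv):
    """Each position attends to every position in the sequence."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv                 # queries, keys, values
    scores = Q @ K.T / np.sqrt(K.shape[-1])           # scaled dot products
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)    # row-wise softmax
    return weights @ V                                # weighted sum of values

rng = np.random.default_rng(0)
X = rng.normal(size=(5, 8))                           # 5 tokens, 8-dim embeddings
Wq, Wk, Wv = (rng.normal(size=(8, 8)) for _ in range(3))
print(self_attention(X, Wq, Wk, Wv).shape)            # (5, 8)
```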
TRANSFORMERS
Motivation and architectural overview. Encoder (positional encoding, multi-head attention, feed-forward layer). Decoder (masked multi-head attention, linear layer, and softmax). Transformer variants (BERT, T5, GPT-3, Efficient Transformers, etc.).
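For illustration, the sinusoidal positional encoding from the original Transformer paper (Vaswani et al., 2017) can be computed in a few lines; the sequence length and model dimension below are arbitrary example values.

```python
# Sinusoidal positional encoding as in the original Transformer.
# PE[pos, 2i] = sin(pos / 10000^(2i/d)); PE[pos, 2i+1] = cos(same angle).
import numpy as np

def positional_encoding(seq_len, d_model):
    """Return a (seq_len, d_model) matrix of position encodings."""
    pos = np.arange(seq_len)[:, None]          # positions, column vector
    i = np.arange(0, d_model, 2)[None, :]      # even embedding dimensions
    angles = pos / np.power(10000.0, i / d_model)
    pe = np.zeros((seq_len, d_model))
    pe[:, 0::2] = np.sin(angles)               # even dims get sine
    pe[:, 1::2] = np.cos(angles)               # odd dims get cosine
    return pe

print(positional_encoding(seq_len=50, d_model=16).shape)  # (50, 16)
```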
APPLICATIONS
Mono- and multilingual Transformers for NLP (text generation, translation). Transformers for images (image classification). Transformers for other media (audio, sensors, etc.). Challenges and opportunities (explainability, model size, training from scratch vs. fine-tuning).
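As one possible way to experiment with these applications, the sketch below uses the Hugging Face transformers library (an assumption; the course may use different tooling) to run pre-trained models for text generation and translation. The model names are common public checkpoints chosen for the example.

```python
# Running pre-trained Transformers via Hugging Face pipelines.
# Requires: pip install transformers
from transformers import pipeline

# Text generation with a small pre-trained model (GPT-2).
generator = pipeline("text-generation", model="gpt2")
print(generator("Natural language processing is",
                max_new_tokens=15)[0]["generated_text"])

# English-to-German translation with T5, a text-to-text Transformer.
translator = pipeline("translation_en_to_de", model="t5-small")
print(translator("The course covers attention models.")[0]["translation_text"])
```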
Details
Prerequisite: Basic knowledge of programming.
Target Audience: Computing professionals with a degree in Computing or related areas (Engineering or the Exact Sciences).
Course type: Extension course.
Required Material: As this is a hands-on course, all students must bring their own laptops to class.
Course coordinator: Prof. Dr. Zanoni Dias.
Offering: Exclusively in the "in company" model (closed classes for companies). Request a quote by email.