Fundamentos de Deep Learning
Welcome
Info 2024.2 - UdeA
01 - INTRODUCTION
1.1 - DL Overview
1.2 - Models derived from data
1.3 - ML algorithm design
LAB 1.1 - Warm up
02 - NEURAL NETWORKS
2.1 - The Perceptron
2.2 - The Multilayer Perceptron
2.3 - Overfitting and regularization
2.4 - Loss functions in TensorFlow
2.5 - Autoencoders
2.6 - Multimodal architectures
2.7 - Vanishing gradients
2.8 - Weight initialization
LAB 2.1 - Customized loss function
LAB 2.2 - Sparse Autoencoders
LAB 2.3 - Pairwise classification
LAB 2.4 - Model instrumentation
03 - TENSORFLOW CORE
3.1 - Symbolic computing for ML
3.2 - TF symbolic engine
3.3 - Using tf.function
3.4 - Batch normalization
LAB 3.1 - TF model subclassing
LAB 3.2 - Low-level tensorflow
04 - CONVOLUTIONAL NETWORKS
4.1 - Convolutions
4.2 - Convolutional Neural Networks
4.3 - Dropout, pooling
4.4 - CNN Architectures
4.5 - Transfer learning
4.6 - Object detection
4.7 - Transposed convolutions
4.8 - UNet image segmentation
4.9 - Atrous convolutions
LAB 4.1 - Convolutions
LAB 4.2 - Transfer learning
LAB 4.3 - Object detection
LAB 4.4 - Semantic segmentation
05 - SEQUENCE MODELS
5.0 - Cross-validation in time series
5.1 - Recurrent Neural Networks
5.2 - LSTM and GRU
5.3 - Truncated BPTT
5.4 - Text processing
5.5 - Sequence generation
5.6 - Bidirectional RNNs
5.7 - ELMo
5.8 - Transformer
5.9 - CNN-LSTM architectures
LAB 5.1 - Time series prediction
LAB 5.2 - Padding and masking
LAB 5.3 - Transformer and BERT
Index