Natural Language Processing and Understanding (NLU)

The taxonomy below shows where NLP sits in the broader AI landscape, and where the embedding models used for RAG fall within it:

Artificial Intelligence (AI)
├─ Expert Systems
│  ├─ Rule-based Systems
│  └─ Knowledge-based Systems
├─ Natural Language Processing (NLP)
│  ├─ Text Classification
│  ├─ Named Entity Recognition (NER)
│  ├─ Sentiment Analysis
│  ├─ Machine Translation
│  └─ Question Answering
├─ Computer Vision
│  ├─ Image Classification
│  ├─ Object Detection
│  ├─ Image Segmentation
│  └─ Facial Recognition
├─ Robotics
├─ Planning & Scheduling
└─ Machine Learning (ML)
   ├─ Supervised Learning
   │  ├─ Classification
   │  │  ├─ Logistic Regression
   │  │  ├─ Decision Trees
   │  │  ├─ Random Forest
   │  │  ├─ Support Vector Machines (SVM)
   │  │  ├─ Naive Bayes
   │  │  └─ K-Nearest Neighbors (KNN)
   │  └─ Regression
   │     ├─ Linear Regression
   │     ├─ Polynomial Regression
   │     └─ Ridge/Lasso Regression
   ├─ Unsupervised Learning
   │  ├─ Clustering
   │  │  ├─ K-Means
   │  │  ├─ DBSCAN
   │  │  └─ Hierarchical Clustering
   │  └─ Dimensionality Reduction
   │     ├─ PCA (Principal Component Analysis)
   │     ├─ t-SNE
   │     └─ UMAP
   ├─ Reinforcement Learning
   │  ├─ Q-Learning
   │  ├─ Policy Gradient
   │  ├─ Actor-Critic
   │  └─ Deep Q-Networks (DQN)
   └─ Deep Learning
      ├─ Feedforward Neural Networks (FNN)
      │  ├─ Perceptron
      │  └─ Multi-Layer Perceptron (MLP)
      ├─ Convolutional Neural Networks (CNN)
      │  ├─ LeNet
      │  ├─ AlexNet
      │  ├─ VGGNet
      │  ├─ ResNet
      │  └─ EfficientNet
      ├─ Recurrent Neural Networks (RNN)
      │  ├─ Vanilla RNN
      │  ├─ LSTM (Long Short-Term Memory)
      │  │  ├─ Bidirectional LSTM
      │  │  └─ Stacked LSTM
      │  └─ GRU (Gated Recurrent Unit)
      ├─ Transformer Architecture
      │  ├─ Encoder-Only (BERT, RoBERTa)
      │  │  └─ Embedding Models ← Your RAG embeddings!
      │  │     ├─ MiniLM (384 dims)
      │  │     ├─ multi-qa-mpnet (768 dims)
      │  │     └─ OpenAI ada-002 (1536 dims)
      │  ├─ Decoder-Only (GPT, Claude, LLaMA)
      │  │  └─ Large Language Models (LLMs)
      │  └─ Encoder-Decoder (T5, BART)
      ├─ Generative Models
      │  ├─ Variational Autoencoders (VAE)
      │  ├─ Generative Adversarial Networks (GAN)
      │  │  ├─ DCGAN
      │  │  ├─ StyleGAN
      │  │  └─ CycleGAN
      │  └─ Diffusion Models
      │     ├─ Stable Diffusion
      │     ├─ DALL-E 2/3 (the original DALL-E was autoregressive)
      │     └─ Midjourney
      └─ Graph Neural Networks (GNN)
         ├─ Graph Convolutional Networks (GCN)
         └─ Graph Attention Networks (GAT)
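
The embedding branch (MiniLM, multi-qa-mpnet, ada-002) is the part of this tree a RAG pipeline touches directly: each model maps text to a fixed-width vector, and retrieval is scored by cosine similarity between those vectors. A minimal sketch of that scoring step; the full checkpoint names in EMBEDDING_DIMS are the usual ones assumed to match the tree's shorthand, and fake_embed is a deterministic toy stand-in, not a real encoder:

```python
import hashlib

import numpy as np

# Dimensions from the tree above; the keys are the usual full checkpoint
# names for the tree's shorthand (an assumption, not stated in the tree).
EMBEDDING_DIMS = {
    "all-MiniLM-L6-v2": 384,
    "multi-qa-mpnet-base-dot-v1": 768,
    "text-embedding-ada-002": 1536,
}

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """The standard RAG retrieval score: cosine of the angle between vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def fake_embed(text: str, dim: int = 384) -> np.ndarray:
    """Deterministic pseudo-random stand-in for a real encoder (toy only)."""
    seed = int(hashlib.md5(text.encode("utf-8")).hexdigest()[:8], 16)
    return np.random.default_rng(seed).standard_normal(dim)

query = fake_embed("what is retrieval-augmented generation?")
same = fake_embed("what is retrieval-augmented generation?")
other = fake_embed("how do I bake sourdough bread?")
print(cosine_similarity(query, same))   # identical text -> similarity ≈ 1.0
print(cosine_similarity(query, other))  # unrelated random vectors -> near 0.0
```

In practice fake_embed would be replaced by a real encoder, e.g. SentenceTransformer("all-MiniLM-L6-v2").encode(text), which returns the 384-dimensional vector the tree notes.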