XAI Lecture 4 (auto-generated)

Cards (58)

  • Neural network architecture
    A) input
    B) hidden
    C) output
    D) input nodes
    E) hidden nodes
    F) output nodes
    G) weights
    H) bias nodes
    I) error + backpropagation
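The components in the card above (input, hidden, and output nodes, weights, biases) can be sketched as a minimal forward pass. This is a hypothetical two-layer network with arbitrary illustration values, not learned weights; the error/backpropagation step is omitted for brevity:

```python
import math

def forward(x, w_hidden, b_hidden, w_out, b_out):
    """Forward pass: input -> hidden (sigmoid) -> output."""
    hidden = [
        1 / (1 + math.exp(-(sum(xi * wij for xi, wij in zip(x, col)) + b)))
        for col, b in zip(w_hidden, b_hidden)
    ]
    out = [
        sum(hi * wij for hi, wij in zip(hidden, col)) + b
        for col, b in zip(w_out, b_out)
    ]
    return hidden, out

# Two input nodes, two hidden nodes, one output node (arbitrary values).
w_hidden = [[0.5, -0.2], [0.3, 0.8]]   # one weight column per hidden node
b_hidden = [0.1, -0.1]                 # bias for each hidden node
w_out = [[1.0, -1.0]]                  # one weight column per output node
b_out = [0.0]

hidden, out = forward([1.0, 0.5], w_hidden, b_hidden, w_out, b_out)
print(hidden, out)
```

In training, the network's error on each example would be propagated backward to adjust these weights and biases; here they stay fixed.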
  • The 5 W's and How questions:
    • WHY would one want to use visualization in deep learning?
    • WHO would use and benefit from visualizing deep learning?
    • WHAT data, features, and relationships in deep learning can be visualized?
    • HOW can we visualize deep learning data, features, and relationships?
    • WHEN in the deep learning process is visualization used?
    • WHERE has deep learning visualization been used?
  • Color coding cells is a common visualization technique.
  • Some visualization techniques include coloring of textual data, where words in a sentence are colored according to their activation magnitudes.
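A minimal sketch of activation-based word coloring, emitting HTML spans whose background opacity encodes magnitude (the words, activation values, and red color scheme are assumed illustration choices, not a fixed standard):

```python
def color_words(words, activations):
    """Emit HTML spans whose background opacity encodes activation magnitude."""
    peak = max(abs(a) for a in activations) or 1.0
    spans = []
    for word, act in zip(words, activations):
        alpha = abs(act) / peak  # normalize magnitude to [0, 1]
        spans.append(f'<span style="background: rgba(255,0,0,{alpha:.2f})">{word}</span>')
    return " ".join(spans)

html = color_words(["the", "movie", "was", "great"], [0.1, 0.4, 0.2, 0.9])
print(html)  # the most strongly activated word gets the most saturated background
```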
  • Confusion matrices are a technique for summarizing the performance of a classification algorithm, and can be visualized as a heat map.
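A confusion matrix can be computed in a few lines; each cell counts how often a true class was predicted as each class, and a heat map simply colors each cell by its count. The labels and predictions below are made-up illustration data:

```python
from collections import Counter

def confusion_matrix(y_true, y_pred, labels):
    """Rows = true class, columns = predicted class."""
    counts = Counter(zip(y_true, y_pred))
    return [[counts[(t, p)] for p in labels] for t in labels]

y_true = ["cat", "cat", "dog", "dog", "dog"]
y_pred = ["cat", "dog", "dog", "dog", "cat"]
m = confusion_matrix(y_true, y_pred, ["cat", "dog"])
print(m)  # [[1, 1], [1, 2]] -- diagonal cells are correct predictions
```

Rendering this as a heat map (e.g., darker cells for larger counts) makes misclassification patterns visible at a glance.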
  • Because representations inside a network are high-dimensional, visualization techniques such as dimensionality reduction can be used to make them viewable.
  • Convolutional layers apply filters over the input data, often represented as a two-dimensional matrix of values, to generate smaller representations of the data to pass to later layers in the network.
  • These filters, like the previously mentioned traditional weights, are then updated throughout the training process, i.e., learned by the network, to support a given task.
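The two cards above can be sketched as a small 2D convolution. This is a minimal pure-Python version (strictly speaking a cross-correlation, as deep learning frameworks implement it) with an assumed hand-written vertical-edge filter; in a real network the filter values would be learned:

```python
def convolve2d(image, kernel):
    """Valid 2D convolution (no padding): slide the filter over the input."""
    kh, kw = len(kernel), len(kernel[0])
    out_h = len(image) - kh + 1
    out_w = len(image[0]) - kw + 1
    return [
        [
            sum(image[i + di][j + dj] * kernel[di][dj]
                for di in range(kh) for dj in range(kw))
            for j in range(out_w)
        ]
        for i in range(out_h)
    ]

# A 3x3 vertical-edge filter over a 4x4 input yields a smaller 2x2 map.
image = [[1, 1, 0, 0],
         [1, 1, 0, 0],
         [1, 1, 0, 0],
         [1, 1, 0, 0]]
kernel = [[1, 0, -1],
          [1, 0, -1],
          [1, 0, -1]]
fmap = convolve2d(image, kernel)
print(fmap)  # [[3, 3], [3, 3]] -- strong response along the 1->0 edge
```

Note how the 4x4 input shrinks to a 2x2 feature map, the "smaller representation" passed to later layers.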
  • Dimensionality reduction techniques commonly project feature vectors into 2D or 3D space.
  • Visualizing the learned filters could be useful as an alternate explanation for what a model has learned.
  • Each neuron in a layer becomes a 'dimension', so the activations of a layer form a feature vector.
  • Models Responding to User-provided Input Data include GAN Dissection, Distill Handwriting Article, and Google Quick Draw.
  • How Hyperparameters Affect Results can be explored on TensorFlow Playground.
  • Visualization systems should dynamically update the charts with metrics recomputed after every epoch.
  • Once a neural network model has been trained, one can compute the activations for a given test dataset and visualize them (e.g., in the Embedding Projector).
  • There is a vibrant research community in Deep Learning Visualization.
  • During Training, it is important to monitor a model as it learns and to closely track its performance.
  • Algorithms for Attribution & Feature Visualization include Heatmaps for Attribution, Attention, & Saliency and Feature Visualization.
  • Deep Learning Visualization has been used in application domains and models such as Drug discovery, Protein folding, Cancer classification, Autonomous driving, etc.
  • Research Directions & Open Problems in Deep Learning Visualization include Further Interpretability, System & Visual Scalability, Design Studies for Evaluation: Utility & Usability, The Human Role in Interpretability & Trust, Social Good & Bias Detection.
  • After Training, most of the previously mentioned algorithmic techniques are performed.
  • TensorFlow Playground is a web-based visual analytics tool for exploring simple neural networks.
  • ActiVis by Facebook is a tool for Model Users.
  • Images can be represented as feature vectors inside of a neural network.
  • Text can be represented as vectors in word embeddings for natural language processing.
  • Line Charts are useful for diagnosing the long training process of deep learning models.
  • Line Charts are used to track the progression of models by monitoring different metrics computed after each epoch.
  • Line charts are the most common visualization.
  • Model Metrics are summarized every epoch and represented as a time series over the course of a model’s training.
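The per-epoch time series behind such line charts amounts to appending one value per metric per epoch. A minimal sketch with stand-in values (a real loop would compute loss and accuracy on training/validation data each epoch):

```python
# Record one value per metric per epoch, giving the time series
# a line chart would plot (x-axis: epoch index, y-axis: metric value).
history = {"loss": [], "accuracy": []}

def log_epoch(history, loss, accuracy):
    history["loss"].append(loss)
    history["accuracy"].append(accuracy)

# Stand-in values for three epochs of a hypothetical training run.
for loss, acc in [(1.2, 0.55), (0.8, 0.70), (0.5, 0.82)]:
    log_epoch(history, loss, acc)

print(history["loss"], history["accuracy"])
```

Tools like TensorBoard maintain exactly this kind of per-epoch log and redraw the charts as new values arrive.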
  • Visualizing Deep Learning can be done using Node-link Diagrams for Network Architectures, which show where data flows and the magnitude of edge weights.
  • Model Metrics include loss, accuracy, and other measures of error.
  • Common techniques for Dimensionality Reduction include Principal component analysis (PCA), t-distributed stochastic neighbor embedding (t-SNE), Uniform Manifold Approximation and Projection (UMAP).
  • In the 2D case, all data instances are plotted as points in a scatter plot, and in the 3D case, each data instance is plotted as a point in 3D space.
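The core operation in the 2D case is mapping each high-dimensional feature vector to one (x, y) scatter-plot point. A full PCA/t-SNE/UMAP is library territory; as a minimal stand-in, a fixed linear projection shows the idea (the vectors and projection directions below are arbitrary illustration values, where PCA would instead *learn* the directions of greatest variance):

```python
def project_2d(vectors, basis):
    """Map each high-dimensional feature vector to an (x, y) point
    via dot products with two projection directions."""
    def dot(a, b):
        return sum(x * y for x, y in zip(a, b))
    return [(dot(v, basis[0]), dot(v, basis[1])) for v in vectors]

# Four 4-D activation vectors (stand-in data) and two fixed directions.
vectors = [[1, 0, 0, 1], [0, 1, 1, 0], [1, 1, 0, 0], [0, 0, 1, 1]]
basis = [[1, 0, -1, 0], [0, 1, 0, -1]]
points = project_2d(vectors, basis)
print(points)  # one (x, y) scatter-plot point per data instance
```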
  • Such embeddings are mathematically represented as large tensors, or sometimes as 2D matrices, where each row may correspond to an instance and each column to a feature.
  • Model Metrics provide a quick and easy way to compare performance of multiple models.
  • Weight magnitude and sign are often encoded using color or link thickness.
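A sketch of that encoding for a node-link diagram: sign maps to color and magnitude maps to line thickness. The blue/red convention and the 1-5 px thickness range are assumed illustration choices, not a fixed standard:

```python
def edge_style(weight, max_weight):
    """Encode sign as color and magnitude as line thickness."""
    color = "blue" if weight >= 0 else "red"
    thickness = 1 + 4 * abs(weight) / max_weight  # scale into a 1-5 px range
    return color, round(thickness, 2)

weights = [0.9, -0.3, 0.1]                 # stand-in edge weights
peak = max(abs(w) for w in weights)
styles = [edge_style(w, peak) for w in weights]
print(styles)  # strongest positive edge: thick and blue; negative: red
```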
  • Aggregated information in deep learning falls into two categories: groups of instances and model metrics.
  • Model Users employ well-known neural network architectures to develop domain-specific applications, train smaller-scale models, and download pre-trained model weights online to use as a starting point.
  • TensorBoard is a tool for Model Developers & Builders.
  • Convolutional layers are a type of layer in deep learning.