Convolutional layers apply filters over the input data, which is often represented as a two-dimensional matrix of values, to generate smaller representations of the data that are passed to later layers in the network.
Like the traditional weights mentioned earlier, these filters are updated throughout the training process, i.e., learned by the network, to support a given task.
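A minimal sketch of such a layer, assuming PyTorch (not named in the text): the filters are ordinary learnable parameters that slide over the input and produce a smaller output.

```python
import torch
import torch.nn as nn

# A single convolutional layer; its 3x3 filters are learnable weights,
# updated during training just like fully connected weights.
conv = nn.Conv2d(in_channels=1, out_channels=8, kernel_size=3, stride=2)

x = torch.randn(1, 1, 28, 28)   # one 28x28, single-channel input
out = conv(x)

print(conv.weight.shape)        # torch.Size([8, 1, 3, 3])  -- the learnable filters
print(out.shape)                # torch.Size([1, 8, 13, 13]) -- a smaller representation
```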
Once a neural network model has been trained, one can compute its activations for a given test dataset and visualize them (e.g., in the Embedding Projector), as sketched below.
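A rough sketch of that workflow, assuming PyTorch and stand-in data (the model, layer split, and file name are placeholders): capture an intermediate layer's activations on the test set and write them to a TSV file that the Embedding Projector (projector.tensorflow.org) can load.

```python
import numpy as np
import torch
import torch.nn as nn

# Stand-in for a trained model; in practice this would be loaded from disk.
model = nn.Sequential(
    nn.Linear(20, 64), nn.ReLU(),   # hidden layer whose activations we visualize
    nn.Linear(64, 3),
)
model.eval()

x_test = torch.randn(100, 20)        # stand-in test dataset
with torch.no_grad():
    activations = model[:2](x_test)  # activations of the hidden layer, shape (100, 64)

# One row per instance, one column per activation value.
np.savetxt("activations.tsv", activations.numpy(), delimiter="\t")
```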
Deep learning visualization has been used in application domains and models such as drug discovery, protein folding, cancer classification, and autonomous driving.
Research directions and open problems in deep learning visualization include further interpretability, system and visual scalability, design studies for evaluation (utility and usability), the human role in interpretability and trust, and social good and bias detection.
One way to visualize deep learning is with node-link diagrams of network architectures, which show where data flows and the magnitude of the edge weights.
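For illustration, a toy node-link rendering of a 2-3-1 network in which edge thickness encodes weight magnitude (networkx/matplotlib and the weight values are assumptions, not from the text):

```python
import networkx as nx
import matplotlib.pyplot as plt

# Made-up weights for a tiny 2-3-1 network.
weights = {("x1", "h1"): 0.8,  ("x1", "h2"): -0.2, ("x1", "h3"): 0.5,
           ("x2", "h1"): -1.1, ("x2", "h2"): 0.3,  ("x2", "h3"): 0.9,
           ("h1", "y"): 0.6,   ("h2", "y"): -0.4,  ("h3", "y"): 1.2}

G = nx.DiGraph()
G.add_edges_from(weights)                            # dict keys are the edges
pos = {"x1": (0, 0), "x2": (0, 1),                   # fixed layer-by-layer layout
       "h1": (1, -0.5), "h2": (1, 0.5), "h3": (1, 1.5),
       "y": (2, 0.5)}
widths = [3 * abs(weights[e]) for e in G.edges()]    # |weight| -> line width
nx.draw(G, pos, with_labels=True, node_color="lightgray", width=widths)
plt.show()
```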
Common techniques for dimensionality reduction include principal component analysis (PCA), t-distributed stochastic neighbor embedding (t-SNE), and Uniform Manifold Approximation and Projection (UMAP).
Both embeddings are mathematically represented as large tensors, or sometimes as 2D matrices, in which each row may correspond to an instance and each column to a feature.
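A small sketch of that setup, assuming scikit-learn and random stand-in data: an instances-by-features embedding matrix projected to two dimensions for plotting (UMAP works analogously via the umap-learn package).

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.manifold import TSNE

embeddings = np.random.rand(500, 128)   # 500 instances x 128 features (stand-in data)

pca_2d = PCA(n_components=2).fit_transform(embeddings)
tsne_2d = TSNE(n_components=2, perplexity=30).fit_transform(embeddings)

print(pca_2d.shape, tsne_2d.shape)      # (500, 2) (500, 2) -- ready to scatter-plot
```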
Model users employ well-known neural network architectures to develop domain-specific applications and train smaller-scale models, often downloading pre-trained model weights online to use as a starting point.
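A brief sketch of that workflow, assuming torchvision and ResNet-18 as the well-known architecture (neither is named in the text):

```python
import torch
from torchvision import models

# Download pre-trained weights to use as a starting point.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)

# Swap the final layer for a domain-specific task (here, 5 hypothetical classes),
# then fine-tune on the smaller task-specific dataset.
model.fc = torch.nn.Linear(model.fc.in_features, 5)
```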