Abstract: This paper conceptualizes the Deep Weight Spaces (DWS) of neural architectures as hierarchical, fractal-like, coarse geometric structures observable at discrete integer scales through recursive dilation. We introduce a coarse group action, termed the fractal transformation $T_{r_k}$, acting under the symmetry group $G = (\mathbb{Z}, +)$, to analyze neural parameter matrices and tensors by segmenting the underlying discrete grid $\Omega$ into $N(r_k)$ fractals across varying observation scales $r_k$. This perspective adopts the box-counting technique, commonly used to assess the hierarchical and scale-dependent geometry of physical structures and extensively formalized within fractal geometry. We assess the structural complexity of neural layers by estimating their Hausdorff-Besicovitch dimension and evaluating their degree of self-similarity. The fractal transformation features key algebraic properties such as linearity, identity, and asymptotic invertibility, the latter being a signature of coarse structures. We show that the coarse group action exhibits a set of symmetries: Discrete Scale Invariance (DSI) under recursive dilation, strong invariance and weak equivariance to permutations, and respect for the scaling equivariance of activation functions defined by the intertwiner group relations. Our framework targets large-scale structural properties of DWS, deliberately overlooking minor inconsistencies to focus on the significant geometric characteristics of neural networks. Experiments on CIFAR-10 using ResNet-18, VGG-16, and a custom CNN validate our approach, demonstrating effective fractal segmentation and structural analysis.
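As a concrete illustration of the box-counting procedure named above, the following is a minimal Python sketch that estimates the Hausdorff-Besicovitch (box-counting) dimension of a single weight matrix by recursive dyadic dilation of the grid $\Omega$. The magnitude threshold \texttt{tau} and the dyadic cropping are illustrative assumptions, not details taken from the paper.

\begin{verbatim}
import numpy as np

def box_count_dimension(W, tau=1e-2):
    """Estimate the box-counting dimension of a weight matrix W by
    recursive dyadic dilation of the grid Omega. tau is an assumed
    magnitude threshold marking a cell of Omega as 'occupied'."""
    A = np.abs(W) > tau                      # binary occupancy grid Omega
    n = 2 ** int(np.floor(np.log2(min(A.shape))))
    A = A[:n, :n]                            # crop to a dyadic square grid
    assert A.any(), "no cells above tau; dimension is undefined"
    sizes, counts = [], []
    r = 1
    while r < n:
        # partition Omega into (n/r)^2 boxes of side r_k = 2^k and count
        # the boxes containing at least one occupied cell: N(r_k)
        S = A.reshape(n // r, r, n // r, r).any(axis=(1, 3))
        sizes.append(r)
        counts.append(S.sum())
        r *= 2
    # slope of log N(r_k) against log(1/r_k) estimates the dimension
    d, _ = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)
    return d
\end{verbatim}

For a dense random matrix (e.g., a $256 \times 256$ Gaussian initialization) the estimate should be close to 2, since nearly every cell of $\Omega$ is occupied, while sparse or heavily structured layers yield smaller values.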
Abstract: This paper introduces a scale-invariant methodology employing \textit{Fractal Geometry} to analyze and explain the nonlinear dynamics of complex connectionist systems. By leveraging architectural self-similarity in Deep Neural Networks (DNNs), we quantify fractal dimensions and \textit{roughness} to better understand their dynamics and enhance the quality of \textit{intrinsic} explanations. Our approach integrates principles from Chaos Theory to improve visualizations of fractal evolution and utilizes a graph-based neural network to reconstruct network topology. This strategy aims to advance the \textit{intrinsic} explainability of connectionist Artificial Intelligence (AI) systems.
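The abstract does not specify which roughness estimator is used, so the following is a generic sketch, assuming the Higuchi fractal dimension applied to a one-dimensional trajectory (e.g., a training-loss curve or a neuron's activation series); it stands in for, and may differ from, the paper's actual measure.

\begin{verbatim}
import numpy as np

def higuchi_fd(x, kmax=8):
    """Roughness proxy: Higuchi fractal dimension of a 1-D trajectory x.
    This is a generic estimator, not the paper's exact procedure."""
    x = np.asarray(x, dtype=float)
    N = len(x)
    lengths = []
    for k in range(1, kmax + 1):
        Lk = []
        for m in range(k):
            idx = np.arange(m, N, k)         # subsample with stride k
            if len(idx) < 2:
                continue
            # normalized curve length of the subsampled series
            L = (np.abs(np.diff(x[idx])).sum()
                 * (N - 1) / (len(idx) - 1) / k / k)
            Lk.append(L)
        lengths.append(np.mean(Lk))
    # slope of log L(k) vs log(1/k) estimates the fractal dimension
    d, _ = np.polyfit(np.log(1.0 / np.arange(1, kmax + 1)),
                      np.log(lengths), 1)
    return d
\end{verbatim}

Values near 1 indicate a smooth trajectory, while values approaching 2 indicate rough, nearly space-filling dynamics.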
Abstract: Non-Performing Assets (NPAs) have drawn serious attention from banks over the past few years. NPAs cause huge losses to banks, so a critical step is deciding which loans are likely to become NPAs and, accordingly, which loans to grant and which to reject. In this paper, we address exactly this problem by proposing an algorithm designed to handle financial data meticulously and predict with high accuracy whether a particular loan will become an NPA. Instead of the conventional, less accurate classifiers used to decide which loans may turn into NPAs, we build our own classifier model with entropy as its basis. We create an entropy-based classifier using Shannon entropy. The classifier model categorizes data points into two categories: accepted or rejected. We make use of local entropy and global entropy to determine the output. The entropy classifier model is then compared with existing classifiers used to predict NPAs, giving us an idea of its relative performance.
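The abstract does not give the exact accept/reject rule, so the following is a hypothetical Python sketch of a Shannon-entropy classifier in the spirit described: a loan is assigned to whichever class (historically accepted or rejected loans) its addition perturbs least in per-feature ("local") entropy, normalized by the ("global") entropy of the pooled data. The histogram binning, the perturbation rule, and the normalization are all assumptions, not the paper's method.

\begin{verbatim}
import numpy as np

def shannon_entropy(values, bins=10):
    """Shannon entropy H = -sum p*log2(p) of one feature column,
    discretized into histogram bins (binning is an assumption)."""
    counts, _ = np.histogram(values, bins=bins)
    p = counts / counts.sum()
    p = p[p > 0]
    return -(p * np.log2(p)).sum()

def mean_entropy(X, bins=10):
    """Average per-feature entropy of a data matrix (rows = loans)."""
    return np.mean([shannon_entropy(X[:, j], bins)
                    for j in range(X.shape[1])])

def classify_loan(loan, accepted, rejected, bins=10):
    """Hypothetical rule: assign the loan to the class whose 'local'
    entropy it perturbs least, scaled by the 'global' pooled entropy."""
    global_H = mean_entropy(np.vstack([accepted, rejected]), bins)
    # normalization by global_H ties the local and global quantities
    # together; it is illustrative and does not change the comparison
    d_acc = abs(mean_entropy(np.vstack([accepted, loan]), bins)
                - mean_entropy(accepted, bins)) / global_H
    d_rej = abs(mean_entropy(np.vstack([rejected, loan]), bins)
                - mean_entropy(rejected, bins)) / global_H
    return "accepted" if d_acc <= d_rej else "rejected"
\end{verbatim}

With \texttt{accepted} and \texttt{rejected} as 2-D arrays of historical loan features (rows are loans) and \texttt{loan} a 1-D feature vector, \texttt{classify\_loan(loan, accepted, rejected)} returns the predicted label.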