Abstract: Graph-based representations of images have recently acquired an important role in machine learning approaches to classification. The underlying idea is that the relevant information of an image is implicitly encoded in the relationships between the more basic entities that together compose the whole image. The classification problem is then reformulated as an optimization problem, usually solved by a gradient-based search procedure. Vario-eta through structure is an approximate second-order stochastic optimization technique that achieves a good trade-off between speed of convergence and the computational effort required. However, the robustness of this technique for large-scale problems has not yet been assessed. In this paper we first provide a theoretical justification of the assumptions made by this optimization procedure. Second, a complexity analysis of the algorithm is performed to prove its suitability for large-scale learning problems.
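For context, the vario-eta rule on which this technique is based is commonly formulated as a per-weight update whose learning rate is normalized by the standard deviation of that weight's gradient; the LaTeX sketch below states this common formulation (the symbols g_i, eta, and epsilon are our notational assumptions, not taken from the abstract):

\Delta w_i = -\frac{\eta}{\sqrt{\operatorname{Var}[g_i]} + \varepsilon}\, g_i,
\qquad g_i = \frac{\partial E}{\partial w_i}

where \operatorname{Var}[g_i] is estimated over recent training patterns and \varepsilon > 0 prevents division by zero. Dividing each gradient component by its own standard deviation acts as a cheap, diagonal surrogate for second-order scaling, which is why the method is described as approximate second order.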
Abstract: Recursive Neural Networks are non-linear adaptive models that are able to learn deep structured information. However, these models have not yet been broadly accepted, mainly because of their inherent complexity: not only are they extremely complex information-processing models, but their learning phase is also computationally expensive. The most popular training method for these models is back-propagation through structure. This algorithm has proved not to be the most appropriate for structured processing because of convergence problems, while more sophisticated training methods improve the speed of convergence at the expense of a significant increase in computational cost. In this paper, we first analyze the underlying principles behind these models in order to understand their computational power. Second, we propose an approximate second-order stochastic learning algorithm. The proposed algorithm dynamically adapts the learning rate throughout the training phase of the network without incurring excessive computational effort. The algorithm operates in both on-line and batch modes, and the resulting learning scheme is robust against the vanishing-gradient problem. The advantages of the proposed algorithm are demonstrated with a real-world application example.
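To make the dynamic learning-rate adaptation concrete, the following minimal Python sketch implements a vario-eta-style per-weight update driven by running estimates of the gradient's first and second moments; the function name, the decay constant, and the update interface are illustrative assumptions, not the authors' implementation.

import numpy as np

def vario_eta_step(w, grad, state, eta=0.01, decay=0.9, eps=1e-8):
    """One vario-eta-style update (illustrative sketch): scale each weight's
    learning rate by the inverse standard deviation of its recent gradients."""
    # Running first and second moments of the gradient, one entry per weight.
    state["mean"] = decay * state["mean"] + (1.0 - decay) * grad
    state["sq"] = decay * state["sq"] + (1.0 - decay) * grad**2
    var = state["sq"] - state["mean"]**2          # running variance estimate
    var = np.maximum(var, 0.0)                    # guard against estimation noise
    # Per-weight learning rate: large where gradients are stable (low variance),
    # small where they fluctuate strongly -- an approximate second-order effect.
    return w - eta / (np.sqrt(var) + eps) * grad

# Usage: initialize the moment estimates once, then call per pattern
# (on-line mode) or per epoch on the averaged gradient (batch mode).
w = np.zeros(4)
state = {"mean": np.zeros_like(w), "sq": np.zeros_like(w)}
grad = np.array([0.5, -0.2, 0.1, 0.0])
w = vario_eta_step(w, grad, state)

Because the variance-normalized step stays well scaled even when raw gradients become very small, a scheme of this kind plausibly mitigates the vanishing-gradient problem mentioned above, although the paper's exact algorithm may differ from this sketch.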