Over the last several years, our research team has been working on a spiking neural network (SNN) architecture that could be used in a wide range of supervised learning classification tasks. It must operate under the condition that all participating signals (the description of the classified object, the correct class label, and the SNN decision) have a spiking nature. As a result, the CoLaNET (columnar layered network) SNN architecture was invented. The distinctive feature of this architecture is the combination of prototypical network structures corresponding to different classes and to significantly distinct instances of one class (=columns) with functionally differing populations of neurons inside the columns (=layers). Another distinctive feature is a novel combination of anti-Hebbian and dopamine-modulated plasticity. While CoLaNET is relatively simple, it includes several hyperparameters, and their choice for a particular classification task is not trivial. Besides that, specific features of the data being classified (e.g., classifying separate pictures, as in the MNIST dataset, vs. classifying objects in a continuous video stream) require certain modifications of the CoLaNET structure. To solve these problems, a deep mathematical exploration of CoLaNET should be carried out. However, SNNs, being stochastic discrete systems, are usually very hard to analyze exactly. To make this analysis easier, I developed a continuous numeric (non-spiking) machine learning algorithm which approximates CoLaNET behavior with satisfactory accuracy. This algorithm is described in the present paper; at present, it is being studied with exact analytic methods. We hope that the results of this study can be applied to the direct calculation of CoLaNET hyperparameters and to the optimization of its structure.
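To make the column/layer organization described above concrete, here is a minimal Python sketch of that structure only. The names (Column, CoLaNetSketch), the random weights, and the simple winner-take-all readout are illustrative assumptions and not the actual CoLaNET implementation, which is spiking and employs the plasticity rules mentioned above.

```python
import numpy as np


class Column:
    """One column: a prototype for one distinct instance of a class.

    It holds several layers (functionally differing neuron populations),
    sketched here simply as one weight vector per layer.
    """

    def __init__(self, n_inputs: int, n_layers: int, rng: np.random.Generator):
        # Assumed random initialization; the real network learns these
        # via anti-Hebbian and dopamine-modulated plasticity.
        self.layers = rng.random((n_layers, n_inputs))

    def activity(self, x: np.ndarray) -> float:
        # Total response of the column's layers to an input rate vector x.
        return float(np.sum(self.layers @ x))


class CoLaNetSketch:
    """Columns grouped by class; each class may own several columns,
    one per significantly distinct instance of that class."""

    def __init__(self, n_classes: int, columns_per_class: int,
                 n_inputs: int, n_layers: int, seed: int = 0):
        rng = np.random.default_rng(seed)
        self.columns = {
            c: [Column(n_inputs, n_layers, rng) for _ in range(columns_per_class)]
            for c in range(n_classes)
        }

    def classify(self, x: np.ndarray) -> int:
        # Assumed readout: the predicted class is the one whose most
        # responsive column wins.
        return max(self.columns,
                   key=lambda c: max(col.activity(x) for col in self.columns[c]))


# Usage: classify a random 10-dimensional input into one of 3 classes.
net = CoLaNetSketch(n_classes=3, columns_per_class=2, n_inputs=10, n_layers=4)
print(net.classify(np.random.default_rng(1).random(10)))
```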