One of the major issues in dealing with Artificial Neural Networks (ANNs) is the high computational cost of training networks that are both accurate and efficient, especially when ANN models must run on commodity hardware. We take the direction of training “sparsified” ANNs, which significantly reduces computational time. The proposed research therefore focuses on developing new evolutionary methods that are expected to enable a further, significant acceleration and miniaturization of ANNs, combining strategies from both Machine Learning and Network Science.
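
To make the idea of “sparsified” training concrete, the sketch below illustrates one common evolutionary scheme for sparse layers: after each training phase, the weakest connections are pruned and an equal number of new connections are regrown at random, so the layer stays sparse while its topology evolves. This is only an illustrative sketch under assumed choices (NumPy, a toy 784→100 layer, a `prune_fraction` of 0.3), not the specific method proposed here.

```python
import numpy as np

# Illustrative sketch (not the proposed method): a sparse layer whose
# connectivity "evolves" by pruning the weakest weights and regrowing
# random new connections after each training phase.

rng = np.random.default_rng(0)

def init_sparse_mask(n_in, n_out, density=0.1):
    """Random binary mask keeping only `density` of all possible connections."""
    return rng.random((n_in, n_out)) < density

def evolve_connectivity(weights, mask, prune_fraction=0.3):
    """Drop the smallest-magnitude active weights, then regrow the same
    number of connections at random inactive positions (sparsity stays fixed)."""
    active = np.flatnonzero(mask)
    n_prune = int(prune_fraction * active.size)
    if n_prune == 0:
        return weights, mask
    # Prune: remove the active connections with the smallest |weight|.
    magnitudes = np.abs(weights.ravel()[active])
    pruned = active[np.argsort(magnitudes)[:n_prune]]
    mask.ravel()[pruned] = False
    weights.ravel()[pruned] = 0.0
    # Regrow: activate the same number of currently inactive connections.
    inactive = np.flatnonzero(~mask)
    regrown = rng.choice(inactive, size=n_prune, replace=False)
    mask.ravel()[regrown] = True
    weights.ravel()[regrown] = rng.normal(0.0, 0.01, size=n_prune)
    return weights, mask

# Toy usage: a 784 -> 100 layer kept at roughly 10% density throughout training.
mask = init_sparse_mask(784, 100, density=0.1)
weights = rng.normal(0.0, 0.01, size=(784, 100)) * mask
for epoch in range(5):
    # ... train `weights * mask` with any optimizer here ...
    weights, mask = evolve_connectivity(weights, mask, prune_fraction=0.3)
    print(f"epoch {epoch}: active connections = {mask.sum()}")
```

Because only a small fraction of the weight matrix is ever active, both the forward/backward passes and the memory footprint scale with the number of active connections rather than with the dense layer size, which is the source of the computational savings referred to above.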