We consider the problem of synthesizing multiple-valued logic (MVL) functions with neural networks. A differential evolution algorithm is proposed to train a learnable MVL network, and the optimal window and biasing parameters required for convergence are derived.
Experiments on benchmark problems demonstrate the convergence and robustness of the network. Preliminary results indicate that differential evolution is well suited to training MVL networks for synthesizing MVL functions.
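To make the training procedure concrete, the sketch below shows a generic DE/rand/1/bin optimization loop of the kind used to evolve a network's weight vector. The population size, mutation factor F, crossover rate CR, bounds, and the fitness callback (e.g., the error between the MVL network's outputs and the target truth table) are illustrative assumptions, not the paper's exact settings.

import numpy as np

def differential_evolution(fitness, dim, pop_size=30, F=0.5, CR=0.9,
                           bounds=(-1.0, 1.0), generations=200, seed=0):
    """Minimal DE/rand/1/bin loop minimizing `fitness` over `dim` parameters."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    pop = rng.uniform(lo, hi, size=(pop_size, dim))   # candidate weight vectors
    cost = np.array([fitness(v) for v in pop])

    for _ in range(generations):
        for i in range(pop_size):
            # pick three distinct individuals, all different from i
            a, b, c = pop[rng.choice([j for j in range(pop_size) if j != i],
                                     size=3, replace=False)]
            mutant = np.clip(a + F * (b - c), lo, hi)  # differential mutation
            # binomial crossover, forcing at least one mutant component
            mask = rng.random(dim) < CR
            mask[rng.integers(dim)] = True
            trial = np.where(mask, mutant, pop[i])
            # greedy selection: keep the trial only if it is no worse
            trial_cost = fitness(trial)
            if trial_cost <= cost[i]:
                pop[i], cost[i] = trial, trial_cost

    best = int(np.argmin(cost))
    return pop[best], cost[best]

In this setting the fitness function would decode the weight vector into the learnable MVL network's window and biasing parameters and score it against the target MVL function; that decoding step is specific to the network architecture and is omitted here.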