Abstract
Logic-based problems such as planning, theorem proving, or puzzles typically involve combinatorial search and structured knowledge representation. Artificial neural networks are very successful statistical learners; however, for many years they have been criticized for their weakness in representing and processing complex structured knowledge, which is crucial for combinatorial search and symbol manipulation. Two neural architectures are presented that can encode structured relational knowledge in neural activation and store bounded First-Order Logic constraints in connection weights. Both architectures learn to search for a solution that satisfies the constraints. Learning is done by unsupervised practicing on problem instances from the same domain, in a way that improves the network's solving speed. No teacher exists to provide answers for the problem instances of the training and test sets. However, the domain constraints are provided as prior knowledge to a loss function that measures the degree of constraint violation. Iterations of activation calculation and learning are executed until a solution that maximally satisfies the constraints emerges on the output units. As a test case, block-world planning problems are used to train networks that learn to plan in that domain, but the proposed techniques could be used more generally, e.g., for integrating prior symbolic knowledge with statistical learning.
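
The core idea of the approach, a differentiable loss that scores the degree of constraint violation on the output activations and is minimized until a satisfying assignment emerges, can be sketched as follows. This is a hypothetical illustration, not the paper's code: it uses PyTorch, substitutes a toy graph-coloring constraint for the paper's bounded First-Order Logic constraints in a planning domain, and optimizes the output logits directly rather than the weights of a full architecture.

import torch

# Toy constraint problem: 3-color a 4-cycle so that no edge connects
# two nodes of the same color. This stands in for the compiled logic
# constraints of the paper's block-world domain (an assumption).
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
n_nodes, n_colors = 4, 3
logits = torch.randn(n_nodes, n_colors, requires_grad=True)
opt = torch.optim.Adam([logits], lr=0.2)

def violation(p):
    # Expected number of violated "neighbors share a color" constraints,
    # plus an entropy term pushing each node toward a hard color choice.
    clash = sum((p[i] * p[j]).sum() for i, j in edges)
    entropy = -(p * p.clamp_min(1e-9).log()).sum()
    return clash + 0.1 * entropy

# Iterate activation calculation and gradient updates until the
# violation measure is (near) zero, i.e. the constraints are satisfied.
for step in range(1000):
    p = torch.softmax(logits, dim=1)
    loss = violation(p)
    if loss.item() < 1e-3:
        break
    opt.zero_grad()
    loss.backward()
    opt.step()

colors = torch.softmax(logits, dim=1).argmax(dim=1)
print(colors)  # a proper coloring once the violation loss reaches ~0

No labeled answers are used anywhere: the loss is computed purely from the declared constraints, mirroring the paper's teacher-free training setup.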
URL
https://arxiv.org/abs/1712.03049