Deep neural networks (DNNs) provide increasingly accurate outputs as the volume and variety of their training data grow. While investing in large-scale, high-quality labeled datasets is one strategy for improving models, another is to apply “rules” – reasoning heuristics, equations, associative logic, or constraints. Consider the classical physics problem of predicting the future state of a double pendulum system using a model. Although the model can learn to predict the total energy of the system at any given time solely from empirical data, it will often overestimate the energy unless given an equation that incorporates known physical constraints, such as the conservation of energy. The model alone cannot represent such well-established physical principles. How can such rules be taught to DNNs so that they absorb this knowledge directly, rather than having to infer it from the data alone?
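To make the double pendulum example concrete, here is a minimal sketch (our own toy illustration, not code from the paper) of what the conservation-of-energy rule looks like as a computable check. Masses, lengths, and gravity are assumed unit/standard values; a purely data-driven model is free to predict states whose total energy drifts, whereas this rule pins it down.

```python
import numpy as np

def total_energy(theta1, theta2, omega1, omega2,
                 m1=1.0, m2=1.0, l1=1.0, l2=1.0, g=9.81):
    """Total mechanical energy of a double pendulum (angles from vertical)."""
    # Heights of the two bobs, measured from the pivot.
    y1 = -l1 * np.cos(theta1)
    y2 = y1 - l2 * np.cos(theta2)
    # Kinetic energy from the bob velocities.
    v1_sq = (l1 * omega1) ** 2
    v2_sq = v1_sq + (l2 * omega2) ** 2 \
        + 2 * l1 * l2 * omega1 * omega2 * np.cos(theta1 - theta2)
    kinetic = 0.5 * m1 * v1_sq + 0.5 * m2 * v2_sq
    potential = m1 * g * y1 + m2 * g * y2
    return kinetic + potential

def energy_violation(state_t, state_0):
    """The conservation rule: energy at time t should match energy at t=0."""
    return abs(total_energy(*state_t) - total_energy(*state_0))
```

A model's predicted state can then be scored by `energy_violation`, turning the physical principle into a differentiable-style penalty a training procedure can consume.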
What is DeepCTRL?
Google Cloud AI researchers have proposed a novel deep learning training approach that incorporates rules in such a way that rule strength can be controlled at inference time. DeepCTRL (Deep Neural Networks with Controllable Rule Representations) adds a rule encoder to the model, coupled with a rule-based objective, enabling a shared representation for decision making. DeepCTRL is agnostic to the data type and the model architecture.
It can be used with any rule defined on the model’s inputs and outputs. The key feature of DeepCTRL is that it does not require retraining to change the strength of the rule – the user can adjust it at inference based on the desired trade-off between accuracy and the rule verification ratio.
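The core mechanism can be sketched roughly as follows (class and parameter names are our own, not from the paper): two encoders produce rule and data representations, a control parameter alpha interpolates between them, and the same alpha mixes the task loss with a rule-violation loss. Sampling alpha randomly at each training step is what allows the user to set any alpha at inference without retraining.

```python
import torch
import torch.nn as nn

class DeepCTRLSketch(nn.Module):
    """Minimal sketch of a DeepCTRL-style model (illustrative, simplified)."""
    def __init__(self, in_dim, hidden=32):
        super().__init__()
        self.data_encoder = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU())
        self.rule_encoder = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU())
        self.decision = nn.Linear(hidden, 1)

    def forward(self, x, alpha):
        # alpha interpolates between the rule and data representations.
        z = alpha * self.rule_encoder(x) + (1 - alpha) * self.data_encoder(x)
        return self.decision(z)

def training_step(model, x, y, task_loss_fn, rule_loss_fn, opt):
    alpha = torch.rand(1).item()  # random rule strength each step
    pred = model(x, alpha)
    # The same alpha weights the rule objective against the task objective.
    loss = alpha * rule_loss_fn(pred, x) + (1 - alpha) * task_loss_fn(pred, y)
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()
```

At deployment, the user simply calls `model(x, alpha)` with a fixed alpha of their choosing: alpha near 1 emphasizes rule consistency, alpha near 0 emphasizes the data-driven objective.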
Advantages of DeepCTRL
The benefits of rule-based learning are many. For starters, rules can provide additional information in settings where labeled data is limited, which improves test accuracy. Second, rules can help DNNs gain trust and reliability. The fact that DNNs are “black boxes” is a major obstacle to their widespread adoption. User trust is often eroded by a lack of insight into the reasoning behind their outputs and by discrepancies between their results and human judgment. Such inconsistencies can be minimized, and user trust improved, by incorporating rules. For example, if a DNN for loan delinquency forecasting can absorb all of a bank’s decision heuristics, the bank’s loan officers can have more confidence in its forecasts.
Third, DNNs are sensitive to small input perturbations that are imperceptible to humans. The impact of these changes can be reduced using rules, since the model’s search space is further confined, reducing underspecification.
Various ways of incorporating “rules” into deep learning, drawing on existing knowledge across a wide range of applications, have been investigated. One method for injecting rules into predictions is posterior regularization: a teacher network is created by projecting the student network’s outputs into a rule-regularized (logical) subspace, and the student network is then updated to balance reproducing the teacher’s output against predicting the true labels. Adversarial learning has also been used to penalize unwanted bias, especially for bias-related rules.
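The teacher-student scheme just described can be sketched as follows (function names and the penalty formulation are our own simplification, not a specific paper’s code): the teacher distribution is the student’s output re-weighted away from rule-violating classes, and the student’s loss balances imitating the teacher against fitting the labels.

```python
import torch
import torch.nn.functional as F

def project_to_rule_subspace(student_logits, rule_penalty, strength=1.0):
    """Teacher: down-weight probability mass on rule-violating outputs.

    rule_penalty[i, c] >= 0 measures how strongly class c violates the rule
    for example i (a hypothetical, task-specific quantity).
    """
    teacher_logits = student_logits - strength * rule_penalty
    return F.softmax(teacher_logits, dim=-1)

def distillation_loss(student_logits, labels, rule_penalty, pi=0.5):
    teacher = project_to_rule_subspace(student_logits, rule_penalty).detach()
    # Imitate the rule-projected teacher distribution...
    imitate = F.kl_div(F.log_softmax(student_logits, dim=-1), teacher,
                       reduction="batchmean")
    # ...while still predicting the true labels.
    fit = F.cross_entropy(student_logits, labels)
    # pi balances teacher imitation against label fitting.
    return pi * imitate + (1 - pi) * fit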
What makes DeepCTRL different?
DeepCTRL offers a training framework with rules that takes advantage of Lagrangian duality. Related work studies constrained learning using a formulation over the space of confusion matrices, with optimization solvers that operate through a series of linear reduction steps; for variational autoencoders, KL divergence is used to inject output diversity constraints or disentangled latent factor representations. DeepCTRL differs in that it injects rules in a way that gives the user control over rule strength at inference without retraining, made possible by learning rule representations jointly with data representations. Beyond simply trading off accuracy against the rule verification ratio, this opens up new possibilities.
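What "controllability at inference" looks like in practice can be illustrated with a small sweep (our own construction, not the paper’s code): given a DeepCTRL-style model whose forward pass accepts a rule-strength alpha, the user scans alpha, reads off the trade-off between a task metric and the rule verification ratio, and picks the preferred operating point with no retraining involved.

```python
import torch

def rule_verification_ratio(preds, satisfies_rule):
    """Fraction of predictions that pass the rule check."""
    return satisfies_rule(preds).float().mean().item()

def sweep_alpha(model, x, metric_fn, satisfies_rule,
                alphas=(0.0, 0.25, 0.5, 0.75, 1.0)):
    """Evaluate the metric / rule-verification trade-off across alpha values."""
    report = {}
    with torch.no_grad():
        for a in alphas:
            preds = model(x, a)
            report[a] = {
                "metric": metric_fn(preds),
                "verification_ratio": rule_verification_ratio(preds, satisfies_rule),
            }
    return report
```

The resulting report is exactly the kind of rule-check summary the user consults when dialing rule strength up or down for deployment.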
DeepCTRL has a number of potential uses in real-world deep learning deployments, including improving accuracy, increasing reliability, and enhancing human-AI interaction. That said, the researchers thought it relevant to point out that DeepCTRL’s ability to effectively encode rules could have unintended consequences if used with bad intentions to instill unethical biases.