Abstract: This chapter illustrates how suitable neuro-symbolic models for language understanding can enable domain generalizability and robustness in downstream tasks. Different methods for integrating neural language models and knowledge graphs are discussed. The situations in which this combination is most appropriate are characterized through quantitative evaluation and qualitative error analysis on a variety of commonsense question answering benchmark datasets.
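The chapter does not prescribe a single implementation, but a minimal, purely illustrative sketch of one common pattern it covers (retrieving knowledge-graph triples relevant to a question and feeding them to a neural language model alongside the question) may help fix ideas. Everything below, including the toy knowledge graph, the function names, and the prompt format, is a hypothetical example and not the authors' method.

```python
# Illustrative sketch: augmenting a language-model input with knowledge-graph
# facts for commonsense QA. The toy KG and all names here are hypothetical.

from typing import Dict, List, Tuple

# Toy knowledge graph: concept -> list of (relation, tail concept) triples.
TOY_KG: Dict[str, List[Tuple[str, str]]] = {
    "umbrella": [("UsedFor", "staying dry"), ("AtLocation", "closet")],
    "rain": [("Causes", "wet ground"), ("HasProperty", "falling water")],
}

def extract_concepts(question: str) -> List[str]:
    # Naive string matching against KG keys; real systems use entity linking.
    return [c for c in TOY_KG if c in question.lower()]

def retrieve_triples(concepts: List[str]) -> List[str]:
    # Verbalize retrieved triples as short natural-language facts.
    return [f"{c} {rel} {tail}" for c in concepts for rel, tail in TOY_KG[c]]

def build_lm_input(question: str, choices: List[str]) -> str:
    # Concatenate KG facts with the question and answer choices so a neural
    # language model can condition on both symbolic and textual evidence.
    facts = " ".join(retrieve_triples(extract_concepts(question)))
    options = " ".join(f"({i}) {c}" for i, c in enumerate(choices))
    return f"Facts: {facts}\nQuestion: {question}\nChoices: {options}\nAnswer:"

if __name__ == "__main__":
    print(build_lm_input(
        "Why would someone carry an umbrella when it starts to rain?",
        ["to stay dry", "to get wet", "to buy groceries"],
    ))
```

In this pattern, the knowledge graph supplies explicit relational context that a purely neural model may lack, which is the kind of combination whose benefits the chapter evaluates on commonsense QA benchmarks.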
* In Pascal Hitzler, Md Kamruzzaman Sarker (eds.), Neuro-Symbolic Artificial Intelligence: The State of the Art. Frontiers in Artificial Intelligence and Applications, Vol. 342, IOS Press, Amsterdam, 2022. arXiv admin note: text overlap with arXiv:2003.04707.