The performance of artificial intelligence (AI) algorithms in practice depends on the realism and correctness of the data, models, and feedback (labels or rewards) provided to the algorithm. This paper discusses methods for improving the realism and ecological validity of AI used for autonomous cyber defense by exploring the potential of Inverse Reinforcement Learning (IRL) to gain insight into attacker actions, the utilities of those actions, and ultimately the decision points at which cyber deception could thwart an attack. The Tularosa study, as one example, provides experimental data on real-world techniques and tools commonly used by attackers, from which core data vectors can be leveraged to inform an autonomous cyber defense system.
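As an illustrative sketch only, and not necessarily the formulation used in this work, one common instantiation of IRL is maximum-entropy IRL (Ziebart et al., 2008): given an MDP whose reward is unknown and a set of demonstrated trajectories, it recovers reward parameters that make the observed behavior most likely. Here the states and actions are assumed, hypothetically, to be derived from attacker telemetry such as the Tularosa action logs:

% Illustrative maximum-entropy IRL formulation (a sketch, not the paper's method).
% S, A, T, gamma: states, actions, transition model, discount of the attacker MDP.
% D: observed attacker trajectories; r_theta: parameterized reward (utility) model.
\begin{align}
  \mathcal{M} \setminus R &= (S, A, T, \gamma)
    && \text{attacker MDP with unknown reward} \\
  \mathcal{D} &= \{\tau_1, \dots, \tau_N\}, \quad
    \tau = (s_0, a_0, s_1, a_1, \dots)
    && \text{observed attacker trajectories} \\
  P(\tau \mid \theta) &\propto
    \exp\!\Big(\textstyle\sum_{t} r_\theta(s_t, a_t)\Big)
    && \text{trajectory likelihood under } r_\theta \\
  \theta^{\ast} &= \arg\max_{\theta}
    \sum_{\tau \in \mathcal{D}} \log P(\tau \mid \theta)
    && \text{recovered utility parameters}
\end{align}

Under this reading, the recovered reward $r_{\theta^{\ast}}$ approximates the attacker's utility function, and states in which competing actions have nearly equal estimated utility mark candidate decision points where cyber deception could plausibly be applied.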