A key challenge in deploying intelligent systems in complex, dynamic, multi-agent environments is achieving robustness in the face of uncertainty. In such environments, the combinatorial growth of the state space prevents any designer from anticipating all possible states in which an agent might find itself. Agents will therefore fail, since the designer cannot supply them with complete information about the correct response in every state. To overcome these failures, agents must exhibit post-failure robustness: the ability to autonomously detect, diagnose, and recover from failures as they occur. Our hypothesis is that agent modeling (the ability of an agent to model the intentions, knowledge, and actions of other agents in the environment) can significantly increase an agent's robustness in a multi-agent environment, by allowing it to use others in the environment to evaluate and improve its own performance. We examine this hypothesis in light of two real-world applications in which we improve robustness: domain-independent teamwork, and target-recognition and identification systems. We discuss how an agent-modeling algorithm's ability to represent uncertainty relates to these applications, and highlight key lessons learned for real-world deployment.