Autonomous virtual agents that operate in complex IoT environments face two fundamental challenges: (i) they usually lack sufficient start-up knowledge and (ii) are hence unable to adequately adjust their internal knowledge base and decision-making policies at runtime to meet specific user requirements and preferences. This is problematic in Ambient Assisted Living (AAL) and Healthcare (HC) scenarios, since an agent has to operate expediently from the beginning of its lifecycle and adequately address the target users' needs; without prior user and environmental knowledge, this is not possible. The presented approach addresses these problems by providing a semantic use-case simulation framework that can be tailored to specific AAL and HC use cases. It builds upon a semantic knowledge representation framework to simulate device events and user activities based on semantic task and ambient descriptions. Through such a simulated environment, agents are provided with the ability to learn the best-matching actions and to adjust their policies based on generated datasets. We demonstrate the practical applicability of the simulation framework through the simulation of a clinical pathway. Thereby, we deliver a Proof of Concept (PoC), illustrating that a Reinforcement Learning (RL) agent improves its performance during and after training based on our approach.
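To illustrate the kind of learning loop such a simulated environment enables, the following minimal sketch shows a tabular Q-learning agent that selects assistive actions in response to simulated user activities and adjusts its policy from the resulting rewards. The activity labels, action set, reward logic, and the step function are hypothetical placeholders for illustration only; they are not the semantic simulation framework or the clinical-pathway use case described above.

```python
import random
from collections import defaultdict

# Hypothetical stand-in for a simulated AAL environment: states are abstract
# user-activity labels, actions are assistive agent actions, and the reward
# signals whether the chosen action matches the simulated activity.
ACTIVITIES = ["sleeping", "cooking", "medication_due", "fall_detected"]
ACTIONS = ["do_nothing", "adjust_lighting", "remind_medication", "alert_caregiver"]
BEST_ACTION = {  # assumed ground truth, used only to generate rewards
    "sleeping": "do_nothing",
    "cooking": "adjust_lighting",
    "medication_due": "remind_medication",
    "fall_detected": "alert_caregiver",
}

def step(state, action):
    """Return (reward, next_state) for one simulated device/user event."""
    reward = 1.0 if action == BEST_ACTION[state] else -0.1
    next_state = random.choice(ACTIVITIES)  # next simulated user activity
    return reward, next_state

def train(steps=2000, alpha=0.1, gamma=0.9, epsilon=0.1):
    q = defaultdict(float)  # Q[(state, action)] -> estimated return
    state = random.choice(ACTIVITIES)
    for _ in range(steps):
        # epsilon-greedy action selection
        if random.random() < epsilon:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: q[(state, a)])
        reward, next_state = step(state, action)
        # one-step Q-learning update
        best_next = max(q[(next_state, a)] for a in ACTIONS)
        q[(state, action)] += alpha * (reward + gamma * best_next - q[(state, action)])
        state = next_state
    return q

if __name__ == "__main__":
    q = train()
    for s in ACTIVITIES:
        print(s, "->", max(ACTIONS, key=lambda a: q[(s, a)]))
```

In this toy setting the agent starts with no knowledge of the environment and converges toward the intended action for each simulated activity purely from generated event data, which mirrors, in a highly simplified form, the training scenario motivated above.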