Handling uncertainty in system behavior: observing, controlling and learning
When dealing with real-world systems and their behavior, one cannot rely on having complete knowledge. Instead, it is common to work with partial, faulty, or uncertain information.
We examine handling uncertainty in behavior networks from three perspectives: observing, controlling, and learning. For each of these aspects, we combine or generalize well-established models in order to introduce new frameworks that handle novel problems in uncertainty reasoning.
First, we deal with the problem of observing a dynamic system whose state changes are only partially observable. To store our knowledge about the current state efficiently and to update it accordingly, we exploit a generalization of Bayesian networks. By introducing modularity into ordinary Bayesian networks, we enable the compact representation of information that is not only conditional but also dynamic. We describe update mechanisms that allow this generalized framework to incorporate partial observations.
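To give a rough intuition for the kind of update such a framework performs, the following sketch shows a plain predict-and-correct belief update for a discrete dynamic system, where a missing sensor reading simply contributes no evidence. It is a minimal illustration of the underlying idea, not the modular framework itself; the states, models, and numbers are hypothetical.

```python
import numpy as np

# Minimal sketch of a Bayesian belief update under partial observability.
# This is NOT the modular framework from the text; all states, models,
# and numbers are hypothetical placeholders.

states = ["ok", "degraded", "failed"]
belief = np.array([0.80, 0.15, 0.05])          # current belief over states

# Transition model T[i, j] = P(next state j | current state i).
T = np.array([[0.90, 0.08, 0.02],
              [0.00, 0.85, 0.15],
              [0.00, 0.00, 1.00]])

# Likelihood P(sensor says "warning" | state) for one hypothetical sensor.
warning_likelihood = np.array([0.10, 0.70, 0.20])

def update(belief, T, likelihood=None):
    """One predict/correct step; likelihood=None models a missing reading."""
    predicted = belief @ T                      # predict: apply the dynamics
    if likelihood is None:                      # partial observation: nothing
        return predicted                        # to condition on at this step
    corrected = predicted * likelihood          # correct: weigh by evidence
    return corrected / corrected.sum()          # renormalize

belief = update(belief, T, warning_likelihood)  # step with an observation
belief = update(belief, T)                      # step with a missing reading
print(dict(zip(states, belief.round(3))))
```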
Next, we focus on systems with which a user can interact in order to maximize a reward. The user makes an initial decision to activate or deactivate certain events, and the system is then executed by the environment. To build this novel framework, we combine stochastic Petri nets with Markov decision processes. We discuss the problem of finding an optimal policy and its complexity, as well as how to calculate the expected reward of a given policy.
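The combined Petri-net/MDP model is developed in the text itself; as background for the evaluation step, the sketch below computes the expected discounted reward of a fixed policy in an ordinary small MDP by solving the linear system V = R + γPV. The transition probabilities, rewards, and discount factor are hypothetical.

```python
import numpy as np

# Minimal sketch of policy evaluation in a plain MDP. This is NOT the
# combined Petri-net/MDP framework; all numbers are hypothetical.

gamma = 0.9                                    # assumed discount factor

# P[s, s'] = transition probability when following the fixed policy.
P = np.array([[0.5, 0.4, 0.1],
              [0.2, 0.6, 0.2],
              [0.0, 0.3, 0.7]])
R = np.array([1.0, 0.0, -1.0])                 # immediate reward per state

# The expected discounted reward V satisfies V = R + gamma * P @ V,
# i.e. the linear system (I - gamma * P) V = R.
V = np.linalg.solve(np.eye(len(R)) - gamma * P, R)
print(V.round(3))                              # value of each start state
```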
Finally, we investigate the problem of learning the parameters of a probabilistic model from faulty data. We generalize hidden Markov chains such that missing observations are compensated for, and we describe how to adjust the established Viterbi and Baum-Welch algorithms accordingly. These algorithms compute the most likely state sequence given an observation sequence and learn the model parameters, respectively.
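As a rough illustration of the compensation idea (for the Viterbi part only, not the generalized algorithms developed in the text), the sketch below decodes the most likely state sequence while letting a missing symbol contribute a constant emission factor, so that only the transition model constrains that step. All model parameters are hypothetical.

```python
import numpy as np

# Minimal sketch: Viterbi decoding where a missing observation (None) is
# compensated by dropping the emission factor at that step. This is NOT the
# generalized algorithm from the text; all parameters are hypothetical.

A = np.array([[0.7, 0.3],                      # A[i, j] = P(state j | state i)
              [0.4, 0.6]])
B = np.array([[0.9, 0.1],                      # B[i, k] = P(symbol k | state i)
              [0.2, 0.8]])
pi = np.array([0.6, 0.4])                      # initial state distribution

def log_emit(B, o):
    """Log-emission vector; a missing symbol (None) adds no evidence."""
    return np.zeros(B.shape[0]) if o is None else np.log(B[:, o])

def viterbi(obs, A, B, pi):
    log_delta = np.log(pi) + log_emit(B, obs[0])
    backptr = []
    for o in obs[1:]:
        scores = log_delta[:, None] + np.log(A)   # score of every transition
        backptr.append(scores.argmax(axis=0))     # best predecessor per state
        log_delta = scores.max(axis=0) + log_emit(B, o)
    path = [int(log_delta.argmax())]              # best final state
    for ptr in reversed(backptr):                 # walk the pointers back
        path.append(int(ptr[path[-1]]))
    return path[::-1]

obs = [0, None, 1, 1]                          # the second symbol is missing
print(viterbi(obs, A, B, pi))                  # most likely state sequence
```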