Message Passing - Propagating Beliefs Through Graph Networks

Contents

- Introduction
- One for the toolkit

A powerful way to model non-deterministic systems is through probabilistic graphical models. In such models the graph consists of nodes (also called vertices) and edges linking them together to depict conditional dependencies. If we had a probabilistic system of 3 different random variables, these would make up 3 unique nodes. If two of these variables were dependent on each other, i.e. $P(A)\,P(B) \neq P(A, B)$, then this is expressed by an edge linking the two. In probabilistic systems we want to parameterise the system, meaning fully determine the probabilities of each variable to obtain a joint distribution over the entire system, $P(\text{graph})$.

In a typical probabilistic system, to do so we would have to determine the marginal distribution of each individual variable (node), which for larger, complex systems results in computationally intractable summations over all possible states. Message passing presents a solution to this by decomposing the complex joint distribution of a probabilistic system, represented as a graph, into a product of simpler local distributions. It does so by passing messages (information about the state) from variable to variable, iteratively updating our belief about the joint probability distribution via a Markov chain that depends only on the previous step.
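To make that gain concrete, here is a minimal Python sketch. The chain structure and all of the numbers in it are assumptions chosen purely for illustration: for a chain of binary variables, brute-force marginalisation enumerates every joint state (exponential in the number of variables), while passing a small message along the chain computes the same marginal in linear time.

```python
import numpy as np

# Illustrative sketch (all probabilities are made up): a chain of n binary
# variables X1 -> X2 -> ... -> Xn whose joint factorises as
# P(x1, ..., xn) = P(x1) * prod_i P(x_{i+1} | x_i).

n = 12
rng = np.random.default_rng(0)

p_x1 = np.array([0.6, 0.4])                         # P(X1)
trans = rng.dirichlet(np.ones(2), size=(n - 1, 2))  # trans[i, a, b] = P(X_{i+2}=b | X_{i+1}=a)

def marginal_naive():
    """Marginal of the last variable by enumerating all 2^n joint states."""
    p_last = np.zeros(2)
    for state in range(2 ** n):
        bits = [(state >> i) & 1 for i in range(n)]
        p = p_x1[bits[0]]
        for i in range(n - 1):
            p *= trans[i, bits[i], bits[i + 1]]
        p_last[bits[-1]] += p
    return p_last

def marginal_messages():
    """Same marginal by pushing a belief vector along the chain: n-1 steps."""
    msg = p_x1.copy()
    for i in range(n - 1):
        msg = msg @ trans[i]  # sums out the previous variable's states
    return msg

print(marginal_naive())     # 2^12 = 4096 terms enumerated
print(marginal_messages())  # 11 tiny matrix products, same answer
```

Both functions return the same distribution, but the message-passing version only ever touches local factors, which is exactly the decomposition described above.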

This is a bit of a bowl of word spaghetti, so let's use an example. A particularly relevant use case for my blogs is the Hidden Markov Model (which I've written about being used effectively in protein modelling); however, for this explanation we will use the classic HMM example of predicting the weather from observations of someone's actions, also discussed in my post here.

Let's take the following 3-state HMM with 3 possible observations: [image] With the given HMM we are provided all of the probabilities in the system, but we want to compute the joint probability of a given sequence of observations given the HMM. With sequence ….
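This computation is exactly where message passing earns its keep: the forward algorithm passes a belief vector $\alpha$ along the chain, where $\alpha_t(j) = \big[\sum_i \alpha_{t-1}(i)\, A_{ij}\big]\, B_j(o_t)$, and summing the final $\alpha$ gives the probability of the whole observation sequence. Below is a minimal sketch of this; the state names, observation names, and every probability are hypothetical stand-ins, since the actual numbers live in the figure above.

```python
import numpy as np

# Hypothetical 3-state weather HMM (all numbers are illustrative placeholders).
states = ["sunny", "cloudy", "rainy"]
obs_names = ["walk", "shop", "clean"]

pi = np.array([0.5, 0.3, 0.2])   # initial distribution P(X1)
A = np.array([[0.7, 0.2, 0.1],   # A[i, j] = P(X_{t+1}=j | X_t=i)
              [0.3, 0.4, 0.3],
              [0.2, 0.3, 0.5]])
B = np.array([[0.6, 0.3, 0.1],   # B[i, k] = P(O_t=k | X_t=i)
              [0.3, 0.4, 0.3],
              [0.1, 0.3, 0.6]])

def forward(obs):
    """Return P(observation sequence) by passing alpha messages along the chain."""
    alpha = pi * B[:, obs[0]]        # alpha_1(i) = P(X1=i) * P(o1 | X1=i)
    for o in obs[1:]:
        # alpha_t(j) = [sum_i alpha_{t-1}(i) * A[i, j]] * B[j, o_t]
        alpha = (alpha @ A) * B[:, o]
    return alpha.sum()               # sum out the final hidden state

obs = [0, 0, 2]                      # e.g. walk, walk, clean
print(forward(obs))
```

Each step only multiplies a length-3 message by a 3x3 matrix, so the cost grows linearly with the sequence length rather than exponentially with the number of hidden-state combinations.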
