Andrius Kulikauskas: I am trying to master the expectation-maximization algorithm.
A.P. Dempster, N.M. Laird, D.B. Rubin. Maximum Likelihood from Incomplete Data via the EM Algorithm. 1977.
Videos
- Brian Greco. The EM Algorithm Clearly Explained.
- Ritvik. EM Algorithm : Data Science Concepts
- Statistics but you're missing data (The EM Algorithm)
- Karl Friston. Learning and inference in the brain.
- In density learning, representational learning has two components that are framed in terms of expectation maximisation (EM, Dempster, Laird, & Rubin, 1977). Iterations of an E-step ensure the recognition approximates the inverse of the generative model and the M-step ensures that the generative model can predict the observed inputs. Probabilistic recognition proceeds by using q(v; u, φ) to determine the probability that v caused the observed sensory inputs. EM provides a useful procedure for density estimation that helps relate many different models within a framework that has direct connections with statistical mechanics. Both steps of the EM algorithm involve maximising a function of the densities that corresponds to the negative free energy in physics.
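To make the E-step / M-step alternation concrete for myself, here is a minimal sketch (my own toy example, not taken from any of the sources above) of EM fitting a two-component one-dimensional Gaussian mixture with NumPy. The E-step computes responsibilities, a recognition density over the hidden component labels; the M-step re-estimates the generative parameters given those responsibilities. Both steps increase the same objective, the expected complete-data log-likelihood plus the entropy of the recognition density, which is the negative free energy mentioned in the Friston quote.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: two hidden Gaussian components; the component labels are the missing data.
data = np.concatenate([rng.normal(-2.0, 1.0, 300), rng.normal(3.0, 1.5, 700)])

def normal_pdf(x, mean, var):
    return np.exp(-0.5 * (x - mean) ** 2 / var) / np.sqrt(2 * np.pi * var)

# Initial guesses for the mixing weight of component 1, the means, and the variances.
pi, means, variances = 0.5, np.array([-1.0, 1.0]), np.array([1.0, 1.0])

for _ in range(100):
    # E-step: posterior responsibility of component 1 for each point,
    # i.e. the recognition density over the hidden label given the observation.
    p0 = (1 - pi) * normal_pdf(data, means[0], variances[0])
    p1 = pi * normal_pdf(data, means[1], variances[1])
    resp = p1 / (p0 + p1)

    # M-step: re-estimate the generative parameters to maximise the expected
    # complete-data log-likelihood under the responsibilities just computed.
    pi = resp.mean()
    means = np.array([
        np.sum((1 - resp) * data) / np.sum(1 - resp),
        np.sum(resp * data) / np.sum(resp),
    ])
    variances = np.array([
        np.sum((1 - resp) * (data - means[0]) ** 2) / np.sum(1 - resp),
        np.sum(resp * (data - means[1]) ** 2) / np.sum(resp),
    ])

print("weight of component 1:", pi)
print("means:", means)
print("variances:", variances)
```

Each iteration is guaranteed not to decrease the observed-data likelihood, which is why the alternation converges to a (local) maximum-likelihood estimate.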