Andrius: I am preparing the proposal below to get European Union funding for ActiveInference.jl. Then Peter Thestrup Waade and his team and Daniel Friedman can finish it up. Peter will submit it. The deadline is August 1, 2025.
| Section | Length | Characters |
| --- | --- | --- |
| Total length | 5 pages | {$\leq 17,700$} |
| Overview | 1/3 page | {$\leq 1,200$} |
| Your relationship to the project | 2/3 page | {$\leq 2,500$} |
| Budget | 2/3 page | {$\leq 2,500$} |
| Compare your project with others | 1 1/3 pages | {$\leq 4,000$} |
| Technical challenges | 1 2/3 pages | {$\leq 5,000$} |
| Ecosystem | 2/3 page | {$\leq 2,500$} |
NGI Zero Commons Fund Guide for Applicants
Vision The enduring success of the Internet lies in permission-free innovation, openness and interoperability. The Next Generation Internet is set up to empower, to unlimit our choices. It fosters diversity and decentralisation, and grows the potential for disruptive innovation. This extends far beyond the technical realm. The Next Generation Internet will achieve a sustainably open environment for our cultures and economies, celebrating our values and promoting creativity and well-being. Let's re-invent the Internet to reach the full human potential, for all generations.
- Projects should be in line with the NGI vision and the sub-granting call applied for.
- Projects should have research and development as their primary objective.
- Projects should satisfy any other hard eligibility criteria specific to the sub-granting call, such as having a clear European Dimension.
Projects receive an initial rating on three criteria:
- 30% Technical excellence/feasibility
- 40% Relevance/Impact/Strategic potential
- 30% Cost effectiveness/Value for money
The second stage is used to select strategic projects which not only satisfy the minimal criteria, but also have potentially a lasting impact on society. Projects are to be selected based on their potential contribution to the Next Generation Internet and its key drivers for change. In the second stage, the reviewers are able to ask additional clarifying questions and make (minor) suggestions to improve the quality and impact of the project.
- Project examples: Verticals + Search
- Project examples: Services + Applications
- Project examples: Data + AI
Application form Please be short and to the point in your answers; focus primarily on the what and how, not so much on the why. Add longer descriptions as attachments (see below). ... On the up side, you can be as technical as you need to be (but you don't have to). Do stay concrete.
Proposal
Applicant Peter Thestrup Waade (Denmark)
My only concern is that my time is very limited in the next few months (handing in my dissertation in August), so I have pretty limited time resources for grant writing. In other words, I'd say this would only be possible if Andrius and/or others can lead the application writing - but then I would be very happy to contribute as much as I can! The same is true for the rest of my team 🙂
Proposal name
ActiveInference.jl for the ABCs and ZYXs of Active Inference
Website
Can you explain the whole project and its expected outcome(s). (1200 characters ~ 200 words)
The Active Inference community appreciates the significance of ActiveInference.jl for formulating, prototyping and computing Active Inference models in neuroscience, AI, robotics, psychiatry, psychology, biology and sociology. Adoption of our Julia library depends on clearly separating the modeling layer (what users need to know) from the computational layer (what they should not need to know). For our library to be trustworthy, the modeling layer must guide users through Active Inference founder Karl Friston's step-by-step procedure. For it to be resilient, it needs to unleash Julia's speed and versatility, interface with other packages, and serve compatible theories. For it to be sustainable, users, modelers and coders, both volunteers and professionals, need to find each other through the Active Inference Institute, the hub of our ecosystem.
We will seed an online archive of projects, with 12 from users we help and 12 from exemplary research papers. We will create two tutorials, the authoritative ABCs and the inclusive ZYXs, to walk users through model-based analysis. We will reorganize our code accordingly, rework key routines for speed, and add requested functionality.
Have you been involved with projects or organisations relevant to this project before? And if so, can you tell us a bit about your contributions? (2500 characters ~ 430 words)
Samuel William Nehrer, Jonathan Ehrenreich Laursen and I are the authors of ActiveInference.jl. I am the Principal Investigator and supervisor for the package development, while they have been the primary developers and maintainers. We started in November 2023 and meet weekly. I have worked on ActiveInference.jl about 4 hours per week, and they have each worked about 15 hours per week.
In March 2024, we uploaded our first beta release, v0.0.0-beta, on the dev branch. In December 2024, we uploaded our eighth and latest release, v0.1.1, which we described in our paper "Introducing ActiveInference.jl: A Julia Library for Simulation and Parameter Estimation with Active Inference Models". The three of us coauthored this paper with Active Inference founder Karl Friston, pymdp founder Conor Heins, and Christoph Mathys, who helped supervise software development. We are currently part of Christoph Mathys's group, Inference, Learning and Action in the Brain (ILAB), which is part of the Interacting Minds Centre at Aarhus University.
We decided to build ActiveInference.jl because I suddenly had two very competent and very motivated students, Jonathan and Samuel. Julia lacked support for Active Inference, and I had been wanting to remedy this for a while. There were no packages with which you could fit Active Inference models to data using Markov Chain Monte Carlo (MCMC) methods. We built ActiveInference.jl to be compatible with other Julia libraries, notably ActionModels.jl and HierarchicalGaussianFiltering.jl.
I am one of the moderators for Karl Friston's weekly Theoretical Neurobiology meetings. I have coauthored three papers with him.
Since ..., I have been a developer for the Translational Algorithms for Psychiatry-Advancing Science (TAPAS) ecosystem, which is hosted at the Translational Neuromodeling Unit (TNU), which does computational psychiatry. TAPAS hosts ActiveInference.jl at GitHub.
This August, I am completing my PhD studies in Computational Cognitive Science at the Department of Language, Cognitive Science and Semiotics at Aarhus University. I will then join Klaas Enno Stephan at the TNU, which is jointly at ETH Zürich and the University of Zürich. Samuel and Jonathan have recently completed their Bachelor's degrees and will now do Master's degrees at ETH Zürich. Our team has grown to include John Boik, who is working on our current update. In the fall we will include Dominik Firisz, Aleksandrs Baskakovs and Andrius Kulikauskas.
Requested Amount (between 5,000 and 50,000 EUR)
44,280 EUR
Explain what the requested budget will be used for?
- Does the project have other funding sources, both past and present?
- Explain costs for hardware, human labor (including rates used), travel cost to technical meetings, etc.
- (max 2500 characters, be concise)
- (If you want, you can in addition attach a budget at the bottom of the form).
The budget will fund 1,260 hours of work for 3 releases of ActiveInference.jl, 2 tutorials and 1 project repository.
As students, we devoted 3,500 hours and will devote 2,000 more next year. We have no funding sources.
We request 44,280 EUR.
- 17,280 euros = 36 EUR/hr x 480 hr for a software developer (Aleksandrs Baskakovs)
- 12,000 euros = 25 EUR/hr x 480 hr for a researcher / organizer (Andrius Kulikauskas)
- 12,000 euros = 40 EUR/hr x 25 hr (on average) x 12 modelers serving users
- 3,000 euros to an institution that organizes modelers (Active Inference Institute = AII)
We intend 3 stages of work, each lasting 2-4 months. Each stage will end in a new release of ActiveInference.jl. All 3 stages will include user testing, creating tools for testing, optimizing code for speed, and documentation.
Stage 1.
- developer (160 hours)
- Implement the DEM Toolbox's spm_MDP_VB_XXX.m functionality.
- researcher (160 hours)
- Overview existing demos of Active Inference and related theories.
- Analyze the spm_MDP_VB_XXX.m implementation.
- Author the ABCs tutorial to present Friston's understanding conceptually, mathematically, computationally.
- Find users and modelers for the project repository and learn their needs.
Stage 2.
- developer (160 hours)
- Add requested functionality.
- Make interfaces with Julia libraries.
- researcher (160 hours)
- Set up the repository.
- Analyze the assumptions of Active Inference and related theories.
- Consult with Friston et al.
- Architect conceptual foundations for relating Active Inference with other theories.
- modelers (150 hours)
- Create models for users.
Stage 3.
- developer (160 hours)
- Refactor code to implement the conceptual foundations.
- Enable other generative models.
- Enable comparison of models, implementations and libraries.
- researcher (160 hours)
- Author the ZYXs tutorial that guides a researcher through their options in modeling behavioral data.
- Author with Friston, AII President Daniel Friedman, and our team a paper presenting this release and the conceptual foundations.
- modelers (150 hours)
- Create models for users.
The researcher will lead a six-month weekly online workshop to teach ActiveInference.jl to users, teachers, modelers and coders, learn and meet their needs, help them collaborate, and test the library. AII will host this workshop and invite participants. AII could also distribute payments to modelers.
Baskakovs is in Denmark, Kulikauskas is in Lithuania, modelers will be global, and AII is registered in the US.
Compare your own project with existing or historical efforts. E.g. what is new, more thorough or otherwise different. (max 4000 characters ~ 690 words, be concise)
ActiveInference.jl brings together the Active Inference theoretical framework and the Julia programming language, which are both sources of inspiration for the Next Generation Internet. Our aim is for ActiveInference.jl to provide scientists and students with the clearest, most authoritative, most useful and most up-to-date methodology for applying Active Inference. We are focusing on meta-Bayesian modeling and making the most of the Julia ecosystem. We aspire to be the repository for the wisdom of Active Inference, whose third generation is currently inscribed in the function spm_MDP_VB_XXX.m of the DEM (Dynamic Expectation Maximisation) toolbox for the American proprietary software MATLAB.
Neuroscientist Karl Friston is famous for statistical parametric mapping (SPM); his work has been cited over 200,000 times. In 2006, he introduced Active Inference. He implemented it in DEM as part of the freely available SPM software package, but it is not discussed in the SPM manual. Simulating a sequence of trials involves running a single function such as spm_MDP_VB_XXX.m to act on a fixed set (A, B, C, D, E, ...) of matrices and vectors which define the generative model. This implementation is constraining and cryptic. DEM includes more than 80 demos. There is a 2021 tutorial by Smith et al. for the first-generation spm_MDP_VB_X.m. The Active Inference textbook includes examples. The 2008 article "DEM: A variational treatment of dynamic systems" has been cited 335 times. But we were born after MATLAB and even Python. We are a new generation for a new Internet.
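To make the fixed-matrix interface concrete, here is a minimal, self-contained sketch of one perception-action step of discrete active inference in plain NumPy. The names A, B, C, D follow the convention above, but everything else (function names, the toy 2-state world) is our own illustration, not the DEM or ActiveInference.jl API:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

# Toy generative model: 2 hidden states, 2 observations, 2 actions.
A = np.array([[0.9, 0.1],           # A[o, s] = P(observation o | state s)
              [0.1, 0.9]])
B = np.stack([np.array([[1.0, 1.0],  # action 0 always leads to state 0
                        [0.0, 0.0]]),
              np.array([[0.0, 0.0],  # action 1 always leads to state 1
                        [1.0, 1.0]])])  # B[u][s', s] = P(s' | s, u)
C = np.array([0.0, 3.0])            # log-preferences over observations
D = np.array([0.5, 0.5])            # prior over initial hidden states

def infer_states(obs, prior):
    """Exact Bayesian state inference: posterior ∝ likelihood × prior."""
    q = A[obs] * prior
    return q / q.sum()

def expected_free_energy(q_s, action):
    """One-step expected free energy G(u) = risk + ambiguity."""
    q_s_next = B[action] @ q_s                   # predicted next state
    q_o = A @ q_s_next                           # predicted observation
    p_o = softmax(C)                             # preferred observations
    risk = np.sum(q_o * (np.log(q_o + 1e-16) - np.log(p_o)))
    H = -np.sum(A * np.log(A + 1e-16), axis=0)   # entropy of P(o | s)
    ambiguity = H @ q_s_next
    return risk + ambiguity

q_s = infer_states(obs=1, prior=D)               # perceive outcome 1
G = np.array([expected_free_energy(q_s, u) for u in range(2)])
q_u = softmax(-G)                                # act to minimise G
print("posterior over states:", q_s)
print("action probabilities:", q_u)
```

Here perception is exact Bayes and planning looks one step ahead; the DEM routines generalize this to deep policies, learning of the matrices, and sophisticated tree search.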
Increasing interest in Active Inference led to the open-source Python library pymdp (for Python Markov Decision Process), originally written by Conor Heins, who is active in ActiveInference.jl. pymdp splits up the core functionality. It provides different levels of abstraction, from Agent functionality to specifying variational inference algorithms. pymdp uses the JAX layer. The first release, pymdp 0.0.3, appeared on GitHub in October 2021, and the sixth and latest release, 0.0.7.1, in March 2023. The 2022 article "pymdp: A Python library for active inference in discrete state spaces" has been cited 75 times. pymdp has received 539 stars on GitHub. There are helpful tutorials. The cognitive computing company VERSES uses pymdp. pymdp can simulate agents, but it cannot be used to fit behavioral data.
The C++ library cpp-AIF is rarely used.
RxInfer.jl is a Julia package for automatic Bayesian inference on a factor graph with reactive message passing. Its back-end is efficient. It is not specifically designed for Active Inference but gets used for some parts of it, based on the first-generation spm_MDP_VB_X.m from 2010. The first version was released in June 2022, and there have been 72 releases. It has 341 stars on GitHub.
ActiveInference.jl and DEM are the only libraries for scientists to calculate which generative model is the best match for behavioral data, as needed for computational phenotyping and for cognitive and behavioral modeling, as in computational psychiatry, cognitive science and neuroscience. Unlike DEM, ActiveInference.jl allows not just variational inference but also the more accurate Markov Chain Monte Carlo. Julia's native auto-differentiability makes this possible.
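What "fitting behavioral data" means can be sketched with a toy example: simulate choices from a softmax agent with a known inverse temperature, then recover a posterior over that parameter with random-walk Metropolis MCMC. This is our own plain-NumPy illustration of the principle, not the ActionModels.jl/Turing.jl interface:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate 200 binary choices from a softmax agent with known beta.
true_beta = 2.0
values = rng.normal(size=(200, 2))             # trial-wise action values

def choice_probs(beta, v):
    e = np.exp(beta * (v - v.max(axis=1, keepdims=True)))
    return e / e.sum(axis=1, keepdims=True)

p = choice_probs(true_beta, values)
choices = (rng.random(200) < p[:, 1]).astype(int)

def log_lik(beta):
    """Log-likelihood of the observed choices under inverse temperature beta."""
    if beta <= 0:
        return -np.inf
    pr = choice_probs(beta, values)
    return np.log(pr[np.arange(200), choices] + 1e-16).sum()

# Random-walk Metropolis sampling of the posterior over beta (flat prior).
samples, beta = [], 1.0
ll = log_lik(beta)
for _ in range(5000):
    prop = beta + rng.normal(scale=0.3)
    ll_prop = log_lik(prop)
    if np.log(rng.random()) < ll_prop - ll:   # accept/reject step
        beta, ll = prop, ll_prop
    samples.append(beta)

post = np.array(samples[1000:])               # discard burn-in
print("posterior mean beta:", post.mean())
```

In ActiveInference.jl the same move is made with a full active inference agent as the likelihood model, and Julia's auto-differentiability additionally enables gradient-based samplers.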
We're updating ActiveInference.jl so that it
- communicates and implements Karl Friston's vision for meta-Bayesian modeling, integrating other theories, facilitating theoretical modeling,
- works with fast special purpose Julia packages - RxInfer.jl, Turing.jl, Gen.jl, Optimization.jl, DifferentialEquations.jl, NeuralEstimators.jl, Enzyme.jl, ActionModels.jl, Agents.jl, ReinforcementLearning.jl - supporting cognitively innovative machine learning,
- allows discrete Partially Observable Markov Decision Processes (POMDPs) but also other generative models,
- allows selection and comparison of implementations.
ActiveInference.jl is released under the permissive MIT license, which keeps it open and compatible for collaboration.
What are significant technical challenges you expect to solve during the project, if any? (optional but recommended, max 5000 characters ~ 860 words)
We aim to grow ActiveInference.jl as a natural platform for expressing the current third generation and any future generation of Karl Friston's vision of Active Inference. Thus our technical challenges come with conceptual challenges and social challenges.
Initially, we want to completely reproduce the functionality of the 1,923-line Matlab file spm_MDP_VB_XXX.m, including inductive inference and sophisticated inference. We need to do this with Julia files that disentangle the conceptual layer and the computational layer. This modular design should allow us to make use of Julia libraries such as RxInfer.jl, Turing.jl, Gen.jl and others for generalized filtering, Hierarchical Gaussian Filtering, and predictive coding networks.
We must create a thorough set of tests. We want to be able to get the same results as DEM, pymdp and RxInfer.jl or explain discrepancies.
We need an architecture that is true to Active Inference but also inclusive of other frameworks. We should organize our library around that. It should be clear how we or others can modify functionality. We need interpretability. We can then optimize for Julia's speed.
Ultimately, we want to allow all parameters to be included in meta-Bayesian modeling. We will support a wide variety of (objective) perceptual models: analytic (i.e., Dirichlet) parameter updates; hand-coded variational inference with gradient descent on variational free energy, using hand-coded or external optimisation schemes and hand-coded gradients or autodiff; analytic approximations such as the Hierarchical Gaussian Filter; external software such as RxInfer.jl, Turing.jl and Gen.jl; MCMC and amortisation; and factorization, message passing and distributed inference. We will support (subjective) generative models, including continuous state space models, collaborations of individual agents, recursively observing an observer, and Theory of Mind models. We will allow for variations of action.
We must solve conceptual challenges. Active Inference determines which generative model best fits behavioral data. The architecture and the tutorials must reflect this purpose, known as model-based analysis or meta-Bayesian modeling, as in Chapter 9 of the Active Inference textbook.
We need to systematize existing demos of Active Inference and other frameworks and clarify the assumptions. We should study the existing tutorials and also good tutorials for similar software, notably PyHGF. The tutorial should, in parallel tracks, cover the concepts, diagrams, equations, matrices, computations, sample code and illustrative projects.
The initial ABCs tutorial needs to authoritatively present Karl Friston's procedure for defining the matrices A, B, C, D, E, H and distinguish it from any deviations. Setting a free energy term to zero yields a simpler framework (reinforcement learning, KL divergence, expected utility maximization). We are aware of peculiarities: states are often simply locations; preferences are linked to states but not to observations; policies are strings of actions linked to time steps. We should support alternatives.
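The claim that zeroing a free energy term yields a simpler framework can be made concrete with the standard risk/ambiguity decomposition of expected free energy (notation follows the Active Inference textbook; this summary is ours, not text from the tutorial):

```latex
G(\pi) \;=\;
\underbrace{D_{\mathrm{KL}}\!\left[\,Q(o \mid \pi)\,\|\,P(o \mid C)\,\right]}_{\text{risk}}
\;+\;
\underbrace{\mathbb{E}_{Q(s \mid \pi)}\!\left[\,H\!\left[P(o \mid s)\right]\,\right]}_{\text{ambiguity}}
```

Dropping the ambiguity term leaves pure risk minimisation, i.e., KL control; keeping only the cross-entropy part of the risk term, \(-\mathbb{E}_{Q(o \mid \pi)}[\ln P(o \mid C)]\), recovers expected utility maximisation.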
The ultimate ZYXs tutorial needs to rethink the relevance for a researcher coming from a different theoretical framework. Starting with their behavioral data, they can decide whether to optimize one model or compare different models, what objective model to use, whether or not to suppose a generative model, whether or not to optimize that using Friston's Free Energy Principle, and so on, proceeding towards a full-fledged embrace of Active Inference while having options to deviate as desired. For model-based analysis, the generative models get inverted. But we also want to support theoretical modeling and cognitively novel machine learning. It is a challenge to present and support all options.
Our social challenges are to support Karl Friston and his vision while disentangling it from the DEM Toolbox. Through the Active Inference Institute, we need to discuss and distinguish the conceptual, mathematical and computational layers. We need to consider how the Active Inference framework may be formulated to catalyze convergence of other frameworks.
We need to find current users, engage interested teachers, modelers, researchers and students, and learn how to meet their needs. We propose to organize such a workshop at the Applied Active Inference Symposium on November 12-14.
Through the Active Inference Institute, we need users to get help from modelers, and to share their projects in our repository. We need to include examples from noteworthy research papers, recoding them with our library. In general, we need to encourage collaboration, which is a natural challenge for science and key to the Next Generation Internet. This will facilitate community based testing and a rapid cycle for addressing user needs.
Describe the ecosystem of the project, and how you will engage with relevant actors and promote the outcomes? (E.g. which actors will you involve? Who should run or deploy your solution to make it a success? (max 2500 characters ~ 430 words, be concise)
ActiveInference.jl will succeed if embraced by researchers doing meta-Bayesian modeling, by theoretical modelers running simulations, and by experimenters of cognitively innovative machine learning. We need to authoritatively replicate the DEM Toolbox implementation of Active Inference and then allow for a wider range of options.
Karl Friston is the primary author of the DEM Toolbox, which is very successful and freely available. It is a high standard to meet and exceed.
He is also an amazingly collaborative, understanding and supportive person. I have regular access to him, including as a moderator of the Theoretical Neurobiology meetings, where hundreds of researchers have presented their work, had it openly discussed, and received his feedback. He supports ActiveInference.jl and the other implementations of Active Inference.
Active Inference, as a forward-looking Bayesian theory, unifying perception and action, implying truth and freedom, attracts lovely people. The Active Inference Institute, founded in 2020, is the hub of our ecosystem. It is a prime example of the Next Generation Internet, fostering an ecosystem for open access, with video livestreams, free classes, a textbook, knowledge system, Discord server, projects and Research Fellows. It includes academics but truly serves citizen scientists.
In November, the Institute's Symposium includes "Open Science & Tool Development: Increasing the function and accessibility of Active Inference software across languages." We will engage users and modelers and coordinate with leaders of DEM Toolbox, PyMDP, RxInfer.jl and other software packages.
The AII is a natural forum for coming together to express Active Inference as a conceptual language. Our work on ActiveInference.jl will encourage collaboration across Active Inference software packages.
ActiveInference.jl, as part of the open source Julia world, will increase the clarity of Active Inference conceptually, mathematically and computationally, and bring us together around the Active Inference Institute. We can then reach out further to the Bayesian modeling community, and then the cognitive modeling community, and possibly the Artificial General Intelligence community. Active Inference is a conceptually elegant, mathematically cogent, empirically manifested, widely beloved theoretical framework that catalyzes dialogue and convergence with many other frameworks, clarifying the unity and diversity of our humanity.
Attachments: add any additional information about the project that may help us to gain more insight into the proposed effort, for instance a more detailed task description, a justification of costs or relevant endorsements. Attachments should only contain background information, please make sure that the proposal without attachments is self-contained and concise. Don't waste too much time on this. Really.
Endorsements
- Active Inference Institute (can coordinate a team, can position with regard to other software projects, can position within the Applied Active Inference Symposium).
- Karl Friston
- Theoretical Neurobiology Meeting
- Users
Additional text that may be useful somewhere
What is Active Inference?
- Active inference is a framework under the free energy principle that provides a formal and implementable prescription for sentient behavior of agents from first principles (K. Friston et al., 2016).
What is ActiveInference.jl
- The Julia package ActiveInference.jl is being developed with an intention to provide a fully open-source, easy-to-use tool for applying the framework of active inference for empirical, theoretical, and applied research.
- The main vision of the package is therefore threefold:
- firstly, to allow fitting active inference models directly to empirical data to test mechanistic hypotheses about cognition, perception, or decision-making in individuals and populations;
- secondly, to run theoretical simulations of agents to study how complex behavior emerges from underlying generative models, with relevance to fields such as cell biology, social dynamics, and computational psychiatry;
- thirdly, to deploy active inference as a principled and interpretable alternative to standard machine learning approaches, which would allow for the development of the next-generation artificial intelligence systems for solving real-world problems in engineering and robotics.
Julia has the advantages that
- Julia makes it very easy to write performant code that is also easy to read, extend and develop - so it should make it much easier for the community to help develop the package.
Julia
- Julia lets you write your own functions without needing a deeper layer such as JAX. Thus it solves the two-language problem, because the modeling layer and the computational layer are in the same language.
- Julia has a strong ecosystem for scientific modeling. Scientists in Active Inference need to collaborate because of their different skill sets. This collaboration represents the next generation Internet and the importance of open source.
- Julia is beginner friendly. It allows beginners and experts to work side-by-side.
- Julia is a free and open-source high-level programming language that retains an easy user interface reminiscent of that in Matlab and Python.
- Simultaneously, Julia uses “just-in-time” (JIT) compilation via the LLVM framework to approach the speed of languages like C without relying on external compilers.
- Julia is also natively auto-differentiable, which means it can solve what is called the two-language problem (i.e., that high-level languages often have to rely on lower-level languages, either for performance or for auto-differentiability; this is the case with standard tools for cognitive modelling, where languages like R must rely on external languages like Stan for Bayesian model fitting).
- Felix Wechsler and Guillaume Dalle: as regards Large Language Models and also Deep Learning, Julia is simply not competitive; it is under-resourced, and there is no way it can catch up to Mojo and other work by the big companies. There is scientific deep learning, but classical machine learning is not well supported, according to them. It is usable for neural network modeling. The go-to niche for Julia is scientific machine learning, where you want to enrich a differential equation solver with a neural network.
Other Julia libraries
- Turing, Julia’s powerful library for Bayesian model fitting (2,100 stars)
Sister package: ActionModels.jl (20 stars)
- Specialises in applied cognitive modeling: computational modeling of cognition and behaviour. Unified interface across models for agent-based simulation (Agents.jl) and Gym-like environments (ReinforcementLearning.jl). Fully compatible with ActiveInference.jl.
- Turing's newly developed extension for behavioural modelling, ActionModels, makes it possible to use cutting-edge Markov Chain Monte Carlo methods, as well as variational methods, for Bayesian model fitting with AIF.
- Crucially, this allows researchers to not only simulate AIF in a fast programming language, but to also fit them to empirical behaviour, as is performed in cognitive modelling and computational psychiatry.
- Importantly, this also places AIF models in an ecosystem of other models for computational psychiatry so that it can easily be compared with models, like Hierarchical Gaussian Filters, and reinforcement learning models, like the classic Rescorla–Wagner model.
Yes, I believe it is. However, it uses variational inference to do that, which is generally considered less accurate than MCMC (Markov Chain Monte Carlo) in most fields. With ActiveInference.jl, it is possible to use MCMC or variational methods as desired.
Other tools
- do not usually allow for fitting models to empirically observed data, which is a fundamental method used in cognitive modelling, often in the context of computational psychiatry, to infer the mechanisms underlying variations in behaviour or to investigate the differences between (for example, clinical) populations.
- Smith and colleagues provided a guide for manually doing variational Bayesian parameter estimation based on empirical data, but only in Matlab and restricted to a particular class of variational parameter estimation methods (variational Laplace), instead of the sampling-based methods that currently predominate in the field of cognitive modelling.
meta-Bayesian modeling (instead of model-based analysis) (the subject, according to Active Inference, uses Bayesian inference, and we as researchers use Bayesian inference to study it).
Break tasks down into 10 hour units.
- Aleksandrs Baskakovs's skills: Active Inference and related Bayesian mind models.
- software developer and work with us on extending and improving the package
- Alex is trained in meta-Bayesian modeling, model-based analysis
- Is applying for his PhD. Competent and kind.
- Andrius Kulikauskas's skills: scientific, organizing, writing, modeling, coding, interacting, marketing
- PhD in mathematics
- coordinator, tester, and person who will get feedback from the community
DEM toolbox
- 395 files. 40 MB (Compare with 200 MB for all of spm) At least 80 demos.
- spm_MDP_VB_X.m, spm_MDP_VB_XX.m, spm_MDP_VB_XXX.m are each about 70 kb and less than 2,000 lines
- spm_MDP_VB_X.m is the variational message passing scheme for fixed policies, i.e., ordered sequences of actions. Copyright 2008-2022.
- spm_MDP_VB_XX.m is the variational message passing scheme for sophisticated policy searches, under the assumption that the generative process and model have the same structure, which is specified a priori.
- This implementation equips agents with the prior beliefs that they will maximise expected free energy. Variational free energy can be interpreted in several ways - most intuitively as minimising the KL divergence between predicted and preferred outcomes (specified as prior beliefs) - while simultaneously minimising ambiguity. This particular scheme is designed for any allowable policies or control variables specified in MDP.U.
- In the comments, there is an 800 word explanation in jargon. Copyright 2008-2022.
- spm_MDP_VB_XXX.m This implementation generalises previous MDP-based formulations of active inference by equipping each factor of latent states with a number of paths, some of which may be controllable and others not. Controllable factors are now specified with indicator variables in the vector MDP.U. Furthermore, because the scheme uses sophisticated inference (i.e., a recursive tree search accumulating the path integral of expected free energy), a policy reduces to a particular combination of controllable paths or dynamics over factors. In consequence, posterior beliefs cover latent states and paths, with their associated variational free energies.
Furthermore, it is now necessary to specify the initial states and the initial paths using D and E respectively. In other words, E now plays the role of a prior over the path of each factor that can only be changed if it is controllable (it no longer corresponds to a prior over policies). Copyright 2019.
- in terms of matrices A, B, C, D, E
Aleks: I intend to allocate ~20 hours per week on the project from the start of the project and onwards for 6 months, totaling up to ~480 hours over the whole span. In the case that I start my PhD in February, we will adjust to the situation at hand according to the options. I have aligned with @Peter Thestrup Waade on this. Cheers!
- our purpose here is to make an easy to use and understand active inference implementation; not to create an alternative or further extension of it (we do not have the time for that, I think)
- Open ended. We do what we have time for and what users want, which could include continuous state space models or not, depending on user desires and on the difficulty.
- I'm happy to allow for experimentation; and we might also replace the "ABC of active inference" tutorial paper with a more exploratory work by you in the end, if this is important for motivation etc
- Yes; the difference between what are DEM implementation details and what are "true" active inference is perhaps not so simple, however - and something we can discuss with Karl too 🙂
- we are actually implementing a more general structure now, which is within active inference bounds but allows for other types of generative models
- a tutorial for a new framework that shows it can all be presented generally.
- In gen3, I think it's possible to do preferences over states; if not, that's something that would make sense to include
- Depending on how radically you'd want to depart from active inference, the last tutorial might be more your own work - but that I think we can discuss when we get to that 🙂
- Yes, we can see how it all develops until then. My goal would simply be to have a software architecture that is faithful to the ideas of Active Inference, to the needs of the users, to the existing demos, and to the concepts of related frameworks, but does not bake in any unwarranted notions from the DEM toolbox; it keeps them optional.
Vision
- computational modeling of active inference
- a gateway interface for researchers to make modeling Active Inference as straightforward as possible
- presenting the code so that it follows Karl's thinking and serves
- up-to-date, definitive, citable statement of the fundamentals of model-based analysis with Active Inference.
Uses
- make Active Inference available to
- computational psychiatry
- AI
- interface between AI/machine learning and psychiatry/psychology/neuroscience
Sources
- Presentation and slides: Peter Thestrup Waade. The Future of ActiveInference.jl
- GitHub. Computational Psychiatry. ActiveInference.jl
- Samuel William Nehrer, Jonathan Ehrenreich Laursen, Conor Heins, Karl Friston, Christoph Mathys, Peter Thestrup Waade. Introducing ActiveInference.jl: A Julia Library for Simulation and Parameter Estimation with Active Inference Models.
- Active Inference textbook, Chapter 9: Model-Based Data Analysis.
- Active Inference Ontology
- Conor Heins. PyMDP. Tutorial 1: Active inference from scratch. In terms of matrices A, B, C.
- Jean Daunizeau, Hanneke E. M. den Ouden, Matthias Pessiglione, Stefan J. Kiebel, Klaas E. Stephan, Karl J. Friston. Observing the Observer (I): Meta-Bayesian Models of Learning and Decision-Making. 2010.
- Karl Friston, Conor Heins, Tim Verbelen, Lancelot Da Costa, Tommaso Salvatori, Dimitrije Markovic, Alexander Tschantz, Magnus Koudahl, Christopher Buckley, Thomas Parr. From pixels to planning: scale-free active inference. 2024.
- FEP and Active Inference Paper Repository
- Karl Friston. A free energy principle for a particular physics 2019.
- DEM: A variational treatment of dynamic systems
- Statistical Parametric Mapping
- spm at GitHub
- Smith tutorial for DEM toolbox
- Coda: Active Inference Implementations
- Karl J Friston, Thomas Parr, Bert de Vries. The graphical brain: Belief propagation and active inference.
- PyMDP
- Tutorial 1: Active inference from scratch
- RxInfer
- ReactiveBayes/RxInfer.jl at GitHub
- PyHGF, a neural network library for predictive coding (Hierarchical Gaussian Filters), is a good example of the kind of tutorial and website we want to do. And we can do it with GitHub. (90 stars)
Other somewhat similar projects
- related to science: An OpenScience flavour of Bonfire on NixOS for preprints — Discuss preprints based on W3C
- PyCM — Machine learning post-processing and analysis. PyCM is an open-source Python library designed to systematically evaluate, quantify, and report the performance of machine learning algorithms. It offers an extensive range of metrics to assess algorithm performance comprehensively, enabling users to compare different models and identify the optimal one based on their specific requirements and priorities. Additionally, PyCM supports generating evaluation reports in various formats. Widely recognized as a standard and reliable post-processing tool, PyCM has been adopted by leading open-source AI projects, including TensorFlow, Google’s scaaml, Torchbearer, and CLaF. In this grant, the team will implement several new features, such as data distribution analysis, dissimilarity / distance matrices and curve analysis. In addition the project will improve benchmarking and confidence, and introduce an API and GUI for wider adoption.
Secondary
We have also discussed making a GUI-like tool which runs Julia under the hood, to reduce the coding complexity for users. Developing that would be great, but it strikes me as secondary, to come after the software itself has grown up properly.