Neural implementation of probabilistic models of cognition

@article{Kharratzadeh2015NeuralIO,
  title={Neural implementation of probabilistic models of cognition},
  author={Milad Kharratzadeh and Thomas R. Shultz},
  journal={Cognitive Systems Research},
  year={2015},
  volume={40},
  pages={99-113},
  url={https://api.semanticscholar.org/CorpusID:14646384}
}

Related papers
Probability Matching via Deterministic Neural Networks

We propose a constructive neural-network model composed of deterministic units which estimates and represents probability distributions from observable events -- a phenomenon related to the concept

A Computational Model of Children's Learning and Use of Probabilities Across Different Ages

NPLS can accurately simulate children's probability judgments across different ages, tasks, and difficulty levels when discriminating between two probabilistic choices, through accurate probability learning and sampling; the paper also discusses the roles of two model parameters that can be adjusted to simulate the probability matching versus probability maximization phenomena in children.

The Paradox of Help Seeking in the Entropy Mastermind Game

Help seeking as a function of problem state in the Entropy Mastermind code-breaking game is explored, showing that participants tended to ask for help late in game play, often when they already had all the information needed to crack the code.

A computational model of infant learning and reasoning with probabilities.

A novel computational system called Neural Probability Learner and Sampler (NPLS) that learns and reasons with probabilities, providing a computationally sufficient mechanism to explain infant probabilistic learning and inference.

Confirmation in the Cognitive Sciences: The Problematic Case of Bayesian Models

It is argued that the purported confirmation of Bayesian models of human learning largely relies on a methodology that depends on premises that are inconsistent with the claim that people are Bayesian about learning and inference.

Neural-network Modelling of Bayesian Learning and Inference

A complete, modular neural-network structure implementing Bayesian learning and inference in a general form is proposed, which is able to successfully implement Bayesian learning and inference and replicate analytical results with high precision in a brain-like fashion.

Comparing the inductive biases of simple neural networks and Bayesian models

It is shown that a linear neural network can approximate the generalization performance of a probabilistic model of property induction, and that training this network by gradient descent with early stopping results in similar performance to Bayesian inference with a particular prior.

Bayesian just-so stories in psychology and neuroscience.

It is argued that many of the important constraints in Bayesian theories in psychology and neuroscience come from biological, evolutionary, and processing considerations that have no adaptive relevance to the problem per se.

Explicit Bayesian Reasoning with Frequencies, Probabilities, and Surprisals

Heather Prime (heath.prime@gmail.com), Department of Human Development and Applied Psychology, Ontario Institute for Studies

Fast Learning by Bounding Likelihoods in Sigmoid Type Belief Networks

This work proposes to avoid the infeasibility of the E step by bounding likelihoods instead of computing them exactly, and shows that the estimation of the network parameters can be made fast by performing the estimation in either of the alternative domains.

How Robust Are Probabilistic Models of Higher-Level Cognition?

It is argued that the view that the mind should be viewed as a near-optimal or rational engine of probabilistic inference is markedly less promising than widely believed, and is undermined by post hoc practices that merit wholesale reevaluation.
...