Learning Machines, Extended Logic, & Intelligence

This page collects loosely connected research lines on machine learning, information theory, and artificial as well as other forms of intelligence. Learning machines reason better when they follow logic. If uncertainties are involved, this should be extended, probabilistic, or Bayesian logic. The same holds for any form of intelligence, whether human, artificial, or other.


Manipulative communication in humans and machines

A universal sign of higher intelligence is communication. However, not all communication is well-intentioned. How can an intelligent system recognise the truthfulness of information and defend against attempts to deceive? How can an egoistic intelligence subvert such defences? What phenomena arise in the interplay of deception and defence? To answer such questions, researchers at the Max Planck Institute for Astrophysics in Garching, the University of Sydney and the Leibniz-Institut für Wissensmedien in Tübingen have studied the social interaction of artificial intelligences and observed very human behaviour.

Artificial intelligence combined

Artificial intelligence is expanding into all areas of daily life, including research. Neural networks learn to solve complex tasks by being trained on enormous numbers of examples. Researchers at the Max Planck Institute for Astrophysics in Garching have now succeeded in combining several networks, each specialised in a different task, so that they jointly solve tasks in areas none of them was originally trained on, using Bayesian logic. This enables the recycling of expensively trained networks and is an important step towards universally deductive artificial intelligence.

The embarrassment of false predictions - how best to communicate probabilities?

Complex predictions such as election forecasts or weather reports often have to be simplified before they are communicated. But how should one best simplify such predictions without facing embarrassment? In astronomical data analysis, researchers are confronted with the same problem of simplifying probabilities. Two researchers at the Max Planck Institute for Astrophysics now show that there is only one mathematically correct way to measure how embarrassing a simplified prediction can be: the recipient of a prediction should be deprived of the smallest possible amount of information.
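The information an addressee is deprived of when a detailed forecast p is replaced by a simplified one q is naturally quantified by the Kullback-Leibler divergence; the following is a minimal illustrative sketch under that assumption (the precise measure derived in the paper may differ in detail, and the forecast numbers are made up):

```python
import numpy as np

def kl_divergence(p, q):
    """Information (in bits) the recipient is deprived of when the
    simplified forecast q is communicated instead of the full forecast p."""
    p, q = np.asarray(p, dtype=float), np.asarray(q, dtype=float)
    mask = p > 0  # outcomes with p_i = 0 contribute nothing to the sum
    return float(np.sum(p[mask] * np.log2(p[mask] / q[mask])))

# Hypothetical full forecast over three outcomes:
p = [0.51, 0.30, 0.19]

# Two candidate simplifications, rounded to 10% steps:
q1 = [0.50, 0.30, 0.20]
q2 = [0.60, 0.20, 0.20]

# The less embarrassing simplification is the one that
# deprives the recipient of less information:
print(kl_divergence(p, q1))  # small information loss
print(kl_divergence(p, q2))  # larger information loss
```

Minimising this quantity over the allowed set of simplified statements then selects the least embarrassing way to communicate the prediction.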

Learning Machines

Extended Logic

Artificial, Human, & Other Intelligence

  • Information and Agreement in the Reputation Game Simulation
    Viktoria Kainz, Céline Bœhm, Sonja Utz & Torsten A. Enßlin, Entropy 2022, 24, 1768. https://doi.org/10.3390/e24121768
  • Upscaling Reputation Communication Simulations
    Viktoria Kainz, Céline Bœhm, Sonja Utz & Torsten A. Enßlin, Phys. Sci. Forum 2022, 5(1), 39. https://doi.org/10.3390/psf2022005039, PsyArXiv:vd8w9
  • Reputation Communication from an Information Perspective
    Torsten A. Enßlin, Viktoria Kainz & Céline Bœhm, Phys. Sci. Forum 2022, 5(1), 15. https://doi.org/10.3390/psf2022005015
  • Simulating Reputation Dynamics and their Manipulation - An Agent-Based Model Built on Information Theory
    Torsten A. Enßlin, Viktoria Kainz & Céline Bœhm, submitted. PsyArXiv:wqcmb
  • A Reputation Game Simulation: Emergent Social Phenomena from Information Theory
    Torsten A. Enßlin, Viktoria Kainz & Céline Bœhm, Annalen der Physik 2022, 2100277. https://doi.org/10.1002/andp.202100277, arXiv:2106.05414
  • The Rationality of Irrationality in the Monty Hall Problem
    Torsten A. Enßlin & Margret Westerkamp, Annalen der Physik 2019, 531(3), 1800128. https://doi.org/10.1002/andp.201800128, arXiv:1804.04948