  • Centre for the Study of Language, Mind and Society
    at the Department of Philosophy and Social Sciences, Faculty of Arts,
    University of Hradec Králové

    Jaroslav Malík: (Dis)continuity of Artificial Reason

    Jaroslav Malík
    (Univerzita Hradec Králové)


    ABSTRACT: AI safety literature typically models superintelligent AI (henceforth SI) agents as both highly instrumentally rational (coherent in decision-theoretic terms) and generally intelligent (capable across diverse domains). Drawing on theories of bounded rationality, this talk argues that these two properties may be fundamentally at odds. An often-cited challenge to decision theory is the inverse relationship between cognitive complexity and rational coherence: empirical studies suggest that less complex animals generally demonstrate greater coherence than humans in decision-theoretic tests. The standard picture of SI, however, posits anomalous agents that are both highly complex and highly coherent. I connect general intelligence with complexity and argue that greater degrees of general intelligence are likely to produce incoherence in rational agents. My argument is that there is a "Valley of Incoherence" between known biological agents and hypothetical SI systems that makes the existence of an SI agent (at least as presented in AI safety) improbable. While AI safety theorists might appeal to evolutionary bottlenecks or collective rationality as counterarguments, current AI development mirrors this trend: narrow RL agents show high coherence, while general-purpose LLMs exhibit increasing incoherence. The complexity-coherence tradeoff thus challenges standard assumptions about superintelligent agency.
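    One of the classic decision-theoretic coherence tests alluded to in the abstract is transitivity of strict preference: an agent preferring A to B and B to C should prefer A to C, and a cyclic pattern leaves it exploitable as a "money pump". A minimal illustrative sketch (the function and the example preference data are hypothetical, not from the talk) of detecting such intransitive cycles:

    ```python
    from itertools import permutations

    def transitivity_violations(prefs):
        """Return the intransitive triples in a strict preference relation.

        prefs: set of (a, b) pairs meaning "a is strictly preferred to b".
        A triple (a, b, c) violates transitivity when a > b and b > c
        but also c > a, i.e. the preferences form a cycle.
        """
        items = {x for pair in prefs for x in pair}
        return [
            (a, b, c)
            for a, b, c in permutations(items, 3)
            if (a, b) in prefs and (b, c) in prefs and (c, a) in prefs
        ]

    # A coherent (transitive) agent: apple > banana > cherry
    coherent = {("apple", "banana"), ("banana", "cherry"), ("apple", "cherry")}

    # An incoherent agent with a preference cycle (money-pumpable)
    cyclic = {("apple", "banana"), ("banana", "cherry"), ("cherry", "apple")}

    print(transitivity_violations(coherent))       # no violations
    print(transitivity_violations(cyclic))         # the cycle, in 3 rotations
    ```

    Animal and human studies of the kind the abstract cites apply tests like this to observed choice behaviour; the complexity-coherence claim is that richer cognition tends to produce more such violations.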


    This LMS Centre talk is financially supported by the project OP JAK: Knowledge in the Age of Distrust, CZ.02.01.01/00/23_025/0008711.