Barriers to Complexity-Theoretic Proofs that Achieving AGI Using Machine Learning is Intractable
Abstract
A recent paper (van Rooij et al. 2024) claims to have proved that achieving human-like intelligence by learning from data is intractable in a complexity-theoretic sense. We identify an unjustified assumption in the proof about the distribution of (input, output) pairs presented to the system. We briefly discuss this assumption in the context of two fundamental barriers to repairing the proof: the need to precisely define "human-like," and the need to account for the fact that a particular machine learning system will have particular inductive biases that are key to the analysis.