Aug 3, 2018

Why does AGI not exist? Or does it already exist?

Reading the Wikipedia article reminded me of a similar attempt by Hilbert in his second problem:


In mathematics, Hilbert's second problem was posed by David Hilbert in 1900 as one of his 23 problems. It asks for a proof that arithmetic is consistent – free of any internal contradictions. Hilbert stated that the axioms he considered for arithmetic were the ones given in Hilbert (1900), which include a second-order completeness axiom.

Completeness and consistency together would mean that the entire mathematical universe could be generated automatically from just a few axioms.

Gödel proved that such a complete and consistent system does not exist:

Gödel's incompleteness theorems are two theorems of mathematical logic that demonstrate the inherent limitations of every formal axiomatic system containing basic arithmetic. These results, published by Kurt Gödel in 1931, are important both in mathematical logic and in the philosophy of mathematics. The theorems are widely, but not universally, interpreted as showing that Hilbert's program to find a complete and consistent set of axioms for all mathematics is impossible.

And the method he used was to construct, within any such system, a self-referential statement and prove by contradiction that the system cannot be both complete and consistent.
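Roughly, that construction compresses into a few lines. The LaTeX sketch below is informal and skips the arithmetization details: S is any consistent formal system containing basic arithmetic, Prov_S is its provability predicate, and the corner brackets denote the Gödel number (the numeric code) of a sentence.

% Informal sketch of Goedel's first incompleteness theorem.
% G is the self-referential "Goedel sentence" of the system S.
\begin{align*}
  &G \leftrightarrow \neg\,\mathrm{Prov}_S(\ulcorner G \urcorner)
    && \text{$G$ says: ``I am not provable in $S$''}\\
  &S \vdash G \;\Rightarrow\; S \vdash \mathrm{Prov}_S(\ulcorner G \urcorner)
    && \text{contradicting what $G$ asserts}\\
  &S \vdash \neg G \;\Rightarrow\; S \vdash \mathrm{Prov}_S(\ulcorner G \urcorner)
    && \text{contradicting consistency ($\omega$-consistency, strictly)}\\
  &\text{hence } S \nvdash G \text{ and } S \nvdash \neg G
    && \text{so $S$ is incomplete}
\end{align*}

Either way the assumption collapses, which is the contradiction referred to above: no system of this kind can be both complete and consistent.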

So, generalizing from this: the perfect and complete intelligence that can solve all problems does not exist, but problem by problem, we can identify solutions and solve them. Different problems will likely need different solutions. So in a way, we are already there: AGI for a limited set of problems.

Similarly, we are trying to build a general AI system that can emulate general human intelligence, assuming that software can do everything a human can do. But we have not done it so far. That implies there exists a part of human intelligence which humans themselves cannot comprehend and understand. Translating that to software: the software must be able to do likewise, that is, NOT be able to understand this part of itself, or how to build itself so as to understand itself. This does not contradict the earlier statement that it can do everything a human can do, because we cannot understand ourselves either. And the software, which is built by us, should be able to do everything we can do, but not the things we cannot; since we argued earlier that such things exist, it must be able to recognize their existence and immediately reclassify the problem as unsolvable.

In summary, one core ability of AGI is to recognize what problem is solvable and what is not - and to understand the limits of its own intelligence.
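Turing's halting problem is the computational counterpart of Gödel's argument, and it shows why such self-knowledge can only ever be partial. Below is a minimal sketch of the diagonal argument in Python; halts() here is a hypothetical decider which, by this very argument, cannot actually be written:

# Sketch of Turing's diagonal argument: no program can decide, for every
# (program, input) pair, whether running that program would ever halt.

def halts(program, data):
    """Hypothetical oracle: True iff program(data) eventually halts."""
    raise NotImplementedError("no such general decider can exist")

def diagonal(program):
    # Do the opposite of whatever halts() predicts about a program
    # being run on its own source.
    if halts(program, program):
        while True:      # predicted to halt -> loop forever
            pass
    return "halted"      # predicted to loop -> halt immediately

# Feeding diagonal its own code is contradictory either way:
# if halts(diagonal, diagonal) is True, diagonal loops forever;
# if it is False, diagonal halts at once. So halts() cannot exist.

So even the classification into solvable and unsolvable cannot be done perfectly: a system that understands its own limits must also allow a third answer, "I cannot tell".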

