Exploring Complexity and Intelligence: A Curated Reading List
Lately, I've been delving into the concept of complexity and its connection to learning and intelligence. I came across a website called LessWrong and have been engrossed in its dense, informative content. Below are notes on some of the interesting articles I've read, although I have to admit I don't grasp all of it - maybe about 50% at best.
Complexity and Intelligence
Building Something Smarter — LessWrong
- The creation can exceed the creator. For example, the creator of Deep Blue is not as skilled at chess as Deep Blue itself.
- We cannot argue that humans are necessarily at least as good at chess as Deep Blue just because humans created it. By that logic, we might as well say that proteins are as smart as humans.
Say Not "Complexity" — LessWrong
- The author advises against using the word "complexity" for unknown processes, as it can create a false sense of understanding. Instead, they suggest using "magic" as a placeholder for unsolved problems, serving as a reminder of work yet to be done.
- The article also links forward to the next piece, "Complexity and Intelligence" on LessWrong, where the author explains Kolmogorov complexity, which has a different meaning from the everyday English sense of "complexity".
That Alien Message — LessWrong
- I don't quite understand the content.
- I asked Generative AI to help me summarise the key message: The human capacity for extracting information and gaining understanding from sensory data is far more limited than we often realise, even for highly intelligent individuals. The story illustrates that there's an enormous gap between what humans can deduce from information and the theoretical limits of information extraction. The author suggests that we often underestimate the potential capabilities of truly advanced AI, which might be able to extract and process information in ways that are difficult for us to comprehend.
- The more complex an explanation is, the more evidence you need just to locate it in the space of possible explanations. In Traditional Rationality, this is often misleadingly phrased as "The more complex a proposition is, the more evidence is required to argue for it."
- If you only consider how well an explanation fits the data and ignore complexity, you will always prefer the hypothesis that simply memorises the data and assigns it 100% probability. If you only consider complexity and ignore fit, the fair-coin hypothesis will always seem simpler than any alternative. Good explanations must do both: fit the data well and stay simple (have a short program length), as the sketch below illustrates.
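To make the trade-off concrete, here is a minimal sketch (my own illustration, not from the article; the "program length" figures per hypothesis are invented stand-ins). It scores each hypothesis by a two-part description length: bits to state the hypothesis plus bits to encode the data under it.

```python
import math

def data_bits(seq, p_heads):
    """Bits to encode the sequence under a Bernoulli(p_heads) model
    (the negative log2-likelihood)."""
    return -sum(math.log2(p_heads if flip == 'H' else 1 - p_heads)
                for flip in seq)

# 100 flips, 90 heads: the coin looks heavily biased.
seq = 'H' * 90 + 'T' * 10

# (hypothesis, rough program length in bits, bits to encode the data)
hypotheses = [
    # Memorising predicts the data with probability 1 (zero data bits),
    # but the program must contain all 100 flips verbatim.
    ('memorise the sequence', len(seq), 0.0),
    ('fair coin, p = 0.5', 5, data_bits(seq, 0.5)),
    ('biased coin, p = 0.9', 10, data_bits(seq, 0.9)),
]

for name, model_bits, d in hypotheses:
    print(f'{name:22s} model={model_bits:4} bits  '
          f'data={d:6.1f} bits  total={model_bits + d:6.1f} bits')
```

Judging by data bits alone picks the memoriser; judging by model bits alone picks the fair coin; the sum picks the biased coin, which both fits well and stays simple.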
Complexity and Intelligence — LessWrong
- The author argues that Kolmogorov complexity can be counterintuitive. For example, the entire universe might have lower Kolmogorov complexity than a single planet within it: a short program (simple physical laws plus initial conditions) can generate the whole universe, whereas specifying one planet also requires the information needed to locate it within that universe.
- A key point to remember is that closed systems cannot increase their Kolmogorov complexity, which some take to mean that AI systems need external input to truly improve.
- The article challenges the idea that increasing Kolmogorov complexity is necessary for increasing intelligence or problem-solving ability. Simply adding complexity, such as random bits, doesn't improve mathematical ability; the Halting Problem, for example, doesn't yield to added complexity alone. The sketch below illustrates this.
- The author uses thought experiments involving simulated universes to argue that external sensory information may not provide any mathematical knowledge that couldn't be obtained through internal simulation.
- While there may be theoretical limits to what a closed system can achieve mathematically, those limits sit far beyond anything we would call superintelligence.
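A quick way to see why raw Kolmogorov complexity is a poor proxy for capability: padding a program with random bits raises its complexity without improving anything. A hedged sketch, using zlib's compressed size as a crude upper-bound stand-in for Kolmogorov complexity (which is uncomputable):

```python
import os
import zlib

def approx_complexity(data: bytes) -> int:
    """Compressed size: a rough upper-bound proxy for Kolmogorov
    complexity, which itself cannot be computed exactly."""
    return len(zlib.compress(data, 9))

program = b"def add(a, b): return a + b"  # does something useful
padded = program + os.urandom(10_000)     # same behaviour, plus noise

print(approx_complexity(program))  # small
print(approx_complexity(padded))   # ~10,000 bytes larger, no smarter
```

The padded version is vastly more "complex" by this measure yet no better at anything, mirroring the article's point that complexity growth and capability growth come apart.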
Contra Anton 🏴☠️ on Kolmogorov complexity and recursive self improvement — LessWrong
- The author refutes @atroyn's claim on Twitter that recursive self-improvement is impossible, arguing that a program with a better model of the world does not necessarily have a higher Kolmogorov complexity.
- Ideal Bayesian reasoning can be implemented by a relatively small program, while non-ideal reasoners (which make errors) can have much higher Kolmogorov complexity yet still be worse predictors. A successor program can therefore have equal or lower Kolmogorov complexity than its predecessor and still predict outcomes better; the toy comparison below tries to capture this. (I do not fully understand the argument. It is similar to the previous article, Complexity and Intelligence.)
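Here is a toy version of that point (my own construction, not the author's): a short predictor using Laplace's rule of succession versus a longer predictor cluttered with ad-hoc overrides, both scored by average log-loss on a biased random bit stream.

```python
import math
import random

random.seed(0)
stream = [1 if random.random() < 0.8 else 0 for _ in range(2000)]

def laplace(history):
    """Short, near-ideal predictor: Laplace's rule of succession."""
    return (sum(history) + 1) / (len(history) + 2)

def cluttered(history):
    """Longer predictor full of ad-hoc overrides: more code (a higher
    description length) yet systematically worse predictions."""
    p = laplace(history)
    if len(history) % 7 == 0:
        p = 0.5                  # superstitious override
    if history[-3:] == [1, 1, 1]:
        p = min(p + 0.3, 0.99)   # overconfident streak-chasing
    if history[-3:] == [0, 0, 0]:
        p = max(p - 0.3, 0.01)
    return p

def avg_log_loss(predict):
    """Average bits of surprise per prediction (lower is better)."""
    loss = 0.0
    for t in range(1, len(stream)):
        p = predict(stream[:t])
        loss -= math.log2(p if stream[t] == 1 else 1 - p)
    return loss / (len(stream) - 1)

print('laplace  :', avg_log_loss(laplace))    # lower (better)
print('cluttered:', avg_log_loss(cluttered))  # higher, despite more code
```

The shorter program wins: description length and predictive skill are separate axes, so a self-improving successor need not grow in Kolmogorov complexity.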
Other Related Topics
Solomonoff Induction
Solomonoff induction is an inference system defined by Ray Solomonoff that learns to correctly predict any computable sequence from only the absolute minimum amount of data. In a certain sense, it is the perfect universal prediction algorithm. (A toy sketch follows the reading list below.)
- An Intuitive Explanation of Solomonoff Induction — LessWrong
- Occam's Razor and the Universal Prior — LessWrong
- A Semitechnical Introductory Dialogue on Solomonoff Induction — LessWrong
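Real Solomonoff induction is uncomputable, but a toy version over a tiny hand-picked hypothesis class conveys the mechanics: give each hypothesis a 2^(-length) prior (shorter programs weigh more), discard those the data contradicts, and predict with the mixture. The hypotheses and their "program lengths" below are invented for illustration:

```python
# Each hypothesis: (name, program length in bits, rule for bit i).
hypotheses = [
    ('all zeros', 4, lambda i: 0),
    ('all ones', 4, lambda i: 1),
    ('alternating', 6, lambda i: i % 2),
    ('ones at perfect squares', 12,
     lambda i: 1 if int(i ** 0.5) ** 2 == i else 0),
]

def predict_next(observed):
    """Probability that the next bit is 1: keep hypotheses consistent
    with the data so far, weight each by its 2**(-length) prior."""
    consistent = [(length, rule) for _, length, rule in hypotheses
                  if all(rule(i) == bit for i, bit in enumerate(observed))]
    total = sum(2.0 ** -length for length, _ in consistent)
    n = len(observed)
    p_one = sum(2.0 ** -length for length, rule in consistent
                if rule(n) == 1)
    return p_one / total

print(predict_next([1, 1]))           # ~0.996: the shorter 'all ones' dominates
print(predict_next([1, 1, 0, 0, 1]))  # 0.0: only 'ones at perfect squares' fits
```

In the real construction the hypothesis class covers every computable sequence, so some hypothesis always survives the data; this toy class can run out, which is exactly why it is only a sketch.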
Why Do Deep Neural Networks Work?
Value Learning
Value learning is a proposed method for incorporating human values in an AGI. It involves creating an artificial learner whose actions take into account many possible sets of values and preferences, weighted by their likelihood. Value learning could prevent an AGI from having goals detrimental to human values, helping in the creation of Friendly AI.
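As a rough sketch of the idea (the candidate value sets, weights, and payoffs are all invented for illustration): an agent uncertain about human values keeps a probability-weighted set of candidate utility functions and picks the action with the best expected utility across all of them.

```python
# (probability, utilities per action) for each candidate value set;
# the numbers are made up purely to illustrate the mechanism.
candidate_values = [
    (0.6, {'help': 10, 'wait': 0, 'gamble': 5}),    # most likely values
    (0.3, {'help': 8,  'wait': 1, 'gamble': -20}),  # risk-averse variant
    (0.1, {'help': 2,  'wait': 0, 'gamble': 40}),   # unlikely variant
]

def expected_utility(action):
    """Average the action's utility over the candidate value sets."""
    return sum(p * utilities[action] for p, utilities in candidate_values)

for action in ('help', 'wait', 'gamble'):
    print(f'{action:7s} expected utility = {expected_utility(action):6.2f}')

print('chosen:', max(('help', 'wait', 'gamble'), key=expected_utility))
```

Note how "gamble" is penalised because one plausible value set strongly disapproves of it; hedging across value hypotheses like this is what is meant to keep the agent's goals from diverging from human values.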