1 added 15th September, 2006 at 11:30:49
By the way, only the quotation is by the scientist working on Prozac; the rest of the argument and analysis is my own.
2 Peer review [reviewer #7116] added 15th September, 2006 at 20:11:08
Essentially, this is a claim that a brain of complexity A cannot “understand” anything of complexity B, where B>A. The notion of “understanding,” however, leaves a great deal undefined. Certainly it is true that a human brain cannot consciously plot out and consider every neural pathway in itself, but nor can we consciously plot out, for example, every vector on a billiard ball. Yet we do understand Newtonian mechanics.
I think the appropriate avenue for furthering this observation is to ask: “can we even imagine a brain so complicated that it could, in fact, understand itself?” If so, how would we test for that understanding? Do humans pass that test? But if we decide that we cannot even imagine such a situation, then the observation becomes, to some degree, recursive.
Originality: 2, Importance: 2, Overall quality: 3
3 added 16th September, 2006 at 06:08:53
To answer the question “can we even imagine a brain so complicated that it could, in fact, understand itself?”, I would appeal to the fact that there is genetic variation between the brains of different animal species.
Using your notation of B vs A, as before, I would postulate that B can understand A if and only if B > A; the case B = A is excluded.
Hence, there are only two ways to understand the brain.
The first way is to increase B. For example, a supercomputer of “intelligence” higher than the human brain’s could be built, and would hence be able to “comprehend” the human brain.
The second way is to decrease A. An example would be to study the brains of other animals, such as rats and apes, first, and then use that information to fill in the “gaps” in our knowledge of the human brain, where applicable.
Currently, neuroscience generally uses human minds to study the human brain, where B ≈ A. Hence it is not really effective, as suggested by the fact that mental illnesses remain among the hardest illnesses to treat.
4 Peer review [reviewer #187] added 17th September, 2006 at 17:04:02
What is this observation for? Simply taking somebody else’s words and rephrasing them is no basis for a piece of academic work: the author adds nothing to the original statement at all. In addition, the previous review is correct to note that there is no definition of what ‘understanding’ is here. And even leaving this issue aside, there is no evidence or argument presented to support the idea that a simple brain could not understand something complex. Turing showed us that an extremely simple computational device can in principle compute anything at all that can be computed. By analogy, one has to be very careful about assuming that a simple device is necessarily limited in what it can achieve, and certainly should not make blanket statements like this.
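To make Turing’s point concrete, here is a minimal sketch (an illustration, not part of any cited work) of such a simple-but-general device: a Turing-machine interpreter of a few lines, driven entirely by a rule table. The particular machine shown is a toy that flips a string of bits; the point is that the same tiny interpreter runs any machine you can write as such a table.

```python
# A minimal Turing-machine interpreter: a tape plus a rule table
# (state, symbol) -> (new_state, new_symbol, move) is all the
# "device" there is; the complexity lives in the rules, not the machine.

def run_tm(rules, tape, state="start", halt="halt", blank="_"):
    """Run a Turing machine given as a dict of transition rules."""
    tape = dict(enumerate(tape))  # sparse tape: position -> symbol
    pos = 0
    while state != halt:
        symbol = tape.get(pos, blank)
        state, tape[pos], move = rules[(state, symbol)]
        pos += 1 if move == "R" else -1
    return "".join(tape[i] for i in sorted(tape) if tape[i] != blank)

# Toy machine: walk right, flipping each bit, halt at the first blank.
flip = {
    ("start", "0"): ("start", "1", "R"),
    ("start", "1"): ("start", "0", "R"),
    ("start", "_"): ("halt", "_", "R"),
}

print(run_tm(flip, "0110"))  # -> 1001
```

The interpreter itself never grows, no matter how elaborate the rule table becomes, which is the sense in which a very simple device is not obviously limited in what it can achieve.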
Originality: 1, Importance: 1, Overall quality: 1
5 Peer review [reviewer #6621] added 21st September, 2006 at 21:24:50
If cognition turns out to involve only a very small part of the brain’s activity, then understanding the rest of the brain could be possible.
The rest of the brain might then be able to understand the cognitive part.
Originality: 6, Importance: 5, Overall quality: 5
6 Peer review [reviewer #54665] added 28th November, 2006 at 17:32:00
The quotation sounds familiar - it may not be original to the scientist working on Prozac either (although I haven’t checked this). In any case, the sentiment echoes Gödel’s theorem, wherein the mathematician Kurt Gödel showed that either:
1. Mathematics is inconsistent, or
2. If it is internally consistent, there must be statements that are mathematically true but cannot be proved within mathematics.
The same result holds for any formal system (cf., the “halting problem” for Turing machines) and is essentially the basis of a number of arguments against the possibility of true artificial intelligence (e.g., Lucas, 1961; Penrose, 1989) although it has also been used as part of a well-known argument for how the brain, and artificial intelligences, might work (Hofstadter, 1979). The current sentiment is a little less sophisticated however and only goes as far as suggesting that there are things that are mentally (or neurally) true that may be mentally (or neurally) unknowable. This is certainly a possibility although, to my knowledge, no-one has proven that it MUST be the case.
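The halting-problem argument mentioned above has a short self-referential core, sketched below (as an illustration only; the `halts` function is hypothetical, since Turing’s result is precisely that no such decider can exist):

```python
# Diagonal argument for the halting problem, sketched in Python.
# Assume, for contradiction, that halts(program, argument) correctly
# decides whether program(argument) halts. No such total function
# exists; here it is only a hypothetical placeholder.

def halts(program, argument):
    """Hypothetical halting decider -- assumed, for contradiction."""
    raise NotImplementedError("Turing: no such decider exists")

def paradox(program):
    # Do the opposite of whatever the decider predicts about
    # running `program` on its own source.
    if halts(program, program):
        while True:      # predicted to halt? then loop forever
            pass
    return               # predicted to loop? then halt at once

# Considering paradox(paradox): if halts(paradox, paradox) returns True,
# paradox(paradox) loops forever; if False, it halts immediately.
# Either answer is wrong, so halts() cannot exist.
```

The analogy to the comment above is only loose: Gödel and Turing give hard limits on what a formal system can establish about itself, whereas the original observation only conjectures such a limit for brains.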
Originality: 2, Importance: 2, Overall quality: 2