The Paradox of the Brain

Yeo Doglas, unconfirmed user (Singapore, Independent Researcher)

Published in neuro.philica.com

Observation
Note: The following quote is not by me, but by a scientist working on the antidepressant Prozac.

“If the human brain were simple enough for us to understand, we would be too simple to understand it”

This seemingly paradoxical statement contains deep truths. If our brain were simple enough to understand, wouldn’t that also mean that we are “simple-minded”, and in turn that we would never understand our own brain completely, since understanding requires the use of that very brain?

On the other hand, if our brain were complex enough to understand complex things, wouldn’t that also mean that our brain is itself difficult to understand?

This observation does not go into the biology of the brain; rather, it is a philosophical argument that there is a fixed limit to how far neuroscience can progress.

As an analogy, it is like the speed of light in physics: one can approach arbitrarily close to the speed of light, but can never reach it.

Likewise, in neuroscience, scientists can learn a great deal about the human mind but, because of the above paradox, can never fully understand it.

Information about this Observation
Peer-review ratings (from 4 reviews, where a score of 100 represents the ‘average’ level):
Originality = 65.14, importance = 47.60, overall quality = 57.44
This Observation was published on 15th September, 2006 at 11:28:39 and has been viewed 13454 times.

Creative Commons License
This work is licensed under a Creative Commons Attribution 2.5 License.
The full citation for this Observation is:
Doglas, Y. (2006). The Paradox of the Brain. PHILICA.COM Observation number 23.




1 Author comment added 15th September, 2006 at 11:30:49

By the way, only the quote is by the scientist working on Prozac. The rest of the argument and analysis is original and my own.


2 Peer review [reviewer #7116, confirmed user] added 15th September, 2006 at 20:11:08

Essentially, this is a claim that a brain of complexity A cannot “understand” anything of complexity B, where B>A. The notion of “understanding,” however, leaves a great deal undefined. Certainly it is true that a human brain cannot consciously plot out and consider every neural pathway in itself, but nor can we consciously plot out, for example, every vector on a billiard ball. Yet we do understand Newtonian mechanics.

I think the appropriate avenue to further this observation is to ask: “can we even imagine a brain so complicated that it could, in fact, understand itself?” If so, how would we test for that understanding? Do humans pass that test? But if we decide that we cannot even imagine such a situation, then the observation becomes, to some degree, recursive.

Originality: 2, Importance: 2, Overall quality: 3


3 Author comment added 16th September, 2006 at 06:08:53

To answer the question “can we even imagine a brain so complicated that it could, in fact, understand itself?”, I would use the fact that there is genetic variation between brains of different species of animals.

Using your notation of B versus A, as before, I would postulate that B can understand A if and only if B > A; the case B = A is excluded.
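
Stated symbolically (the predicate name below is only illustrative, not notation used in the text), the postulate is

$$ \mathrm{understands}(B, A) \iff B > A, $$

where B and A are the complexities of the understanding system and of the system to be understood, so the boundary case B = A is ruled out along with B < A.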

Hence, there are only two ways to understand the brain.

The first way is to increase B. For example, a supercomputer with “intelligence” higher than the human brain’s could be built, and would hence be able to “comprehend” the human brain.

The second way is to decrease A. An example would be to study the brains of other animals, such as rats and apes, first, and then use that information to fill in the “missing gaps” in our knowledge of the human brain, where applicable.

Currently, neuroscience generally uses human minds to study the human brain, where B is approximately equal to A. Hence, it is not really effective, as shown by the fact that mental illnesses remain among the hardest illnesses to treat.


4 Peer review [reviewer #187confirmed user] added 17th September, 2006 at 17:04:02

What is this observation for? Simply taking somebody else’s words and rephrasing them is no basis for a piece of academic work: the author adds nothing to the original statement at all. In addition, the previous review is correct to note that there is no definition of what ‘understanding’ is here. And even leaving this issue aside, there is no evidence or argument presented to support the idea that a simple brain could not understand something complex. Turing showed us that an extremely simple computational device can in principle compute anything at all that can be computed. By analogy, one has to be very careful about assuming that a simple device is necessarily limited in what it can achieve, and certainly should not make blanket statements like this.
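
The point about simple computational devices can be made concrete with a small sketch in Python (the function name and the binary-increment rule table below are illustrative only, not anything referenced in the review): a Turing machine is nothing more than a finite rule table, a tape, and a read/write head, and yet, given a suitable table, such a device can in principle carry out any computation that can be carried out at all.

def run_turing_machine(rules, tape, state="start", blank="_", max_steps=10_000):
    """Run a Turing machine described by `rules` until it halts.

    `rules` maps (state, symbol) -> (symbol_to_write, move, next_state),
    where move is -1 (left) or +1 (right) and next_state may be "halt".
    """
    cells = dict(enumerate(tape))          # sparse tape: position -> symbol
    head = 0
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = cells.get(head, blank)
        write, move, state = rules[(state, symbol)]
        cells[head] = write
        head += move
    span = range(min(cells), max(cells) + 1)
    return "".join(cells.get(i, blank) for i in span).strip(blank)

# Rule table for incrementing a binary number: walk right to the end of the
# input, then move left, turning 1s into 0s until a 0 (or a blank) absorbs the carry.
increment = {
    ("start", "0"): ("0", +1, "start"),
    ("start", "1"): ("1", +1, "start"),
    ("start", "_"): ("_", -1, "carry"),
    ("carry", "1"): ("0", -1, "carry"),
    ("carry", "0"): ("1", -1, "halt"),
    ("carry", "_"): ("1", -1, "halt"),
}

print(run_turing_machine(increment, "1011"))   # prints "1100" (11 + 1 = 12)

The machine itself stays trivially simple; all the sophistication lives in the rule table, which is why the simplicity of a device says little on its own about the limits of what it can do.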

Originality: 1, Importance: 1, Overall quality: 1


5 Peer review [reviewer #6621, confirmed user] added 21st September, 2006 at 21:24:50

If cognition turns out to involve only a very small part of the brain’s activity, maybe understanding the rest of the brain could be possible.

The rest of the brain may then be able to understand the cognitive part.

Originality: 6, Importance: 5, Overall quality: 5


6 Peer review [reviewer #54665, unconfirmed user] added 28th November, 2006 at 17:32:00

The quotation sounds familiar - it may not be original to the scientist working on Prozac either (although I haven’t checked this). In any case, the sentiment echoes Gödel’s theorem, wherein the mathematician Kurt Gödel showed that either:

1. Mathematics is inconsistent, or
2. If it is internally consistent, there must be things which are mathematically true that cannot be proved using mathematics.
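
In its modern form the theorem can be sketched as follows (an informal restatement, not the reviewer's wording): for any consistent, effectively axiomatizable theory $T$ strong enough to express elementary arithmetic, there is a sentence $G_T$ with

$$ T \nvdash G_T \quad\text{and}\quad T \nvdash \neg G_T, $$

and such a $G_T$ can be chosen to be true in the standard model of arithmetic - which is the sense in which option 2 speaks of truths that cannot be proved.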

The same result holds for any sufficiently powerful formal system (cf. the “halting problem” for Turing machines) and is essentially the basis of a number of arguments against the possibility of true artificial intelligence (e.g., Lucas, 1961; Penrose, 1989), although it has also been used as part of a well-known argument for how the brain, and artificial intelligences, might work (Hofstadter, 1979). The current sentiment is a little less sophisticated, however, and only goes as far as suggesting that there are things that are mentally (or neurally) true that may be mentally (or neurally) unknowable. This is certainly a possibility although, to my knowledge, no-one has proven that it MUST be the case.

Originality: 2, Importance: 2, Overall quality: 2



