Topic: AFDave's UPDATED Creator God Hypothesis 2
Drew Headley



Posts: 152
Joined: Mar. 2006

Posted: Oct. 27 2006,15:57

Quote (afdave @ Oct. 27 2006,04:58)
AFDave ...    
Quote
Do you, Drew Headley--who is working on a PhD in neuroscience (I think)--think there is more information contained in white noise than in a Winston Churchill speech?

I'd like to have you on record with a simple YES or NO.  Thx!


Drew...    
Quote
Yes, I do think that white noise has more information than a speech by Winston Churchill, where information is defined by Shannon's metric. It is a consequence of the probability distributions, since white noise by definition has a flat distribution over all symbols and speech does not.

Yes, I am pursuing a Ph.D. in neuroscience.  


Nothing about zero noise here.  In fact, there is no such thing as a noiseless channel in the real world to my knowledge.  It was a very simple question.  Just "is there more info in the speech or the noise?"  Period.  

Information defined by Shannon's Metric:

   
Quote
The confusion comes from neglecting to do a subtraction:

Information is always a measure of the decrease of uncertainty at a receiver (or molecular machine).

If you use this definition, it will clarify all the confusion in the literature.

Note: Shannon understood this distinction and called the uncertainty which is subtracted the 'equivocation'. Shannon (1948) said on page 20:

R = H(x) - Hy(x)

"The conditional entropy Hy(x) will, for convenience, be called the equivocation. It measures the average ambiguity of the received signal."

The mistake is almost always made by people who are not actually trying to use the measure.


OK?  So you got the wrong answer according to Shannon's definition.  

It's OK, though.  Life goes on.

AFDave, you are wrong. You have not disproved what I said. Also, it is true that in that quote I did not specify a noiseless channel; however, I think by that point it was implied, since I had said it before:
 
Quote
According to Dr. Schneider, if we are sending these two messages over a transmission line that does not introduce noise or corrupt the signal, then the information on the receiver's side after transmission will be the receiver's uncertainty before the signal was sent.

http://www.antievolution.org/cgi-bin....p=36795

However, I apologize for the confusion.

As for noiseless channels in the real world, I would guess they do not exist, but I could be wrong. However, that has nothing to do with the point I am making.

Since you are so fond of citing R = H(x) - Hy(x) as evidence for your position, I figured it would be good to actually use this equation to show you how you are misunderstanding information theory.

First, the definitions: R is the rate of transmission over the channel, H(x) is the uncertainty about the message being sent, and Hy(x) is the conditional uncertainty due to noise introduced during transmission. Hy(x) is not in any way related to the content of the message, only to the physical properties of our transmission line. For a binary channel it can be calculated by summing the -p*log2(p) terms for the probability that a received character is correct and the probability that it is wrong. This works out nicely as follows:
No transmission errors:
Hy(x) = -(1*log2(1) + 0*log2(0)) = 0
(taking 0*log2(0) = 0 by convention, since p*log2(p) goes to 0 as p goes to 0)
Thus, when we calculate R we get R = H(x) - 0 = H(x).
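
If it helps to see the arithmetic run, here is a quick sketch in Python (my choice of language; the function name and everything else in it are my own illustration, not anything from Shannon or Schneider) that computes Hy(x) for a binary channel from its error probability:

Code Sample
import math

def equivocation(p_error):
    # Hy(x) for a binary channel: sum the -p*log2(p) terms for the
    # "received correctly" and "received wrongly" outcomes,
    # taking 0*log2(0) = 0 by convention.
    total = 0.0
    for p in (1.0 - p_error, p_error):
        if p > 0.0:
            total -= p * math.log2(p)
    return total

print(equivocation(0.0))  # noiseless line: 0.0, so R = H(x) - 0 = H(x)
print(equivocation(0.1))  # 10% errors: about 0.469 bits of equivocation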
Now, let xr be the white-noise source and xu the source with an unequal symbol distribution (the speech case). Since all the characters of xr have an equal probability of occurring, H(xr) is at its maximum, and R for the white noise will be that maximum value after transmission. Because xu has an unequal distribution, H(xu) will be less than the maximum possible uncertainty, so H(xr) > H(xu); and if both are sent over a noiseless channel, then at the receiver Rr > Ru.
Note, I am assuming that each symbol set has the same number of symbols.
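
Here is the same kind of sketch for the comparison itself. The 27-symbol alphabet (26 letters plus a space) and the particular skewed distribution standing in for "speech" are my own illustrative numbers, not measured English frequencies; any unequal distribution over the same alphabet gives the same qualitative result:

Code Sample
import math

def entropy(probs):
    # Shannon uncertainty H in bits per symbol: -sum of p*log2(p).
    return -sum(p * math.log2(p) for p in probs if p > 0.0)

n = 27  # same symbol count for both sources, as assumed above

# White noise: flat distribution over all 27 symbols.
white_noise = [1.0 / n] * n

# "Speech": an unequal distribution over the same 27 symbols
# (illustrative only: a few common symbols, many rare ones).
speech = [0.15, 0.10, 0.10, 0.08, 0.07] + [0.5 / 22] * 22

print(entropy(white_noise))  # log2(27), about 4.75 bits/symbol
print(entropy(speech))       # about 4.36 bits/symbol, i.e. less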
To demonstrate that a noisy channel would not change this outcome (assuming the noise is less than the uncertainty of the message), I will do a transmission line with noise. It is trivial to show that if we were to transmit the white noise and the unequal-distribution message (speech) over the same line with the same noise properties, so that both suffer the same equivocation N, then:
Given H(xr) > H(xu),
H(xr) - N > H(xu) - N
so Rr > Ru.
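
Plugging in numbers of my own just to show the subtraction: take H(xr) = log2(27), a speech-like H(xu) below it, and the same N off both:

Code Sample
import math

H_xr = math.log2(27)  # white noise over 27 symbols: about 4.75 bits
H_xu = 4.36           # illustrative speech-like value, H(xu) < H(xr)
N = 0.469             # the same equivocation for both transmissions

R_r = H_xr - N
R_u = H_xu - N
print(R_r, R_u, R_r > R_u)  # about 4.29, about 3.89, True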

Do you deny that the math works out like this? If you agree with this math, then you agree with us that a white-noise source has more information per symbol than a source with unequal probabilities for its symbols (e.g. speech).

   