N.Wells
Posts: 1836 Joined: Oct. 2005
Quote (GaryGaulin @ May 30 2015,14:25) | The damn trolls keep making junk up as they go along, while the science defenders go on condoning it:
Quote | Finding Ground Truth from Above Catherine Clabby
The vegetation covering much of Earth makes it tough to survey the planet’s surface from above. In other words, it’s difficult to see the ground-level features for the trees. Airborne light detection and ranging (LiDAR) technology has changed that. Combining laser surveying instruments and GPS, researchers make bare-Earth maps of thousands of square kilometers with decimeter resolution. William E. Carter cofounded the National Center for Airborne Laser Mapping, which is funded by the National Science Foundation and operated by the University of Houston and the University of California, Berkeley. Carter discussed the promise of the technology with American Scientist associate editor Catherine Clabby |
http://www.americanscientist.org/issues.....m-above
Quote | Statistics and machine learning In machine learning, the term "ground truth" refers to the accuracy of the training set's classification for supervised learning techniques. This is used in statistical models to prove or disprove research hypotheses. The term "ground truthing" refers to the process of gathering the proper objective (provable) data for this test. Compare with gold standard (test).
Bayesian spam filtering is a common example of supervised learning. In this system, the algorithm is manually taught the differences between spam and non-spam. This depends on the ground truth of the messages used to train the algorithm – inaccuracies in the ground truth will correlate to inaccuracies in the resulting spam/non-spam verdicts. |
http://en.wikipedia.org/wiki.......d_truth
None of the above has any relevance at all to unsupervised learning in cognitive systems that demonstrate how intelligence works. |
Bullshit.
Ground truthing is clearly understood as objectively testing your model / algorithm / output / data / analytical process against objective real data, against some known standard. It's the process of asking whether one's program / procedure properly produce real outcomes. It's fundamental to most work with models, and you don't do it. (We'll get to unsupervised learning in a moment.)
In my own research involving LiDAR data, I occasionally need to compare the processed output against the real terrain (the ground) to understand when and how data processing produces artifacts. For example, LiDAR does fantastically well at showing ploughing in flat fields, but the algorithms for removing houses, trees, bridges, and so on are imperfect, so you may get odd local effects and have difficulty distinguishing some types of rough ground from areas where trees have been processed out. That's a nice example of necessary reality-checking.
If you were doing GPS surveying (particularly in the early days, with less accurate systems), you needed to survey a known area or re-occupy specific points to get an understanding of precisely how crappy the output could be (the errors were typically worse than the estimates provided by the GPS unit's internal calculations). That's another form of ground-truthing.
In global climate models, ground-truthing means building the program with one set of data (for one year, one region, or one set of conditions) and testing it to see whether it can produce output that matches real measurements for another year, region, or set of conditions. Once again, this is reality-checking to see that the program is producing something real, meaningful, correct, and useful.
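That train-on-one-period, test-on-another idea can be sketched in a few lines. This is a toy, not a climate model: it fits a trivial straight line to invented "year 1" observations and then ground-truths the fit against invented "year 2" measurements it never saw.

```python
def fit_line(xs, ys):
    """Ordinary least-squares fit y = a*x + b."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return a, my - a * mx

# "Train" on year-1 observations (invented data)...
year1_x = [0, 1, 2, 3]
year1_y = [1.0, 3.1, 4.9, 7.2]
a, b = fit_line(year1_x, year1_y)

# ...then ground-truth against year-2 observations the fit never saw.
year2_x = [4, 5]
year2_y = [9.0, 11.1]
misfit = max(abs((a * x + b) - y) for x, y in zip(year2_x, year2_y))
print(f"worst misfit on held-out data: {misfit:.2f}")
```

If the held-out misfit is much worse than the fit to the training data, the model has memorized its inputs rather than captured anything real.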
Quote | "In machine learning, the term 'ground truth' refers to the accuracy of the training set's classification for supervised learning techniques. This is used in statistical models to prove or disprove research hypotheses. The term 'ground truthing' refers to the process of gathering the proper objective (provable) data for this test." | That's collecting real data (data that can be proven or data whose accuracy is known) and seeing whether your learning algorithm can process it correctly, which YOU ARE NOT DOING. https://www.cs.utexas.edu/~pstone....sh.html https://books.google.com/books?i....f=false
With respect to supervised learning by spam filters, one trains the system by providing it with examples of spam labelled as such, and one tests it by seeing whether it recognizes instances of spam (real or generated by the programmer, but different from the ones it learned on) that the program's creator wishes it to identify as spam, as opposed to passing spam through or blocking stuff that isn't spam. That's ground-truthing when making a spam filter. (For a more general discussion of training expert systems, see http://www.quora.com/When-es....ing-set .) If you've done a great job of making a program that is capable of machine learning, the program will gradually improve its algorithms for recognizing spam. You don't just let it label stuff as spam and accept its decisions merely because it said so. Ground-truthing provides an excellent and reliable route to improved models: Quote | [from http://ibmdatamag.com/2014.......14 ] Machine learning has a critical dependency on learned humans. Without a baseline set of training data labeled by one or more human experts, many machine-learning algorithms can’t get off square one. They search for data patterns that are consistent with those previously tagged and flagged by a human in the know. This description is a well-established machine-learning approach called supervised learning. |
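The spam-filter case can be made concrete with a toy word-count Naive Bayes classifier: it is trained on hand-labelled messages and then ground-truthed against held-out messages whose labels the classifier never saw. All messages here are invented for illustration.

```python
import math
from collections import Counter

# Hand-labelled training messages: the "ground truth" the filter learns from.
train = [
    ("buy cheap pills now", "spam"),
    ("cheap offer click now", "spam"),
    ("meeting agenda for monday", "ham"),
    ("lunch on monday then meeting", "ham"),
]

counts = {"spam": Counter(), "ham": Counter()}
totals = {"spam": 0, "ham": 0}
for text, label in train:
    for word in text.split():
        counts[label][word] += 1
        totals[label] += 1
vocab = {w for c in counts.values() for w in c}

def classify(text):
    """Pick the label with the higher log-likelihood, with add-one smoothing."""
    scores = {}
    for label in counts:
        score = 0.0
        for word in text.split():
            score += math.log((counts[label][word] + 1)
                              / (totals[label] + len(vocab)))
        scores[label] = score
    return max(scores, key=scores.get)

# Ground-truthing: held-out messages with known labels, never seen in training.
held_out = [("cheap pills offer", "spam"), ("monday meeting agenda", "ham")]
correct = sum(classify(t) == lab for t, lab in held_out)
print(f"accuracy on held-out ground truth: {correct}/{len(held_out)}")
```

The held-out accuracy is the reality check: if the filter only did well on messages it had already memorized, that number would expose it.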
You are correct that unsupervised learning is a little different, because there one is allowing the computer to make its own associations without regard to whether we think it is making the correct associations in the right way ( http://venturebeat.com/2014.......d-truth ). However, (a) this stuff is still eventually ground-truthed, just much later in the process, by whether it actually produces real answers (testing whether Watson could handle Jeopardy was an excellent demonstration of ground-truthing), and (b) people are examining alternatives to ground-truthing for intermediate-level testing of systems (e.g. http://eturwg.c4i.gmu.edu/?q=node........6 ). However, that gives you no comfort, because you aren't operating at that level. Neither you nor your program is doing much in the way of unsupervised learning: you are indeed unsupervised, but you demonstrably aren't learning very much, and your program is not doing modern machine learning, supervised or not. While your "model bug" accumulates experiences and benefits from them, it doesn't improve its own algorithms for processing those experiences, so it's a fairly piss-poor example of modern machine learning.
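The contrast with supervised learning is easy to show: an unsupervised method groups data with no labels at all, and any ground-truthing happens only afterwards, by asking whether the discovered groups correspond to anything real. The data and the choice of two groups below are invented; this is deliberately tiny one-dimensional k-means.

```python
points = [1.0, 1.2, 0.8, 9.7, 10.1, 10.3]  # no labels attached to anything

centers = [min(points), max(points)]  # crude initialisation
for _ in range(10):
    # Assign each point to its nearest center, then recompute the centers.
    clusters = [[], []]
    for p in points:
        nearest = 0 if abs(p - centers[0]) <= abs(p - centers[1]) else 1
        clusters[nearest].append(p)
    centers = [sum(c) / len(c) for c in clusters]

print("discovered groups:", clusters)
# Only now, after the fact, could you compare these groups with external
# knowledge (the ground truth) to see whether the structure is real.
```

Nothing in the loop ever consults a correct answer; the algorithm finds structure on its own, and validation against reality is a separate, later step.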
Worse, you aren't "demonstrating how intelligence works". Your program does not identify known examples of intelligence, and it cannot quantify intelligence because you still lack a valid operational definition. You are still modelling insects with a hippocampus - the simplest bit of reality-checking would show you that that is wrong. (Yes, they have "mushroom bodies" that are vaguely functionally equivalent in some respects, but they don't have hippocampi.) You don't do anything even remotely approaching any version of ground-truthing, because you are foolish and don't know what you are doing or how to do it. You would be far better off if you did some ground-truthing rather than screwing around in marginally related areas trying to justify your incompetence.
Over on Sandwalk at http://sandwalk.blogspot.com/2015.......t-form, Chris B tells you Quote | Gary, Your use of the word "guesses" implies some sort of conscious agency to the process of somatic hypermutation and B cell affinity maturation. This is not implied at all in the video you link. However, affinity maturation shows in principle how random mutation and natural selection can work, and is evidence against ID creationist claims that random mutation cannot improve protein binding sites. | and Quote | If you want to replace it with YOUR model, then it is YOU who has some explaining to do. Provide your one paragraph summary of the model you want to replace it with, and explain how it fits the data better than my description. That's how science works. Don't send me off to red herring websites. In addition, I don't need to provide you with a better scientific model for "intelligent cause". You have yet to provide evidence that "intelligent cause" has anything to do with affinity maturation or any other naturally observed phenomena. I am not going to chase that red herring in circles. |
Diogenes tells you, Quote | Gary, you have done nothing to explain the origin of intelligence. You merely allege a cause without evidence. Allegations of cause are not necessarily explanations. To count as scientific explanations, they must either 1. be deductions from generally observed principles or 2. lead to unusual, distinctive, predicted observable phenomena that match observations.
You never did either. You are what I call a "definitional crackpot", someone who gives idiosyncratic definitions of words and then insists that his definitions have to be treated as if empirically proven. Thus you claim human intelligence is based on "cellular intelligence" which is based on "molecular intelligence" etc., defining a bunch of terms that no one gives a shit about, and that you can't get into the peer-reviewed literature, because you have never demonstrated that such definitions are an essential part of a theory that makes testable predictions about observable phenomena.
Yet you insist upon acting as if your definitions have the status of empirical observations.
An allegation of cause unsupported by evidence is not an explanation. You have no explanation, and you have contributed nothing to "cognitive science", and none of us here give a shit about cognitive science anyway.
The question is affinity maturation in the immune system. That is based on random mutations, not "guesses". When you called them "guesses" you were lying. |
and all you've got to reply with are your usual insults and misunderstandings. At some point you ought to re-evaluate what you are doing.