Wesley R. Elsberry
|Quote (GaryGaulin @ Sep. 24 2017,21:52)|
|Quote (Wesley R. Elsberry @ Sep. 24 2017,19:51)|
|The block diagram isn't Gary's code. Gary's code defines what Gary is using. Gary specifically disavowed any need to actually use Heiserman "gamma" processes earlier, and also specifically disavowed actually having to implement *any* neural network, much less Trehub's specific model.|
I earlier explained that a "neural network" is just another RAM that can be optionally used,
Gary earlier communicated clearly that he has no clue what a neural network is, and he confirms that again quite effectively just a few sentences later.
which David Heiserman simply added by using eight 2141 binary static RAM chips for "Main Memory". I do the same thing, by using code to dimension a digital RAM array on my PC.
Yeah, Heiserman used RAM. Not exactly his own notion, and Heiserman never asserted that RAM equated to a neural network.
It is hypocritical for you to suggest that I need to use an Artificial Neural Network, by changing the subject to Arnold Trehub who explained Synaptic Matrices not ANN's, anyway.
Gary doesn't understand what "hypocrisy" means, though he engages in it extensively.
Trehub, for what it is worth, was quite OK with putting his work in the context of neural models and "connectionism", the then-current phrasing for what is now called artificial neural networks. Trehub references Rumelhart, McClelland, Hopfield, and a variety of others in setting forth how his models corresponded to or differed from other connectionist models. The notion that because Trehub calls one of his models a "synaptic matrix" somehow takes it out of the realm of artificial neural systems would be just like saying that "perceptrons", "outstars", "BAMs", and "neocognitrons" were outside the field, too, because a name had been attached to a specific model. Just in case Gary is having a hard time with that, the notion is *ridiculous*, as in, "deserving of ridicule".
I modeled ANN's after modeling synaptic matrices per Arnold Trehub.
This references a fact never put in evidence. Given how little else Gary says actually holds up to scrutiny, asserting that anyone should extend the benefit of the doubt is not justifiable.
Both are very different but still reduced down to what digital RAM can also do and without forgetting or needing reinforcement plus sleep to make a memory permanent.
If true, then the above accuses Gary of having done it wrong. It is, however, entirely consistent with the viewpoint that Gary is simply performing an animation, a system that graphically corresponds to another system with different operational dynamics.
Only difference is the critter has a memory that rivals rare people who can remember everything almost that well and never gets tired out then have to wait for it to wake up again.
"difference": Given that none of the rest of the supposed work is accessible or replicable, this is meaningless bafflegab.
I'm remaining true to what David modeled, which used digital RAM and a digital random generator to take a "guess" when "confidence level" goes to zero.
Heiserman didn't use "educated guess" to describe just that. Heiserman reserved that term for his "gamma" memory update method, a prospective setting of memory for as-yet unexperienced conditions, based on behavior acquired by experience. This is entirely absent from Gary's code, and shows that Gary is far from "true" to Heiserman.
Heiserman (1981, Robot Intelligence with Experiments, pp.20-21):
So in the evolutionary scheme of things, what comes after adding some memory to a system that is capable of responding to changes in its environment? Look at it this way. A purely Alpha-class machine exists in the moment. It has no way to work with events of the past or future. A Beta-class machine can call upon successful solutions to past problems in order to deal with the problems of the moment more effectively. What's missing? What is missing is teh[sic] creature's ability to anticipate events that might occur in the future.
A Gamma-class machine is one capable of generalizing from what it knows from first-hand experience to similar conditions not yet encountered in the environment. The machine works out sets of "educated guesses" regarding the nature of possible situations in the future, studies its own past experiences, and generalizes relevant elements of those experiences, saving them for a time when they might be needed.
Gary is not true to Heiserman. Beta-class is as far as Gary's code gets, and it simply falls back on the Alpha-class mechanism when confidence is exhausted: choose a new response randomly. This is *not* what Heiserman uses "educated guess" to refer to, and Gary is wrong to represent himself as basing his "guess" terminology on Heiserman, when anyone can inspect Gary's code and determine that no Gamma-class operations are undertaken in it.
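To make the distinction concrete for anyone following along: here is a minimal Python sketch of the difference between a Beta-class update and a Gamma-class update as Heiserman describes them. All the names and the similarity function are mine, chosen for illustration; this is not Heiserman's code and not Gary's, just a reading of the passage quoted above.

```python
import random

# Hypothetical response set; stands in for a machine's motor outputs.
RESPONSES = ["forward", "back", "left", "right"]


class BetaMachine:
    """Beta-class: stores responses that worked, keyed by condition.
    When confidence for the current condition is zero, it falls back
    on the Alpha-class mechanism: a purely random pick."""

    def __init__(self):
        self.memory = {}  # condition -> (response, confidence)

    def act(self, condition):
        response, confidence = self.memory.get(condition, (None, 0))
        if confidence == 0:
            response = random.choice(RESPONSES)  # Alpha-style random guess
        return response

    def reinforce(self, condition, response, success):
        _, conf = self.memory.get(condition, (response, 0))
        if success:
            self.memory[condition] = (response, conf + 1)
        else:
            self.memory[condition] = (response, max(conf - 1, 0))


class GammaMachine(BetaMachine):
    """Gamma-class adds exactly what Beta lacks: prospective 'educated
    guesses', seeding memory slots for similar, not-yet-encountered
    conditions from behavior already acquired by experience."""

    def generalize(self, similar_conditions):
        for condition, (response, conf) in list(self.memory.items()):
            if conf == 0:
                continue  # only generalize from experienced successes
            for neighbor in similar_conditions(condition):
                if neighbor not in self.memory:
                    # prospective entry: never experienced, guessed ahead
                    self.memory[neighbor] = (response, 1)
```

A Beta-class machine (and, on inspection, Gary's code) never executes anything like `generalize`; exhausted confidence just reduces it to the random pick. The Gamma-class step is the "educated guess", and it is the part that is missing.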
My adapting to the labeling that Arnold Trehub used to label the same overall underlying circuit does not obligate me to use synaptic matrix or other method of your choice in place of what a PC already has to store memories.
It also doesn't require anyone to consider that Gary is actually applying Trehub's concepts in his work, or that Gary's work should be given any greater consideration because Trehub was a serious scholar, since Gary isn't actually implementing Trehub's systems.
It works just fine for testing things like the spatial reasoning network, which seems to not even matter to you. For some reason your petty quibbling comes before important science progress. Why is that?
Gary ignores that I have previously given my critique on this, and has long had my answer. Short recap: Gary's code has no biological plausibility. I thus do not consider it to be "important science progress". So far, at best it qualifies as an animation with no connection to the system it purports to emulate.
But, as usual with Gary, all of that is entirely a digression from the topic of his critique of people relying on "old junk" and his inability to show that he himself is not in his target group.
All I've requested is that Gary demonstrate that the specific ideas he relies upon are of current utility to others beside himself. Gary is having a very difficult time doing something that should be utterly simple, would that he were relying on things that could be described as something other than "old junk".
"You can't teach an old dogma new tricks." - Dorothy Parker