Sunday, June 24, 2012

Wittgen's Bio-Neurological Motivation


This post and the following posts will discuss the motivation for the creation of Wittgen and the advantages that may arise from using it and from thinking along the lines of its paradigms.

One of the tidbits of wisdom of the computer industry, attributed to Alan Perlis in 1982, says: “Beware of the Turing tar-pit in which everything is possible but nothing of interest is easy.” Wittgen is not intended to be a Turing tar-pit; there seems little point in using it for standard general-purpose applications.

Wittgen belongs to a family of programming languages, environments, and virtual or real computers conceived with a tiny instruction set. Gilreath and Laplante (2003) provide a detailed description of the One Instruction Set Computer (OISC). Wittgen has only two instructions and one special variable. The retrieve instruction has one argument, the entire string it contains, whereas the assign instruction takes two: the content to assign and the variable name.
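
As a rough illustration, here is a minimal sketch in Python of a store driven by only these two instructions. The post does not show Wittgen's concrete syntax, so the function names and the example variable are mine, not Wittgen's:

    # A minimal sketch of a two-instruction machine state: a string-keyed
    # memory manipulated only by assign and retrieve.
    memory = {}

    def assign(name, content):
        # The assign instruction: store `content` under the variable `name`.
        memory[name] = content

    def retrieve(name):
        # The retrieve instruction: look up the content stored under `name`.
        return memory[name]

    assign("greeting", "hello world")
    print(retrieve("greeting"))  # hello world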

There are a number of ways of creating a Turing-complete programming system from two instructions. The design of Wittgen is based on the observation that both the human brain and the human mind clearly seem to include these two. In fact, it is questionable whether the brain, for example, contains anything other than the ability to store a memory and to retrieve a specific memory based on some key or some other form of memory instigator. (This assertion excludes task-specific areas of the brain such as vision processing.) If the brain is a computer, or at least the part of it that is capable of discursive reason or other interesting intellectual achievements, then it is important to understand how these capabilities are achieved. If the brain is a two-instruction computer based on assign and retrieve, then Wittgen should be a very interesting experiment.

The current understanding of the human brain sees it as a network of some 10^11 neurons. The biological neural network is often modeled using Artificial Neural Networks (ANNs), of which there is a large variety. The details of the communication between neurons are continuously being discovered. For example, Baslow (2009) shows that a frequency-encoded language is used. This certainly paints a more complex picture than the simple weighted aggregation envisioned by early ANNs. Nevertheless, at the risk of terrible simplification, the functionality of a neural network taken as a whole is that, in response to a specific input pattern, it will produce an output pattern. This may be seen as a fixed key-value response, except that the response (a) can be learnt and (b) can be robust in the face of noise or input-pattern incompleteness.
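
To make point (b) concrete, here is a toy stand-in, not a real ANN, for that noise-robust key-value behaviour: given a possibly corrupted input pattern, return the response whose stored key is nearest in Hamming distance. The bit patterns and responses are invented for illustration:

    # Stored pattern -> response associations.
    stored = {
        "10110010": "pattern A response",
        "01101100": "pattern B response",
    }

    def hamming(a, b):
        # Count the positions at which two equal-length patterns differ.
        return sum(x != y for x, y in zip(a, b))

    def recall(pattern):
        # Return the response for the stored key closest to the input, so a
        # noisy or incomplete pattern still settles on the right answer.
        nearest = min(stored, key=lambda k: hamming(k, pattern))
        return stored[nearest]

    print(recall("10110010"))  # exact key       -> pattern A response
    print(recall("10111010"))  # one bit flipped -> pattern A response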

Traditionally there has been a conflict between advocates of ‘connectionist’ architectures and those of symbolic or classic computer architectures. Connectionist is a term that includes ANNs but also technologies that bear little resemblance to biological neural networks. More recently there have been attempts to combine connectionist and symbolic systems. At first this approach was referred to as “hybrid”, but in the last few years the term of choice seems to be Neuro-Symbolic Computing. Sun (1997) and Bader and Hitzler (2005) provide surveys of the research being conducted into these hybrids. Examples include d’Avila Garcez et al. (2009), who explore the implementation of modal and other logics as ANNs, and Siegelmann (1994) and Neto et al. (2003), who create programming languages designed to be compiled into neural networks.

Neural-symbolic research addresses, among others, the question of how symbolic systems might be implemented in a neural network as opposed to a von Neumann architecture. The brain seems to be only a (set of) neural network(s). On the other hand, there certainly is some symbolic processing going on. Discursive reasoning, conscious planning, mathematical manipulation and language use seem to be the manipulation of symbolic tokens according to symbolic inference rules (logical or otherwise). Moreover, all this symbolic activity seems to take place in a sequential manner. Certainly, when considering these activities through internal intuitive reflection, they take place in some chronological order, one step at a time. How does the neural network of the brain produce such sequential symbolic capabilities?

This question was already recognized in Rumelhart et al. (1987). The model described there is of a sequence of neuronal activations: at each step an input pattern is fed to a neural net, putting it into an excited state; the net finds the corresponding output pattern and settles into an energy minimum; then another input pattern is provided, initiating the next step. The input and output can be symbolic tokens, and the sequence of activations corresponds to what we imagine is discursive thinking. There is, therefore, no symbolic processor as such, but sequential thinking is nevertheless a sequence of associative-memory symbolic lookups.
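
A sketch of this model in Python: the “program” is nothing but a table of pattern-to-pattern associations, and each step of “thinking” is one lookup whose output becomes the next input. The step names here are invented for illustration:

    # Pattern -> pattern associations; there is no flow controller beyond
    # feeding each output back in as the next input.
    associations = {
        "start": "fetch operands",
        "fetch operands": "add operands",
        "add operands": "store result",
        "store result": "halt",
    }

    state = "start"
    while state != "halt":
        print("activation:", state)
        state = associations[state]  # one associative lookup per step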

Wittgen, approximately, uses this model. When Wittgen code is running, the effect is a sequence of associative-memory lookups. If you write Wittgen code to do formal logic, say, then the formal logic is implemented as a sequence of associative-memory operations. Since it is well understood how a neural network can implement associative memory, Wittgen therefore provides a plausible model for how formal logic, a sequential symbolic process, can be executed by an ANN or a brain. Wittgen provides more than just a plausible model for human symbolic reasoning, though. It is not just a hypothesis for how reasoning in general might happen; it is a working language in which plausible models can be built for how humans think about specific problems. Used correctly, it can explore how logic, arithmetic, planning, hypothesis formation, debate, strategy and many more fields that form part of what we understand as human reasoning are carried out.
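
As a small example of logic as lookup, here is one modus ponens step done purely with assign and retrieve over string keys. The “fact:”/“rule:” encoding is invented for illustration and is not Wittgen syntax:

    memory = {
        "fact: it is raining": "true",
        "rule: it is raining": "the street is wet",  # if raining, then wet
    }

    premise = "it is raining"
    if memory.get("fact: " + premise) == "true":
        conclusion = memory["rule: " + premise]   # retrieve the consequent
        memory["fact: " + conclusion] = "true"    # assign the inferred fact

    print(memory["fact: the street is wet"])  # true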

Wittgen in its current form does not actually need a neural net to function. While neural nets are very capable of implementing an associative memory, there are other well-known technologies, both in software (hash tables, for example) and in hardware (CAM chips), that can implement the core key-value lookup. Such technologies are extremely fast (a few nanoseconds), while the human neural network seems to take a significant fraction of a second (1-100 ms, intuitively) to retrieve a single symbolic memory. Nevertheless, since associative memory is a plausible bio-neurological option, it forms the core of the language.

It is possible to create long procedures with tens of items in Wittgen. Similarly, it is possible to create nested retrieve instructions with tens of retrieve levels. However, if the goal is plausible modeling of human reason, this should be avoided. Procedures can always be made very short, with at most two steps: one that does some assign and another that refers to the next procedure to execute. By following these guidelines it is possible to create processes that are psychologically very plausible. The human mind is capable of storing a very large number of items in memory, each triggered by a different key. However, it is very difficult to remember a long train of steps accurately, or to remember exactly which step one is up to. (This changes when a page with writing or a scratch pad is used.)

Seen as a model of human symbolic reasoning, the flow of the calculation is controlled by the fact that the only item the subject must actively keep track of is “Doing Now”, which is simply the thought: what am I doing now? All the rest is responsive rather than proactive. For example: “What I am doing now is adding whatever’s in (the top number) to whatever’s in (the bottom number)”, “What I am doing now is adding 5 to 3”, “adding 5 to 3 is 5 plus 3”, “answer is 8”, “What was the next step?”, “next step is remember whatever’s in (the answer) as the sum” and finally “the sum is 8”. Thus, following Rumelhart et al.’s model, there is a sequence of simple memory-access activation states with no complex serial flow controller guiding the process.
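
This trace can be sketched in Python while keeping the spirit of the model: the only actively tracked item is the "Doing Now" entry, each step fires off a lookup, and even the dispatch is itself a lookup. The step names and variable keys are illustrative, not Wittgen's:

    memory = {"top number": "5", "bottom number": "3", "Doing Now": "add"}

    def add():
        # "What I am doing now is adding 5 to 3" ... "answer is 8".
        memory["answer"] = str(int(memory["top number"]) + int(memory["bottom number"]))
        memory["Doing Now"] = "store sum"         # "What was the next step?"

    def store_sum():
        # "remember whatever's in (the answer) as the sum".
        memory["the sum"] = memory["answer"]
        memory["Doing Now"] = "done"

    steps = {"add": add, "store sum": store_sum}  # dispatch is itself a lookup

    while memory["Doing Now"] != "done":
        steps[memory["Doing Now"]]()  # one memory-access activation per step

    print(memory["the sum"])  # 8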

Why, then, the need for a neural net, biological or artificial? Isn't the associative memory a sufficient explanatory mechanism? The answer is that, without changing the core “syntax” of the model, a neural net can replace an associative memory and extend its power. Firstly, a neural net can learn. As the programmed procedure is activated, the connection between the initial starting point and the final result can become stored. Thus alternatives to the full calculation arise, equivalent to “knowing the answer right away”. For example, we could write a multiplication procedure that works by adding the first number to itself as many times as the second number specifies. However, once I do 7 x 6 the hard way enough times, I might simply learn that 7 x 6 = 42. It seems very intuitive that this is the way we learn and increase proficiency at whatever we practice repeatedly.
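
This learning shortcut can be sketched as memoization over the memory store. The caching scheme below is my own illustration of the idea, not a mechanism the post specifies:

    memory = {}

    def multiply(a, b):
        key = f"{a} x {b}"
        if key in memory:      # learned: the answer is retrieved directly
            return memory[key]
        total = 0
        for _ in range(b):     # the hard way: add `a` to itself `b` times
            total += a
        memory[key] = total    # store the association for next time
        return total

    print(multiply(7, 6))  # computed by repeated addition -> 42
    print(multiply(7, 6))  # now a single memory lookup    -> 42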

Secondly, a neural network, while learning, might miss certain differences between similar examples. Missing these differences can present a significant advantage. While an associative memory will always keep “apples” as a separate string from “oranges”, if the outcome for apples and oranges is often the same, a neural network's learning process might easily apply results learned from apples to oranges. One could say that it failed to learn the significance of the difference between the two, or one could say that it has learned to create analogies between the two. Search algorithms, when applied to significant quantities of data, suffer from the fact that the complexity is O(n^D), where D is the dimensionality of the problem. By failing to distinguish between cases that need not be distinguished, one could say that the dimensionality is being collapsed.
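
One crude way to picture this collapse: project the lookup key onto only the features that turned out to matter, so previously distinct keys now retrieve the same learned response. The feature encoding below is invented purely for illustration:

    learned = {}

    def generalised_key(item):
        # Drop the feature that does not matter (which fruit it is) and
        # keep the one that does (that it is a fruit at all).
        category = {"apples": "fruit", "oranges": "fruit", "rocks": "mineral"}
        return category[item]

    learned[generalised_key("apples")] = "edible"  # learned from apples...

    print(learned[generalised_key("oranges")])  # ...applies to oranges: edible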

Thirdly, outside experience may create the memory assignments. Wittgen includes the ability to assign content to a variable, but the database/memory need not start empty, nor is there any reason some other process may not also be writing to memory. The content of memory, the collection of assignments, may include the kinds of associations that neural networks are classically known for. For example, the database may include a neural net trained extensively on good chess moves. The net thus recognizes patterns and excellent moves given specific situations or partial situations (where some of the information, such as where the pawns are on the other side of the board, is considered irrelevant). In that case the role of a Wittgen program is to build the fragmentary one-move-response knowledge into a flow of play. Wittgen is the glue, or the wiring, that turns static associative knowledge into a sequentially ordered strategy. Again, it does this without introducing specialized serial hardware or components.

These three considerations are the reason the neural network basis for Wittgen is not simply replaced by associative memory. This does not mean, however, that an actual implementation of Wittgen may not, for performance reasons, use an associative memory technology that is not a neural network.

This post has presented what may be called the neural network motivation for Wittgen. The next few posts will explore the philosophical aspects of the language. These form another motive for the design of Wittgen.

----

References:

d’Avila Garcez, Arthur; Lamb, L. C.; Gabbay, D. M. (2009). Neural-Symbolic Cognitive Reasoning. Cognitive Technologies. Springer-Verlag. ISBN 978-3-540-73245-7.

Bader, Sebastian; Hitzler, Pascal, (2005). "Dimensions of Neural-symbolic Integration - A Structured Survey". arXiv:cs/0511042v1

Baslow, Morris H. (2009). "The Languages of Neurons: An Analysis of Coding Mechanisms by Which Neurons Communicate, Learn and Store Information." Entropy 2009, 11, 782-797; doi:10.3390/e11040782

Gilreath, William F.; Laplante, Phillip A. (2003). Computer Architecture: A Minimalist Perspective. Springer Science+Business Media. ISBN 978-1-4020-7416-5

Neto, J. P.; Siegelmann, H. T.; Costa, J. F. (2003). "Symbolic processing in neural networks." Journal of the Brazilian Computer Society, 8(3).

Rumelhart, D. E.; Hinton, G. E.; McClelland, J. L.; Smolensky, P. (1987). "Schemata and Sequential Thought Processes." In D. E. Rumelhart, J. L. McClelland, and the PDP Research Group (Eds.), Parallel Distributed Processing, Vol. 2: Psychological and Biological Models. Cambridge, MA: MIT Press.

Siegelmann, Hava T., (1994). "Neural Programming Language." AAAI-94 Proceedings.

Sun, R. (1997). "An Introduction to Connectionist Symbolic Integration." In R. Sun and F. Alexandre (Eds.), Connectionist-Symbolic Integration. Hillsdale, NJ: Lawrence Erlbaum Associates.
