Hopfield Networks, ANNs, and Mind Maps


Hopfield Network and Human Learning

The Hopfield network is an idealized yet simple model of attractor neural network dynamics. It lends itself well to mathematical examination, though it is poorly suited to practical computational intelligence or to detailed neural modeling. It can, however, be modified. With modifications, the standard Hopfield net can implement continual learning by placing a cap on the absolute values of link weights, which keeps the network functioning as new material arrives. There are drawbacks: for a large number of neurons, maintaining the network may require continually shifting a window of memories, with older patterns fading as new ones are stored.

A Hopfield network is a form of recurrent artificial neural network popularized in 1982 by John Hopfield, although it was described earlier by Little in 1974. As mentioned above, Hopfield nets serve as content-addressable memory systems with binary threshold nodes. To understand how Hopfield nets can support continuous learning, it is worth briefly examining what they are and what they do.

Think of a Hopfield net as storing a collection C of "training patterns," each defined as a subset of a set N of data elements and denoted c1, c2, and so forth. The Hopfield net for C over N is specified by a collection of link weights wij, one per pair of nodes. These weights cause the network to have the elements of C as attractors under standard activation-spreading dynamics, with a threshold at each node. Training begins with all weights set to the base value of zero (Maurer, Hersch & Billard, 2005).
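As a concrete sketch (not from the source; a minimal NumPy illustration using the common bipolar +1/-1 pattern encoding and the classic Hebbian outer-product rule, with all names hypothetical), building the weight matrix from a set of training patterns might look like this:

```python
import numpy as np

def train_hopfield(patterns):
    """Build a Hopfield weight matrix from bipolar (+1/-1) patterns
    c1, c2, ... using the classic Hebbian outer-product rule."""
    n = patterns.shape[1]          # number of nodes in the set N
    W = np.zeros((n, n))           # all weights start at base value zero
    for c in patterns:             # one pass per training pattern
        W += np.outer(c, c)        # strengthen links between co-active nodes
    np.fill_diagonal(W, 0)         # no self-connections
    return W / len(patterns)
```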

The training patterns cj are then cycled through consecutively. For each pattern, the nodes it contains are turned on, and the link weights are adjusted according to whichever standard Hopfield-net learning rule is in use, with R as the learning rate. Once adjustments have been made for all patterns, the network counts as trained. A pattern can then be presented, stimulating its nodes and allowing activation to spread until the network settles.
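This training-and-recall cycle can be sketched as follows (illustrative only; the incremental rule and the asynchronous recall loop are common textbook choices rather than the source's specification):

```python
import numpy as np

def hebbian_step(W, c, R=0.1):
    """Adjust the link weights for one training pattern c (a bipolar
    vector) at learning rate R; cycling this over all patterns
    consecutively yields a trained network."""
    W += R * np.outer(c, c)
    np.fill_diagonal(W, 0)
    return W

def recall(W, cue, sweeps=10, rng=np.random.default_rng(0)):
    """Stimulate the nodes of a cue pattern and let activation spread
    (asynchronous threshold updates) until the state settles."""
    s = cue.copy()
    for _ in range(sweeps):
        for i in rng.permutation(len(s)):
            s[i] = 1 if W[i] @ s >= 0 else -1   # threshold at each node
    return s
```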

A standard Hopfield network can continue to operate with up to 85% of its connections deleted. This robustness suggests practical implementations of Hopfield-net ideas in discussions of the brain or of computational intelligence systems: since only part of the network is needed to recall the training patterns, the model can in theory be mapped onto the neural network of the brain. Capacity is the limiting factor, however; the ratio of storable patterns to neurons is only on the order of 0.1. Hopfield nets therefore cannot deal with a huge influx of information, and the network cannot handle overloading.
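A rough way to probe the robustness claim (a hypothetical experiment; all sizes and the 10% cue corruption are chosen for illustration): store a few patterns, delete 85% of the connections, and check whether a corrupted cue still settles into the stored memory.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 500, 5                                # 500 neurons, 5 patterns (well under capacity)
patterns = rng.choice([-1, 1], size=(m, n))

W = patterns.T @ patterns / float(m)         # Hebbian storage
np.fill_diagonal(W, 0)

drop = rng.random((n, n)) < 0.85             # mark ~85% of links for deletion
drop = np.triu(drop, 1)
drop = drop | drop.T                         # keep the cut symmetric
W[drop] = 0.0

s = patterns[0].copy()
s[:50] *= -1                                 # corrupt 10% of the cue
for _ in range(5):                           # asynchronous recall sweeps
    for i in rng.permutation(n):
        s[i] = 1 if W[i] @ s >= 0 else -1
print("overlap with stored pattern:", (s @ patterns[0]) / n)   # close to 1.0 if recall succeeds
```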

For Hopfield networks to function as a continuous learning tool, they must cope with a constantly changing environment. One strategy is simply to flush the links occasionally and retrain, but this is inefficient and does not work well as a model of learning. A second strategy is to allow the Hopfield network to occasionally "unlearn" things; the literature both supports and criticizes this approach. The unlearning approach generates philosophical interest and may be connected to REM sleep in people.
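The unlearning idea can be sketched roughly as follows (an illustrative reconstruction in the spirit of the classic "unlearning" procedure of Hopfield, Feinstein and Palmer, not an exact algorithm from the sources cited here): let the net fall from a random state into whatever attractor it finds, often a spurious blend of stored memories, and then weaken that attractor slightly.

```python
import numpy as np

def unlearn_step(W, eps=0.01, sweeps=5, rng=np.random.default_rng()):
    """Relax from a random state to an attractor (often a spurious
    mixture of memories), then apply a small anti-Hebbian update so
    that attractor is slightly 'unlearned'."""
    n = W.shape[0]
    s = rng.choice([-1, 1], size=n)
    for _ in range(sweeps):                  # activation spreading
        for i in rng.permutation(n):
            s[i] = 1 if W[i] @ s >= 0 else -1
    W -= eps * np.outer(s, s)                # anti-Hebbian weakening
    np.fill_diagonal(W, 0)
    return W
```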

Another approach is weight-capping: bounding the link weights below and above, which forces the network to give precedence to the most recent memories and to forget old ones. In theory this can substantially reduce memory capacity, yet the literature suggests weight-capping may be the most promising approach to continuous learning with ANNs.
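A minimal sketch of weight-capping (the cap value and the particular update rule are assumptions for illustration): clip every link weight into a fixed band after each Hebbian update, so that old correlations saturate and the most recent patterns take precedence.

```python
import numpy as np

def capped_hebbian_step(W, c, R=0.1, cap=1.0):
    """Hebbian update followed by bounding each weight to [-cap, cap],
    so recent memories dominate and older ones gradually fade."""
    W += R * np.outer(c, c)
    np.clip(W, -cap, cap, out=W)   # the weight cap: bound below and above
    np.fill_diagonal(W, 0)
    return W
```

Because the weights can no longer grow without bound, each new pattern overwrites part of the oldest stored structure, which is exactly the recency-over-history trade-off described above.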

In the extended model, inputs to the ANN module act as sequences of key-points. These sequences are stored within Hopfield networks linked together via a matrix of weights (W), and each sequence is assigned to a class (c = 1, and so on). This arrangement stores the correlations among the neuronal activities, which are bounded and normalized (Maurer, Hersch & Billard, 2005, p. 3).
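The general flavor of such sequence storage (a generic heteroassociative sketch, not Maurer, Hersch and Billard's exact formulation) is to add asymmetric weight terms that push the network state from each key-point toward the next:

```python
import numpy as np

def store_sequence(W, seq, R=0.1):
    """Add asymmetric Hebbian terms linking each key-point to its
    successor, so that recalling x_t drives the state toward x_{t+1}.
    seq: array of bipolar key-point vectors, one row per time step."""
    for x_t, x_next in zip(seq[:-1], seq[1:]):
        W += R * np.outer(x_next, x_t)   # asymmetric: maps state t to t+1
    return W
```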

Still, this area of research is limited and requires more work before successful results emerge from the experimental phase.

Neural networks are an important key to understanding and generating better learning techniques, and current research aims at mapping out the neuron connections of the human brain. In the past, memory and learning processes have been quantified almost universally for symmetrically connected neural networks; in reality, however, neural networks are asymmetrically connected. In a 2013 article, the researchers developed a "nonequilibrium landscape-flux theory for asymmetrically connected neural networks. We found the landscape topography is critical in determining the global stability and function of the neural networks..." (Yan et al., 2013, p. E4185). Here they try to observe the reality of neural networks rather than theorized, symmetrical manifestations.
In another article, researchers design a spiking neural network for learning, mapping, and understanding spatio-temporal brain data (STBD).

Spatio- and spectro-temporal brain data (STBD) are the most commonly collected data for measuring brain response to external stimuli. An enormous amount of such data has been already collected, including brain structural and functional data under different conditions, molecular and genetic data, in an attempt to make a progress in medicine, health, cognitive science, engineering, education, neuro-economics, Brain-Computer Interfaces (BCI), and games (Kasabov, 2014, p. 62).

The researchers sought a model that can handle STBD because it learns from the data and generates connections among clusters of neurons that manifest as chains of neuronal activity. The model is called NeuCube. Once learning has been applied, NeuCube can reproduce these trajectories even from a partial representation of the stimulus data, a process resembling associative memory. Put simply, NeuCube can act as a predictive system for events and brain activities, helping to classify STBD, and when linked with or used alongside other models it may offer a way to understand human learning and replicate the phenomenon in applications.

Authors like Andy Clark have published books on connectionist networks that provide a framework for moving from outdated representations of the human mind to innovative new ones. Clark begins by explaining "mobot" bodies and how human development happens without control from a central executive that selects what the human does and when; instead, human learning is self-organized and emerges over time from interactions among several components. Clark also holds that connectionist networks have important features that provide insight into learning. The first is that they are parallel processors, supporting a view of cognition as highly decentralized.

This means no one is in charge: the brain's activity, much like the unconscious movements of the body, is self-organizing. The second feature is that cognition within a connectionist network is interpreted as pattern completion rather than classical reasoning. It involves recognizing certain actions and patterns and a steer away from formal logic and mathematics. That is why highly capable people can often learn through pattern recognition: they remember something because it is connected to something else in a pattern, like a song or a shade of color.

This can also be seen in external scaffolding and epistemic action, which shift a task from one involving classical reasoning to one involving pattern recognition. "Think, for example, of moving Scrabble tiles around on their tray to see what words you can spell. Doing so changes the nature of the tasks to one of completing patterns" (Chemero, 1998, p. 3). External scaffolding can be very broad, taking in models, culture and language, maps, and tools. Problem solving, for example, becomes pattern completion rather than logical inference. Like the Hopfield networks, Clark offers a new way to understand information processing.

Recognition and regeneration of data are important in any network, especially one that must be adapted for continuous learning. The Cromley report highlights the lack of overall improvement in intelligence, and in measurable proxies for intelligence such as standardized test scores, among problem-solving and thinking programs that use logic problems to teach people to think better. The results lead to the conclusion that generalized thinking ability does not improve through short-term effort but rather through incidental learning over a long period of practice (Cromley, 2000, p. 4). It is like learning the piano: after years of practice, muscle memory lets a person recall the patterns and repetitions of songs faster and faster, until the player intuitively knows which keys to play and at what pacing.

For a model to work in relation to continuous learning, it has to implement pattern recognition and allow people to see and understand the process of getting from point A to point B. Analyzing and recognizing underlying patterns can then improve learning and be adapted to other areas of knowledge. Through weight-capping, Hopfield networks can take in new information, give priority to that new information, and support pattern recognition, which is an integral and natural part of human learning. The autonomous nature of Hopfield networks mimics the natural state of the human brain as it works towards…

References

Chemero, A. (1998). A Stroll Through the Worlds of Animats and Humans: Review of Being There: Putting Brain, Body and World Together Again by Andy Clark. The Philosophical Review, 107(4), 1-10. Retrieved from http://www.theassc.org/files/assc/2369.pdf

Cromley, J. (2000). Learning to think, learning to learn: What the science of thinking and learning has to offer adult education. NIFL Literacy Leader Fellowship Program Reports, 4(1), 1-226. Retrieved from http://eric.ed.gov/?id=ED440258

Kasabov, N. (2014). NeuCube: A spiking neural network architecture for mapping, learning and understanding of spatio-temporal brain data. Neural Networks, 52, 62-76. http://dx.doi.org/10.1016/j.neunet.2014.01.006

Maurer, A., Hersch, M., & Billard, A. (2005). Extended Hopfield network for sequence learning: Application to gesture recognition. Proceedings of ICANN'05. Retrieved from http://infoscience.epfl.ch/record/60062
Yan, H., Zhao, L., Hu, L., Wang, X., Wang, E., & Wang, J. (2013). Nonequilibrium landscape theory of neural networks. Proceedings Of The National Academy Of Sciences, 110(45), E4185-E4194. http://dx.doi.org/10.1073/pnas.1310692110

