Hierarchical Temporal Memory
Author: RO · Posted: 25-08-11 02:27 (edited: 25-08-11 02:27)
Hierarchical temporal memory (HTM) is a biologically constrained machine intelligence technology developed by Numenta. First described in the 2004 book On Intelligence by Jeff Hawkins with Sandra Blakeslee, HTM is today used primarily for anomaly detection in streaming data. The technology is based on neuroscience, in particular the physiology and interaction of pyramidal neurons in the neocortex of the mammalian (especially human) brain. At the core of HTM are learning algorithms that can store, learn, infer, and recall high-order sequences. Unlike most other machine learning methods, HTM continuously learns (in an unsupervised process) time-based patterns in unlabeled data. HTM is robust to noise and has high capacity (it can learn multiple patterns simultaneously).

A typical HTM network is a tree-shaped hierarchy of levels (not to be confused with the "layers" of the neocortex, described below). These levels are composed of smaller elements called regions (or nodes). A single level in the hierarchy may contain several regions, and higher hierarchy levels often have fewer regions.
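The tree-shaped hierarchy described above can be sketched as a simple data structure. This is a minimal illustration only, assuming a fixed fan-in wiring; the class and function names are invented for this sketch and are not Numenta's API.

```python
# Illustrative sketch of an HTM-style hierarchy: a list of levels, each
# holding regions (nodes), with higher levels containing fewer regions.

class Region:
    def __init__(self, name):
        self.name = name
        self.children = []   # lower-level regions feeding into this one

def build_hierarchy(regions_per_level):
    """Build a tree-shaped hierarchy from a bottom-up list of level sizes,
    e.g. [4, 2, 1] for four bottom regions converging to one top region."""
    levels = [[Region(f"L{i}-R{j}") for j in range(n)]
              for i, n in enumerate(regions_per_level)]
    # Wire each region to a parent in the level above (round-robin fan-in).
    for lower, upper in zip(levels, levels[1:]):
        for j, region in enumerate(lower):
            upper[j % len(upper)].children.append(region)
    return levels

levels = build_hierarchy([4, 2, 1])   # fewer regions at higher levels
```

Sensory data would enter the bottom level, and each parent region would see only the outputs of its children, so coverage widens as information moves up the tree.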
Higher hierarchy levels can reuse patterns learned at the lower levels by combining them to memorize more complex patterns. Every HTM region has the same basic function. In learning and inference modes, sensory data (e.g. data from the eyes) comes into the bottom-level regions. In generation mode, the bottom-level regions output the generated pattern of a given category. When set in inference mode, a region (in each level) interprets information coming up from its "child" regions as probabilities of the categories it has in memory. Each HTM region learns by identifying and memorizing spatial patterns: combinations of input bits that often occur at the same time. It then identifies temporal sequences of spatial patterns that are likely to occur one after another. HTM is the algorithmic component of Jeff Hawkins' Thousand Brains Theory of Intelligence. New findings on the neocortex are therefore progressively integrated into the HTM model, which changes over time in response. The new findings do not necessarily invalidate the earlier parts of the model, so ideas from one generation are not necessarily excluded in its successor.
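The inference-mode behaviour above, where a region interprets input from its children as probabilities over memorized categories, can be illustrated with a toy overlap-based scorer. This is a hedged sketch under strong simplifying assumptions (categories stored as weighted bit prototypes, scoring by overlap); the names are hypothetical and do not correspond to Numenta's implementation.

```python
# Toy sketch: score child-region outputs against memorized category
# prototypes, then normalize the scores into a probability distribution.

def interpret_child_input(child_outputs, memorized_categories):
    """child_outputs: dict of active input bit -> activation in [0, 1].
    memorized_categories: dict of category -> {bit: weight} prototype.
    Returns a normalized probability per category."""
    scores = {}
    for cat, prototype in memorized_categories.items():
        # Overlap between the incoming activation and the stored prototype.
        overlap = sum(min(child_outputs.get(bit, 0.0), w)
                      for bit, w in prototype.items())
        scores[cat] = overlap
    total = sum(scores.values()) or 1.0
    return {cat: s / total for cat, s in scores.items()}

probs = interpret_child_input({"a": 1.0, "b": 1.0},
                              {"X": {"a": 1.0}, "Y": {"b": 1.0, "c": 1.0}})
```

Here the input overlaps both stored categories equally, so each receives probability 0.5.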
During training, a node (or region) receives a temporal sequence of spatial patterns as its input.

1. Spatial pooling identifies frequently observed patterns in the input and memorizes them as "coincidences". Patterns that are significantly similar to one another are treated as the same coincidence. A large number of possible input patterns is thereby reduced to a manageable number of known coincidences.
2. Temporal pooling partitions coincidences that are likely to follow each other in the training sequence into temporal groups. Each group of patterns represents a "cause" of the input pattern (or "name" in On Intelligence).

The concepts of spatial pooling and temporal pooling are still quite important in the current HTM algorithms. Temporal pooling is not yet well understood, and its meaning has changed over time (as the HTM algorithms evolved). During inference, the node calculates the set of probabilities that a pattern belongs to each known coincidence. It then calculates the probabilities that the input represents each temporal group.
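The two training steps above can be sketched in miniature. This is a deliberately simplified illustration, assuming binary tuples and Hamming-distance matching; real HTM spatial pooling is considerably more involved, and all names here are invented for the sketch.

```python
# Step 1 (spatial pooling, toy version): collapse similar input patterns
# onto stored "coincidences" when they differ by at most `threshold` bits.

def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

def spatial_pool(patterns, threshold=1):
    """Return the learned coincidences and, per input pattern, the index
    of the coincidence it was matched to."""
    coincidences, labels = [], []
    for p in patterns:
        for i, c in enumerate(coincidences):
            if hamming(p, c) <= threshold:
                labels.append(i)
                break
        else:
            coincidences.append(p)          # novel pattern: new coincidence
            labels.append(len(coincidences) - 1)
    return coincidences, labels

# Step 2 (temporal pooling, first ingredient): count how often one
# coincidence follows another in the training sequence; groups are then
# formed from coincidences with strong mutual transitions.
def transition_counts(labels):
    counts = {}
    for a, b in zip(labels, labels[1:]):
        counts[(a, b)] = counts.get((a, b), 0) + 1
    return counts

patterns = [(1, 0, 0, 1), (1, 0, 0, 0), (0, 1, 1, 0), (0, 1, 1, 1)]
coincidences, labels = spatial_pool(patterns)
counts = transition_counts(labels)
```

Four raw patterns reduce to two coincidences here, which is the point of the step: a large space of possible inputs shrinks to a manageable set of known patterns before temporal structure is considered.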
The set of probabilities assigned to the groups is called a node's "belief" about the input pattern. This belief is the result of the inference, and it is passed to one or more "parent" nodes at the next higher level of the hierarchy. If sequences of patterns are similar to the training sequences, then the probabilities assigned to the groups will not change as often as patterns are received. In a more general scheme, the node's belief could be sent to the input of any node(s) at any level(s), but the connections between the nodes are still fixed. The higher-level node combines this output with the output from other child nodes, thus forming its own input pattern. Since resolution in space and time is lost in each node as described above, beliefs formed by higher-level nodes represent an even larger range of space and time. This is meant to reflect the organisation of the physical world as it is perceived by the human brain.
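The flow above, a belief over temporal groups passed upward and combined by the parent, can be sketched as follows. This is a minimal sketch assuming beliefs are plain probability lists and that a group's probability is the sum over its member coincidences; the function names are illustrative, not Numenta's API.

```python
# A node's belief: per-coincidence probabilities rolled up into a
# normalized probability per temporal group.
def group_belief(coincidence_probs, groups):
    """groups: list of lists of coincidence indices (one list per group)."""
    belief = [sum(coincidence_probs[c] for c in members) for members in groups]
    total = sum(belief) or 1.0
    return [b / total for b in belief]

# The parent forms its own input pattern by concatenating the beliefs
# it receives from all of its child nodes.
def parent_input(child_beliefs):
    pattern = []
    for belief in child_beliefs:
        pattern.extend(belief)
    return pattern

b1 = group_belief([0.6, 0.2, 0.2], [[0, 1], [2]])   # child 1: two groups
b2 = group_belief([0.5, 0.5], [[0], [1]])           # child 2: two groups
combined = parent_input([b1, b2])
```

Note how each child's three or two coincidences have already been compressed into fewer groups before reaching the parent: this is the loss of resolution that lets higher-level beliefs span a wider range of space and time.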