Hyperdimensional computing

Hyperdimensional computing (HDC) is an approach to computation, particularly in artificial intelligence. HDC is motivated by the observation that the cerebellar cortex operates on high-dimensional data representations.[1] In HDC, information is accordingly represented as a hyperdimensional (long) vector called a hypervector, which may contain thousands of numbers representing a point in a space of thousands of dimensions.[2] Vector symbolic architectures (VSA) is an older name for the same broad approach.[2]

Process

Data is mapped from the input space to a sparse HD space under an encoding function φ : X → H. HD representations are stored in data structures that are subject to corruption by noise and hardware failures. Noisy or corrupted HD representations can still serve as input for learning, classification, and other tasks. They can also be decoded to recover the input data. H is typically restricted to range-limited integers in [−v, v].[3]
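A minimal sketch of such an encoding in Python, assuming a simple random-projection encoder with sign quantization (the matrix M, the dimensions, and the bipolar {−1, +1} codomain are illustrative choices, not the only possible φ):

```python
import numpy as np

rng = np.random.default_rng(0)
d = 10_000          # dimensionality of the HD space H
n_features = 50     # dimensionality of the input space X

# A random projection matrix defines one possible encoding phi: X -> H.
M = rng.standard_normal((d, n_features))

def phi(x):
    """Encode an input vector as a bipolar {-1, +1} hypervector."""
    return np.sign(M @ x).astype(np.int8)

x = rng.standard_normal(n_features)
h = phi(x)          # a point in the 10,000-dimensional space H
```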

This is analogous to the learning process conducted by the fruit fly's olfactory system. The input is a roughly 50-dimensional vector corresponding to the types of odor receptor neurons. The HD representation uses ~2,000 dimensions.[3]

Transparency

Unlike artificial neural networks, HDC's algebra reveals the logic of how and why systems make their decisions. Physical-world objects can be mapped to hypervectors, to be processed by the algebra.[2]

Performance

HDC is suitable for "in-memory computing systems", which compute and hold data on a single chip, avoiding data-transfer delays. Analog devices operate at low voltages; they are energy-efficient but prone to error-generating noise. HDC can tolerate such errors.[2]

Various teams have developed low-power HDC hardware accelerators.[3]

Nanoscale memristive devices can be exploited to perform computation. An in-memory hyperdimensional computing system can implement operations on two memristive crossbar engines together with peripheral digital CMOS circuits. Experiments using 760,000 phase-change memory devices performing analog in-memory computing achieved accuracy comparable to software implementations.[4]

Errors

HDC is robust to errors such as an individual bit flip (a 0 changing to 1 or vice versa) missed by error-correcting mechanisms. Eliminating such error-correcting mechanisms can save up to 25% of compute cost. This is possible because such errors leave the result "close" to the correct vector, so reasoning with vectors is not compromised. HDC is at least 10x more error tolerant than traditional artificial neural networks, which are themselves orders of magnitude more tolerant than conventional computing.[2]
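A small numerical illustration of this tolerance, assuming bipolar hypervectors and normalized dot-product similarity (the dimensionality and noise rate are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)
d = 10_000
h = rng.choice([-1, 1], size=d)

# Corrupt 10% of the components by flipping their sign.
noisy = h.copy()
flipped = rng.choice(d, size=d // 10, replace=False)
noisy[flipped] *= -1

# The corrupted vector remains far closer to h (similarity ~0.8)
# than any unrelated random hypervector would be (similarity ~0.0).
print((h @ noisy) / d)
```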

Example

A simple example considers images containing black circles and white squares. Hypervectors can represent the variables SHAPE and COLOR and hold the corresponding values CIRCLE, SQUARE, BLACK, and WHITE. Bound hypervectors can hold the pairs (BLACK, CIRCLE), etc.[2]

Orthogonality

High-dimensional space allows many mutually orthogonal vectors. If vectors are instead allowed to be merely nearly orthogonal, the number of distinct vectors in high-dimensional space is vastly larger.[2]
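This can be checked empirically; in the following sketch (dimensionality chosen arbitrarily), two random bipolar hypervectors are almost always nearly orthogonal:

```python
import numpy as np

rng = np.random.default_rng(2)
d = 10_000
a = rng.choice([-1, 1], size=d)
b = rng.choice([-1, 1], size=d)

# The normalized dot product concentrates around 0 with standard
# deviation 1/sqrt(d) = 0.01, i.e., near orthogonality.
print((a @ b) / d)
```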

HDC uses the concept of distributed representations, in which an object/observation is represented by a pattern of values across many dimensions rather than a single constant.[3]

Operations

HDC can combine hypervectors into new hypervectors using well-defined vector space operations.

Groups, rings, and fields over hypervectors become the underlying computing structures with addition, multiplication, permutation, mapping, and inverse as primitive computing operations.[4] All computational tasks are performed in high-dimensional space using simple operations like element-wise additions and dot products.[3]

Binding creates ordered point tuples and is also a function ⊗ : H × H → H. The input is two points in H, while the output is a dissimilar point. Multiplying the SHAPE vector with CIRCLE binds the two, representing the idea “SHAPE is CIRCLE”. This vector is "nearly orthogonal" to SHAPE and CIRCLE. The components are recoverable from the vector (e.g., answer the question "is the shape a circle?").[3]
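For bipolar hypervectors, binding is commonly realized as element-wise multiplication, under which every vector is its own inverse; a minimal sketch (variable names follow the example above):

```python
import numpy as np

rng = np.random.default_rng(3)
d = 10_000
SHAPE = rng.choice([-1, 1], size=d)
CIRCLE = rng.choice([-1, 1], size=d)

bound = SHAPE * CIRCLE   # "SHAPE is CIRCLE"; nearly orthogonal to both

# Element-wise multiplication is self-inverse for bipolar vectors,
# so multiplying by SHAPE again recovers CIRCLE exactly here
# (only approximately once several bound pairs are bundled together).
recovered = SHAPE * bound
print(np.array_equal(recovered, CIRCLE))  # True
```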

Addition creates a vector that combines concepts. For example, adding “SHAPE is CIRCLE” to “COLOR is RED” creates a vector that represents a red circle.

Permutation rearranges the vector elements. For example, permuting a three-dimensional vector with values labeled x, y, and z can shift x to y, y to z, and z to x. Events represented by hypervectors A and B can be added, forming one vector, but that would sacrifice the event sequence. Combining addition with permutation preserves the order; the event sequence can be retrieved by reversing the operations.
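A sketch of such order-preserving encoding, assuming a cyclic shift (np.roll) as the permutation, one common choice among many:

```python
import numpy as np

rng = np.random.default_rng(4)
d = 10_000
A = rng.choice([-1, 1], size=d)
B = rng.choice([-1, 1], size=d)

perm = lambda v: np.roll(v, 1)   # a fixed permutation (cyclic shift)

seq_AB = perm(A) + B             # encodes "A then B"
seq_BA = perm(B) + A             # encodes "B then A"

# Plain addition cannot distinguish the orderings (A + B == B + A),
# but the permuted encodings are nearly orthogonal to each other;
# applying the inverse shift np.roll(., -1) reverses the operation.
print((seq_AB @ seq_BA) / d)     # ~0.0
```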

Bundling combines a set of elements in H via a function ⊕ : H × H → H. The input is two points in H and the output is a third point that is similar to both.[3]
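Combining binding and bundling yields the “red circle” record described above; a sketch, again assuming bipolar vectors and dot-product similarity:

```python
import numpy as np

rng = np.random.default_rng(5)
d = 10_000
vec = lambda: rng.choice([-1, 1], size=d)
SHAPE, COLOR, CIRCLE, RED = vec(), vec(), vec(), vec()

record = SHAPE * CIRCLE + COLOR * RED   # bundle of two bound pairs

# Query "what is the shape?": unbinding with SHAPE leaves CIRCLE plus
# cross-term noise, so CIRCLE wins the similarity comparison.
query = SHAPE * record
print((query @ CIRCLE) / d)   # ~1.0
print((query @ RED) / d)      # ~0.0
```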

History

Vector symbolic architectures (VSA) provided a systematic approach to high-dimensional symbol representations to support operations such as establishing relationships. Early examples include holographic reduced representations, binary spatter codes, and matrix binding of additive terms. HD computing advanced these models, particularly emphasizing hardware efficiency.[3]

In 2018, Eric Weiss showed how to fully represent an image as a hypervector. A vector could contain information about all the objects in the image, including properties such as color, position, and size.[2]

In 2023, Abbas Rahimi et al. used HDC with neural networks to solve Raven's progressive matrices.[2]

In 2023, Mike Heddes et al., under the supervision of professors Givargis, Nicolau, and Veidenbaum, created Torchhd, a hyperdimensional computing library[5] built on top of PyTorch.

Applications

Image recognition

HDC algorithms can replicate tasks long completed by deep neural networks, such as classifying images.[2]

Classifying an annotated set of handwritten digits uses an algorithm to analyze the features of each image, yielding a hypervector per image. The algorithm then adds the hypervectors for all labeled images of, e.g., zero to create a prototypical hypervector for the concept of zero, and repeats this for the other digits.[2]

Classifying an unlabeled image involves creating a hypervector for it and comparing it to the reference hypervectors. This comparison identifies the digit that the new image most resembles.[2]

Given a labeled example set {(x₁, y₁), …, (xₙ, yₙ)}, where yᵢ ∈ {1, …, K} is the class of a particular xᵢ, each class k is summarized by a prototype hypervector cₖ = Σ φ(xᵢ), summing over all i with yᵢ = k (the bundle of that class's encodings).[3]

Given a query xq ∈ X, the most similar prototype can be found with k* = arg maxₖ ρ(φ(xq), cₖ). The similarity metric ρ is typically the dot product.[3]
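A sketch of this prototype classifier, assuming some encoding function φ as sketched under Process (the helper names and the toy data are illustrative):

```python
import numpy as np

def train_prototypes(encodings, labels, n_classes):
    """Bundle (sum) the hypervectors of each class into one prototype."""
    prototypes = np.zeros((n_classes, encodings.shape[1]))
    for h, y in zip(encodings, labels):
        prototypes[y] += h
    return prototypes

def classify(h_query, prototypes):
    """Return the class whose prototype has the largest dot product."""
    return int(np.argmax(prototypes @ h_query))

# Toy usage with random stand-ins for encoded images phi(x):
rng = np.random.default_rng(6)
H = rng.choice([-1, 1], size=(100, 10_000))
y = rng.integers(0, 10, size=100)
protos = train_prototypes(H, y, n_classes=10)
print(classify(H[0], protos) == y[0])  # True with high probability
```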

Reasoning

Hypervectors can also be used for reasoning. Raven's progressive matrices present images of objects in a grid. One position in the grid is blank. The test is to choose, from candidate images, the one that best fits.[2]

A dictionary of hypervectors represents individual objects. Each hypervector represents an object concept with its attributes. For each test image a neural network generates a binary hypervector (values are 1 or −1) that is as close as possible to some set of dictionary hypervectors. The generated hypervector thus describes all the objects and their attributes in the image.[2]

Another algorithm creates probability distributions for the number of objects in each image and their characteristics. These probability distributions describe the likely characteristics of both the context and candidate images. They too are transformed into hypervectors, then algebra predicts the most likely candidate image to fill the slot.[2]

This approach achieved 88% accuracy on one problem set, beating neural network–only solutions that were 61% accurate. For 3-by-3 grids, the system was 250x faster than a method that used symbolic logic to reason, because of the size of the associated rulebook.[2]

Other

Other applications include bio-signal processing, natural language processing, and robotics.[3]

References

  1. ^ Zou, Zhuowen; Alimohamadi, Haleh; Imani, Farhad; Kim, Yeseong; Imani, Mohsen (2021-10-01), Spiking Hyperdimensional Network: Neuromorphic Models Integrated with Memory-Inspired Framework, arXiv:2110.00214
  2. ^ a b c d e f g h i j k l m n o p Ananthaswamy, Anan (April 13, 2023). "A New Approach to Computation Reimagines Artificial Intelligence". Quanta Magazine.
  3. ^ a b c d e f g h i j k Thomas, Anthony; Dasgupta, Sanjoy; Rosing, Tajana (2021-10-05). "A Theoretical Perspective on Hyperdimensional Computing" (PDF). Journal of Artificial Intelligence Research. 72: 215–249. doi:10.1613/jair.1.12664. ISSN 1076-9757. S2CID 239007517.
  4. ^ a b Karunaratne, Geethan; Le Gallo, Manuel; Cherubini, Giovanni; Benini, Luca; Rahimi, Abbas; Sebastian, Abu (June 2020). "In-memory hyperdimensional computing". Nature Electronics. 3 (6): 327–337. arXiv:1906.01548. doi:10.1038/s41928-020-0410-3. ISSN 2520-1131. S2CID 174797921.
  5. ^ Heddes, Mike; Nunes, Igor; Vergés, Pere; Kleyko, Denis; Abraham, Danny; Givargis, Tony; Nicolau, Alexandru; Veidenbaum, Alexander (2022-05-18). "Torchhd: An Open Source Python Library to Support Research on Hyperdimensional Computing and Vector Symbolic Architectures". arXiv:2205.09208 [cs.LG].
External links

  • Neubert, Peer; Schubert, Stefan; Protzel, Peter (2019-12-01). "An Introduction to Hyperdimensional Computing for Robotics". KI – Künstliche Intelligenz. 33 (4): 319–330. doi:10.1007/s13218-019-00623-z. ISSN 1610-1987. S2CID 202642163.
  • Neubert, Peer; Schubert, Stefan (2021-01-19). "Hyperdimensional computing as a framework for systematic aggregation of image descriptors". arXiv:2101.07720v1 [cs.CV].
  • "HD/VSA". www.hd-computing.com. 2023-03-13. Retrieved 2023-04-15.