Massively parallel processor array

A massively parallel processor array, also known as a multi-purpose processor array (MPPA), is a type of integrated circuit that has a massively parallel array of hundreds or thousands of CPUs and RAM memories. These processors pass work to one another through a reconfigurable interconnect of channels. By harnessing a large number of processors working in parallel, an MPPA chip can accomplish more demanding tasks than conventional chips. MPPAs are based on a software parallel programming model for developing high-performance embedded system applications.

Architecture

An MPPA is a MIMD (multiple instruction streams, multiple data) architecture, with distributed memory accessed locally rather than shared globally. Each processor is strictly encapsulated, accessing only its own code and memory. Point-to-point communication between processors is realized directly in the configurable interconnect.[1]
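
This communication style can be illustrated with a short sketch. The example below is written in Go purely because its goroutines and channels map naturally onto encapsulated processors and point-to-point links; it is not an MPPA vendor's API, and the producer/consumer roles are invented for illustration.

```go
// A minimal sketch of the MPPA communication model, using Go goroutines as
// stand-ins for hardware processors and Go channels as stand-ins for the
// dedicated point-to-point links of the interconnect.
package main

import "fmt"

// producer runs on "its own processor": it touches only its local state and
// the single outbound channel it was given.
func producer(out chan<- int) {
	for i := 0; i < 4; i++ {
		out <- i * i // a blocking send models synchronization through communication
	}
	close(out)
}

// consumer likewise sees only its inbound channel and its own local memory.
func consumer(in <-chan int, done chan<- struct{}) {
	for v := range in {
		fmt.Println("received:", v)
	}
	done <- struct{}{}
}

func main() {
	link := make(chan int)      // one point-to-point link in the interconnect
	done := make(chan struct{}) // completion signal back to the host
	go producer(link)
	go consumer(link, done)
	<-done
}
```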

The MPPA's massive parallelism and its distributed-memory MIMD architecture distinguish it from multicore and manycore architectures, which have fewer processors and an SMP or other shared-memory architecture, mainly intended for general-purpose computing. It is also distinguished from GPGPUs with SIMD architectures, which are used for HPC applications.[2]

Programming

An MPPA application is developed by expressing it as a hierarchical block diagram or workflow whose basic objects run in parallel, each on its own processor. Likewise, large data objects may be broken up and distributed into local memories with parallel access. Objects communicate over a parallel structure of dedicated channels. The objective is to maximize aggregate throughput while minimizing local latency, thereby optimizing performance and efficiency. An MPPA's model of computation is similar to a Kahn process network or communicating sequential processes (CSP).[3]
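
As a rough illustration of this block-diagram style (again in Go, and not any vendor's actual tool chain), the sketch below expresses a three-stage streaming pipeline in which each stage stands in for a block that would occupy its own processor and each channel stands in for a dedicated communication link; the stage names and the toy scale-and-accumulate computation are invented for this example.

```go
// A CSP-like pipeline corresponding to a simple three-block diagram:
// source -> scale -> sink. All stages run concurrently, so aggregate
// throughput comes from pipelining while each channel hop adds only
// local latency.
package main

import "fmt"

// source streams a sequence of samples into the pipeline.
func source(out chan<- int) {
	for i := 1; i <= 8; i++ {
		out <- i
	}
	close(out)
}

// scale is an intermediate block: it reads from one channel, processes each
// item using only local state, and writes to the next channel.
func scale(in <-chan int, out chan<- int, factor int) {
	for v := range in {
		out <- v * factor
	}
	close(out)
}

// sink accumulates the final results, standing in for an output block.
func sink(in <-chan int, result chan<- int) {
	sum := 0
	for v := range in {
		sum += v
	}
	result <- sum
}

func main() {
	a := make(chan int)
	b := make(chan int)
	result := make(chan int)

	go source(a)
	go scale(a, b, 3)
	go sink(b, result)

	fmt.Println("pipeline result:", <-result) // 3*(1+2+...+8) = 108
}
```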

Applications

MPPAs are used in high-performance embedded systems and hardware acceleration of desktop computer and server applications, such as video compression,[4][5] image processing,[6] medical imaging, network processing, software-defined radio and other compute-intensive streaming media applications, which otherwise would use FPGA, DSP and/or ASIC chips.

Examples

MPPAs developed commercially include ones designed at Ambric, PicoChip, Intel,[7] IntellaSys, GreenArrays, ASOCS, Tilera, Kalray, Coherent Logix, Tabula, and Adapteva. The Aspex (later Ericsson) Linedancer differs in that it was a massively wide SIMD array rather than an MPPA; strictly speaking, it could qualify as SIMT because each of its 4,096 3,000-gate cores has its own content-addressable memory.[8][9]

Fabricated MPPAs developed in universities include the 36-core[10] and 167-core[11] Asynchronous Array of Simple Processors (AsAP) arrays from the University of California, Davis; the 16-core RAW[12] from MIT; and 16-core[13] and 24-core[14] arrays from Fudan University.

The Chinese Sunway project developed its own 260-core SW26010 manycore chip for the TaihuLight supercomputer, which was, as of 2016, the world's fastest supercomputer.[15][16]

Anton 3 processors, designed by D. E. Shaw Research for molecular dynamics simulations, contain arrays of 576 processors arranged in a 12×24 tiled grid of pairs of cores; a routed network links these tiles together and extends off-chip to other nodes in a full system.[17][18]

References

  1. ^ Mike Butts, "Synchronization through Communication in a Massively Parallel Processor Array", IEEE Micro, vol. 27, no. 5, September/October 2007, IEEE Computer Society
  2. ^ Mike Butts, "Multicore and Massively Parallel Platforms and Moore's Law Scalability", Proceedings of the Embedded Systems Conference - Silicon Valley, April 2008
  3. ^ Mike Butts, Brad Budlong, Paul Wasson, Ed White, "Reconfigurable Work Farms on a Massively Parallel Processor Array", Proceedings of FCCM, April 2008, IEEE Computer Society
  4. ^ Laurent Bonetto, "Massively parallel processing arrays (MPPAs) for embedded HD video and imaging (Part 1)", Video/Imaging DesignLine, May 16, 2008 http://www.eetimes.com/document.asp?doc_id=1273823
  5. ^ Laurent Bonetto, "Massively parallel processing arrays (MPPAs) for embedded HD video and imaging (Part 2)", Video/Imaging DesignLine, July 18, 2008 http://www.eetimes.com/document.asp?doc_id=1273830
  6. ^ Paul Chen, "Multimode sensor processing using Massively Parallel Processor Arrays (MPPAs)", Programmable Logic DesignLine, March 18, 2008 http://www.pldesignline.com/howto/206904379
  7. ^ Vangal, Sriram R., Jason Howard, Gregory Ruhl, Saurabh Dighe, Howard Wilson, James Tschanz, David Finan et al. "An 80-tile sub-100-W teraflops processor in 65-nm CMOS." IEEE Journal of Solid-State Circuits 43, no. 1 (2008): 29–41.
  8. ^ Krikelis, A. (1990). "Artificial Neural Network on a Massively Parallel Associative Architecture". International Neural Network Conference. p. 673. doi:10.1007/978-94-009-0643-3_39. ISBN 978-0-7923-0831-7.
  9. ^ https://core.ac.uk/download/pdf/25268094.pdf
  10. ^ Yu, Zhiyi, Michael Meeuwsen, Ryan Apperson, Omar Sattari, Michael Lai, Jeremy Webb, Eric Work, Tinoosh Mohsenin, Mandeep Singh, and Bevan Baas. "An asynchronous array of simple processors for DSP applications." In IEEE International Solid-State Circuits Conference (ISSCC '06), vol. 49, pp. 428–429. 2006.
  11. ^ Truong, Dean, Wayne Cheng, Tinoosh Mohsenin, Zhiyi Yu, Toney Jacobson, Gouri Landge, Michael Meeuwsen et al. "A 167-processor 65 nm computational platform with per-processor dynamic supply voltage and dynamic clock frequency scaling." In Symposium on VLSI Circuits, pp. 22–23. 2008.
  12. ^ Michael Bedford Taylor, Jason Kim, Jason Miller, David Wentzlaff, Fae Ghodrat, Ben Greenwald, Henry Hoffmann, Paul Johnson, Walter Lee, Arvind Saraf, Nathan Shnidman, Volker Strumpen, Saman Amarasinghe, and Anant Agarwal, "A 16-issue multiple-program-counter microprocessor with point-to-point scalar operand network," Proceedings of the IEEE International Solid-State Circuits Conference, February 2003
  13. ^ Yu, Zhiyi, Kaidi You, Ruijin Xiao, Heng Quan, Peng Ou, Yan Ying, Haofan Yang, and Xiaoyang Zeng. "An 800MHz 320mW 16-core processor with message-passing and shared-memory inter-core communication mechanisms." In Solid-State Circuits Conference Digest of Technical Papers (ISSCC), 2012 IEEE International, pp. 64-66. IEEE, 2012.
  14. ^ Ou, Peng, Jiajie Zhang, Heng Quan, Yi Li, Maofei He, Zheng Yu, Xueqiu Yu et al. "A 65nm 39GOPS/W 24-core processor with 11 Tb/s/W packet-controlled circuit-switched double-layer network-on-chip and heterogeneous execution array." In Solid-State Circuits Conference Digest of Technical Papers (ISSCC), 2013 IEEE International, pp. 56-57. IEEE, 2013.
  15. ^ Dongarra, Jack (June 20, 2016). "Report on the Sunway TaihuLight System" (PDF). www.netlib.org. Retrieved June 20, 2016.
  16. ^ Fu, Haohuan; Liao, Junfeng; Yang, Jinzhe; et al. (2016). "The Sunway TaihuLight Supercomputer: System and Applications". Sci. China Inf. Sci. 59 (7). doi:10.1007/s11432-016-5588-7.
  17. ^ Shaw, David E.; Adams, Peter J.; Azaria, Asaph; Bank, Joseph A.; Batson, Brannon; Bell, Alistair; Bergdorf, Michael; Bhatt, Jhanvi; Butts, J. Adam; Correia, Timothy; Dirks, Robert M.; Dror, Ron O.; Eastwood, Michael P.; Edwards, Bruce; Even, Amos (2021-11-14). "Anton 3". Proceedings of the International Conference for High Performance Computing, Networking, Storage and Analysis. St. Louis, Missouri: ACM. pp. 1–11. doi:10.1145/3458817.3487397. ISBN 978-1-4503-8442-1. S2CID 239036976.
  18. ^ Adams, Peter J.; Batson, Brannon; Bell, Alistair; Bhatt, Jhanvi; Butts, J. Adam; Correia, Timothy; Edwards, Bruce; Feldmann, Peter; Fenton, Christopher H.; Forte, Anthony; Gagliardo, Joseph; Gill, Gennette; Gorlatova, Maria; Greskamp, Brian; Grossman, J.P. (2021-08-22). "The ΛNTON 3 ASIC: A Fire-Breathing Monster for Molecular Dynamics Simulations". 2021 IEEE Hot Chips 33 Symposium (HCS). Palo Alto, CA, USA: IEEE. pp. 1–22. doi:10.1109/HCS52781.2021.9567084. ISBN 978-1-6654-1397-8. S2CID 239039245.