Dimitri Bertsekas

Dimitri P. Bertsekas[2]
Born: 1942
Nationality: Greek
Citizenship: American, Greek
Alma mater: National Technical University of Athens (1968)[3]
Known for: Nonlinear programming; convex optimization; dynamic programming; approximate dynamic programming; stochastic systems and optimal control; data communication network optimization
Awards: 1997 INFORMS Computing Society (ICS) Prize; 2000 Greek National Award for Operations Research; 2001 John R. Ragazzini Award; 2001 Member of the United States National Academy of Engineering; 2009 INFORMS Expository Writing Award; 2014 AACC Richard E. Bellman Control Heritage Award; 2014 INFORMS Khachiyan Prize; 2015 SIAM/MOS Dantzig Prize; 2018 INFORMS John von Neumann Theory Prize; 2022 IEEE Control Systems Award
Scientific career
Fields: Optimization, mathematics, control theory, and data communication networks
Institutions: The George Washington University; Stanford University; University of Illinois at Urbana-Champaign; Massachusetts Institute of Technology
Thesis: Control of Uncertain Systems with a Set-Membership Description of the Uncertainty (1971)
Doctoral advisor: Ian Burton Rhodes[1]
Other academic advisors: Michael Athans
Doctoral students: Steven E. Shreve; Paul Tseng; Asuman Özdağlar[1]

Dimitri Panteli Bertsekas (born 1942, Athens; Greek: Δημήτρης Παντελής Μπερτσεκάς) is an applied mathematician, electrical engineer, and computer scientist. He is the McAfee Professor in the Department of Electrical Engineering and Computer Science of the School of Engineering at the Massachusetts Institute of Technology (MIT), Cambridge, Massachusetts, and also the Fulton Professor of Computational Decision Making at Arizona State University, Tempe.

Biography


Bertsekas was born in Greece and spent his childhood there. He studied for five years at the National Technical University of Athens, for about a year and a half at The George Washington University in Washington, D.C., where he obtained his M.S. in electrical engineering in 1969, and for about two years at MIT, where he obtained his doctorate in system science in 1971. Prior to joining the MIT faculty in 1979, he taught for three years in the Engineering-Economic Systems Department of Stanford University and for five years in the Electrical and Computer Engineering Department of the University of Illinois at Urbana-Champaign. In 2019, he was appointed a full-time professor at the School of Computing and Augmented Intelligence at Arizona State University, Tempe, while maintaining a research position at MIT.[4][5]

He is known for his research work and for his twenty textbooks and monographs in theoretical and algorithmic optimization and control, in reinforcement learning, and in applied probability. His work ranges from theoretical and foundational research, to algorithmic analysis and design for optimization problems, to applications such as data communication and transportation networks and electric power generation. He is featured among the top 100 most cited computer science authors[6] in the CiteSeer academic database[7] and digital library.[8] He is also ranked within the top 40 scientists in the world (top 20 in the USA) in the field of engineering and technology, and within the top 50 in the world (top 30 in the USA) in the field of mathematics.[9][10] In 1995, he co-founded the publishing company Athena Scientific, which, among other titles, publishes most of his books.

In the late 1990s, Bertsekas developed a strong interest in digital photography. His photographs have been exhibited at MIT on several occasions.[11]

Awards and honors


Bertsekas was elevated to the grade of IEEE Fellow in 1984 for contributions to optimization, data communications networks, and distributed control.[12] He received the 1997 INFORMS Prize for Research Excellence in the Interface Between Operations Research and Computer Science[13] for his book "Neuro-Dynamic Programming" (co-authored with John N. Tsitsiklis); the 2000 Greek National Award for Operations Research; and the 2001 John R. Ragazzini Award for outstanding contributions to education.[14] In 2001, he was elected to the US National Academy of Engineering for "pioneering contributions to fundamental research, practice and education of optimization/control theory, and especially its application to data communication networks".[15] In 2009, he received the INFORMS Expository Writing Award for his ability to "communicate difficult mathematical concepts with unusual clarity, thereby reaching a broad audience across many disciplines."[16] In 2014, he received the Richard E. Bellman Control Heritage Award from the American Automatic Control Council[17][18] and the Khachiyan Prize for lifetime achievements in the area of optimization from the INFORMS Optimization Society.[19] He also received the 2015 Dantzig Prize from SIAM and the Mathematical Optimization Society,[20] the 2018 INFORMS John von Neumann Theory Prize (jointly with Tsitsiklis) for the books "Neuro-Dynamic Programming" and "Parallel and Distributed Computation",[16] and the 2022 IEEE Control Systems Award for "fundamental contributions to the methodology of optimization and control" and "outstanding monographs and textbooks".[21]

Selected publications


Textbooks

  • Dynamic Programming and Optimal Control (1996)
  • Data Networks (1989, co-authored with Robert G. Gallager)
  • Nonlinear Programming (1996)
  • Introduction to Probability (2003, co-authored with John N. Tsitsiklis)
  • A Course in Reinforcement Learning (2023)

Monographs

  • "Stochastic Optimal Control: The Discrete-Time Case" (1978, co-authored with S. E. Shreve), a mathematically complex work, establishing the measure-theoretic foundations of dynamic programming and stochastic control.
  • "Constrained Optimization and Lagrange Multiplier Methods" (1982), the first monograph that addressed comprehensively the algorithmic convergence issues around augmented Lagrangian and sequential quadratic programming methods.
  • "Parallel and Distributed Computation: Numerical Methods" (1989, co-authored with John N. Tsitsiklis), which among others established the fundamental theoretical structures for the analysis of distributed asynchronous algorithms.
  • "Linear Network Optimization" (1991) and "Network Optimization: Continuous and Discrete Models" (1998), which among others discuss comprehensively the class of auction algorithms for assignment and network flow optimization, developed by Bertsekas over a period of 20 years starting in 1979.
  • "Neuro-Dynamic Programming" (1996, co-authored with Tsitsiklis), which laid the theoretical foundations for suboptimal approximations of highly complex sequential decision-making problems.
  • "Convex Analysis and Optimization" (2003, co-authored with A. Nedic and A. Ozdaglar) and "Convex Optimization Theory" (2009), which provided a new line of development for optimization duality theory, a new connection between the theory of Lagrange multipliers and nonsmooth analysis, and a comprehensive development of incremental subgradient methods.
  • "Abstract Dynamic Programming" (2013), which aims at a unified development of the core theory and algorithms of total cost sequential decision problems, based on the strong connections of the subject with fixed point theory. A 3rd edition of this monograph, which extends the framework for applications to sequential zero-sum games and minimax problems, was published in 2022.
  • "Reinforcement Learning and Optimal Control" (2019), which aims to explore the common boundary between dynamic programming/optimal control and artificial intelligence, and to form a bridge that is accessible by workers with background in either field.
  • "Rollout, Policy Iteration, and Distributed Reinforcement Learning" (2020), which focuses on the fundamental idea of policy iteration, its one iteration counterpart, rollout, and their distributed and multiagent implementations. Some of these methods have been the backbones for high-profile successes in games such as chess, Go, and backgammon.[22][23][24]
  • "Lessons from AlphaZero for Optimal, Model Predictive, and Adaptive Control" (2022), which introduces a new conceptual framework for reinforcement learning based on off-line training and on-line play algorithms that are designed independently of each other but operate in synergy through the powerful mechanism of Newton's method.
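
As an illustration of the auction algorithms mentioned above, the following is a minimal Python sketch for the classical n-person, n-object assignment problem. It is a simplified rendering written for this article (the names auction_assignment, benefit, and price are the sketch's own); practical implementations add refinements such as epsilon-scaling, which are developed in the monographs above.

```python
# Minimal sketch of an auction algorithm for the (maximization) assignment
# problem: n persons bid for n objects, and prices rise until every person
# holds an object. With integer benefits and eps < 1/n, the final
# assignment is optimal.

def auction_assignment(benefit, eps=None):
    n = len(benefit)
    if eps is None:
        eps = 1.0 / (n + 1)        # small enough to guarantee optimality
    price = [0.0] * n              # current price of each object
    owner = [None] * n             # owner[j] = person currently holding object j
    assigned = [None] * n          # assigned[i] = object held by person i
    unassigned = list(range(n))

    while unassigned:
        i = unassigned.pop()
        # Person i computes the net value (benefit minus price) of each object.
        values = [benefit[i][j] - price[j] for j in range(n)]
        best = max(range(n), key=lambda j: values[j])
        second = max((values[j] for j in range(n) if j != best),
                     default=values[best])
        # Bid up the best object's price to the point of indifference with
        # the second-best object, plus eps to guarantee progress.
        price[best] += values[best] - second + eps
        if owner[best] is not None:      # displace the previous holder
            assigned[owner[best]] = None
            unassigned.append(owner[best])
        owner[best] = i
        assigned[i] = best
    return assigned, price

# Example: benefit[i][j] = value of object j to person i.
benefit = [[10, 5, 8], [7, 9, 6], [8, 7, 4]]
assignment, prices = auction_assignment(benefit)
print(assignment)   # [2, 1, 0]: total benefit 8 + 9 + 8 = 25
```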
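
The rollout idea can likewise be conveyed in a few lines: at each step, every candidate action is evaluated by simulating it and then following a given base heuristic to the end of the horizon, and the action with the lowest simulated total cost is chosen. The sketch below, with illustrative names of its own (rollout_policy, step, base_policy), covers only the deterministic finite-horizon case; a basic property emphasized in the book is that the resulting rollout policy performs no worse than the base heuristic it starts from.

```python
# Minimal sketch of one-step rollout for a deterministic, finite-horizon
# problem. step(s, a) returns (next_state, stage_cost); base_policy(s)
# returns the heuristic's action at state s.

def rollout_policy(state, horizon, actions, step, base_policy):
    def heuristic_cost(s, steps_left):
        # Total cost of following the base policy from s to the end.
        total = 0.0
        for _ in range(steps_left):
            s, c = step(s, base_policy(s))
            total += c
        return total

    best_action, best_cost = None, float("inf")
    for a in actions(state):
        s1, c1 = step(state, a)                      # try candidate action a
        cost = c1 + heuristic_cost(s1, horizon - 1)  # then run the heuristic
        if cost < best_cost:
            best_action, best_cost = a, cost
    return best_action

# Toy usage: walk on the integers toward a goal at 0, paying |state| per step.
# The (poor) base heuristic always steps +1; rollout corrects it.
def step(s, a): return (s + a, abs(s + a))
def actions(s): return (-1, +1)
def base(s): return +1
print(rollout_policy(5, 4, actions, step, base))     # -1: move toward the goal
```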

References

  1. ^ a b Dimitri Bertsekas at the Mathematics Genealogy Project
  2. ^ Dimitri Bertsekas was elected in 2001 as a member of the National Academy of Engineering in Electronics, Communication & Information Systems Engineering for pioneering contributions to fundamental research, practice, and education of optimization/control theory, and especially its application to data communication networks.
  3. ^ Dimitri P. Bertsekas' biography
  4. ^ Biography from Bertsekas' MIT Home Page
  5. ^ Biography from Bertsekas' ASU Home Page
  6. ^ One of the top 100 most cited computer science authors
  7. ^ CiteSeer most cited authors in Computer Science, August 2006
  8. ^ Google Scholar citations
  9. ^ "Research.com - Leading Academic Research Portal". Research.com. Retrieved 2022-03-30.
  10. ^ "Research.com - Leading Academic Research Portal". Research.com. Retrieved 2022-03-30.
  11. ^ Photo exhibition Archived 2010-06-21 at the Wayback Machine at MIT
  12. ^ "IEEE Fellows 1984 | IEEE Communications Society".
  13. ^ Election citation Archived 2006-06-20 at the Wayback Machine of 1997 INFORMS ICS prize
  14. ^ 2001 ACC John R. Ragazzini Award
  15. ^ Election citation Archived 2010-05-28 at the Wayback Machine by National Academy of Engineering
  16. ^ a b "2009 Saul Gass Expository Writing Award". informs. The Institute for Operations Research and the Management Sciences.
  17. ^ "Bellman award to Bertsekas". Archived from the original on 2014-10-19. Retrieved 2014-10-23.
  18. ^ Acceptance speech for Bellman award
  19. ^ "Khachiyan Prize Citation". Archived from the original on 2016-03-04. Retrieved 2014-11-02.
  20. ^ Dantzig Prize Citation
  21. ^ "Current IEEE Corporate Award Recipients". IEEE Awards. Retrieved 2021-07-11.
  22. ^ Tesauro, Gerald (1995-03-01). "Temporal difference learning and TD-Gammon". Communications of the ACM. 38 (3): 58–68. doi:10.1145/203330.203343. ISSN 0001-0782. S2CID 8763243.
  23. ^ Silver, David; Schrittwieser, Julian; Simonyan, Karen; Antonoglou, Ioannis; Huang, Aja; Guez, Arthur; Hubert, Thomas; Baker, Lucas; Lai, Matthew; Bolton, Adrian; Chen, Yutian (October 2017). "Mastering the game of Go without human knowledge". Nature. 550 (7676): 354–359. Bibcode:2017Natur.550..354S. doi:10.1038/nature24270. ISSN 1476-4687. PMID 29052630. S2CID 205261034.
  24. ^ Silver, David; Hubert, Thomas; Schrittwieser, Julian; Antonoglou, Ioannis; Lai, Matthew; Guez, Arthur; Lanctot, Marc; Sifre, Laurent; Kumaran, Dharshan; Graepel, Thore; Lillicrap, Timothy (2017-12-05). "Mastering Chess and Shogi by Self-Play with a General Reinforcement Learning Algorithm". arXiv:1712.01815 [cs.AI].