MemComputing is a new physics-based approach to computation that employs time non-locality (memory) to both process and store information in the same physical location [1]. After a brief introduction to this computing paradigm, I will discuss its application to Machine Learning, showing that it enables efficient supervised and unsupervised training of neural networks and demonstrating its advantages over traditional sampling methods. Work supported by DARPA, DOE, NSF, CMRR, and MemComputing, Inc. (http://memcpu.com/).
[1] M. Di Ventra, MemComputing: Fundamentals and Applications (Oxford University Press, 2022).
Massimiliano Di Ventra obtained his undergraduate degree in Physics summa cum laude from the University of Trieste (Italy) in 1991 and did his PhD studies at the Swiss Federal Institute of Technology in Lausanne in 1993-1997. He has been a professor of Physics at the University of California, San Diego since 2004. Di Ventra's research interests are in condensed-matter theory and unconventional computing. He has been invited to deliver more than 350 talks worldwide on these topics, including 16 plenary/keynote presentations. He has published more than 300 papers in refereed journals and 5 textbooks, and holds 11 granted patents (7 foreign). He is a fellow of the American Association for the Advancement of Science, the American Physical Society, the Institute of Physics, and the IEEE, and a foreign member of Academia Europaea. In 2018 he was named a Highly Cited Researcher by Clarivate Analytics; he is the recipient of the 2020 Feynman Prize for theory in Nanotechnology and a 2022 IEEE Nanotechnology Council Distinguished Lecturer. He is the co-founder of MemComputing, Inc. (http://memcpu.com/).