[MERL Seminar Series 2024] Na Li presents a talk titled "Close the Loop: From Data to Actions in Complex Systems"

The explosive growth of machine learning and data-driven methodologies has revolutionized numerous fields. Yet translating these successes to the domain of dynamical, physical systems remains a significant challenge, hindered by the complex and often unpredictable nature of such environments. Closing the loop from data to actions in these systems faces many difficulties, stemming from the need for sample efficiency and computational feasibility amid intricate dynamics, along with further requirements such as verifiability, robustness, and safety. In this talk, we bridge this gap by introducing approaches that harness representation-based methods, domain knowledge, and the physical structure of systems. We present a comprehensive framework that integrates these components to develop reinforcement learning and control strategies that are not only tailored to the complexities of physical systems but also achieve efficiency, safety, and robustness with provable performance.

Read more: https://lnkd.in/eE7KVExk

#merl #mitsubishielectricresearchlabs #merlresearch #machinelearning
Mitsubishi Electric Research Laboratories’ Post
More Relevant Posts
Chief Revenue Officer | Head of Cloud Strategy & AI Development | Multi-Cloud & Cyber Security Architect | AWS SAA SAP | Azure*9 AI DS CAE SAE | GCP ACE PCA | DS Professional | Passionate About ML Fintech Blockchain
📊 Update on My Massachusetts Institute of Technology MITx MicroMasters® Program Journey 📊

Over the course of one and a half years of rigorous study and application, I've successfully completed the remaining MITx 18.6501x: Fundamentals of Statistics and MITx 6.431x: Probability - The Science of Uncertainty and Data. This brings me another step closer to finishing the MITx MicroMasters® in Statistics and Data Science.

Diving deep into statistical foundations and the intricacies of probability has not only elevated my academic prowess but also significantly enhanced my capabilities in AI system development at FlexSystem Limited | BusinessPlus CFO Cash Flow Optimization. Harnessing these advanced skills, I've been able to drive robust, data-informed decisions and methodologies in our AI initiatives, further aligning with the company's vision of achieving excellence in the tech domain.

A heartfelt shoutout to the MITx and IDSS teams for curating such a comprehensive program that masterfully blends academic rigour with real-world applications. As I approach the conclusion of this course, I'm gearing up for the final capstone exam. Beyond this, I look forward to continuing my academic journey and staying at the forefront of data science and AI advancements.

#DataScience #MITx #AIInnovation #StatisticalMastery #ProbabilityInAction #RealWorldApplications #TechProfessionals #AIDevelopment
📢 Exciting Webinar Alert! 📢

MatSQ 129 Webinar: "Materials Informatics: Moving Beyond Screening via Generative Machine Learning Models"
📅 Date: Sep 27th, 2023
⏰ Time: 13:00 ~ 14:00 KST | Online
🎙 Speaker: Dr. Taylor D. Sparks, Associate Professor of Materials Science & Engineering, University of Utah

🔍 About the Webinar: Machine learning has transformed the way we discover new materials. Yet there's a recurring critique: often, the new materials suggested are too similar to known ones. Can machine learning traverse beyond the known and delve into uncharted territories of chemistries and materials families? This session will explore how new generative machine learning models, including variational autoencoders, GANs, and diffusion models, are pushing the boundaries. Discover how these tools, paired with DiSCoVeR and SMACT, can generate not just feasible but also intriguing periodic crystalline structures.

🔗 Register Now: https://lnkd.in/g8cUQTy4

Dive into the future of materials informatics and join the conversation on how generative machine learning is reshaping our understanding and discovery of materials. Secure your spot now!

#MatSQ #MaterialsInformatics #GenerativeMachineLearning #Webinar
📢 𝗣𝘂𝗯𝗹𝗶𝗰𝗮𝘁𝗶𝗼𝗻 𝗔𝗹𝗲𝗿𝘁: How do we monitor the evolution of a system's health condition without full time-to-failure trajectories to train our models on? In such cases, we propose using unsupervised health indicators learned through contrastive learning, as detailed in our recently accepted paper "𝐋𝐞𝐚𝐫𝐧𝐢𝐧𝐠 𝐈𝐧𝐟𝐨𝐫𝐦𝐚𝐭𝐢𝐯𝐞 𝐇𝐞𝐚𝐥𝐭𝐡 𝐈𝐧𝐝𝐢𝐜𝐚𝐭𝐨𝐫𝐬 𝐓𝐡𝐫𝐨𝐮𝐠𝐡 𝐔𝐧𝐬𝐮𝐩𝐞𝐫𝐯𝐢𝐬𝐞𝐝 𝐂𝐨𝐧𝐭𝐫𝐚𝐬𝐭𝐢𝐯𝐞 𝐋𝐞𝐚𝐫𝐧𝐢𝐧𝐠" in IEEE Transactions on Reliability, written in collaboration with SBB CFF FFS and with Katharina Rombach (the final accepted paper of her dissertation), Gabriel MICHAU, Wilfried Bürzle & Stefan Koller.

𝗠𝗮𝗶𝗻 𝗖𝗼𝗻𝘁𝗿𝗶𝗯𝘂𝘁𝗶𝗼𝗻𝘀 / 𝗛𝗶𝗴𝗵𝗹𝗶𝗴𝗵𝘁𝘀:
- 𝗦𝗲𝗹𝗳-𝗦𝘂𝗽𝗲𝗿𝘃𝗶𝘀𝗶𝗼𝗻: In the absence of labels, operational time is used as a proxy for the state of degradation, enabling self-supervised learning.
- 𝗛𝗲𝗮𝗹𝘁𝗵 𝗜𝗻𝗱𝗶𝗰𝗮𝘁𝗼𝗿 𝗖𝗼𝗻𝘀𝘁𝗿𝘂𝗰𝘁𝗶𝗼𝗻: A robust feature representation is learned using contrastive learning with a triplet loss, and a health indicator is constructed by measuring the distance to the decision boundary of a one-class support vector machine (OC-SVM).
- 𝗦𝗲𝗻𝘀𝗶𝘁𝗶𝘃𝗲 𝗮𝗻𝗱 𝗥𝗼𝗯𝘂𝘀𝘁 𝗙𝗲𝗮𝘁𝘂𝗿𝗲𝘀: The health indicators are highly sensitive to slight degradation changes (informative) while being resilient to noise, changes in operating conditions, and variations in the monitored fleet or system (reliable).
- 𝗩𝗲𝗿𝘀𝗮𝘁𝗶𝗹𝗶𝘁𝘆: The approach is neither application- nor task-specific, making it suitable for condition monitoring (CM) data from different systems, whether continuously measured or collected at distinct time intervals.
- 𝗖𝗼𝗺𝗽𝗿𝗲𝗵𝗲𝗻𝘀𝗶𝘃𝗲 𝗘𝘃𝗮𝗹𝘂𝗮𝘁𝗶𝗼𝗻: The method is evaluated on two datasets with different characteristics: 𝙬𝙚𝙖𝙧 𝙖𝙨𝙨𝙚𝙨𝙨𝙢𝙚𝙣𝙩 𝙤𝙛 𝙢𝙞𝙡𝙡𝙞𝙣𝙜 𝙢𝙖𝙘𝙝𝙞𝙣𝙚𝙨 𝙖𝙣𝙙 𝙛𝙖𝙪𝙡𝙩 𝙙𝙚𝙩𝙚𝙘𝙩𝙞𝙤𝙣 𝙤𝙛 𝙧𝙖𝙞𝙡𝙬𝙖𝙮 𝙬𝙝𝙚𝙚𝙡𝙨. The results show a strong correlation with the actual wear of milling machines and improved fault detection performance for railway wheels compared to state-of-the-art methods.
- 𝗥𝗼𝗯𝘂𝘀𝘁 𝘁𝗼 𝗩𝗮𝗿𝗶𝗮𝘁𝗶𝗼𝗻𝘀 𝗼𝗳 𝗢𝗽𝗲𝗿𝗮𝘁𝗶𝗻𝗴 𝗖𝗼𝗻𝗱𝗶𝘁𝗶𝗼𝗻𝘀: Experiments conducted on real CM data of operational assets demonstrate that the proposed method is robust to variations in operating conditions.

Check out the full paper:
🔗 https://lnkd.in/e8AgxNd3
🔗 https://lnkd.in/eHgeKAH4

IMOS Lab - EPFL
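As a rough sketch of the health-indicator construction described above (contrastive features, then distance to a one-class decision boundary): the snippet below fabricates 2-D features in place of learned contrastive embeddings, and uses a Mahalanobis distance to the healthy-data distribution as a simplified stand-in for the OC-SVM boundary. All data and names are illustrative, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Fabricated 2-D "embeddings": healthy data clusters tightly, degraded
# data drifts away. In the paper these would come from a contrastive
# encoder trained with a triplet loss.
healthy = rng.normal(0.0, 0.3, size=(200, 2))
degraded = rng.normal(2.0, 0.3, size=(50, 2))

# Simplified stand-in for the OC-SVM decision boundary: Mahalanobis
# distance to the healthy-data distribution.
mu = healthy.mean(axis=0)
cov_inv = np.linalg.inv(np.cov(healthy, rowvar=False))

def health_indicator(x):
    # Distance of each sample from the healthy reference distribution.
    d = x - mu
    return np.sqrt(np.einsum("ij,jk,ik->i", d, cov_inv, d))

hi_healthy = health_indicator(healthy).mean()
hi_degraded = health_indicator(degraded).mean()
```

A rising indicator value then signals drift away from the healthy state, which is the role the paper's OC-SVM distance plays in the learned feature space.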
In a recent paper, researchers from Apple examine the question: how can we leverage the knowledge of a large vision foundation model (VFM) to effectively train a small task-specific model for a new target task with limited labeled training data, under real-world resource constraints?

Their proposed method for task-oriented knowledge transfer outperforms task-specific VFM distillation, web-scale CLIP pre-training, supervised ImageNet pre-training, and self-supervised DINO pre-training, while reducing pre-training compute cost by up to 15×.

Their findings include:
▪ Task-oriented knowledge transfer is preferred for training small task-specific models, since small models may not be able to inherit the vast knowledge of large models, and because it outperforms task-agnostic transfer in both performance and cost.
▪ Performance further improves when large task-related unlabeled datasets are used as transfer sets.
▪ Retrieval-augmented transfer sets outperform generic CC3M transfer sets and should be used when a large task-related transfer set is not readily available.

👇 Read the full paper, accepted into #ICML2024, linked in the comments below.
👋 Make sure to stop by exhibitor booth #310 to say hi to the Comet team!

#ArtificialIntelligence #MachineLearning #DeepLearning #Technology #Innovation

[ICML] Int'l Conference on Machine Learning

Research by: Raviteja Vemulapalli, Hadi Pouransari, Fartash Faghri, Sachin Mehta, Mehrdad Farajtabar, Mohammad Rastegari, Oncel Tuzel
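At the core of any such knowledge transfer sits a distillation objective that matches the small model's predictive distribution to the foundation model's soft targets. Below is a generic temperature-scaled distillation loss in NumPy as a minimal sketch; it is not the paper's exact objective, and all names are illustrative.

```python
import numpy as np

def softmax(logits, T=1.0):
    # Temperature-scaled softmax; higher T softens the distribution.
    z = np.asarray(logits, dtype=float) / T
    z -= z.max(axis=-1, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, T=2.0):
    # KL(teacher || student) on softened distributions, scaled by T^2
    # (the standard Hinton-style distillation convention).
    p = softmax(teacher_logits, T)
    q = softmax(student_logits, T)
    kl = (p * (np.log(p + 1e-12) - np.log(q + 1e-12))).sum(axis=-1)
    return float(kl.mean() * T * T)
```

A student that already matches the teacher incurs near-zero loss, while disagreement is penalized in proportion to how confidently the teacher disagrees.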
Fueled by Innovation, Driven By Research: Edmund Yeh - Part 3 of 5 Gain insight into the real-world impact of Edmund's work in networked machine learning. Tune in all week to learn more about Dr. Edmund Yeh Northeastern University College of Engineering #techtransfer #innovation #electricalengineering #computerengineering
Design of Experiments (#DOE) remains a powerful way to optimize processes. But in one big area, DOEs fall short: they don't adequately predict the complex response surfaces common in engineering and science.

#SVEM introduces a machine learning method built specifically:
1. For ease of use by non-analyst scientists and engineers
2. For small experimental data sets

Evan Macedo's new article below shows how this technology challenges conventions 👇
https://lnkd.in/ew7FiYgy

👀 The result? High-accuracy predictions that accelerate your experimental runs in the lab, or your process engineering.

Thank you, Evan Macedo, for building a community of scientists and engineers harnessing the power of easy, action-ready statistics.

---

Follow #predictum to empower your science with easy statistics and data analysis. Want to see the future of data and knowledge management? Check out CoBaseKRM. Message Austin Nann for a no-obligation demo.
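As a rough illustration of the underlying idea — getting stable predictions out of a very small experiment by averaging many re-weighted model fits — here is a generic fractionally weighted bootstrap ensemble in NumPy. This is a sketch of the general technique only, not Predictum's SVEM implementation; the data and model are fabricated.

```python
import numpy as np

rng = np.random.default_rng(1)

# Fabricated DOE-style data: 8 runs of a noisy quadratic response.
x = np.linspace(-1.0, 1.0, 8)
y = 1.0 + 2.0 * x + 3.0 * x**2 + rng.normal(0.0, 0.1, x.size)
A = np.column_stack([np.ones_like(x), x, x**2])  # design matrix

def fit_weighted(A, y, w):
    # Weighted least squares: solve (A' W A) beta = A' W y.
    Aw = A * w[:, None]
    return np.linalg.solve(Aw.T @ A, Aw.T @ y)

# Ensemble over random fractional weights (mean 1, always positive),
# so every run keeps some influence in every refit.
preds = []
for _ in range(200):
    w = rng.exponential(1.0, size=y.size)
    preds.append(A @ fit_weighted(A, y, w))
ensemble_pred = np.mean(preds, axis=0)
```

Averaging across the re-weighted fits damps the variance that any single small-sample fit would suffer from, which is why this family of methods suits small experimental data sets.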
💪 Time for some #MondayMotivation with Oilfield Technology ☀

In our Summer 2023 issue, Venkatesh Anantharamu, Ikon Science, evaluated the benefits of machine learning in the upstream sector.

📚 "Overall, machine learning offers opportunities to enhance prediction and interpretation by automating processes, integrating data, improving accuracy, and enabling data-driven decision making. However, it is essential to note that successfully implementing a machine learning solution requires high-quality and diverse training data, appropriate feature engineering, and careful validation and calibration of models in collaboration with domain experts."

Read the rest of this article, and register for your FREE issue here ⬇
https://lnkd.in/dKc9dM9
With more than 100 participants, the 6th European Machine Vision Forum in Wageningen was a great success. Prof. Dr.-Ing. Michael Heizmann, Chair of the European Machine Vision Forum, KIT and Fraunhofer IOSB in Karlsruhe, was impressed by the consistently high quality of this year's presentations.

Machine learning was again a frequently used technology at the European Machine Vision Forum. Particularly in unstructured environments, such as agriculture or industrial production, machine learning methods help to deal with variability. However, the presentations on machine learning also made very clear what prerequisites must be in place for its use. First and foremost, this means suitable training data, which can now also be generated with the help of artificial intelligence, but which still requires corresponding (human) expertise. All these contributions were particularly relevant to this year's focus topic "Real-world Machine Vision Challenges – Coping with Variability and Uncontrolled Environments".

Thanks to all participants for attending and contributing to this forum. We are already looking forward to the European Machine Vision Forum in 2024.

#EMVA #EMVF #EMVAforum #machinevision #machinelearning #imageprocessing #computervision #robustsystems #solutions #network
Federated Learning (FL) has gained attention for addressing data scarcity and privacy concerns. While parallel FL algorithms like FedAvg exhibit remarkable performance, they face challenges in scenarios with diverse network speeds and raise concerns about centralized control, especially in multi-institutional collaborations such as the medical domain. Serial FL presents an alternative that circumvents these challenges by transferring model updates serially between devices in a cyclical manner. Nevertheless, it is deemed inferior to parallel FL in that (1) its performance shows undesirable fluctuations, and (2) it converges to a lower plateau, particularly when dealing with non-IID data. This phenomenon is attributed to catastrophic forgetting, i.e., the loss of knowledge acquired at previously visited sites.

In this paper, to overcome the fluctuation and low efficiency of this iterative learning-and-forgetting process, we introduce cyclical weight consolidation (CWC), a straightforward yet potent approach tailored for serial FL. CWC employs a consolidation matrix to regulate local optimization. This matrix tracks the significance of each parameter to the overall federation throughout the entire training trajectory, preventing abrupt changes in significant weights. During revisitation, to maintain adaptability, old memory undergoes decay to incorporate new information.

Our comprehensive evaluations demonstrate that in various non-IID settings, CWC mitigates the fluctuation behavior of the original serial FL approach and enhances the converged performance consistently and significantly. The improved performance is either comparable to or better than that of the vanilla parallel approach.

#FederatedLearning #DataPrivacy #SerialFL #CyclicalWeightConsolidation #NonIIDData
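The mechanism described in the abstract — an importance (consolidation) record that damps updates to significant weights and decays on revisitation — can be sketched as follows. The specific damping and decay rules here are illustrative assumptions based on one simplified reading of the abstract, not the paper's exact CWC formulation.

```python
import numpy as np

def cwc_local_update(w, grad, omega, lr=0.1):
    # Damp each parameter's step by its consolidated importance:
    # weights deemed significant (large omega) change slowly,
    # which limits catastrophic forgetting across sites.
    return w - lr * grad / (1.0 + omega)

def update_consolidation(omega, grad, decay=0.9):
    # Maintain importance as a decayed running magnitude of gradients;
    # the decay lets old memory fade so new sites can still be absorbed.
    return decay * omega + np.abs(grad)

# One simulated cyclic step: the same gradient moves an "unimportant"
# parameter much further than a highly consolidated one.
w = np.array([1.0, 1.0])
grad = np.array([1.0, 1.0])
omega = np.array([0.0, 9.0])  # second parameter is highly consolidated
w_new = cwc_local_update(w, grad, omega)
omega_new = update_consolidation(omega, grad)
```

Because omega grows where gradients have historically mattered and decays otherwise, later sites in the cycle can still adapt the model without erasing the consolidated knowledge of earlier ones.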