Marie-Christine Fluet
Zürich, Switzerland
545 followers
500 connections
Activity
-
15 years of changing the world - one startup at a time: we celebrate all the brave entrepreneurs, the teams, investors, and supporters for the…
Liked by Marie-Christine Fluet
More activity by Marie-Christine Fluet
-
healthbank and ReHaptix are happy to announce a partnership on a joint project. The project aims to offer access to ReHaptix’ tests via the…
Liked by Marie-Christine Fluet
More similar profiles
-
Murat Akhmedov
Co-founder and CEO at BigOmics Analytics
Lugano
-
Andy Christen
3x Founder | ex-CEO | Psychologist (PhD) | Board Member. Follow for insights on how to scale yourself, your team, and your business.
Geneva
-
Christian Schüller
Developing Avatar AI technology at NVIDIA
Zürich, Switzerland
-
Konstantinos Dermitzakis
Zürich, Switzerland
-
Daniel Raimundo
DSP Engineer at Sonova
Zürich
-
Kenneth Funes Mora
Martigny
-
Florian Fallegger
Postdoc at Institut de la Vision // Co-Founder at Neurosoft Bioelectronics
Paris
-
Andreas Hugi
Project Leader @ Sensirion | Co-Founder @ IRsweep
Zürich, Switzerland
-
Felix Schill
Co-Founder of Hydromea SA
Lausanne
-
Djen Kühnel
IT Consultant | Robotics Researcher | Regenerative Agriculture Advocate
Bonn
-
Francisco Rincon
CTO at SmartCardia
Lausanne
-
Urs Frey
Zürich
-
Anurag Sai Vempati
Zürich, Switzerland
-
Nino Antulov-Fantulin
Cofounder & Head of Research and Development @ Aisot Technologies AG (ETH Spin-off). Senior Researcher in Complexity, Finance & Machine Learning
Zürich, Switzerland
-
Fadri Furrer
Empowering everyone to build everything with incon.ai
Zürich
-
Charles Finsterwald
Lausanne
-
Frank Bonnet
Lausanne
-
Berend Snijder
Zürich
-
Pier Rubesa
Sound Innovator | Transforming lives through revolutionary sound and music technology
Lausanne
-
Kevin Kleber
Zürich
Discover more posts
-
Bishesh Khanal
Our paper was accepted as an oral in MIDL 2024, Paris, France. https://midl.io/ #AI #MedicalImaging
- The ability to provide text prompts as auxiliary information during semantic segmentation of medical images can be very powerful in clinical applications (interactive, robust to out-of-distribution inputs, explainability, …)
- Vision Language Models (VLMs), recently built at scale for open-domain image and text pairs, have been adapted to develop Vision Language Segmentation Models (VLSMs) for open-domain images. Some of them have been further fine-tuned on medical images.
- But how much are these VLSMs really leveraging language information in their segmentation outputs? Do they really understand semantics? What prompts work, how to generate prompts, zero-shot performance vs. fine-tuning, …
- This paper provides benchmarks and experimental setups to probe these questions, and shows some interesting results on how some models leverage language inputs better than others. Models seem to overfit to image information more than to language, and there are a lot of open questions in this direction.
Very proud of my students for their hard work. It’s pleasing to see undergrads with no prior research experience come this far working at NAAMII and now publish papers in conferences like MIDL. Kanchan Poudel Manish Dhakal Prasiddha Bhandari Rabin Adhikari Safal Thapaliya
Openreview link: https://lnkd.in/geVAPsfP
Arxiv link: https://lnkd.in/gyQ--SQH
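A minimal sketch of the kind of probe the post describes: if a VLSM produces essentially the same mask whether it receives the correct text prompt or an unrelated one, it is ignoring the language input. The `language_sensitivity` helper and the toy masks below are illustrative assumptions, not the paper's actual benchmark code.

```python
import numpy as np

def dice(pred, target, eps=1e-7):
    """Dice overlap between two binary segmentation masks."""
    pred, target = pred.astype(bool), target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    return (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)

def language_sensitivity(gt, pred_true_prompt, pred_shuffled_prompt):
    """Drop in Dice when the text prompt is replaced by an unrelated one.
    A near-zero drop suggests the model overfits to image features and
    ignores the language input (the post's main finding)."""
    return dice(pred_true_prompt, gt) - dice(pred_shuffled_prompt, gt)

# Toy 8x8 example: identical predictions under both prompts -> sensitivity 0,
# i.e. a model that does not use language at all.
gt = np.zeros((8, 8)); gt[2:6, 2:6] = 1
pred = gt.copy()
print(round(language_sensitivity(gt, pred, pred), 3))  # 0.0
```

In a real probe, `pred_true_prompt` and `pred_shuffled_prompt` would come from running the VLSM twice on the same image with different prompts.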
2289 comments -
Brittany P.
A Year of Journal Articles (Day 262/365)
Wang, Junze, Wenjun Zhang, Dandan Li, Chao Li, and Weipeng Jing. "HGSNet: A hypergraph network for subtle lesions segmentation in medical imaging." IET Image Processing (2024).
Summary:
Problem:
- Lesion segmentation in medical images is crucial, especially for hard-to-detect subtle lesions.
- Convolutional Neural Networks (CNNs), a popular method, often struggle to capture the relationships between lesions, leading to errors in their spatial arrangement (topology) during training.
Proposed Solution: The paper introduces a novel method that shifts from using pixel-level information to a representation called a hypergraph.
- In a hypergraph, lesions are represented as vertices (points), and their connections are represented by hyperedges. This allows the model to capture the topological relationships between lesions.
- A new strategy called Dynamic Hypergraph Learning Strategy (DHLS) is proposed. DHLS can dynamically construct hypergraphs based on the specific variations present in the input lesions.
- The core network, called HGSNet (Hypergraph Global-aware Segmentation Network), is built upon this concept. HGSNet can capture the high-level structural information of the lesions, improving the overall understanding of their topology.
- To further enhance performance, a composite loss function is introduced. This function focuses on both the global aspects of the segmentation and the accuracy of the boundaries around the lesions.
Evaluation:
- The paper compares HGSNet with other advanced models on publicly available medical image datasets from various organs.
- The results demonstrate that HGSNet outperforms existing methods and achieves state-of-the-art performance in lesion segmentation.
- Overall, this approach presents a promising technique for accurate lesion segmentation in medical images, particularly for subtle lesions where traditional CNN methods might struggle.
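To make the vertex/hyperedge idea concrete, here is a tiny illustrative construction of a hypergraph incidence matrix over lesion centroids: each lesion anchors one hyperedge containing all lesions within a radius. The radius-based rule and the coordinates are assumptions for illustration; DHLS in the paper builds hyperedges dynamically from the input.

```python
import numpy as np

def build_incidence(centroids, radius=2.0):
    """Incidence matrix H of a hypergraph over lesion centroids.
    Hyperedge e groups every centroid within `radius` of anchor lesion e;
    H[v, e] = 1 iff vertex v belongs to hyperedge e."""
    c = np.asarray(centroids, dtype=float)
    # pairwise Euclidean distances between all centroids
    d = np.linalg.norm(c[:, None, :] - c[None, :, :], axis=-1)
    return (d <= radius).astype(int)  # shape (n_vertices, n_hyperedges)

# three toy lesions: two close together, one far away
centroids = [(0, 0), (1, 0), (5, 5)]
H = build_incidence(centroids, radius=2.0)
print(H)
```

The first two lesions share hyperedges while the distant third forms its own, which is exactly the topological grouping a plain pixel-wise CNN loss cannot see.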
-
PiEEG
Some technical details about the JNEEG device are available in our paper, posted on arXiv today: deep learning in real time with EEG on the Nvidia Jetson Nano. JNEEG was created especially for using machine learning and deep learning for signal processing and feature extraction on EEG and other biosignals. https://lnkd.in/e5hnnuTk #EEG #EMG #Nvidia #pieeg
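A minimal sketch of the real-time pipeline such a device implies: slide a fixed window over the incoming sample stream and extract a spectral feature per window for a downstream model. The 250 Hz sampling rate, 1 s window, and FFT band-power feature are assumptions for illustration, not JNEEG's actual specification.

```python
import numpy as np

FS = 250            # assumed sampling rate (Hz)
WIN = FS            # 1-second analysis window

def bandpower(window, lo, hi, fs=FS):
    """Mean power of `window` in the [lo, hi] Hz band via an FFT periodogram."""
    freqs = np.fft.rfftfreq(len(window), 1.0 / fs)
    psd = np.abs(np.fft.rfft(window)) ** 2 / len(window)
    band = (freqs >= lo) & (freqs <= hi)
    return psd[band].mean()

def stream_features(samples):
    """Slide non-overlapping 1 s windows over a sample stream and emit
    alpha-band (8-12 Hz) power, the kind of feature a deep model consumes."""
    feats = []
    for start in range(0, len(samples) - WIN + 1, WIN):
        feats.append(bandpower(samples[start:start + WIN], 8, 12))
    return feats

# synthetic 2 s of a 10 Hz "alpha" oscillation
t = np.arange(2 * FS) / FS
alpha = np.sin(2 * np.pi * 10 * t)
print(len(stream_features(alpha)))  # 2 windows
```

On actual hardware the loop body would read from the ADC and hand the feature (or raw window) to a model running on the Jetson's GPU.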
-
es/iode
📃Scientific paper: Machine learning and EEG can classify passive viewing of discrete categories of visual stimuli but not the observation of pain Abstract: Previous studies have demonstrated the potential of machine learning (ML) in classifying physical pain from non-pain states using electroencephalographic (EEG) data. However, the application of ML to EEG data to categorise the observation of pain versus non-pain images of human facial expressions or scenes depicting pain being inflicted has not been explored. The present study aimed to address this by training Random Forest (RF) models on cortical event-related potentials (ERPs) recorded while participants passively viewed faces displaying either pain or neutral expressions, as well as action scenes depicting pain or matched non-pain (neutral) scenarios. Ninety-one participants were recruited across three samples, which included a model development group (n = 40) and a cross-subject validation group (n = 51). Additionally, 25 participants from the model development group completed a second experimental session, providing a within-subject temporal validation sample. The analysis of ERPs revealed an enhanced N170 component in response to faces compared to action scenes. Moreover, an increased late positive potential (LPP) was observed during the viewing of pain scenes compared to neutral scenes. Additionally, an enhanced P3 response was found when participants viewed faces displaying pain expressions compared to neutral expressions. Subsequently, three RF models were developed to classify images into faces and scenes, neutral and pain scenes, and neutral and pain expr... Continued on ES/IODE ➡️ https://etcse.fr/QRS ------- If you find this interesting, feel free to follow, comment and share. We need your help to enhance our visibility, so that our platform continues to serve you.
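A toy sketch of the study's classification setup: feature vectors of ERP amplitudes (e.g. N170, LPP) per trial, fed to a classifier that separates stimulus categories. The simulated amplitudes and the nearest-centroid classifier are stand-ins of my own; the paper uses Random Forest models on real ERP data.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_erps(n, n170, lpp):
    """Toy ERP feature vectors [N170 amplitude, LPP amplitude] with noise."""
    return rng.normal([n170, lpp], 0.5, size=(n, 2))

# Per the abstract: faces evoke an enhanced N170 (a negative deflection),
# while pain scenes evoke an increased LPP. Amplitudes here are invented.
faces  = simulate_erps(50, n170=-4.0, lpp=1.0)
scenes = simulate_erps(50, n170=-1.0, lpp=3.0)

X = np.vstack([faces, scenes])
y = np.array([0] * 50 + [1] * 50)

# nearest-centroid classifier as a simple stand-in for the study's RF models
centroids = np.stack([X[y == c].mean(axis=0) for c in (0, 1)])
pred = np.argmin(np.linalg.norm(X[:, None] - centroids[None], axis=-1), axis=1)
acc = (pred == y).mean()
print(round(acc, 2))
```

With well-separated ERP components (as for faces vs. scenes) this trivially classifies; the study's harder question is whether pain vs. non-pain observation separates at all.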
-
EXPERIENCE H2020 Project
[scientific paper] “Intracortical brain-heart interplay: An EEG model source study of sympathovagal changes” in HUMAN BRAIN MAPPING. Read it here! https://lnkd.in/dEuBmd8E #artificialintelligence #neurotech #3D #research #biomedicalsignal #emotion #EEG #brain #timeperception #H2020 #hapticinterfaces #crossmodalintegration #VirtualReality #depression
Other members named Marie-Christine Fluet
There is 1 other person on LinkedIn named Marie-Christine Fluet.