<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom">
  <channel>
    <title>SiLab</title>
    <description>Stroke Innovation Lab is the lab website of Dr. Khosravani, highlighting our initiatives, projects, and blog site related to stroke and critical care/neurocritical care.
</description>
    <link>https://neuroccm.org/</link>
    <atom:link href="https://neuroccm.org/feed.xml" rel="self" type="application/rss+xml"/>
    <pubDate>Sun, 01 Mar 2026 19:30:40 +0000</pubDate>
    <lastBuildDate>Sun, 01 Mar 2026 19:30:40 +0000</lastBuildDate>
    <generator>Jekyll v3.10.0</generator>
    
      <item>
        <title>Pal-MASA Project: Next Phase of AI for QI</title>
        <description>&lt;p&gt;&lt;a href=&quot;/posts/&quot;&gt;Back to posts…&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Pal-MASA Project: The Next Phase of AI for Quality Improvement&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;We are pleased to announce that the next phase of our work in AI for Quality Improvement is the Pal-MASA Project. Building on our lab’s ongoing efforts to apply machine learning and AI techniques to improve clinical outcomes, Pal-MASA represents an important step forward in translating our research into actionable quality improvement tools.&lt;/p&gt;

&lt;p&gt;We have now completed the data recording phase of the project, a significant milestone that lays the groundwork for the analytical and modeling work ahead. Active development and analysis will begin in Summer 2026. Stay tuned for updates as we progress.&lt;/p&gt;

&lt;p&gt;&lt;a href=&quot;/posts/&quot;&gt;Back to posts…&lt;/a&gt;&lt;/p&gt;
</description>
        <pubDate>Sat, 28 Feb 2026 14:00:00 +0000</pubDate>
        <link>https://neuroccm.org/ai,/qi,/stroke/2026/02/28/Pal-MASA.html</link>
        <guid isPermaLink="true">https://neuroccm.org/ai,/qi,/stroke/2026/02/28/Pal-MASA.html</guid>
        
        
        <category>AI</category>
        
        <category>QI</category>
        
        <category>Stroke</category>
        
      </item>
    
      <item>
        <title>Project APEX: A Framework for AI in Senior Promotion at U of T</title>
        <description>&lt;p&gt;&lt;a href=&quot;/posts/&quot;&gt;Back to posts…&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Project APEX: A Framework for AI in Senior Promotion at the University of Toronto&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;We are excited to announce Project APEX (AI for Promotion EXcellence), a collaborative initiative between Dr. Khosravani and Dr. Brian Wong aimed at developing a principled framework for the use of artificial intelligence in the senior academic promotion process at the University of Toronto. As AI tools become increasingly integrated into academic workflows, there is a growing need to establish clear, transparent, and equitable guidelines for how these technologies can be leveraged in the evaluation and advancement of faculty members.&lt;/p&gt;

&lt;p&gt;The promotion process at a research-intensive institution like U of T is multifaceted, involving the assessment of research impact, teaching contributions, and service to the academic community. Project APEX seeks to explore how AI can assist in synthesizing and evaluating these complex portfolios in a way that is fair, consistent, and aligned with institutional values. Our goal is not to replace human judgment, but to augment the process with tools that can help reduce bias, improve efficiency, and ensure that candidates are evaluated holistically.&lt;/p&gt;

&lt;p&gt;Together, we are working to build a framework that addresses key questions around transparency, accountability, and the responsible deployment of AI in high-stakes academic decisions. We believe that by proactively engaging with these challenges, the University of Toronto can lead the way in establishing best practices for AI-assisted academic governance. We look forward to sharing more about this work as it develops.&lt;/p&gt;

&lt;p&gt;&lt;a href=&quot;/posts/&quot;&gt;Back to posts…&lt;/a&gt;&lt;/p&gt;
</description>
        <pubDate>Fri, 30 Jan 2026 14:00:00 +0000</pubDate>
        <link>https://neuroccm.org/ai,/academic,/qi/2026/01/30/Project-APEX.html</link>
        <guid isPermaLink="true">https://neuroccm.org/ai,/academic,/qi/2026/01/30/Project-APEX.html</guid>
        
        
        <category>AI</category>
        
        <category>Academic</category>
        
        <category>QI</category>
        
      </item>
    
      <item>
        <title>See-2-Sound @ SIGGRAPH 2025</title>
        <description>&lt;p&gt;&lt;a href=&quot;/posts/&quot;&gt;Back to posts…&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;See-2-Sound @ SIGGRAPH&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Work led by: Rishit Dagli, CS, University of Toronto, also Intern at NVIDIA&lt;/p&gt;

&lt;p&gt;Authors: Rishit Dagli, Shivesh Prakash, Robert Wu, Houman Khosravani&lt;/p&gt;

&lt;p&gt;We were so pleased that Rishit’s work was accepted to SIGGRAPH 2025 as a &lt;a href=&quot;https://dl.acm.org/doi/suppl/10.1145/3721250.3742965/suppl_file/see2sound_poster.pdf&quot;&gt;poster&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Generating combined visual and auditory sensory experiences is critical for immersive content. We introduce SEE-2-SOUND, a training-free pipeline that turns an image, GIF, or video into 5.1 spatial audio. SEE-2-SOUND sequentially: (i) segments visual sound sources; (ii) estimates their 3-D positions from monocular depth; (iii) synthesises mono audio for every source; and (iv) renders the mix with room acoustics. Built entirely from off-the-shelf models, the method needs no fine-tuning and runs in zero-shot mode on real or generated media. We demonstrate compelling results for generating spatial audio from videos, images, dynamic images, and media generated by learned approaches.&lt;/p&gt;
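
&lt;p&gt;For readers curious about the target output format, the short sketch below (NumPy only) lays out a nominal ITU-R BS.775-style 5.1 listener geometry of the kind a spatial renderer places virtual microphones into. The radius, listener position, and LFE placement here are illustrative assumptions, not values taken from the SEE-2-SOUND paper.&lt;/p&gt;

&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;
import numpy as np

# Nominal ITU-R BS.775 azimuths (degrees) for a 5.1 layout, measured from
# straight ahead; positive angles are to the listener's left. The LFE channel
# has no defined direction, so it is placed at the front purely for illustration.
CHANNEL_AZIMUTHS = {
    'FL': 30.0, 'FR': -30.0, 'C': 0.0, 'LFE': 0.0, 'SL': 110.0, 'SR': -110.0,
}

def mic_positions(radius=1.0, listener=(0.0, 0.0, 0.0)):
    '''Return x/y/z positions (metres) of virtual 5.1 microphones on a circle
    around the listener; x points to the listener's left, z points forward,
    and y (height) stays at ear level.'''
    positions = {}
    for name, az_deg in CHANNEL_AZIMUTHS.items():
        az = np.radians(az_deg)
        x = listener[0] + radius * np.sin(az)
        z = listener[2] + radius * np.cos(az)
        positions[name] = (x, listener[1], z)
    return positions

for name, pos in mic_positions(radius=1.2).items():
    print(f'{name}: ({pos[0]:+.2f}, {pos[1]:+.2f}, {pos[2]:+.2f})')
&lt;/code&gt;&lt;/pre&gt;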

&lt;p&gt;&lt;strong&gt;Relevant Links&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;&lt;strong&gt;Paper:&lt;/strong&gt; &lt;a href=&quot;https://dl.acm.org/doi/10.1145/3721250.3742965&quot;&gt;SIGGRAPH 2025&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;Paper:&lt;/strong&gt; &lt;a href=&quot;https://arxiv.org/abs/2406.06612&quot;&gt;arXiv&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;Project Website:&lt;/strong&gt; &lt;a href=&quot;https://see2sound.github.io/&quot;&gt;see2sound.github.io&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href=&quot;/posts/&quot;&gt;Back to posts…&lt;/a&gt;&lt;/p&gt;
</description>
        <pubDate>Sun, 10 Aug 2025 14:00:00 +0000</pubDate>
        <link>https://neuroccm.org/ml,/tech,/stroke/2025/08/10/See-2-Sound-SIGGRAPH.html</link>
        <guid isPermaLink="true">https://neuroccm.org/ml,/tech,/stroke/2025/08/10/See-2-Sound-SIGGRAPH.html</guid>
        
        
        <category>ML</category>
        
        <category>Tech</category>
        
        <category>Stroke</category>
        
      </item>
    
      <item>
        <title>See-2-Sound: How Spatial Audio Has Potential for Clinical Applications</title>
        <description>&lt;p&gt;&lt;a href=&quot;/posts/index.html&quot;&gt;Back to posts…&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;See-2-Sound: How Spatial Audio Has Potential for Clinical Applications&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Work led by: Rishit Dagli, CS, University of Toronto&lt;/p&gt;

&lt;p&gt;The world of generative AI is constantly expanding, with models now capable of creating high-resolution content across multiple modalities, including images, text, speech, and video. However, one area that has lagged behind is the generation of high-quality spatial audio that complements these visuals. This is where SEE-2-SOUND comes in, a novel approach that generates spatial audio from images, animated images, and videos.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Bridging the Gap Between Visuals and Immersive Audio&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;SEE-2-SOUND is designed to fill the gap in generating spatial audio, which is crucial for creating truly immersive experiences. Current audio generation models often excel in producing natural audio, speech, or music, but they often fall short in integrating the spatial cues needed for realistic sound perception. The ability to pinpoint the location of a sound source is a key element of human perception, and SEE-2-SOUND aims to replicate this in generated audio.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;How Does SEE-2-SOUND Work?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The SEE-2-SOUND method works by breaking down the process into several key stages:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;&lt;strong&gt;Source Estimation:&lt;/strong&gt; The model first identifies regions of interest within the input visual content (image or video). It then estimates the 3D positions of these regions on a viewing sphere. This process includes using a monocular depth map to refine the spatial information.&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;Mono Audio Generation:&lt;/strong&gt; For each identified region of interest, the model generates a mono audio clip using a pre-trained CoDi model. The audio can also be conditioned on a text prompt.&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;Spatial Audio Integration:&lt;/strong&gt; The generated mono audio clips are combined with the spatial information to create a 4D representation for each region. The model then places these sound sources in a virtual room and computes Room Impulse Responses (RIRs) for each source-microphone pair. The microphones are positioned according to the 5.1 channel configuration, ensuring compatibility with common audio systems. This generates a 5.1 surround sound spatial audio output.&lt;/li&gt;
&lt;/ul&gt;
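
&lt;p&gt;To make the source-estimation step above more concrete, here is a minimal, self-contained sketch (NumPy only) of one plausible way to map a segmented region’s centroid and monocular depth to a 3-D position on a viewing sphere. The field-of-view values and the exact parameterization are illustrative assumptions, not the parameters used in SEE-2-SOUND.&lt;/p&gt;

&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;
import numpy as np

def region_to_3d(cx, cy, depth, img_w, img_h, h_fov_deg=90.0, v_fov_deg=60.0):
    '''Map a region centroid (cx, cy) in pixels plus a relative depth value to
    a 3-D position around the viewer (x = right, y = up, z = forward).'''
    # Normalized image coordinates in [-1, 1], with the image centre at (0, 0)
    u = 2.0 * cx / img_w - 1.0
    v = 1.0 - 2.0 * cy / img_h  # flip so that up is positive

    # Interpret the normalized coordinates as azimuth / elevation angles
    azimuth = np.radians(u * h_fov_deg / 2.0)
    elevation = np.radians(v * v_fov_deg / 2.0)

    # Use the (relative) monocular depth as the radius of the viewing sphere
    x = depth * np.cos(elevation) * np.sin(azimuth)
    y = depth * np.sin(elevation)
    z = depth * np.cos(elevation) * np.cos(azimuth)
    return np.array([x, y, z])

# Example: a source slightly right of centre, 2.5 depth units away
print(region_to_3d(cx=400, cy=220, depth=2.5, img_w=640, img_h=480))
&lt;/code&gt;&lt;/pre&gt;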

&lt;p&gt;&lt;strong&gt;Zero-Shot Approach&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;A key advantage of SEE-2-SOUND is that it is a &lt;strong&gt;zero-shot approach&lt;/strong&gt;. This means that it can generate spatial audio without needing specific training data for every type of visual input. This makes it highly versatile and applicable to a wide range of content, including images from the web, videos generated by models like OpenAI’s Sora, and other dynamic visuals.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Evaluation and Results&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Evaluating spatial audio generation is challenging, as there are no direct metrics to measure its quality. Therefore, the researchers employed a combination of methods to assess their approach:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;&lt;strong&gt;Human Evaluation:&lt;/strong&gt; Human evaluators rated the realism, immersion, and accuracy of generated audio when paired with visual content using semantic differential scales. They also performed tasks such as identifying the direction and distance of sounds and matching audio clips to their corresponding images or videos.&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;Marginal Scene Guidance:&lt;/strong&gt; A new evaluation protocol was developed to measure how well the generated audio is guided by the visual scene. This protocol uses another model, AViTAR, to modify audio to match the image, and then assesses the similarity between the modified audio and the original generated audio.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The results from these evaluations indicate that SEE-2-SOUND performs well in generating compelling spatial audio.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Future Directions and Potential Applications&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;While SEE-2-SOUND shows promising results, there are several avenues for future improvement:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;&lt;strong&gt;Fine Details:&lt;/strong&gt; The model may not detect all fine details in images and videos and does not produce audio for every detail.&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;Motion Cues:&lt;/strong&gt; Currently, the model does not generate audio based on motion cues and adding motion backbones might improve the results.&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;Real-Time Capabilities:&lt;/strong&gt; The method does not currently run in real time, even on an NVIDIA A100 (80 GB) GPU. However, using other models to solve the subproblems might bring it to real-time performance.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Despite these limitations, the potential applications of SEE-2-SOUND are vast:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;&lt;strong&gt;Enhancing Generated Visuals:&lt;/strong&gt; It can add spatial audio to images and videos generated by AI models, making them more immersive.&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;Interactive Real Images:&lt;/strong&gt; It can make real images interactive through sound.&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;Human-Computer Interaction:&lt;/strong&gt; It can improve human-computer interaction by adding realistic spatial audio cues.&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;Accessibility:&lt;/strong&gt; It can enhance accessibility by providing audio information about visual content.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;A Step Towards Complete Generation&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;SEE-2-SOUND is a step towards truly complete generation, bridging the gap between visual and auditory experiences. By enabling the creation of spatial audio from visual content, it opens up exciting new possibilities for immersive content creation and interaction. To the best of the authors’ knowledge, this approach is the first to generate spatial audio from images and videos. The team hopes to inspire future work that will lead to the generation of truly immersive digital content.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Relevant Links&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;&lt;strong&gt;Paper:&lt;/strong&gt; &lt;a href=&quot;https://arxiv.org/abs/2406.06612&quot;&gt;arXiv&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;Project Website:&lt;/strong&gt; &lt;a href=&quot;https://rishit-dagli.github.io/2024/06/18/s2s.html&quot;&gt;rishit-dagli.github.io&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href=&quot;/posts/index.html&quot;&gt;Back to posts…&lt;/a&gt;&lt;/p&gt;
</description>
        <pubDate>Thu, 01 Aug 2024 14:00:00 +0000</pubDate>
        <link>https://neuroccm.org/ml,/tech,/stroke/2024/08/01/See-2-Sound.html</link>
        <guid isPermaLink="true">https://neuroccm.org/ml,/tech,/stroke/2024/08/01/See-2-Sound.html</guid>
        
        
        <category>ML</category>
        
        <category>Tech</category>
        
        <category>Stroke</category>
        
      </item>
    
      <item>
        <title>Tuning In: How Audio Analysis is Revolutionizing Clinical Diagnostics</title>
        <description>&lt;p&gt;&lt;a href=&quot;/posts/index.html&quot;&gt;Back to posts…&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Tuning In: How Audio Analysis is Revolutionizing Clinical Diagnostics&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Main contributors: Hamza Mahdi*, Eptehal Nashnoush*, Rishit Dagli (*authors contributed equally)&lt;/p&gt;

&lt;p&gt;Post By: team MASA (summarized with NotebookLM)&lt;/p&gt;

&lt;p&gt;&lt;a href=&quot;https://arxiv.org/abs/2402.10100&quot;&gt;Read the full arXiv paper&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;In the realm of healthcare, audio biomarkers are emerging as powerful tools for diagnosis and monitoring. From detecting respiratory problems to assessing neurological conditions, sound analysis is proving to be a versatile and non-invasive approach. Recent advancements in machine learning are enhancing the capabilities of audio classification, but how do these models perform in real-world clinical settings with limited data? A recent study, “Tuning In: Analysis of Audio Classifier Performance in Clinical Settings with Limited Data,” delves into this very question, providing valuable insights into the nuances of audio-based clinical diagnostics.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The Challenge of Limited Data&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;One of the major hurdles in applying machine learning to clinical data is the scarcity of large, high-quality datasets. This is especially true for rare diseases or when collecting data prospectively. This study addresses this challenge by analyzing the performance of various deep learning models on &lt;strong&gt;two novel, prospectively collected audio datasets from stroke patients&lt;/strong&gt;. These datasets include:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;&lt;strong&gt;Dataset NIHSS:&lt;/strong&gt; Captures continuous speech, sentences, and words based on the National Institutes of Health Stroke Scale (NIHSS), a standard neurological assessment tool.&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;Dataset Vowel:&lt;/strong&gt; A unique dataset of sustained vowel sounds from patients, aiding in the analysis of swallowing disorders.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;These datasets are first-of-their-kind, addressing a critical gap in research related to disease state classification with limited real-world data. Due to patient privacy regulations, this clinical data is not publicly available at this time, but the researchers are working to make it available in the near future.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Model Selection and Preprocessing: Key Factors&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The study compares various models, including Convolutional Neural Networks (CNNs) like &lt;strong&gt;DenseNet&lt;/strong&gt; and &lt;strong&gt;ConvNeXt&lt;/strong&gt;, and transformer models such as &lt;strong&gt;ViT&lt;/strong&gt; and &lt;strong&gt;SWIN&lt;/strong&gt;. It also includes pre-trained audio models like &lt;strong&gt;AST, YAMNet&lt;/strong&gt;, and &lt;strong&gt;VGGish&lt;/strong&gt;. A key focus of the study is the impact of preprocessing techniques on model performance. The researchers explored three primary methods:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;&lt;strong&gt;Mel RGB:&lt;/strong&gt; Spectrograms converted to RGB images using color maps.&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;Mel mono:&lt;/strong&gt; Grayscale spectrograms.&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;Superlet:&lt;/strong&gt; A relatively recent method for transforming time-series data into spectrograms that preserves both time and frequency resolution.&lt;/li&gt;
&lt;/ul&gt;
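
&lt;p&gt;As a rough illustration of the first two preprocessing variants (not the study’s exact pipeline), the snippet below turns a synthetic half-second clip into a Mel spectrogram with librosa and then renders it either as a colour-mapped RGB image or as a grayscale image tiled to three channels. The test tone, scaling, and colormap are placeholder choices.&lt;/p&gt;

&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;
import numpy as np
import librosa
import matplotlib.pyplot as plt

# Synthetic 0.5-second test signal (a 440 Hz tone) standing in for a clinical clip
sr = 16000
t = np.linspace(0, 0.5, int(0.5 * sr), endpoint=False)
y = 0.5 * np.sin(2 * np.pi * 440 * t)

# Mel spectrogram in decibels, scaled to [0, 1]
mel = librosa.feature.melspectrogram(y=y, sr=sr, n_mels=128)
mel_db = librosa.power_to_db(mel, ref=np.max)
mel_norm = (mel_db - mel_db.min()) / (mel_db.max() - mel_db.min() + 1e-8)

# 'Mel mono': a single grayscale channel, tiled to 3 channels for ImageNet-style models
mel_mono = np.repeat(mel_norm[..., np.newaxis], 3, axis=-1)

# 'Mel RGB': the same values pushed through a colormap, keeping the RGB channels
cmap = plt.get_cmap('magma')
mel_rgb = cmap(mel_norm)[..., :3]

print(mel_mono.shape, mel_rgb.shape)  # both (128, n_frames, 3)
&lt;/code&gt;&lt;/pre&gt;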

&lt;p&gt;&lt;strong&gt;Key Findings&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The research revealed several interesting findings:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;&lt;strong&gt;CNNs Can Compete with Transformers:&lt;/strong&gt; In small dataset contexts, CNNs like DenseNet and ConvNeXt can match or even exceed the performance of transformer models. Specifically, DenseNet-Contrastive and AST models showed notable performance.&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;Pretraining is Crucial:&lt;/strong&gt; Pretraining on large datasets such as ImageNet, AudioSet, US8K, and ESC50 is essential for enhancing the performance of models on smaller, specific clinical datasets.&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;Preprocessing Matters:&lt;/strong&gt; The study found that RGB and grayscale spectrogram transformations affect model performance differently depending on the priors they learn from pretraining.&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;The Role of Color:&lt;/strong&gt;  Surprisingly, RGB pre-processing, often used with ImageNet pretraining, outperformed grayscale triple-channel approaches, likely because the convolutional layers of models pretrained on ImageNet are more attuned to features in RGB images.&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;AST Model Efficiency:&lt;/strong&gt; The AST model achieved optimal results with only 6 epochs of training, highlighting the potential for efficient training of transformer models.&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;Model Specific Performance&lt;/strong&gt;:
    &lt;ul&gt;
      &lt;li&gt;&lt;strong&gt;ConvNeXt (Mel RGB)&lt;/strong&gt;: Showcased robust performance across metrics (AUC of 0.91, sensitivity of 0.78, and specificity of 0.89).&lt;/li&gt;
      &lt;li&gt;&lt;strong&gt;DenseNet (Mel RGB)&lt;/strong&gt;: Achieved high sensitivity (0.89), but with a slightly lower precision compared to ConvNeXt.&lt;/li&gt;
      &lt;li&gt;&lt;strong&gt;DenseNet Contrastive US8K (Mel mono)&lt;/strong&gt;:  Demonstrated exceptional performance with perfect specificity and precision, resulting in the highest F1 score of 0.88.&lt;/li&gt;
    &lt;/ul&gt;
  &lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Implications for Clinical Practice&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;This study underscores the significance of strategic model selection, pretraining, and preprocessing for audio-based diagnostics. The findings suggest that in clinical settings where data is often limited, a &lt;strong&gt;standardized, contextually tailored approach to preprocessing can significantly improve the performance of deep learning models&lt;/strong&gt;. Moreover, the robustness of transformer models in handling limited training epochs opens avenues for refining audio classification frameworks. By carefully choosing preprocessing techniques and models, healthcare providers can enhance diagnostic accuracy and efficiency in various clinical environments.&lt;/p&gt;

&lt;p&gt;This research has implications for a variety of conditions, including stroke, other neurological conditions, and rare diseases, where data scarcity is an intrinsic challenge. The use of audio as a biomarker for swallowing status, as demonstrated in this study, highlights the potential of this approach for broader applications in clinical settings.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Future Directions&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The researchers emphasize the need to consider various confounding factors, such as age, gender, stroke severity, and other medical conditions, in future studies. Stratified analysis can provide more detailed insights into the performance of preprocessing techniques across varied patient demographics, refining the clinical utility of audio classifiers.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Conclusion&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The study “Tuning In: Analysis of Audio Classifier Performance in Clinical Settings with Limited Data” provides valuable insights into the effective application of deep learning models for clinical audio classification. By focusing on the significance of pretraining, preprocessing, and model selection, this work paves the way for more accurate and efficient audio-based diagnostic tools in healthcare. This is a promising step towards harnessing the power of sound to improve patient outcomes.&lt;/p&gt;

&lt;p&gt;This blog post is based on the research article linked above and does not include any outside information.&lt;/p&gt;

&lt;p&gt;&lt;a href=&quot;/posts/index.html&quot;&gt;Back to posts…&lt;/a&gt;&lt;/p&gt;
</description>
        <pubDate>Mon, 01 Jul 2024 14:00:00 +0000</pubDate>
        <link>https://neuroccm.org/ml,/tech,/stroke/2024/07/01/TuneIn.html</link>
        <guid isPermaLink="true">https://neuroccm.org/ml,/tech,/stroke/2024/07/01/TuneIn.html</guid>
        
        
        <category>ML</category>
        
        <category>Tech</category>
        
        <category>Stroke</category>
        
      </item>
    
      <item>
        <title>Routine integration of palliative care into stroke unit care</title>
        <description>&lt;p&gt;&lt;a href=&quot;/posts/index.html&quot;&gt;Back to posts…&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This manuscript presents a retrospective analysis examining the integration of palliative care into the care of stroke patients admitted to a regional stroke center. The study is important because despite the high morbidity and mortality associated with stroke, there is often a delay in initiating palliative care for these patients until death appears imminent. Early integration of palliative care has been shown to improve quality of life and symptom management in other serious illnesses like cancer.&lt;/p&gt;

&lt;p&gt;Key findings and take-away messages:&lt;/p&gt;
&lt;ul&gt;
  &lt;li&gt;Only 28.8% of stroke patients who died in the hospital received a palliative medicine consultation (PMC), with a median time to consultation of 6 days from admission. This highlights missed opportunities for early palliative care integration.&lt;/li&gt;
  &lt;li&gt;Factors associated with a higher likelihood of receiving PMC included older age, female gender, absence of stroke diagnosis on admission, ischemic stroke type, and comorbidities of cancer or dementia.&lt;/li&gt;
  &lt;li&gt;Admission from another acute care hospital and lower Glasgow Coma Scale scores (indicating coma) were associated with a lower likelihood of PMC.&lt;/li&gt;
  &lt;li&gt;In multivariate analysis, only coma was significantly associated with a higher incidence of death, while no factors remained significantly associated with receiving PMC.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In summary, the results demonstrate an underutilization and delay in palliative care consultation for patients with severe strokes, even among those with the highest risk of death. The authors conclude that prospective studies in various stroke care settings are needed to better understand barriers and optimize the integration of palliative care into the management of acute stroke patients.&lt;/p&gt;

&lt;p&gt;&lt;a href=&quot;https://pubmed.ncbi.nlm.nih.gov/38725344/&quot;&gt;Read the full paper&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href=&quot;/posts/index.html&quot;&gt;Back to posts…&lt;/a&gt;&lt;/p&gt;
</description>
        <pubDate>Sun, 12 May 2024 14:00:00 +0000</pubDate>
        <link>https://neuroccm.org/palliativecare,/stroke/2024/05/12/StrokePalliativeCare.html</link>
        <guid isPermaLink="true">https://neuroccm.org/palliativecare,/stroke/2024/05/12/StrokePalliativeCare.html</guid>
        
        
        <category>Palliative Care</category>
        
        <category>Stroke</category>
        
      </item>
    
      <item>
        <title>Deep Learning and Voice Analysis: New Frontiers in Stroke Diagnosis - Insights from the Stroke Innovation Lab</title>
        <description>&lt;p&gt;&lt;a href=&quot;/posts/index.html&quot;&gt;Back to posts…&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Deep Learning and Voice Analysis: New Frontiers in Stroke Diagnosis - Insights from the Stroke Innovation Lab&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Our first paper; work led by Rami Saab and Arjun Balachandar.&lt;/p&gt;

&lt;p&gt;Stroke is a leading cause of mortality and can result in long-term functional changes. A frequent and serious complication of stroke is dysphagia, or swallowing dysfunction, which occurs in around 55% of stroke patients. Dysphagia increases the risk of aspiration pneumonia, which can be fatal. Thus, screening for swallowing issues is a critical part of stroke patient care. Current screening methods have limitations, such as subjectivity and resource requirements, so there is a need for more efficient and objective tools.&lt;/p&gt;

&lt;p&gt;A new study published in &lt;em&gt;Frontiers in Neuroscience&lt;/em&gt; explores using &lt;strong&gt;machine learning to screen for post-stroke dysphagia&lt;/strong&gt; using vocal samples. This innovative approach uses deep learning to analyze voice changes, a key indicator of dysphagia.&lt;/p&gt;

&lt;p&gt;Here’s a breakdown of how the study was conducted and its key findings:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;&lt;strong&gt;Data Collection&lt;/strong&gt;: Researchers recorded the speech of 68 patients who had experienced a stroke, including sustained vowel sounds and speech samples from the National Institutes of Health Stroke Scale (NIHSS). The NIHSS is a validated tool for assessing neurological deficits in stroke patients.&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;Audio Processing&lt;/strong&gt;: The audio was segmented into 0.5-second clips and transformed into Mel-spectrogram images, which represent sound frequencies over time, and are designed to mimic the human ear’s perception of sound.&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;Deep Learning Models&lt;/strong&gt;: The team used convolutional neural networks (CNNs) including DenseNet and ConvNext, which are effective for image-based classification tasks. They used transfer learning, which means the models were pre-trained on large image datasets and then fine-tuned to classify the audio-based spectrogram images. An ensemble approach was used to combine the results of both models.&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;Outcomes&lt;/strong&gt;: The models were trained to classify patients as either “pass” or “fail” based on the Toronto Bedside Swallowing Screening Test (TOR-BSST©).
    &lt;ul&gt;
      &lt;li&gt;At the audio clip level, the ensemble model achieved a &lt;strong&gt;sensitivity of 71% and specificity of 77%&lt;/strong&gt;.&lt;/li&gt;
      &lt;li&gt;At the participant level, the ensemble model achieved a &lt;strong&gt;sensitivity of 89% and a specificity of 79%&lt;/strong&gt;.&lt;/li&gt;
    &lt;/ul&gt;
  &lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;Key Findings:&lt;/strong&gt; The study showed that deep learning can classify vocalizations to detect post-stroke dysphagia. The use of both vowel sounds and the speech components of the NIHSS improved classification performance.&lt;/li&gt;
&lt;/ul&gt;
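
&lt;p&gt;For readers who want a sense of what this recipe looks like in code, here is a compact PyTorch/torchvision sketch: ImageNet-pretrained DenseNet-121 and ConvNeXt-Tiny backbones get new two-class (pass/fail) heads, and their clip-level probabilities are averaged into a participant-level decision. The layer names follow torchvision’s model definitions, but the threshold and the dummy batch are illustrative placeholders rather than the study’s released code.&lt;/p&gt;

&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;
import torch
import torch.nn as nn
from torchvision import models

def build_two_class_models():
    '''ImageNet-pretrained backbones with new two-class (pass/fail) heads.'''
    densenet = models.densenet121(weights=models.DenseNet121_Weights.DEFAULT)
    densenet.classifier = nn.Linear(densenet.classifier.in_features, 2)

    convnext = models.convnext_tiny(weights=models.ConvNeXt_Tiny_Weights.DEFAULT)
    convnext.classifier[2] = nn.Linear(convnext.classifier[2].in_features, 2)
    return densenet, convnext

@torch.no_grad()
def participant_decision(models_, clips, threshold=0.5):
    '''clips: tensor of shape (n_clips, 3, 224, 224) holding one participant's
    spectrogram images. Returns the ensemble probability of a failed screen.'''
    probs = []
    for m in models_:
        m.eval()
        probs.append(torch.softmax(m(clips), dim=1)[:, 1])  # P(fail) per clip
    fail_prob = torch.stack(probs).mean()  # average over models and clips
    return fail_prob.item(), fail_prob.item() &gt; threshold

densenet, convnext = build_two_class_models()
dummy_clips = torch.randn(4, 3, 224, 224)  # stand-in for real spectrogram images
print(participant_decision((densenet, convnext), dummy_clips))
&lt;/code&gt;&lt;/pre&gt;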

&lt;p&gt;&lt;strong&gt;Why This Matters&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;This study is the first to show the feasibility of using deep learning to classify vocalizations for post-stroke dysphagia detection. This technology could improve dysphagia screening in several ways:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;&lt;strong&gt;Reduced Subjectivity:&lt;/strong&gt; Machine learning can provide a more objective assessment compared to traditional methods.&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;Improved Access:&lt;/strong&gt; This technology could allow for screening in settings with limited access to specialists, such as speech language pathologists (SLPs).&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;Remote Screening&lt;/strong&gt;: The use of voice analysis opens up the possibility of telehealth applications, which are especially relevant for remote patient care.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Limitations and Future Directions&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The authors note some limitations to the study including the small dataset size, which could limit the generalizability of the models. However, the data was collected in a real-world clinical setting, which enhances its applicability to other centers. The code developed for the project is also open source, which will facilitate wider use. Future work will involve using larger and more diverse datasets and also exploring automated methods for segmenting the audio data.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Conclusion&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;This research demonstrates that machine learning, using deep learning models, offers a promising avenue for the development of non-invasive, objective, and rapid tools for post-stroke dysphagia screening. This technology has the potential to improve patient care and increase the availability of dysphagia screening.&lt;/p&gt;

&lt;p&gt;This blog post summarizes the key points of the research, highlighting the importance and potential impact of this work in an accessible manner.&lt;/p&gt;

&lt;p&gt;&lt;a href=&quot;https://www.frontiersin.org/journals/neuroscience/articles/10.3389/fnins.2023.1302132/full&quot;&gt;Machine-learning assisted swallowing assessment: a deep learning-based quality improvement tool to screen for post-stroke dysphagia&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;CHIL 2024 ACCEPTED PAPER&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Post By: Eptehal Nashnoush, Hamza Mahdi, Rishit Dagli, Houman Khosravani&lt;/p&gt;

&lt;p&gt;Our lab (SiLab, Stroke Innovation Lab) has recently conducted an important comparative study demonstrating the potential of deep learning models to utilize voice as a biomarker for diagnosing stroke-related conditions. This research navigates the complex challenge of analyzing small datasets in clinical settings, offering promising solutions for neurology and other medical fields.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;The Study at a Glance&lt;/em&gt;
Our research centered on analyzing audio data from stroke patients, employing advanced deep learning techniques to uncover new diagnostic possibilities. By focusing on two innovative datasets—one based on the National Institutes of Health Stroke Scale (NIHSS) speech segments and another on sustained vowel sounds—we sought to improve the understanding and diagnosis of swallowing disorders and assess stroke severity more accurately.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Leveraging Pre-training and Fine-tuning&lt;/em&gt;
A crucial aspect of our study was the strategic use of pre-training on extensive, publicly available datasets before fine-tuning on our specific clinical data. This approach significantly improved the accuracy of our models, showcasing the value of deep learning in clinical diagnostics. A figure in the full paper illustrates how each technique works. We evaluated various neural network architectures, including Convolutional Neural Networks (CNNs) like DenseNet and ConvNeXt, and transformer models, to find the most effective method for audio classification with limited data.&lt;/p&gt;
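
&lt;p&gt;As a toy illustration of the pre-train-then-fine-tune recipe (not our released training code), the snippet below freezes an ImageNet-pretrained DenseNet-121 feature extractor and trains only a new two-class head for one step on a dummy batch of spectrogram-sized inputs; the optimizer, learning rate, and batch are placeholder choices.&lt;/p&gt;

&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;
import torch
import torch.nn as nn
from torchvision import models

# Start from an ImageNet-pretrained backbone, then adapt it to a small clinical
# spectrogram dataset: freeze the feature extractor and train only the new head
# (one common low-data transfer-learning recipe, shown here for illustration).
model = models.densenet121(weights=models.DenseNet121_Weights.DEFAULT)
for param in model.features.parameters():
    param.requires_grad = False
model.classifier = nn.Linear(model.classifier.in_features, 2)

optimizer = torch.optim.AdamW(model.classifier.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# One illustrative training step on a dummy batch of spectrogram 'images'
images = torch.randn(8, 3, 224, 224)
labels = torch.randint(0, 2, (8,))
optimizer.zero_grad()
loss = loss_fn(model(images), labels)
loss.backward()
optimizer.step()
print(f'dummy training loss: {loss.item():.3f}')
&lt;/code&gt;&lt;/pre&gt;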

&lt;p&gt;&lt;em&gt;Innovations in Spectrogram Analysis&lt;/em&gt;
We explored several spectrogram preprocessing techniques, including Mel RGB, Mel mono, and Superlet, to convert audio signals into visual formats for model analysis. Our findings indicate that while RGB preprocessing benefits from ImageNet pre-training, the Mel mono approach—especially when pre-trained on large audio datasets—outperforms RGB. This highlights the nuanced role of preprocessing in enhancing model performance.&lt;/p&gt;

&lt;p&gt;Mel RGB: This approach uses color in spectrogram representations and was particularly effective for the ConvNeXt model, which achieved an AUC of 0.91, sensitivity of 0.78, and specificity of 0.89, indicating a strong balance between accurately identifying both conditions and healthy controls.&lt;/p&gt;

&lt;p&gt;Mel mono: A grayscale single-channel representation, in which the DenseNet model demonstrated strong performance, notably the DenseNet Contrastive variant pre-trained on the US8K dataset, with an AUC of 0.89, sensitivity of 0.78, and a perfect specificity of 1.00.&lt;/p&gt;

&lt;p&gt;Superlet: A newer method for spectrogram transformation, which generally resulted in lower performance across the models compared to the other preprocessing techniques.&lt;/p&gt;

&lt;p&gt;The ConvNeXt and DenseNet models outperformed other models like ViT and SWIN Transformer in certain conditions, indicating that the choice of model and preprocessing technique can significantly influence diagnostic accuracy in clinical audio analysis. This information is critical for developing effective tools for medical professionals and improving patient outcomes.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Clinical Applications and Future Directions&lt;/em&gt;
The research from the Stroke Innovation Lab not only demonstrates the effectiveness of CNN architectures in clinical audio classification but also opens up new avenues for using voice as a diagnostic tool across a range of medical conditions. Our findings represent a step towards more non-invasive, efficient, and accessible diagnostic methods, offering hope for both patients and healthcare professionals.&lt;/p&gt;

&lt;p&gt;&lt;a href=&quot;https://arxiv.org/abs/2402.10100&quot;&gt;Read the full arXiv paper&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Manuscript accepted at &lt;a href=&quot;https://chilconference.org/&quot;&gt;CHIL 2024&lt;/a&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Next Steps&lt;/em&gt;
The Stroke Innovation Lab is committed to further exploring the potential of audio classification in clinical settings. Our goal is to deepen our understanding of diseases and improve diagnostic accuracy through ongoing innovation in deep learning. We believe that voice analysis can play a crucial role in the future of healthcare diagnostics.&lt;/p&gt;

&lt;p&gt;Stay connected with the Stroke Innovation Lab for more updates on our journey to harness the power of science and technology in transforming medical diagnostics.&lt;/p&gt;

&lt;p&gt;&lt;a href=&quot;/posts/index.html&quot;&gt;Back to posts…&lt;/a&gt;&lt;/p&gt;
</description>
        <pubDate>Mon, 19 Feb 2024 14:00:00 +0000</pubDate>
        <link>https://neuroccm.org/ml,/tech,/stroke/2024/02/19/arXivMASA.html</link>
        <guid isPermaLink="true">https://neuroccm.org/ml,/tech,/stroke/2024/02/19/arXivMASA.html</guid>
        
        
        <category>ML</category>
        
        <category>Tech</category>
        
        <category>Stroke</category>
        
      </item>
    
      <item>
        <title>Advancing dysphagia screening: a deep learning approach</title>
        <description>&lt;p&gt;&lt;a href=&quot;/posts/&quot;&gt;Back to posts…&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Advancing dysphagia screening: a deep learning approach&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Introduction&lt;/em&gt;
Post-stroke dysphagia, a prevalent complication in stroke patients, significantly increases morbidity and mortality risks. Traditional screening methods are subjective, resource-intensive, and not always accessible. This article delves into an innovative study that leverages deep learning to advance dysphagia screening, enhancing efficiency and objectivity. The foundation laid by this assistive technology opens the door to bringing dysphagia screening to more stroke patients and other patient populations, in addition to reducing subjectivity. At present it does not address all types of dysphagia, which, as a complex disease process, has many nuances. However, with machine learning and improving models and approaches, it is clear that this technology - taken to the limit - can address this quality gap and improve access to dysphagia screening, with many benefits for patients. Our study is a proof of concept, and just the start of our journey to further explore voice as a biomarker in this realm.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Deep Learning Meets Dysphagia Screening&lt;/em&gt;
The study, conducted at a comprehensive stroke center, involved 68 patients. We developed a proof-of-concept model based on DenseNet and ConvNext variants, which are deep learning architectures. We trained these models on Mel-spectrogram images of vocal recordings, aiming to distinguish between patients with and without dysphagia.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;The Approach&lt;/em&gt;
In this proof-of-concept study, participants were stroke patients capable of following commands.&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;&lt;strong&gt;Data Collection:&lt;/strong&gt; Audio clips of standardized vowel sounds and sentences, transformed into Mel-spectrogram images.&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;Model Training:&lt;/strong&gt; Employing DenseNet and ConvNext, alongside an ensemble method to integrate results from both models.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;em&gt;Key Findings&lt;/em&gt;&lt;/p&gt;
&lt;ul&gt;
  &lt;li&gt;The models demonstrated promising results (detailed performance figures are given below).&lt;/li&gt;
  &lt;li&gt;These outcomes suggest potential in reducing subjectivity in dysphagia screening and improving clinical efficiency.&lt;/li&gt;
  &lt;li&gt;Overview of the Models’ Performance - The study employed two machine learning models, DenseNet-121 and ConvNext-Tiny, along with an ensemble fusion of both, to assess their effectiveness in dysphagia screening post-stroke. The performance was evaluated at two levels: clip level and participant level.&lt;/li&gt;
  &lt;li&gt;For additional details, see the methods sections, references, and our &lt;a href=&quot;https://github.com/UofTNeurology/masa-open-source&quot;&gt;Github repo&lt;/a&gt;.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;em&gt;Clip-Level Performance&lt;/em&gt;&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;&lt;strong&gt;DenseNet-121:&lt;/strong&gt; Sensitivity of 77%, specificity of 69%, precision of 56%, an F1 score of 0.70, and an AUC (Area Under the Curve) of 0.79.&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;ConvNext-Tiny:&lt;/strong&gt; Sensitivity of 63%, specificity of 77%, precision of 58%, an F1 score of 0.63, and an AUC of 0.78.&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;Ensemble Fusion Model:&lt;/strong&gt; Sensitivity of 71%, specificity of 77%, precision of 62%, an F1 score of 0.73, and an AUC of 0.80.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;em&gt;Participant-Level Performance&lt;/em&gt;&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;&lt;strong&gt;DenseNet-121:&lt;/strong&gt; Sensitivity of 89%, specificity of 79%, precision of 67%, an F1 score of 0.81, and an AUC of 0.89.&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;ConvNext-Tiny:&lt;/strong&gt; Sensitivity of 78%, specificity of 89%, precision of 78%, an F1 score of 0.84, and an AUC of 0.911.&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;Ensemble Fusion Model:&lt;/strong&gt; Sensitivity of 89%, specificity of 79%, precision of 67%, an F1 score of 0.81, and the highest AUC of 0.912.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;em&gt;Ensemble Method Utilized&lt;/em&gt;
The ensemble method integrates multiple classifiers trained via transfer learning, each using different base models (DenseNet-121 and ConvNext-Tiny). This approach was chosen to mitigate model variance caused by random parameter initialization, thereby enhancing the robustness of the model. The ensemble strategy adopted unweighted averaging to aggregate classifier outputs, prioritizing transparency and interpretability, especially critical in clinical AI applications. This simpler method was preferred over more complex strategies like weighted majority voting to maintain clarity in the decision-making process.&lt;/p&gt;
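
&lt;p&gt;A small numerical sketch of the unweighted-averaging idea, using made-up probabilities rather than study data: each classifier produces a per-clip probability of a failed screen, the ensemble simply averages them, and the averaged participant-level probability is then thresholded.&lt;/p&gt;

&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;
import numpy as np

# Per-clip P(fail) from two classifiers for one participant (made-up numbers)
p_densenet = np.array([0.62, 0.71, 0.55, 0.68])
p_convnext = np.array([0.58, 0.66, 0.60, 0.64])

# Unweighted averaging: every classifier contributes equally, which keeps the
# aggregation transparent and easy to explain in a clinical setting
p_ensemble = (p_densenet + p_convnext) / 2.0

participant_prob = p_ensemble.mean()
print(f'participant-level P(fail) = {participant_prob:.2f}')
print('screen result:', 'fail' if participant_prob &gt;= 0.5 else 'pass')
&lt;/code&gt;&lt;/pre&gt;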

&lt;p&gt;&lt;em&gt;Architectural Advantages of DenseNet and ConvNext&lt;/em&gt;&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;&lt;strong&gt;DenseNet:&lt;/strong&gt; Uses dense feed-forward connections, with each layer receiving the feature maps of the layers before it, leading to improved feature propagation and a lower number of parameters. This architecture has shown superior performance in applications involving computer vision for audio signals.&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;ConvNext:&lt;/strong&gt; Aims to integrate some of the advantages of vision transformers, like larger kernel sizes and improved training techniques, with fewer parameters. It offers classification performance comparable to vision transformers but is more suitable for smaller datasets due to reduced computational complexity and overfitting risk.&lt;/li&gt;
&lt;/ul&gt;
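
&lt;p&gt;To put the parameter-count comparison in perspective, the short snippet below counts trainable parameters for torchvision’s DenseNet-121, ConvNeXt-Tiny, and ViT-B/16 implementations (instantiated without downloading pretrained weights); exact counts can vary slightly across torchvision versions.&lt;/p&gt;

&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;
from torchvision import models

def n_params(model):
    '''Count trainable parameters.'''
    return sum(p.numel() for p in model.parameters() if p.requires_grad)

# weights=None skips downloading pretrained checkpoints; the architectures
# (and therefore the parameter counts) are the same either way.
print(f'DenseNet-121 : {n_params(models.densenet121(weights=None)) / 1e6:.1f} M params')
print(f'ConvNeXt-Tiny: {n_params(models.convnext_tiny(weights=None)) / 1e6:.1f} M params')
print(f'ViT-B/16     : {n_params(models.vit_b_16(weights=None)) / 1e6:.1f} M params')
&lt;/code&gt;&lt;/pre&gt;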

&lt;p&gt;&lt;em&gt;Discussion: A Step Towards Futuristic Screening&lt;/em&gt;
This study is pioneering in applying deep learning to vocalization analysis for dysphagia screening post-stroke. The method shows potential for integration into resource-limited environments, telehealth services, and other assistive technologies. We believe in open source, as it may have applications in low-resource settings.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Limitations of the Study&lt;/em&gt;&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;
    &lt;p&gt;Dataset Size and Overfitting Concerns - The study faced limitations primarily due to the small dataset size, which potentially introduces overfitting risks and limits the generalizability of the models. While the use of real-world audio data collected in clinical settings does enhance some aspects of generalizability and facilitates adoption by other centers, the small dataset size remains a significant concern. To address these issues, the study implemented robust model evaluation strategies, including early stopping during model training and using chronologically separated training and test datasets to better mimic real-world multi-cohort testing. Future work aims to include larger datasets with more diverse patient groups, such as non-English speakers, to enhance the models’ applicability and reliability.&lt;/p&gt;
  &lt;/li&gt;
  &lt;li&gt;
    &lt;p&gt;Challenges with CNNs and Spectrograms - Another significant limitation recognized in this study is the challenges associated with applying Convolutional Neural Networks (CNNs), originally trained on image datasets, to spectrograms. Spectrograms and images operate within fundamentally different parameter spaces, characterized by axes of frequency, time, and power. This distinction poses unique challenges, as spectrograms embody the non-local spectral properties of sound and its inherent temporal nature. Although CNNs are extensively used for audio signal analysis, the fundamental differences between these data types add to the complexity of the task. The study acknowledges the inherent variability and limitations of real-world patient data, CNNs, and their ability to classify complex pathologies such as dysphagia within these constraints.&lt;/p&gt;
  &lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;em&gt;Implications and Future Directions&lt;/em&gt;&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;&lt;strong&gt;Non-invasive and Rapid:&lt;/strong&gt; Offers a scalable, non-invasive approach to screening, reducing the reliance on subjective assessments. This can be deployed on a mobile device.&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;Telehealth and Remote Applications:&lt;/strong&gt; Ideal for telehealth scenarios, particularly in the post-pandemic healthcare landscape.&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;Future Research:&lt;/strong&gt; Larger, more diverse datasets and refined methodologies could further enhance the model’s accuracy and applicability.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;em&gt;Conclusion&lt;/em&gt;
This innovative approach to dysphagia screening post-stroke using deep learning models presents a significant leap in medical technology. It holds the promise of democratizing access to efficient, less subjective screening methods, potentially transforming patient management and care outcomes in stroke care and rehabilitation.&lt;/p&gt;

&lt;p&gt;&lt;a href=&quot;https://www.frontiersin.org/articles/10.3389/fnins.2023.1302132/full&quot;&gt;Link to the full manuscript: Machine-learning assisted swallowing assessment: a deep learning-based quality improvement tool to screen for post-stroke dysphagia&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href=&quot;/posts/&quot;&gt;Back to posts…&lt;/a&gt;&lt;/p&gt;
</description>
        <pubDate>Thu, 23 Nov 2023 14:00:00 +0000</pubDate>
        <link>https://neuroccm.org/ml,/tech,/stroke/2023/11/23/project-masa.html</link>
        <guid isPermaLink="true">https://neuroccm.org/ml,/tech,/stroke/2023/11/23/project-masa.html</guid>
        
        
        <category>ML</category>
        
        <category>Tech</category>
        
        <category>Stroke</category>
        
      </item>
    
      <item>
        <title>Mentorship</title>
        <description>&lt;p&gt;&lt;a href=&quot;/posts/&quot;&gt;Back to posts…&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I had the privilege of participating in the &lt;a href=&quot;https://ml4h.cc/2023/career_mentorship.html&quot;&gt;ML4H Mentorship session&lt;/a&gt;, as one of three panelists.&lt;/p&gt;

&lt;p&gt;It is about the human capital in your organization. It’s always about people and empowering them. When selecting a career path, remember:&lt;/p&gt;

&lt;ol&gt;
  &lt;li&gt;The search starts within - your core values; then, reflecting on those values, select a path forward.&lt;/li&gt;
  &lt;li&gt;Regardless of the path selected: (a) all paths ultimately have value, and wins &amp;amp; fails are equally helpful; (b) allow yourself chapters in life - there is no obligation to always keep doing the same thing. #reflections on #mentorship&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;em&gt;Academia vs Industry Careers for PhD Students&lt;/em&gt;
Pursuing an academic career versus one in industry are two common options for PhD students. There are several key differences to consider when deciding between these paths.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Academic Careers&lt;/em&gt;
Academic careers typically involve research and teaching roles at universities or research institutes. The path includes postdoctoral positions before becoming a professor. There is high competition for tenured faculty jobs, with fewer than one in four life sciences PhD holders ultimately getting one. This is different, of course, for machine learning/AI researchers and PhDs - these are interesting times!&lt;/p&gt;

&lt;p&gt;The pros of an academic career include the ability to focus on research, work on problems you find intellectually compelling, have flexibility in your schedule, interact with students, and collaborate easily with other researchers. However, there is pressure to continually publish and apply for grants to fund your research. Careers also typically involve longer working hours and relatively lower pay compared to industry.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Industry Careers&lt;/em&gt;
Industry careers include roles at companies ranging from large pharmaceutical/biotech corporations to startups. Common positions include research scientist, project manager, medical science liaison, and consulting roles. The work is more focused on developing products and innovations.
Industry careers tend to offer higher pay, better job security, and more structured work schedules compared to academia. However, you have less flexibility and control over the research problems you work on. There can also be intense pace and pressure to meet deadlines. Publishing research is less critical for advancement compared to academia.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Why a PhD?&lt;/em&gt;
One of the key benefits of pursuing a PhD is the unique opportunity it provides to go incredibly deep on a specific topic over an extended period. This level of depth is rarely feasible at other times in one’s career.&lt;/p&gt;

&lt;p&gt;Principles of picking a supervisor - before starting a PhD:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;in-person meeting, human factor/connection&lt;/li&gt;
  &lt;li&gt;ask former lab members, interview back a few generations&lt;/li&gt;
  &lt;li&gt;objectively - look at the lab’s publication record - is everyone flourishing or just a few folks&lt;/li&gt;
  &lt;li&gt;culture of the lab - balance between work and life&lt;/li&gt;
  &lt;li&gt;when checking on previous graduates/lab members, see if they landed jobs - how did they do&lt;/li&gt;
  &lt;li&gt;does the lab/PI have funding? A rising tide can lift all boats!&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;During my/the PhD, IMHO, some recommendations:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;work hard&lt;/li&gt;
  &lt;li&gt;stay focused&lt;/li&gt;
  &lt;li&gt;have fun&lt;/li&gt;
  &lt;li&gt;be collaborative&lt;/li&gt;
  &lt;li&gt;get papers out&lt;/li&gt;
  &lt;li&gt;use your papers to write a sandwich-type write-up; Intro/Discussion, and papers in between&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Finally - it’s the journey and not the destination. In life, you are allowed to have chapters! Go write your book.&lt;/p&gt;

&lt;p&gt;&lt;a href=&quot;/posts/&quot;&gt;Back to posts…&lt;/a&gt;&lt;/p&gt;
</description>
        <pubDate>Wed, 22 Nov 2023 04:30:00 +0000</pubDate>
        <link>https://neuroccm.org/ml,/tech/2023/11/22/mentorship2.html</link>
        <guid isPermaLink="true">https://neuroccm.org/ml,/tech/2023/11/22/mentorship2.html</guid>
        
        
        <category>ML</category>
        
        <category>Tech</category>
        
      </item>
    
      <item>
        <title>Watch the beta-blockers</title>
        <description>&lt;p&gt;&lt;a href=&quot;/posts/index.html&quot;&gt;Back to posts…&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href=&quot;https://twitter.com/neuroccm/status/1629906707005440014&quot;&gt;See the related post on X/Twitter&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;I posted about this previously - it did not get much traction :)
I am sure someone will go ahead and refute or prove this in a trial or by looking back at the data.&lt;/p&gt;

&lt;p&gt;My clinical observation has been that the presence of beta-blockers, used for another indication (e.g., HTN), lowers the probability of detecting A Fib in a patient who presents with ESUS. Given the results of the recent ARCADIA trial (&lt;a href=&quot;https://www.medscape.com/viewarticle/992525&quot;&gt;read more here&lt;/a&gt;), there is a suggestion that some markers of atrial cardiopathy may not be sufficient to identify those who may benefit from anticoagulation - note I don’t mean all biomarkers; some are more useful than others, and some may cloud detection ability in a clinical trial. However, I suspect the presence of beta-blockers also modulates the detection of A Fib. In patients with LA dilatation (best assessed via the LAVI, the left atrial volume index) and embolic stroke who are on a beta-blocker, I suspect the probability of A Fib is actually high, and the search should be on to find it.&lt;/p&gt;

&lt;p&gt;Beta-blockers can reduce the probability of detection - I cannot find any papers that directly show this, but I have found it to be the case clinically over and over again. I also cannot reliably find reporting of beta-blocker use in large A Fib detection trials. There is some biologic plausibility; here is some “smoke” looking at the HR distributions with beta-blockers on board (&lt;a href=&quot;https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9996284/&quot;&gt;see this&lt;/a&gt;). The striking point about this paper, and the other papers cited there, is that they show the complexity of A Fib: it is more than just atriopathy, and certainly involves HFpEF, chronic HTN, the size and shape of the LAA, time in and out of A Fib, and even how medications used for HTN can modify intra-cardiac pressures to promote the development of A Fib. It is very intriguing, and it shows how no simple algebraic score can capture the true risk of A Fib or of stroke from A Fib; as always, our models are approximations of what actually occurs in nature. I am sure we will see more on this topic in the future! Given how complex A Fib is, it is not surprising that its complexity cannot be captured via a simple algebraic sum of terms.&lt;/p&gt;

&lt;p&gt;&lt;a href=&quot;/posts/index.html&quot;&gt;Back to posts…&lt;/a&gt;&lt;/p&gt;
</description>
        <pubDate>Mon, 10 Jul 2023 01:23:00 +0000</pubDate>
        <link>https://neuroccm.org/computing/2023/07/10/Watch_the_beta-blockers.html</link>
        <guid isPermaLink="true">https://neuroccm.org/computing/2023/07/10/Watch_the_beta-blockers.html</guid>
        
        
        <category>Computing</category>
        
      </item>
    
  </channel>
</rss>
