<p dir="ltr">Parkinson’s Disease (PD) is one of the fastest-growing neurodegenerative disorders globally, and early detection remains a significant challenge because clinically visible symptoms are absent in the initial stages. In real-world healthcare settings, leveraging artificial intelligence (AI) for early diagnosis requires careful handling of sensitive medical and identity data, which often restricts the deployment of traditional machine learning solutions. To address these challenges, we present an industry-focused, scalable framework that combines Federated Learning (FL) with Explainable AI (XAI) to enable privacy-preserving, interpretable diagnostic support for PD using MRI data. By distributing model training across multiple healthcare sites without sharing patient data, the proposed system ensures compliance with privacy regulations while maintaining high diagnostic performance. In addition, the integration of XAI provides medical professionals with transparent, model-driven insights, fostering greater trust and adoption in clinical environments. We evaluate multiple deep learning models, including VGG16, InceptionV3, and EfficientNet, and achieve a top accuracy of 96.84% with a fine-tuned VGG16 in the federated setting. This work demonstrates the practical potential of deploying privacy-preserving and explainable AI systems in real-world healthcare scenarios, paving the way for responsible and scalable AI adoption across medical institutions.</p>