Title: AI-Driven Fault-Tolerant ETL Pipelines for Enhanced Data Integration and Quality
Authors: Wickramaarachchi, C.K.; Perera, S.K.; Thelijjagoda, S.
Date: 2026-03-20; Year: 2025
ISBN: 979-833152326-8
URI: https://rda.sliit.lk/handle/123456789/4864
DOI: 10.1109/SCSE65633.2025.11031076
Type: Article
Language: en
Keywords: AI-driven data integration; data cleaning; data standardization; ETL; fault tolerance

Abstract: The reliability and fault tolerance of ETL (Extract, Transform, Load) pipelines are essential for maintaining data integrity in corporate environments. Traditional ETL systems often depend on manual intervention to resolve data inconsistencies, leading to errors, inefficiencies, and increased operational costs. This study introduces an AI-driven framework designed to improve the fault tolerance of ETL processes by automating data cleaning, standardization, and integration tasks. By applying machine learning models, the framework reduces the need for human intervention, enhances data quality, and supports scalability across various data formats. Evaluated on real-world data sets, the proposed solution demonstrates its ability to improve operational efficiency and reduce errors within corporate data pipelines. This research addresses a crucial gap in ETL automation, offering a scalable and proactive approach to robust data integration in large-scale corporate settings. The findings highlight the framework's ability to improve fault tolerance, raise data quality, and give organizations a competitive advantage in managing complex data ecosystems.
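The abstract describes automating data cleaning and standardization so that faulty records no longer require manual intervention or abort the pipeline. The paper's actual ML models are not reproduced here; the sketch below only illustrates the general fault-tolerance pattern (standardize heterogeneous inputs, impute missing values, and quarantine unrecoverable records instead of failing the whole load). All names (`standardize_date`, `etl`, the date formats, the mean-imputation stand-in) are illustrative assumptions, not the authors' method.

```python
from datetime import datetime
from statistics import mean

# Assumed input date variants for illustration only.
DATE_FORMATS = ("%Y-%m-%d", "%d/%m/%Y", "%b %d, %Y")

def standardize_date(raw):
    """Normalize heterogeneous date strings to ISO 8601."""
    for fmt in DATE_FORMATS:
        try:
            return datetime.strptime(raw.strip(), fmt).strftime("%Y-%m-%d")
        except ValueError:
            continue
    raise ValueError(f"unparseable date: {raw!r}")

def etl(records):
    """Transform step with per-record error isolation (quarantine, not abort)."""
    loaded, quarantined = [], []
    amounts = [r["amount"] for r in records if r.get("amount") is not None]
    # Mean imputation stands in for the learned cleaning model of the paper.
    fill = mean(amounts) if amounts else 0.0
    for rec in records:
        try:
            loaded.append({
                "id": rec["id"],
                "date": standardize_date(rec["date"]),
                "amount": rec["amount"] if rec.get("amount") is not None else fill,
            })
        except (KeyError, ValueError) as exc:
            quarantined.append((rec, str(exc)))  # isolate the fault, keep going
    return loaded, quarantined

rows = [
    {"id": 1, "date": "2025-01-05", "amount": 10.0},
    {"id": 2, "date": "05/01/2025", "amount": None},   # missing value -> imputed
    {"id": 3, "date": "not a date", "amount": 7.5},    # bad record -> quarantined
]
ok, bad = etl(rows)
```

The key design point matching the abstract is that a single malformed record lands in a quarantine list for later review rather than forcing a human to halt and repair the batch by hand.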