Machine Learning (ML) models have demonstrated outstanding performance in predicting essential parameters of the carbon capture process and can support a better understanding of the relationships among those parameters. Their effectiveness in accurately processing and analyzing large volumes of data is well established. However, these models often function as "black boxes" whose internal reasoning is hidden, and practitioners struggle to understand how specific inputs lead to particular outputs. This lack of transparency is a barrier to the wider adoption of ML in sectors such as healthcare, finance, heavy industry, and law, where decisions must be transparent and justifiable. Increasing the transparency of ML models is therefore essential for their broader adoption. One possible solution is to incorporate Explainable Artificial Intelligence (XAI) techniques, which aim to clarify the decision-making processes of the models. For instance, feature importance metrics and attention mechanisms can identify the inputs that contribute most to a prediction. By shedding light on the inner mechanisms of ML models, such techniques help practitioners build understanding of, and confidence in, ML technologies. This paper presents a method designed to enhance the transparency of ML models. Our approach combines advanced modelling techniques with model explanation methods, namely the Decision Trees Ensemble (DTE) and the Tree-Based Local Interpretable Model-agnostic Explanation (LIMETree), to make predictions more understandable for practitioners. To validate the approach, a new dataset was generated from a ProMax simulation in which the contactor specifications were derived from existing carbon dioxide capture units in North America. Following the method proposed by Wang et al. [1], we first developed an accurate correlation model of the relationships among parameters in the carbon capture process using a DTE, a Generative Adversarial Network (GAN), and Principal Feature Analysis (PFA). We then applied the LIMETree method to interpret the DTE model's predictions.
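To illustrate the general flavour of the explanation pipeline described above, the sketch below fits a gradient-boosted tree ensemble as a stand-in for the DTE and produces a LIME-style local explanation of a single prediction. This is a minimal illustration only, not the implementation used in this work: the feature names, synthetic data, and the use of scikit-learn and the standard lime package (in place of LIMETree) are all assumptions made for the example.

```python
# Minimal sketch (not the paper's implementation): a gradient-boosted tree
# ensemble stands in for the DTE, and the standard `lime` package provides a
# LIME-style local explanation in place of LIMETree. Feature names and the
# synthetic data are illustrative assumptions, not the ProMax-derived dataset.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from lime.lime_tabular import LimeTabularExplainer

rng = np.random.default_rng(0)

# Hypothetical process features (assumed names).
feature_names = ["flue_gas_flow", "solvent_flow", "inlet_CO2_frac", "contactor_temp"]
X = rng.uniform(0.0, 1.0, size=(500, len(feature_names)))
y = 0.6 * X[:, 1] - 0.3 * X[:, 3] + 0.1 * rng.normal(size=500)  # toy target

# Tree ensemble as a stand-in for the DTE model.
model = GradientBoostingRegressor(n_estimators=200, max_depth=3).fit(X, y)

# Global view: impurity-based feature importances from the ensemble.
for name, imp in zip(feature_names, model.feature_importances_):
    print(f"{name:>16s}: {imp:.3f}")

# Local view: LIME-style explanation of one prediction,
# returned as (feature condition, local weight) pairs.
explainer = LimeTabularExplainer(X, feature_names=feature_names, mode="regression")
explanation = explainer.explain_instance(X[0], model.predict, num_features=4)
print(explanation.as_list())
```

The global importances summarize which inputs the ensemble relies on overall, while the local explanation indicates how each input pushed one particular prediction up or down, which is the kind of per-prediction transparency the proposed approach targets.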