An initiative of HKV Lijn in Water and Rijkswaterstaat
The use of machine learning in the water sector has increased significantly in recent years, opening up opportunities to answer key questions in water management. At the same time, many people question the reliability of the results: how can you trust a model that is not transparent about how it arrives at its calculations? The DigiShape seedmoney 2023 project ‘Explainable AI’ aims to change this.
In a salt intrusion project being carried out at Rijkswaterstaat, several ways have been found to make the outcome of a machine learning model more transparent. For example, it is important to visualize not only the results, but also how the model arrives at them.
We use Explainable AI techniques that show which input factors matter most. One approach is to change one variable at a time and observe how the model's output responds. This reveals how strongly, and in which direction, a factor influences the results of the model, and whether that relationship shifts in certain periods. Displaying this in graphs gives more insight into what is happening inside the black box.
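The one-variable-at-a-time approach described above can be sketched as follows. The model function and input names here are illustrative stand-ins chosen for this example, not the project's actual salt intrusion model:

```python
import numpy as np

# Hypothetical stand-in for a trained model that predicts salt
# concentration from river discharge (m3/s) and tidal range (m).
# This is NOT the project's actual model, just an illustration.
def model(discharge, tidal_range):
    return 10.0 / (1.0 + 0.01 * discharge) + 0.5 * tidal_range

def one_at_a_time(model, grid, tidal_range_fixed):
    """Sweep one input over a grid while holding the other input fixed."""
    return np.array([model(d, tidal_range_fixed) for d in grid])

# Vary discharge only, with the tidal range pinned at a reference value.
discharges = np.linspace(100.0, 1000.0, 10)
response = one_at_a_time(model, discharges, tidal_range_fixed=2.0)

# Plotting `discharges` against `response` gives exactly the kind of
# graph the text describes: the size and direction of one factor's
# influence on the model outcome.
```

In this sketch the response falls as discharge rises, which is the sort of directional relationship such a graph makes visible at a glance.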
This seedmoney project investigates whether further methods can provide insight, for example in situations with highly correlated factors, where Explainable AI techniques become more complex. Based on these techniques, a tool has been developed with which you, as a user, can move quickly and interactively from complex model results to direct insights, without needing any knowledge of AI.
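To see why highly correlated factors complicate these techniques, consider this small sketch on synthetic data (not data from the project): sweeping one of two correlated inputs while pinning the other produces input combinations the model never saw during training, so its response there is unreliable.

```python
import numpy as np

rng = np.random.default_rng(42)

# Two strongly correlated inputs, e.g. discharge measured at two nearby
# gauges (synthetic, illustrative data).
x1 = rng.normal(500.0, 100.0, 1000)
x2 = x1 + rng.normal(0.0, 10.0, 1000)

corr = np.corrcoef(x1, x2)[0, 1]  # close to 1 for this data

# A one-at-a-time scan sweeps x1 while pinning x2 at its mean. With
# correlated inputs, an extreme x1 combined with an average x2 is a
# combination that essentially never occurs in the training data.
x1_probe = x1.max()          # extreme value of the swept input
x2_pinned = x2.mean()        # the pinned input
expected_x2 = x1_probe       # x2 tracks x1, so a realistic x2 is near x1
gap = abs(expected_x2 - x2_pinned)  # how unrealistic the probe point is
```

A large `gap` flags an out-of-distribution query: the one-at-a-time graph at that point reflects extrapolation rather than learned behavior, which is why correlated factors call for more careful techniques.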
More information can be found in the interview with Paula Lambregts and Thomas Stolp from HKV.
Report and code
- The interactive tool from Salty Solutions is available on request for expert sessions.