Natural and Artificial Intelligence Lab

Marharyta Domnich

Generating explanations from AI models is essential for building trust in a model's decisions, especially in domains such as healthcare, finance, and criminal justice, where the consequences of incorrect decisions are severe. Explanations can help identify problems or biases in the model and directly support model debugging.

Marharyta is participating in the EU-funded TRUST-AI project, which aims to build a Transparent, Reliable and Unbiased Smart Tool that produces model decisions together with interactive explanations. The consortium is building a platform that delivers explainable decisions for healthcare, online retail, and energy use cases.

marharyta.domnich@ut.ee

Explainable AI and Counterfactual Explanations
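A counterfactual explanation answers "what would need to change in the input for the model's decision to flip?" The sketch below illustrates the idea on a hypothetical loan-approval setting: the model, feature names, weights, and the greedy search are all illustrative assumptions, not part of the TRUST-AI platform.

```python
import numpy as np

# Toy loan-approval model: a hand-made linear classifier.
# Feature names, weights, and thresholds are illustrative assumptions.
WEIGHTS = np.array([0.6, 0.4])   # [income, credit_score], both scaled to [0, 1]
BIAS = -0.55

def predict(x):
    """Return 1 (approve) if the linear score is positive, else 0 (reject)."""
    return int(x @ WEIGHTS + BIAS > 0)

def counterfactual(x, step=0.01, max_steps=200):
    """Greedy counterfactual search for a rejected input: nudge the feature
    with the largest weight upward until the prediction flips, yielding a
    small 'what would need to change' suggestion."""
    x_cf = x.copy()
    original = predict(x)
    feature = int(np.argmax(np.abs(WEIGHTS)))  # most influential feature
    for _ in range(max_steps):
        if predict(x_cf) != original:
            return x_cf
        x_cf[feature] += step  # move toward approval
    return None  # no counterfactual found within the step budget

applicant = np.array([0.3, 0.5])   # a rejected applicant
print(predict(applicant))          # 0 (rejected)
cf = counterfactual(applicant)
print(predict(cf))                 # 1 (approved)
print(cf - applicant)              # the suggested change to the input
```

Real counterfactual methods optimise for proximity, sparsity, and plausibility of the changed input rather than using a single-feature greedy walk, but the core question — the smallest change that flips the decision — is the same.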