Trust, Security and Privacy RESEARCH UNIT

Trustworthy AI

Trustworthy AI encompasses the privacy, security, trust and ethics aspects of applying Artificial Intelligence in real-life contexts. Following the EU Commission proposal for “Harmonised Rules on Artificial Intelligence” (Artificial Intelligence Act, 21 April 2021), this emerging topic is gaining relevance as AI is increasingly used in contexts involving sensitive data and applications.

Our research activities focus on privacy-preserving and explainable data analysis. In particular, we design innovative models that enable effective, efficient and accurate AI-based data analysis while minimizing the disclosure of privacy-sensitive information to the parties involved in the analysis. Moreover, for different kinds of applications and data types, we design mechanisms that trade off privacy, accuracy and explainability of the performed analysis. Here explainability is essential to ensure that the analysis models do not base their decisions on unethical parameters.
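One widely used way to bound the disclosure of privacy-sensitive information during analysis is differential privacy. The sketch below is purely illustrative (not this unit's specific method): it perturbs a count query with Laplace noise whose scale is calibrated to the query's sensitivity and a privacy budget `epsilon`, making the privacy/accuracy trade-off explicit.

```python
import math
import random

def dp_count(values, predicate, epsilon: float) -> float:
    """Differentially private count: a count query has sensitivity 1
    (adding or removing one record changes the result by at most 1),
    so Laplace noise with scale 1/epsilon suffices."""
    true_count = sum(1 for v in values if predicate(v))
    # Inverse-CDF sampling from Laplace(0, 1/epsilon)
    u = random.random() - 0.5  # uniform in [-0.5, 0.5)
    noise = -(1.0 / epsilon) * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))
    return true_count + noise

# Illustrative data: how many patients are 65 or older?
ages = [23, 35, 41, 52, 67, 29, 74]
noisy = dp_count(ages, lambda a: a >= 65, epsilon=1.0)
```

A smaller `epsilon` injects more noise, giving stronger privacy at the cost of accuracy; a larger `epsilon` does the opposite.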

Together with privacy preservation, this ethical awareness aims to increase the adoption of AI-based data analysis solutions, in compliance with EU directives and the human-centered AI paradigm. Our research also covers trust, in particular when AI is used in distributed peer-to-peer environments and classification decisions depend on collaborative evaluations. To this end, we study and design dynamic, distributed trust mechanisms that adapt to Machine-to-Machine (M2M) zero-trust environments, hierarchical trust architectures and human-in-the-loop decision architectures.
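A common building block for this kind of dynamic trust evaluation is a beta-reputation score, where a peer accumulates positive and negative evidence about another peer and older evidence is aged out so the score can adapt over time. The sketch below is a generic illustration under an assumed forgetting factor, not this unit's specific design.

```python
from dataclasses import dataclass

@dataclass
class BetaReputation:
    """Beta-reputation trust score: `alpha` accumulates positive
    interactions, `beta` negative ones; the score is the expected
    probability that the peer behaves cooperatively."""
    alpha: float = 1.0   # uninformative prior: one pseudo-positive
    beta: float = 1.0    # uninformative prior: one pseudo-negative
    decay: float = 0.9   # assumed forgetting factor, ages out old evidence

    def update(self, positive: bool) -> None:
        """Record one interaction outcome, decaying past evidence first."""
        self.alpha = self.decay * self.alpha + (1.0 if positive else 0.0)
        self.beta = self.decay * self.beta + (0.0 if positive else 1.0)

    def score(self) -> float:
        """Expected trust in [0, 1]; 0.5 means no evidence either way."""
        return self.alpha / (self.alpha + self.beta)
```

Because evidence decays, a peer that suddenly starts misbehaving loses trust quickly, which is the property needed in M2M zero-trust settings where past good behavior must not grant indefinite credit.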