Explainable AI

Interpretable and Accountable Machine Learning

Many neural-network-based artificial intelligence applications are effectively “black boxes”: they cannot explain the reasoning behind the results they provide. They can make important decisions without offering detailed information about the reasoning that leads to those decisions or predictions. In critical applications such as healthcare, any AI-based decision will need to be interpretable, trustworthy, and traceable.

Consequently, it will be essential that the next generation of Artificial Intelligence (AI) systems offer a high degree of transparency, accountability, and trustworthiness. This capability is referred to as Explainable AI (XAI), and it will be mandatory in next-generation AI systems in order to meet ethical, legal, and regulatory standards.
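To make the idea of XAI concrete, the sketch below shows one widely used, model-agnostic explanation technique: permutation importance, which measures how much a model's accuracy drops when each input feature is shuffled. This is a generic illustration only; the toy model, data, and function names are invented for this example and do not describe Cyceera's method.

```python
import random

def accuracy(model, data, labels):
    """Fraction of examples the model classifies correctly."""
    return sum(model(x) == y for x, y in zip(data, labels)) / len(data)

def permutation_importance(model, data, labels, n_features, rng):
    """Shuffle one feature at a time; a large accuracy drop means the
    model relies on that feature (a simple, model-agnostic explanation)."""
    base = accuracy(model, data, labels)
    drops = []
    for f in range(n_features):
        col = [x[f] for x in data]
        rng.shuffle(col)  # destroy any relationship between feature f and labels
        permuted = [x[:f] + [v] + x[f + 1:] for x, v in zip(data, col)]
        drops.append(base - accuracy(model, permuted, labels))
    return drops

# Hypothetical "black box" model: only feature 0 actually matters.
def model(x):
    return 1 if x[0] > 0.5 else 0

rng = random.Random(0)
data = [[rng.random(), rng.random()] for _ in range(200)]
labels = [1 if x[0] > 0.5 else 0 for x in data]

importances = permutation_importance(model, data, labels, 2, rng)
print(importances)  # feature 0 shows a clear drop; feature 1 shows none
```

Shuffling the decisive feature degrades accuracy sharply, while shuffling the irrelevant one leaves it unchanged, so the importances expose which inputs actually drive the model's decisions.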

Neuromorphic computing represents the third generation of neural networks: spiking neural networks that are biologically plausible and fundamentally different from conventional neural networks and related AI accelerators. Compared with conventional Artificial Neural Networks (ANNs), neuromorphic systems can provide higher computational power and can learn and compute in ways that closely resemble a biological brain.
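A minimal sketch of the spiking behaviour that distinguishes third-generation networks is the leaky integrate-and-fire (LIF) neuron: membrane potential leaks toward rest, accumulates input current, and emits a discrete spike on crossing a threshold. All constants and names below are illustrative textbook values, not parameters of Cyceera's technology.

```python
def lif_simulate(input_current, v_rest=0.0, v_thresh=1.0, tau=10.0, dt=1.0):
    """Simulate one leaky integrate-and-fire neuron.

    Returns the list of time-step indices at which the neuron spiked.
    """
    v = v_rest
    spikes = []
    for t, i_in in enumerate(input_current):
        # Leaky integration: potential decays toward rest while
        # accumulating the injected input current.
        v += (-(v - v_rest) + i_in) * (dt / tau)
        if v >= v_thresh:    # threshold crossed: emit a spike...
            spikes.append(t)
            v = v_rest       # ...and reset the membrane potential
    return spikes

# Constant supra-threshold input produces a regular spike train.
spike_times = lif_simulate([1.5] * 100)
print(spike_times)  # evenly spaced spike indices
```

Unlike an ANN unit, which outputs a continuous activation every step, this neuron communicates only through the timing of sparse, discrete events, which is the property neuromorphic hardware exploits.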

Designed from the ground up with explainability in mind, Cyceera’s Neuromorphic technology provides the building blocks for implementing a comprehensive range of XAI functionality.