AI can be powerful, AI can be disruptive, and AI can be incredibly dangerous. Making AI more trustworthy is the key to sound human-machine interaction in the future, and this is where QuantPi comes in.
Today, AI is present in every conceivable industry, whether it is handling a significant part of autonomous driving or informing decisions when hiring new talent. But what if a model's decision leads to severe consequences, from financial loss to physical harm?
Recent examples, such as a racist chatbot, a crashing autopilot, or a discriminatory mortgage algorithm, show how diverse and serious these problems can become. A responsible approach to artificial intelligence must be established. The magic word in this context: eXplainable AI (XAI).
Explaining the unknown
XAI sheds light on the black box of advanced machine learning algorithms. The essence of "explainability" is probing AI models with countless input variations to obtain explanations of what happens inside them. The modeling techniques behind many AI applications today, such as deep learning and neural networks, are inherently difficult for humans to understand. The solution is not simply to communicate better how a system works; rather, it is to build tooling that helps experts understand a model's results and then explain them to others.
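The probing idea described above can be sketched in a few lines. This is a minimal, illustrative example of perturbation-based explanation, not QuantPi's actual method: the model, its features, and the helper names are all hypothetical.

```python
# Minimal sketch of the perturbation idea behind many XAI methods:
# probe a black-box model with input variations and measure how much
# each feature shifts the prediction. All names are illustrative.

def black_box_model(features):
    # Stand-in for an opaque model: a made-up credit-scoring function.
    income, debt, age = features
    return 0.6 * income - 0.8 * debt + 0.1 * age

def perturbation_importance(model, baseline, delta=1.0):
    """Estimate each feature's influence by nudging it by `delta`
    and observing the change in the model's output."""
    base_score = model(baseline)
    importances = []
    for i in range(len(baseline)):
        perturbed = list(baseline)
        perturbed[i] += delta
        importances.append(abs(model(perturbed) - base_score))
    return importances

scores = perturbation_importance(black_box_model, [50.0, 20.0, 35.0])
# Larger values mean the feature moves the decision more; here the
# debt feature dominates.
```

Real explainability tooling applies the same principle at scale, with far more careful sampling and statistics, but the core loop of "perturb, re-predict, compare" is the same.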
Eliminating uncertainty with QuantPi
QuantPi has developed its world-leading XAI technology, PiCrystal, through years of research. Its algorithms are well calibrated and deliver quadratic gains in efficiency. By bringing transparency to AI decisions, the platform helps organizations identify, assess, and mitigate the legal, commercial, ethical, and reputational risks associated with their AI solutions. It integrates seamlessly with modern ML and BI tools.
First, the QuantPi platform collects and calculates metrics within key audit dimensions such as data quality, model performance, fairness, explainability, and robustness. Second, it translates this information into project summaries, technical documentation, and risk and audit-readiness reports.
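To make the "fairness" audit dimension concrete, here is one widely used metric, demographic parity difference: the gap in favorable-outcome rates between two groups. This is a generic illustration under assumed toy data; QuantPi's actual metrics and their computation are not public.

```python
# Illustrative fairness metric: demographic parity difference, the
# absolute gap in positive-outcome rates between two groups.
# Data and function names are hypothetical.

def positive_rate(outcomes):
    """Share of favorable decisions (1s) in a list of 0/1 outcomes."""
    return sum(outcomes) / len(outcomes)

def demographic_parity_difference(group_a, group_b):
    """Absolute gap between the groups' favorable-decision rates.
    0.0 means perfectly balanced; larger values flag potential bias."""
    return abs(positive_rate(group_a) - positive_rate(group_b))

# 1 = loan approved, 0 = denied, one entry per applicant.
gap = demographic_parity_difference([1, 1, 0, 1], [1, 0, 0, 0])
# gap == 0.5: group A is approved 75% of the time, group B only 25%.
```

An audit platform would compute many such metrics per dimension and then roll them up into the summaries and readiness reports described above.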
The urgency and scale of the problem
While recent scientific work by Google DeepMind researchers already paints alarming scenarios about the potential power of AI, various governments are putting far-reaching regulations into very concrete terms.
Specifically, the EU Commission recently introduced its AI Liability Directive. Given that the use of AI often directly affects end users, this directive is highly relevant: it gives EU citizens the ability to sue companies in the event of damage.
The team that makes all this happen
After seven years of research, QuantPi was founded in 2020 by leading mathematicians, computer scientists, and economists as a spin-off of the renowned CISPA — Helmholtz Center for Information Security and Saarland University. The team around Philipp (CEO), Antoine (Chief Scientist), and Artur (CTO) combines technical excellence with an execution mentality.
Philipp successfully founded his first company back in 2018. In addition, Chief Scientist Antoine's world-leading experience and reputation have helped create high barriers to entry through QuantPi's excellent and defensible technology.
Why we invested in QuantPi
Despite QuantPi's early stage, we are convinced of the urgency of the topic and of the team's structured approach to targeting individual industries. Although the company initially focuses on industries where the pressure is highest, such as finance and healthcare, the market's potential for growth is effectively unlimited. AI is currently one of the fastest-growing meta-markets in the world, and the XAI security component will be a huge submarket of that space. With QuantPi, we have the opportunity to build a category leader from Germany with defensible, leading technology.
We at Capnamic are very happy to be part of a truly deep-tech subject and delighted to accompany the talented founding team on their way to establishing a safer future in the world of AI.