
What is explainable AI (XAI)?

by Press Room
May 9, 2023
in Explained

XAI involves designing AI systems that can explain their decision-making process through various techniques. XAI should enable external observers to better understand how the output of an AI system comes about and how reliable it is. This is important because AI may bring about direct and indirect adverse effects that impact individuals and societies.
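The “various techniques” behind XAI range from inherently interpretable models to post-hoc methods applied to an already trained system. As one illustration, the sketch below uses permutation feature importance from scikit-learn on synthetic data; the model, the feature names and the data are hypothetical stand-ins, intended only to show the general shape of such an explanation.

```python
# A minimal sketch of one post-hoc XAI technique: permutation feature
# importance with scikit-learn. All names and data here are hypothetical.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
feature_names = ["income", "household_size", "benefit_history", "noise"]

# Synthetic data: the label depends mainly on the first two features.
X = rng.normal(size=(1000, 4))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature in turn and measure how much test accuracy drops:
# large drops indicate features the model actually relies on.
result = permutation_importance(model, X_test, y_test, n_repeats=20,
                                random_state=0)
for name, score in sorted(zip(feature_names, result.importances_mean),
                          key=lambda pair: -pair[1]):
    print(f"{name:>16}: {score:.3f}")
```

An explanation of this kind lets an outside observer check, at least at a coarse level, which inputs actually drive the system’s outputs.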

Just as it can be difficult to pin down what AI comprises, explaining its results and inner workings can be daunting, especially where deep-learning systems come into play. For non-engineers, a helpful picture is that these systems rely at their core on layered networks of simple computational units, loosely modeled on the neural connections of the human brain.

The neural networks that facilitate AI’s decision-making are often called “deep learning” systems. It is debated to what extent decisions reached by deep learning systems are opaque or inscrutable, and to what extent AI and its “thinking” can and should be explainable to ordinary humans.

Scholars debate whether deep learning systems are truly black boxes or can be made transparent, but the general consensus is that most decisions should be explainable to some degree. This is significant because the deployment of AI systems by state or commercial entities can negatively affect individuals, making it crucial that these systems are accountable and transparent.

For instance, the Dutch Systeem Risico Indicatie (SyRI) case is a prominent example of the need for explainable AI in government decision-making. SyRI was an automated decision-making system developed by Dutch semi-governmental organizations that combined personal data from multiple sources to identify potential fraud through opaque processes that were later characterized as a black box.

The system came under scrutiny for its lack of transparency and accountability, with national courts and international bodies finding that it violated privacy and other human rights. The SyRI case illustrates how governmental AI applications can harm people by replicating and amplifying bias and discrimination: SyRI unfairly targeted vulnerable individuals and communities, such as low-income and minority populations.

SyRI aimed to find potential social welfare fraudsters by labeling certain people as high-risk. As a fraud-detection system, it was deployed only to analyze people in low-income neighborhoods, since such areas were considered “problem” zones. Because the state ran SyRI’s risk analysis only in communities that were already deemed high-risk, it is no surprise that more high-risk citizens were found there, relative to neighborhoods that were never screened.
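To see how the deployment pattern alone can produce this result, consider a purely illustrative simulation; the fraud rate and population sizes below are assumed numbers, not figures from the SyRI case. Two neighborhoods share an identical underlying fraud rate, but only one of them is screened.

```python
# Illustrative simulation of selection bias in deployment. The fraud rate
# and population sizes are assumed numbers, not taken from the SyRI case.
import random

random.seed(42)

TRUE_FRAUD_RATE = 0.02            # identical in both neighborhoods
RESIDENTS = 10_000                # per neighborhood

def detected_cases(screened: bool) -> int:
    """Count residents flagged as high-risk; detection requires screening."""
    hits = 0
    for _ in range(RESIDENTS):
        committed_fraud = random.random() < TRUE_FRAUD_RATE
        if screened and committed_fraud:
            hits += 1
    return hits

print("low-income neighborhood (screened):  ", detected_cases(True))
print("other neighborhood (never screened): ", detected_cases(False))
# Every detected case comes from the screened neighborhood even though the
# underlying rate is the same in both, so the resulting statistics appear
# to "confirm" that the screened area is the problem zone.
```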

This label, in turn, encouraged stereotyping and reinforced a negative image of the residents of those neighborhoods (even those who were never named in a risk report or who qualified as a “no-hit”), because the data entered comprehensive cross-organizational databases and was recycled across public institutions. The case illustrates that where AI systems produce unwanted adverse outcomes such as bias, those outcomes may go unnoticed if transparency and external oversight are lacking.

Besides states, private companies develop or deploy many AI systems in which transparency and explainability are outweighed by other interests. Although it can be argued that the structures enabling present-day AI would not exist in their current forms without past government funding, a significant and growing share of progress in AI is now privately funded. In fact, private investment in AI in 2022 was 18 times higher than in 2013.

Commercial AI “producers” are primarily accountable to their shareholders and may therefore be heavily focused on generating economic profits, protecting patent rights and preventing regulation. When the functioning of commercial AI systems is not transparent enough, and enormous amounts of data are privately hoarded to train and improve them, it becomes all the more important that outsiders can understand how such systems work.

Ultimately, the importance of XAI lies in its ability to provide insight into the decision-making process of AI models, enabling users, producers and monitoring agencies to understand how and why a particular outcome was reached.

This arguably helps to build trust in governmental and private AI systems. It increases accountability and makes it easier to detect whether AI models are biased or discriminatory. It also helps prevent low-quality or unlawfully obtained data from being recycled across the comprehensive cross-organizational databases that feed algorithmic fraud-detection systems in public institutions.
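As a final, hypothetical illustration of what “understanding how and why a particular outcome was reached” can look like in practice, the sketch below trains a shallow decision tree, an inherently interpretable model, and prints its decision rules. The feature names and toy data are assumptions made for demonstration only.

```python
# A hypothetical per-decision explanation using an inherently interpretable
# model: a shallow decision tree whose rules can be read and audited directly.
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(1)
feature_names = ["reported_income", "benefit_duration_months"]

# Toy data: the "high-risk" label follows a simple, known rule.
X = np.column_stack([
    rng.normal(30000, 8000, size=500),   # reported_income
    rng.normal(12, 6, size=500),         # benefit_duration_months
])
y = ((X[:, 0] < 25000) & (X[:, 1] > 15)).astype(int)

tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)

# The learned rules are printed as human-readable if/else conditions, the
# kind of trace a user or monitoring agency could inspect for each decision.
print(export_text(tree, feature_names=feature_names))
```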



