Question: 1

What is the primary goal of Explainable AI (XAI)?

Question: 2

Which of the following techniques is NOT commonly used in Explainable AI?

Question: 3

What does "model-agnostic" mean in the context of Explainable AI techniques?

Question: 4

Why is it important to consider the audience when designing explainable AI solutions?

Question: 5

What is the purpose of a feature visualization technique?

Question: 6

What is the purpose of a decision tree?

Question: 7

What is the purpose of a prototype instance?

Question: 8

What is a model-agnostic explanation?

Question: 9

What is a saliency map?

Question: 10

What is the purpose of a confusion matrix?

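As an illustration for this question: a confusion matrix tabulates true labels against predicted labels, so each cell counts one kind of correct or incorrect prediction. A minimal sketch in plain Python (the helper name `confusion_matrix` and the toy labels are illustrative, not scikit-learn's implementation):

```python
from collections import Counter

def confusion_matrix(y_true, y_pred, labels=(0, 1)):
    """Count (true, predicted) label pairs; rows are true labels,
    columns are predicted labels."""
    counts = Counter(zip(y_true, y_pred))
    return [[counts[(t, p)] for p in labels] for t in labels]

# Example: 2 true negatives, 1 false negative, 1 true positive.
cm = confusion_matrix([0, 1, 1, 0], [0, 1, 0, 0])
print(cm)  # [[2, 0], [1, 1]]
```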
Question: 11

What is the purpose of feature importance?

Question: 12

What is a gradient boosting machine (GBM)?

Question: 13

What is a random forest?

Question: 14

What is a global surrogate model?

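To illustrate this question: a global surrogate is an interpretable model trained to mimic a black box's predictions over the whole dataset, and its fidelity (agreement with the black box) tells you how far to trust the explanation. A minimal sketch, assuming a one-split decision stump as the surrogate and a hypothetical black box (all names here are illustrative):

```python
def fit_stump_surrogate(black_box, X):
    """Global surrogate sketch: fit a one-split decision stump on a
    single feature to mimic the black box's labels over dataset X."""
    labels = [black_box(x) for x in X]
    best = None
    for i in range(len(X[0])):          # candidate split feature
        for row in X:                   # candidate threshold values
            thresh = row[i]
            pred = [1 if x[i] >= thresh else 0 for x in X]
            acc = sum(p == l for p, l in zip(pred, labels)) / len(X)
            if best is None or acc > best[0]:
                best = (acc, i, thresh)
    return best  # (fidelity, feature index, threshold)

# Toy black box that secretly thresholds feature 1 at 10; the
# surrogate recovers that rule with fidelity 1.0.
bb = lambda x: 1 if x[1] >= 10 else 0
X = [[0, 5], [1, 10], [2, 15], [3, 8]]
print(fit_stump_surrogate(bb, X))  # (1.0, 1, 10)
```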
Question: 15

What is the purpose of a partial dependence plot?

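To illustrate this question: a partial dependence plot shows the average model prediction as one feature is swept over a grid while the other features keep their observed values. A minimal sketch of the computation behind the plot (the function name and the additive toy model are illustrative):

```python
def partial_dependence(model, X, feature_idx, grid):
    """For each grid value v, set feature `feature_idx` to v in every
    row of X and average the model's predictions."""
    pd_values = []
    for v in grid:
        preds = []
        for row in X:
            modified = list(row)
            modified[feature_idx] = v
            preds.append(model(modified))
        pd_values.append(sum(preds) / len(preds))
    return pd_values

# Toy model: prediction depends on both features; the partial
# dependence on feature 0 averages out feature 1.
model = lambda x: 2 * x[0] + x[1]
X = [[1, 0], [1, 2], [1, 4]]
print(partial_dependence(model, X, 0, [0, 1, 2]))  # [2.0, 4.0, 6.0]
```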
Question: 16

What is a decision boundary?

Question: 17

What is a prototype explanation?

Question: 18

What is a surrogate model?

Question: 19

What is an anchor explanation?

Question: 20

What is LIME?

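To illustrate this question: LIME explains one prediction by sampling points near the instance, weighting them by proximity, and fitting a simple interpretable model locally. A minimal single-feature sketch of that idea (the function name `lime_1d` and the x-squared "black box" are illustrative, not the LIME library's API):

```python
import math
import random

def lime_1d(black_box, x0, n_samples=200, kernel_width=1.0, seed=0):
    """LIME-style local explanation for a one-feature model: sample
    near x0, weight samples by proximity, and fit a weighted linear
    model whose slope is the local explanation."""
    rng = random.Random(seed)
    xs = [x0 + rng.gauss(0, 1) for _ in range(n_samples)]
    ys = [black_box(x) for x in xs]
    ws = [math.exp(-((x - x0) ** 2) / kernel_width ** 2) for x in xs]
    # Weighted least squares for y ~ a + b*x (closed form).
    sw = sum(ws)
    mx = sum(w * x for w, x in zip(ws, xs)) / sw
    my = sum(w * y for w, y in zip(ws, ys)) / sw
    num = sum(w * (x - mx) * (y - my) for w, x, y in zip(ws, xs, ys))
    den = sum(w * (x - mx) ** 2 for w, x in zip(ws, xs))
    slope = num / den
    intercept = my - slope * mx
    return intercept, slope

intercept, slope = lime_1d(lambda x: x * x, 3.0)
# slope is close to 6, the derivative of x**2 at x = 3
```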
Question: 21

What is SHAP?

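To illustrate this question: SHAP attributes a prediction to features using Shapley values from cooperative game theory, averaging each feature's marginal contribution over all coalitions. A minimal exact-enumeration sketch, feasible only for a handful of features (the function name and the additive toy game are illustrative, not the SHAP library):

```python
from itertools import combinations
from math import factorial

def shapley_values(value_fn, n_features):
    """Exact Shapley values by enumerating all coalitions.
    value_fn maps a set of feature indices to the model's value
    when only those features are 'present'."""
    phi = []
    for i in range(n_features):
        others = [j for j in range(n_features) if j != i]
        total = 0.0
        for size in range(len(others) + 1):
            for coalition in combinations(others, size):
                s = set(coalition)
                w = (factorial(len(s)) * factorial(n_features - len(s) - 1)
                     / factorial(n_features))
                total += w * (value_fn(s | {i}) - value_fn(s))
        phi.append(total)
    return phi

# Additive toy game: each feature contributes independently, so the
# Shapley values recover the per-feature contributions.
contrib = {0: 1.0, 1: 2.0, 2: 3.0}
v = lambda s: sum(contrib[j] for j in s)
phi = shapley_values(v, 3)  # approximately [1.0, 2.0, 3.0]
```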
Question: 22

What is the purpose of a saliency map?

Question: 23

What is a counterfactual explanation?

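To illustrate this question: a counterfactual explanation names the smallest change to an input that would flip the model's decision ("you would have been approved if your income were 5 higher"). A minimal breadth-first-search sketch over a toy one-feature model (all names and the loan scenario are illustrative):

```python
from collections import deque

def find_counterfactual(classify, x, desired, deltas, max_nodes=1000):
    """Breadth-first search over small per-feature nudges for the
    nearest (fewest nudges) input the model labels as `desired`."""
    start = tuple(x)
    seen = {start}
    queue = deque([start])
    while queue and max_nodes > 0:
        max_nodes -= 1
        cur = queue.popleft()
        if classify(list(cur)) == desired:
            return list(cur)
        for i in range(len(cur)):
            for d in deltas:
                nxt = list(cur)
                nxt[i] += d
                key = tuple(nxt)
                if key not in seen:
                    seen.add(key)
                    queue.append(key)
    return None  # no counterfactual found within the search budget

# Toy loan model: approve when income (the only feature) is >= 50.
classify = lambda feats: "approved" if feats[0] >= 50 else "denied"
cf = find_counterfactual(classify, [45], "approved", [5, -5])
print(cf)  # [50]
```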
Question: 24

What is a white-box model?

Question: 25

What is a gray-box model?

Question: 26

What is model interpretability?

Question: 27

What is a decision tree?

Question: 28

Why is XAI important?

Question: 29

What is a black-box model?

Question: 30

What is Explainable AI (XAI)?