Machine Learning Quiz Questions

1. What is the purpose of one-hot encoding in supervised learning?

Answer: A. To convert categorical variables into a binary format that can be used by machine learning algorithms
Explanation: One-hot encoding is used to convert categorical variables into a binary format that can be used by machine learning algorithms. This is necessary because most machine learning algorithms cannot directly handle categorical data in their raw form.
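For illustration, here is a minimal sketch (assuming pandas is installed) of one-hot encoding a single categorical column:

```python
import pandas as pd

# A toy frame with one categorical feature
df = pd.DataFrame({"color": ["red", "green", "blue", "green"]})

# Each category becomes its own 0/1 indicator column
encoded = pd.get_dummies(df, columns=["color"])
print(encoded)
```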
2. What is the role of a bias term in linear models, such as linear regression or logistic regression?

Answer: B. To shift the decision boundary or regression line
Explanation: The bias term in linear models, such as linear regression or logistic regression, is used to shift the decision boundary or regression line away from the origin. This is necessary because in many cases, the data is not centered around the origin.
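A minimal sketch with NumPy shows how the bias (intercept) shifts a line with the same slope away from the origin:

```python
import numpy as np

w, b = 2.0, 5.0                # slope and bias (intercept) of y = w * x + b
x = np.array([0.0, 1.0, 2.0])

print(w * x)        # [0. 2. 4.]  -- line forced through the origin
print(w * x + b)    # [5. 7. 9.]  -- same slope, shifted up by the bias
```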
3. Which of the following is an example of an ensemble method?

Answer: A. Random forests
Explanation: Random forests are an example of an ensemble method in supervised learning. They are a collection of decision trees that are trained on random subsets of the data and features, and their outputs are combined to make a final prediction. The goal is to reduce overfitting and improve the accuracy of the predictions.
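As a hedged example (assuming scikit-learn and its bundled iris dataset), a random forest can be trained in a few lines:

```python
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# 100 trees, each fit on a bootstrap sample with random feature subsets;
# their votes are combined into the final prediction
model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)
print(model.score(X_test, y_test))
```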
4. What is the purpose of using a confusion matrix in classification problems?

Answer: A. To measure the performance of a model by comparing true and predicted class labels
Explanation: The purpose of using a confusion matrix in classification problems is to measure the performance of a model by comparing true and predicted class labels. The confusion matrix shows the number of true positives, false positives, true negatives, and false negatives, which can be used to calculate metrics such as accuracy, precision, recall, and F1-score.
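A short sketch (assuming scikit-learn) that builds a confusion matrix from made-up labels:

```python
from sklearn.metrics import classification_report, confusion_matrix

y_true = [1, 0, 1, 1, 0, 1, 0, 0]
y_pred = [1, 0, 0, 1, 0, 1, 1, 0]

# Rows are true classes, columns are predicted classes
print(confusion_matrix(y_true, y_pred))

# Precision, recall, and F1-score are derived from the same counts
print(classification_report(y_true, y_pred))
```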
5. What is the purpose of using feature scaling in supervised learning?

Answer: A. To ensure that all input features have a similar scale, so that the model can learn more effectively
Explanation: The purpose of using feature scaling in supervised learning is to ensure that all input features have a similar scale, so that the model can learn more effectively. This can prevent features with larger scales from dominating the model and allow it to converge faster during training. Common methods of feature scaling include min-max scaling and standardization (z-score).
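A minimal sketch (assuming scikit-learn) of the two scaling methods mentioned above:

```python
import numpy as np
from sklearn.preprocessing import MinMaxScaler, StandardScaler

# Two features on very different scales
X = np.array([[1.0, 2000.0], [2.0, 3000.0], [3.0, 4000.0]])

# Min-max scaling: each feature rescaled to the [0, 1] range
print(MinMaxScaler().fit_transform(X))

# Standardization (z-score): each feature gets zero mean and unit variance
print(StandardScaler().fit_transform(X))
```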
6. Which of the following is an example of a parametric supervised learning algorithm?

Answer: C. Linear regression
Explanation: Linear regression is an example of a parametric supervised learning algorithm because it makes assumptions about the underlying distribution of the data and seeks to fit a linear relationship between the input and output variables.
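As an illustrative sketch (assuming scikit-learn), a fitted linear regression is fully described by a fixed set of parameters, no matter how many training examples were used:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

X = np.arange(10, dtype=float).reshape(-1, 1)
y = 3.0 * X.ravel() + 1.0

model = LinearRegression().fit(X, y)
# One coefficient and one intercept -- the entire model
print(model.coef_, model.intercept_)   # approximately [3.] and 1.0
```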
7. Which of the following is an example of a non-parametric supervised learning algorithm?

Answer: A. Decision trees
Explanation: Decision trees are an example of a non-parametric supervised learning algorithm because they make no assumptions about the underlying distribution of the data; the structure of the tree is determined entirely by the training data. K-nearest neighbors is another common non-parametric method.
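A brief sketch (assuming scikit-learn) showing that a decision tree's size is dictated by the training data rather than a fixed set of parameters:

```python
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)

# No fixed functional form: depth and number of leaves grow out of the data
tree = DecisionTreeClassifier(random_state=0).fit(X, y)
print(tree.get_depth(), tree.get_n_leaves())
```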
8. In the context of supervised learning, what is an ensemble method?

Answer: A. A technique that combines multiple models to make a single prediction
Explanation: An ensemble method is a technique that combines multiple models to make a single prediction. This can improve the accuracy and robustness of the predictions, particularly when the individual models have different strengths and weaknesses.
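As a hedged sketch (assuming scikit-learn), a voting ensemble combines several different models into a single prediction:

```python
from sklearn.datasets import load_iris
from sklearn.ensemble import VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)

# Three models with different strengths vote on the final class label
ensemble = VotingClassifier(estimators=[
    ("lr", LogisticRegression(max_iter=1000)),
    ("knn", KNeighborsClassifier()),
    ("tree", DecisionTreeClassifier(random_state=0)),
])
ensemble.fit(X, y)
print(ensemble.predict(X[:3]))
```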
9. Which supervised learning algorithm is based on the concept of entropy and information gain?

Answer: A. Decision trees
Explanation: The supervised learning algorithm based on the concept of entropy and information gain is decision trees. Decision trees are constructed by recursively splitting a dataset into subsets based on the most discriminative attributes, and the information gain measures the effectiveness of each split in classifying the examples.
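The calculation behind a single split can be sketched directly with NumPy (the labels here are made up for illustration):

```python
import numpy as np

def entropy(labels):
    # Shannon entropy of a label array, in bits
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return -np.sum(p * np.log2(p))

parent = np.array([0, 0, 0, 0, 1, 1, 1, 1])
left   = np.array([0, 0, 0, 1])   # labels sent to the left child by a candidate split
right  = np.array([0, 1, 1, 1])   # labels sent to the right child

# Information gain = parent entropy minus the weighted child entropies
children = (len(left) * entropy(left) + len(right) * entropy(right)) / len(parent)
print(entropy(parent) - children)
```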
10. Which supervised learning algorithm is based on the idea of maximizing the margin between classes?

Answer: B. Support vector machines
Explanation: The supervised learning algorithm based on the idea of maximizing the margin between classes is support vector machines (SVMs). SVMs find the hyperplane that maximizes the distance (margin) between the closest data points of different classes.
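A small sketch (assuming scikit-learn) with two separable clusters; for a linear SVM the margin width equals 2 / ||w||:

```python
import numpy as np
from sklearn.svm import SVC

X = np.array([[0, 0], [1, 1], [0, 1], [4, 4], [5, 5], [4, 5]], dtype=float)
y = np.array([0, 0, 0, 1, 1, 1])

# A large C approximates a hard margin on separable data
clf = SVC(kernel="linear", C=1e6).fit(X, y)
w = clf.coef_[0]
print("margin width:", 2.0 / np.linalg.norm(w))
```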
11. What is the main difference between a parametric and a non-parametric supervised learning algorithm?

Answer: C. The assumption of a fixed functional form for the underlying relationship between input and output variables
Explanation: The main difference between a parametric and a non-parametric supervised learning algorithm is the assumption of a fixed functional form for the underlying relationship between input and output variables. Parametric algorithms make strong assumptions about the functional form, while non-parametric algorithms make weak or no assumptions.
12. Which of the following is a common loss function for classification problems?

Answer: B. Cross-entropy loss
Explanation: Cross-entropy loss (log loss) is a common loss function for classification problems. It compares the predicted class probabilities with the true labels and heavily penalizes confident but wrong predictions.
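A minimal sketch (assuming scikit-learn) comparing a hand-computed binary cross-entropy with sklearn's log_loss:

```python
import numpy as np
from sklearn.metrics import log_loss

y_true = [1, 0, 1, 1]
y_prob = [0.9, 0.2, 0.7, 0.6]   # predicted probabilities of the positive class

# Binary cross-entropy: -mean(y*log(p) + (1 - y)*log(1 - p))
manual = -np.mean([yt * np.log(p) + (1 - yt) * np.log(1 - p)
                   for yt, p in zip(y_true, y_prob)])
print(manual, log_loss(y_true, y_prob))   # the two values agree
```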
13. Which of the following is a common loss function for regression problems?

Answer: A. Mean squared error
Explanation: Mean squared error is a common loss function for regression problems. It is the average of the squared differences between the predicted and true values.
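A quick sketch of the same quantity computed by hand and with scikit-learn:

```python
import numpy as np
from sklearn.metrics import mean_squared_error

y_true = np.array([3.0, 5.0, 2.5, 7.0])
y_pred = np.array([2.5, 5.0, 4.0, 8.0])

# Mean squared error: the average of the squared residuals
print(np.mean((y_true - y_pred) ** 2))        # 0.875
print(mean_squared_error(y_true, y_pred))     # 0.875
```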
14. What is the purpose of regularization in supervised learning?

Answer: A. To reduce the complexity of the model and prevent overfitting
Explanation: Regularization adds a penalty on model complexity (for example, on the magnitude of the coefficients) to the training objective, which discourages overly complex models and helps prevent overfitting.
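As a hedged illustration (assuming scikit-learn and a synthetic dataset), an L2 penalty shrinks the learned coefficients compared with plain least squares:

```python
from sklearn.datasets import make_regression
from sklearn.linear_model import LinearRegression, Ridge

X, y = make_regression(n_samples=50, n_features=20, noise=10.0, random_state=0)

plain = LinearRegression().fit(X, y)
ridge = Ridge(alpha=10.0).fit(X, y)   # alpha controls the strength of the penalty

# The penalized coefficients are smaller in aggregate, i.e. the model is simpler
print(abs(plain.coef_).sum(), abs(ridge.coef_).sum())
```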
15. Which of the following is an example of regularization?

Answer: B. L1 regularization
Explanation: L1 regularization (as used in Lasso) is an example of regularization. It adds a penalty proportional to the absolute values of the coefficients, which shrinks them and can drive some of them exactly to zero. L2 regularization (ridge) is another common example.
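A companion sketch (again assuming scikit-learn and synthetic data) showing the hallmark of L1 regularization, coefficients driven exactly to zero:

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.linear_model import Lasso

X, y = make_regression(n_samples=50, n_features=20, n_informative=5,
                       noise=10.0, random_state=0)

# The L1 penalty zeroes out many coefficients, acting as feature selection
lasso = Lasso(alpha=1.0).fit(X, y)
print(np.sum(lasso.coef_ == 0), "of", lasso.coef_.size, "coefficients are exactly zero")
```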
