- Supervised Learning
  - Classification
  - Regression
  - Time Series Forecasting
- Unsupervised Learning
  - Clustering
    - K-Means Clustering
    - Hierarchical Clustering
- Semi-Supervised Learning
- Reinforcement Learning
- Deep Learning
- Transfer Learning
- Ensemble Learning
- Explainable AI (XAI)
- Bayesian Learning
- Decision Trees
- Support Vector Machines (SVMs)
- Instance-Based Learning
- Rule-Based Learning
- Neural Networks
- Evolutionary Algorithms
- Meta-Learning
- Multi-Task Learning
- Metric Learning
- Few-Shot Learning
- Adversarial Learning
- Data Preprocessing
- Natural Language Processing

#### Decision Trees

**Which of the following is commonly used to choose the best split in a decision tree?**

- A) Gini impurity
- B) Cross-validation
- C) Gradient descent
- D) Principal component analysis
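
Gini impurity can be computed directly from a node's class labels; a minimal sketch in plain Python (the function name is illustrative, not from any particular library):

```python
from collections import Counter

def gini_impurity(labels):
    """Gini impurity of a set of class labels: 1 - sum(p_k^2)."""
    n = len(labels)
    if n == 0:
        return 0.0
    counts = Counter(labels)
    return 1.0 - sum((c / n) ** 2 for c in counts.values())

# A pure node has impurity 0; a 50/50 binary node has impurity 0.5.
print(gini_impurity(["a", "a", "a"]))       # 0.0
print(gini_impurity(["a", "a", "b", "b"]))  # 0.5
```

A tree-building algorithm evaluates candidate splits by the weighted Gini impurity of the resulting child nodes and keeps the split that lowers it the most.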

**Which of the following is a common drawback of decision trees?**

- A) They are prone to overfitting
- B) They cannot handle categorical variables
- C) They cannot model non-linear relationships
- D) They are computationally expensive

**What is the purpose of pruning a decision tree?**

- A) To reduce the depth of the tree and prevent overfitting
- B) To optimize the tree's parameters
- C) To handle missing data
- D) To improve the tree's interpretability

**Which of the following is a decision tree learning algorithm?**

- A) ID3
- B) k-Nearest Neighbors
- C) Support Vector Machines
- D) Naive Bayes

**What is the main difference between classification trees and regression trees?**

- A) Classification trees predict categorical variables, while regression trees predict continuous variables
- B) Classification trees use Gini impurity as the splitting criterion, while regression trees use information gain
- C) Classification trees can handle missing data, while regression trees cannot
- D) Classification trees are computationally expensive, while regression trees are computationally inexpensive

**What is the purpose of ensemble methods such as Random Forest?**

- A) To combine multiple decision trees to improve prediction performance
- B) To optimize the parameters of a single decision tree
- C) To handle missing data in decision trees
- D) To visualize the decision boundaries of a decision tree

**What is the main benefit of combining multiple decision trees in an ensemble?**

- A) It reduces overfitting by averaging the predictions of multiple trees
- B) It improves the interpretability of decision trees
- C) It reduces the computational complexity of decision trees
- D) It allows decision trees to handle missing data

**What is the key difference between bagging and boosting?**

- A) Bagging trains multiple trees independently, while boosting trains trees sequentially
- B) Bagging improves interpretability, while boosting improves predictive accuracy
- C) Bagging reduces computational complexity, while boosting increases it
- D) Bagging handles missing data, while boosting does not
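
The bagging side of that distinction can be sketched in plain Python: draw bootstrap samples, train a model on each sample independently, and aggregate predictions by majority vote. The trivial majority-class "base learner" below is only a stand-in to keep the sketch self-contained; all names are illustrative:

```python
import random
from collections import Counter

def bootstrap_sample(data, rng):
    """Sample len(data) items with replacement (one bootstrap replicate)."""
    return [rng.choice(data) for _ in range(len(data))]

def train_majority_model(sample):
    """Trivial base learner: always predicts the sample's majority label."""
    labels = [y for _, y in sample]
    return Counter(labels).most_common(1)[0][0]

def bagged_predict(models):
    """Bagging aggregates its independent models by majority vote."""
    return Counter(models).most_common(1)[0][0]

rng = random.Random(0)
data = [(x, "pos" if x > 5 else "neg") for x in range(10)]
# Each "model" is trained on its own bootstrap replicate, independently.
models = [train_majority_model(bootstrap_sample(data, rng)) for _ in range(25)]
print(bagged_predict(models))
```

Boosting, by contrast, would train its base learners one after another, each one focusing on the examples the previous learners misclassified.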

**Which splitting criterion is typically used in regression trees?**

- A) Gini impurity
- B) Information gain
- C) Mean squared error
- D) Cross-validation
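
For regression trees, a candidate split is usually scored by the weighted mean squared error (equivalently, the variance) of the targets within each child node; a minimal illustration in plain Python (function names are illustrative):

```python
def mse(values):
    """Mean squared error of values around their mean (node 'impurity')."""
    if not values:
        return 0.0
    mean = sum(values) / len(values)
    return sum((v - mean) ** 2 for v in values) / len(values)

def split_score(left, right):
    """Weighted child MSE; a regression tree picks the split minimizing this."""
    n = len(left) + len(right)
    return (len(left) * mse(left) + len(right) * mse(right)) / n

# A split that separates low targets from high targets scores better
# (lower weighted MSE) than one that mixes them.
print(split_score([1.0, 1.1], [9.0, 9.2]))
print(split_score([1.0, 9.0], [1.1, 9.2]))
```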

**In a decision tree, what is a decision rule?**

- A) A hyperplane that separates different classes in the feature space
- B) A set of conditions that lead to a particular decision
- C) The point at which a decision tree splits the data
- D) A measure of the complexity of a decision tree

**What does entropy measure in a decision tree?**

- A) A measure of disorder or impurity in a node
- B) A measure of the complexity of a decision tree
- C) The difference between the predicted and actual values in a node
- D) The rate at which information is gained in a decision tree

**Which of the following are common stopping criteria for growing a decision tree?**

- A) Reaching a maximum depth
- B) Achieving a minimum information gain
- C) Achieving a minimum Gini impurity
- D) Both A and B

**What is the purpose of one-hot encoding when preparing data for a decision tree?**

- A) To handle missing data
- B) To convert categorical variables into binary variables
- C) To normalize continuous variables
- D) To reduce the dimensionality of the feature space
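
One-hot encoding expands each categorical value into a binary indicator vector; a minimal stdlib-only sketch (the function name is illustrative):

```python
def one_hot_encode(values):
    """Map each categorical value to a binary indicator vector."""
    categories = sorted(set(values))
    index = {c: i for i, c in enumerate(categories)}
    rows = []
    for v in values:
        row = [0] * len(categories)
        row[index[v]] = 1  # exactly one column is "hot" per value
        rows.append(row)
    return categories, rows

cats, encoded = one_hot_encode(["red", "green", "red", "blue"])
print(cats)     # ['blue', 'green', 'red']
print(encoded)  # [[0, 0, 1], [0, 1, 0], [0, 0, 1], [1, 0, 0]]
```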

**How do decision trees handle continuous input variables?**

- A) By discretizing the continuous variables into intervals
- B) By using one-hot encoding
- C) By normalizing the continuous variables
- D) By ignoring the continuous variables
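
In practice, CART-style trees discretize a continuous feature on the fly: they test thresholds at midpoints between sorted feature values and keep the one yielding the lowest weighted impurity. A small sketch, assuming Gini impurity as the criterion (helper names are illustrative):

```python
from collections import Counter

def gini(labels):
    """Gini impurity of a set of class labels."""
    n = len(labels)
    if n == 0:
        return 0.0
    return 1.0 - sum((c / n) ** 2 for c in Counter(labels).values())

def best_threshold(xs, ys):
    """Try midpoints between sorted feature values; return the threshold
    with the lowest weighted Gini impurity of the two children."""
    pairs = sorted(zip(xs, ys))
    best = (None, float("inf"))
    for i in range(1, len(pairs)):
        if pairs[i - 1][0] == pairs[i][0]:
            continue  # no split between equal feature values
        t = (pairs[i - 1][0] + pairs[i][0]) / 2
        left = [y for x, y in pairs if x <= t]
        right = [y for x, y in pairs if x > t]
        score = (len(left) * gini(left) + len(right) * gini(right)) / len(pairs)
        if score < best[1]:
            best = (t, score)
    return best[0]

xs = [1.0, 2.0, 3.0, 10.0, 11.0, 12.0]
ys = ["a", "a", "a", "b", "b", "b"]
print(best_threshold(xs, ys))  # 6.5 -- cleanly separates the two classes
```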

**What is the main risk of growing a decision tree too deep?**

- A) It leads to overfitting
- B) It reduces the interpretability of the tree
- C) It increases the computational complexity of the tree
- D) It causes the tree to underfit the data

**Which of the following techniques can help prevent overfitting in decision trees?**

- A) Pruning
- B) Bagging
- C) Boosting
- D) All of the above

**Which of the following is a typical application of decision trees?**

- A) Medical diagnosis
- B) Credit risk assessment
- C) Image recognition
- D) Customer segmentation

**Which of the following are limitations of decision trees?**

- A) Decision trees cannot handle continuous variables
- B) Decision trees are prone to overfitting
- C) Decision trees are sensitive to small changes in the data
- D) Both B and C

**What does feature importance measure in a decision tree?**

- A) The contribution of a feature to the overall performance of the tree
- B) The number of times a feature is used in the tree
- C) The impact of a feature on the tree's complexity
- D) The correlation between a feature and the target variable

**Which of the following is a well-known weakness of decision trees?**

- A) Decision trees cannot handle categorical variables
- B) Decision trees are prone to overfitting
- C) Decision trees cannot model non-linear relationships
- D) Decision trees are computationally expensive

**Which of the following algorithms is an ensemble of decision trees?**

- A) Random Forest
- B) k-Nearest Neighbors
- C) Support Vector Machines
- D) Naive Bayes

**How can overfitting in a decision tree be reduced?**

- A) By increasing the maximum depth of the tree
- B) By using a smaller minimum samples per leaf
- C) By using ensemble techniques like bagging or boosting
- D) By removing features with low importance

**What is the difference between a decision tree and a decision stump?**

- A) A decision tree has multiple levels, while a decision stump has only one level
- B) A decision tree can handle continuous variables, while a decision stump cannot
- C) A decision tree can handle missing data, while a decision stump cannot
- D) A decision tree is computationally expensive, while a decision stump is computationally inexpensive
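
A decision stump is exactly a one-level tree: a single threshold test on a single feature. A minimal sketch in plain Python (class and attribute names are illustrative):

```python
from collections import Counter

def majority(labels):
    """Most frequent label (ties broken by first occurrence)."""
    return Counter(labels).most_common(1)[0][0]

class DecisionStump:
    """A one-level decision tree: one threshold test on one feature."""

    def fit(self, xs, ys):
        # Pick the threshold whose two sides are most label-pure,
        # i.e. fewest misclassifications under majority voting.
        best_err = len(ys) + 1
        for t in sorted(set(xs)):
            left = [y for x, y in zip(xs, ys) if x <= t]
            right = [y for x, y in zip(xs, ys) if x > t]
            pred_l = majority(left)
            pred_r = majority(right) if right else pred_l
            err = sum(y != pred_l for y in left) + sum(y != pred_r for y in right)
            if err < best_err:
                best_err, self.threshold = err, t
                self.left_label, self.right_label = pred_l, pred_r
        return self

    def predict(self, x):
        return self.left_label if x <= self.threshold else self.right_label

stump = DecisionStump().fit([1, 2, 3, 8, 9], ["no", "no", "no", "yes", "yes"])
print(stump.predict(2), stump.predict(9))  # no yes
```

Individually weak, stumps are a popular base learner for boosting precisely because the ensemble, not any single stump, carries the predictive power.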

**Which of the following algorithms can be used for classification?**

- A) Decision trees
- B) k-Nearest Neighbors
- C) Support Vector Machines
- D) All of the above

**What is the purpose of a leaf node in a decision tree?**

- A) To represent the class label or value to be predicted
- B) To store the conditions for splitting the data
- C) To indicate the importance of a feature
- D) To represent the depth of the tree

**Which of the following techniques combine multiple decision trees?**

- A) Pruning
- B) Bagging
- C) Boosting
- D) Both B and C

**What is a decision tree?**

- A) A graphical representation of a set of decisions based on certain conditions
- B) A tree-like structure used to make predictions based on input features
- C) A method for optimizing model parameters
- D) A technique for finding the optimal solution in a search space

**Which of the following is a key advantage of decision trees?**

- A) They are computationally inexpensive
- B) They are easy to interpret and visualize
- C) They can handle missing data
- D) They have high predictive accuracy