- Supervised Learning
  - Classification
  - Regression
  - Time Series Forecasting
- Unsupervised Learning
  - Clustering
    - K-Means Clustering
    - Hierarchical Clustering
- Semi-Supervised Learning
- Reinforcement Learning (ML)
- Deep Learning (ML)
- Transfer Learning (ML)
- Ensemble Learning
- Explainable AI (XAI)
- Bayesian Learning
- Decision Trees
- Support Vector Machines (SVMs)
- Instance-Based Learning
- Rule-Based Learning
- Neural Networks
- Evolutionary Algorithms
- Meta-Learning
- Multi-Task Learning
- Metric Learning
- Few-Shot Learning
- Adversarial Learning
- Data Preprocessing
- Natural Language Processing (ML)

#### Support Vector Machines (SVMs)

To find the decision boundary that maximizes the margin between classes

To find the decision boundary that minimizes the margin between classes

To find the decision boundary that maximizes the accuracy of the classifier

To find the decision boundary that minimizes the computational complexity

The vectors that define the decision boundary

The vectors that maximize the margin between classes

The vectors that lie on the margin boundaries

The vectors that minimize the margin between classes
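The geometry behind these options can be made concrete with a small pure-Python sketch (toy data and a hand-chosen hyperplane, not a trained model): points whose functional margin y(w·x + b) equals exactly 1 lie on the margin boundaries and are the support vectors, and the margin width between the classes is 2/||w||.

```python
import math

# Toy 2-D data with a hand-chosen separating hyperplane w.x + b = 0
# (w and b are illustrative, not fitted by an optimizer).
points = [((1.0, 0.0), +1), ((2.0, 0.0), +1),
          ((-1.0, 0.0), -1), ((-3.0, 0.0), -1)]
w, b = (1.0, 0.0), 0.0

def dot(u, v):
    return sum(ui * vi for ui, vi in zip(u, v))

# Functional margin of each point: y * (w.x + b).
margins = [y * (dot(w, x) + b) for x, y in points]

# Support vectors are the points lying exactly on the margin
# boundaries, i.e. functional margin == 1.
support_vectors = [x for (x, y), m in zip(points, margins)
                   if math.isclose(m, 1.0)]

# Geometric margin width between the two classes: 2 / ||w||.
margin_width = 2.0 / math.hypot(*w)

print(support_vectors)  # the two points closest to the boundary
print(margin_width)
```

Only the support vectors constrain the solution: moving the other points (without crossing the margin) leaves the boundary unchanged.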

A technique to transform input data into a higher-dimensional space

A technique to simplify the computation of support vectors

A technique to improve the interpretability of SVMs

A technique to handle missing data in SVMs
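The "higher-dimensional space" option describes the kernel trick. A minimal pure-Python check (2-D inputs, homogeneous degree-2 polynomial kernel): the kernel value (x·z)² equals the ordinary dot product of explicit feature maps φ(x) = (x₁², x₂², √2·x₁x₂), so the SVM can operate in the expanded space without ever constructing it.

```python
import math

def dot(u, v):
    return sum(ui * vi for ui, vi in zip(u, v))

def poly2_kernel(x, z):
    # Degree-2 homogeneous polynomial kernel on 2-D inputs.
    return dot(x, z) ** 2

def phi(x):
    # Explicit feature map whose inner product reproduces the kernel.
    x1, x2 = x
    return (x1 * x1, x2 * x2, math.sqrt(2) * x1 * x2)

x, z = (1.0, 2.0), (3.0, 0.5)
k = poly2_kernel(x, z)            # computed in the original 2-D space
k_explicit = dot(phi(x), phi(z))  # computed in the 3-D feature space

print(k, k_explicit)  # the two values agree
```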

Linear kernel

Polynomial kernel

Radial basis function (RBF) kernel

All of the above

They are less prone to overfitting

They are computationally efficient

They can handle missing data

They can handle large datasets

They are sensitive to the choice of kernel function

They do not work well with large datasets

They are difficult to interpret

All of the above

To control the trade-off between maximizing the margin and minimizing classification errors

To control the complexity of the decision boundary

To control the degree of the kernel function

To control the size of the support vectors
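One way to see the trade-off that the C parameter controls is to write out the soft-margin objective, ½||w||² + C·Σ max(0, 1 − yᵢ(w·xᵢ + b)), in pure Python (toy data and a fixed, hand-chosen hyperplane; a sketch of the objective, not a solver): raising C makes the same margin violations cost more, pushing the optimizer toward fewer errors at the expense of a narrower margin.

```python
def dot(u, v):
    return sum(ui * vi for ui, vi in zip(u, v))

def soft_margin_objective(w, b, data, C):
    # 0.5 * ||w||^2  +  C * sum of hinge losses (margin violations).
    reg = 0.5 * dot(w, w)
    hinge = sum(max(0.0, 1.0 - y * (dot(w, x) + b)) for x, y in data)
    return reg + C * hinge

# The point ((0.5, 0.0), +1) sits inside the margin of this
# hand-chosen hyperplane, incurring a hinge loss of 0.5.
data = [((2.0, 0.0), +1), ((0.5, 0.0), +1), ((-2.0, 0.0), -1)]
w, b = (1.0, 0.0), 0.0

low_C = soft_margin_objective(w, b, data, C=1.0)
high_C = soft_margin_objective(w, b, data, C=10.0)
print(low_C, high_C)  # the same violation is penalized 10x harder under high C
```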

A hard-margin SVM allows no classification errors, while a soft-margin SVM allows some classification errors

A hard-margin SVM is computationally efficient, while a soft-margin SVM is computationally expensive

A hard-margin SVM can handle missing data, while a soft-margin SVM cannot

Classification

Regression

Clustering

Both A and B

They require multiple binary classifiers to be trained

They cannot handle non-linearly separable data

They are computationally expensive

They are sensitive to noise in the data

By modifying the decision boundary to predict continuous values

By changing the kernel function to predict continuous values

By changing the loss function to minimize the squared error between predictions and actual values

Both A and C
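The loss-function change behind support vector regression (SVR) is the ε-insensitive loss: deviations smaller than ε are ignored, and only larger ones are penalized, linearly. A minimal pure-Python sketch (the ε value is chosen arbitrarily for illustration):

```python
def epsilon_insensitive_loss(y_true, y_pred, eps=0.1):
    # Zero loss inside the epsilon tube, linear loss outside it.
    return max(0.0, abs(y_true - y_pred) - eps)

print(epsilon_insensitive_loss(1.0, 1.05))  # inside the tube -> 0.0
print(epsilon_insensitive_loss(1.0, 1.5))   # outside the tube -> 0.4
```

Training points that fall strictly inside the tube contribute nothing to the objective, which is why SVR solutions are sparse in support vectors.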

Cross-validation

Grid search

Random search

Using the highest degree polynomial kernel
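Grid search and random search both enumerate hyperparameter candidates and score each one, typically with cross-validation. A pure-Python sketch of the grid-search loop (the `validation_error` function here is a deterministic stand-in for an actual cross-validated score; a real search would train and evaluate an SVM at each point):

```python
import itertools

def validation_error(C, gamma):
    # Stand-in for a cross-validated error estimate; smallest at
    # C = 1.0, gamma = 0.1 by construction.
    return (C - 1.0) ** 2 + (gamma - 0.1) ** 2

C_grid = [0.1, 1.0, 10.0]
gamma_grid = [0.01, 0.1, 1.0]

# Try every (C, gamma) combination and keep the lowest-error one.
best = min(itertools.product(C_grid, gamma_grid),
           key=lambda cg: validation_error(*cg))
print(best)  # -> (1.0, 0.1)
```

Random search follows the same skeleton but samples candidates from the grid (or from distributions) instead of enumerating all of them.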

A problem that seeks to maximize the margin between classes

A problem that is equivalent to the original SVM problem but has a more convenient form for optimization

A problem that seeks to minimize classification errors

A problem that seeks to minimize the computational complexity of the SVM

It can model non-linear relationships between features

It is computationally efficient

It can handle large datasets

Both B and C

It cannot model non-linear relationships between features

It is sensitive to the choice of hyperparameters

It is computationally expensive

Both B and C

It controls the trade-off between maximizing the margin and minimizing classification errors

It controls the shape of the decision boundary

It controls the width of the RBF kernel

It controls the complexity of the decision boundary
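The gamma parameter appears directly in the RBF kernel, k(x, z) = exp(−γ·||x − z||²). A pure-Python sketch shows what "controls the width" means: with a larger gamma, similarity falls off faster with distance, so each support vector influences a narrower region and the decision boundary can bend more.

```python
import math

def rbf_kernel(x, z, gamma):
    # k(x, z) = exp(-gamma * ||x - z||^2)
    sq_dist = sum((xi - zi) ** 2 for xi, zi in zip(x, z))
    return math.exp(-gamma * sq_dist)

x, z = (0.0, 0.0), (1.0, 1.0)  # squared distance 2

wide = rbf_kernel(x, z, gamma=0.1)   # wide kernel: distant points stay similar
narrow = rbf_kernel(x, z, gamma=10)  # narrow kernel: similarity decays fast
print(wide, narrow)
```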

Oversampling the minority class

Undersampling the majority class

Using a different kernel function

Both A and B
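Random oversampling of the minority class can be sketched in a few lines of standard-library Python (a toy labeled dataset; dedicated libraries offer more principled variants such as SMOTE, and SVMs also support per-class weighting as an alternative):

```python
import random
from collections import Counter

random.seed(0)  # reproducible resampling

# Toy imbalanced dataset: 6 majority-class samples, 2 minority.
data = [(x, 0) for x in range(6)] + [(x, 1) for x in range(2)]

counts = Counter(label for _, label in data)
minority_label = min(counts, key=counts.get)
deficit = max(counts.values()) - counts[minority_label]

# Duplicate randomly chosen minority samples until the classes balance.
minority = [s for s in data if s[1] == minority_label]
balanced = data + random.choices(minority, k=deficit)

print(Counter(label for _, label in balanced))  # both classes now have 6
```

Undersampling is the mirror image: randomly drop majority-class samples down to the minority count instead of duplicating minority ones.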

By measuring the accuracy of the classifier

By measuring the area under the receiver operating characteristic (ROC) curve

By measuring the F1 score

All of the above

Yes, directly by training a single SVM

Yes, by training multiple binary SVMs, one for each label

No, SVMs can only be used for binary classification problems

No, SVMs can only be used for multi-class classification problems
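The "one binary SVM per label" answer is the one-vs-rest scheme. A pure-Python sketch of the wiring, with a trivial nearest-mean rule standing in for a real binary SVM (the stub classifier and toy data are illustrative only):

```python
# Multi-label one-vs-rest: train one independent binary classifier per
# label, then report every label whose classifier fires.

def train_binary_stub(examples):
    # Stand-in for fitting a binary SVM: classify by whichever class
    # mean (positive or negative examples) is nearer.
    pos = [x for x, y in examples if y == 1]
    neg = [x for x, y in examples if y == 0]
    mp, mn = sum(pos) / len(pos), sum(neg) / len(neg)
    return lambda x: abs(x - mp) < abs(x - mn)

# Toy multi-label data: scalar feature -> set of labels.
data = [(1.0, {"low"}), (2.0, {"low"}),
        (8.0, {"high"}), (9.0, {"high", "big"})]
labels = {"low", "high", "big"}

# One binary problem per label: label present -> 1, absent -> 0.
classifiers = {
    lab: train_binary_stub([(x, 1 if lab in ys else 0) for x, ys in data])
    for lab in labels
}

def predict(x):
    # A sample may receive any subset of labels.
    return {lab for lab, clf in classifiers.items() if clf(x)}

print(predict(1.2))
print(predict(8.5))
```

For plain multi-class (one label per sample), the same decomposition applies, but the per-label scores are compared and only the single best label is returned.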

The margin between classes will become larger

The margin between classes will become smaller

The number of support vectors will increase

The number of support vectors will decrease

They can handle categorical features directly

They require categorical features to be encoded as numerical values

They require categorical features to be transformed using a kernel function

They cannot handle categorical features

They can handle missing data directly

They require missing data to be imputed before training

They require missing data to be transformed using a kernel function

They cannot handle missing data

A linear SVM uses a linear kernel function, while a non-linear SVM uses a non-linear kernel function

A linear SVM can handle only linearly separable data, while a non-linear SVM can handle non-linearly separable data

A linear SVM is computationally efficient, while a non-linear SVM is computationally expensive

All of the above

A loss function that measures the distance between data points and the decision boundary

A loss function that measures the number of misclassified data points

A loss function that measures the margin between classes

A loss function that measures the error between predicted and actual values for regression problems

The decision boundary will become more flexible

The decision boundary will become less flexible

The number of support vectors will increase

The number of support vectors will decrease

It can model non-linear relationships between features

It is computationally efficient

It can handle large datasets

Both A and B

It cannot handle non-linear relationships between features

It is sensitive to noise in the data

It is computationally expensive

All of the above

Using a linear kernel

Reducing the number of support vectors

Using an RBF kernel with a small gamma value

Both A and B

By using a non-linear kernel function

By increasing the C parameter

By increasing the gamma parameter in an RBF kernel

Both A and C

Cross-validation

Grid search

Random search

Using the same hyperparameters as another model