Overview
In this project, we build deep learning models to categorize blood cells from image data. Blood contains several cell types, including red blood cells, white blood cells, and platelets, each with its own function in the body. Identifying these cells by eye is not easy, and that is where deep learning for image classification comes into the picture. By training a model on images of the different cell types, we teach it to recognize each type correctly.
The project covers data pre-processing, model selection, model training, and performance assessment. It also demonstrates how the approach can generalize to real medical analysis, where precision is essential. By the end of the project, you will understand how to employ machine learning to make blood cell analysis smarter, faster, and more efficient.
Prerequisites
Before we jump into the code, here’s what you’ll need:
- An understanding of Python programming and Google Colab.
- Basic knowledge of deep learning and medical imaging.
- Comfort with frameworks such as TensorFlow, Keras, NumPy, OpenCV, and Seaborn for handling data, building models, and visualizing data and model performance.
- A blood cell dataset.
With these tools in place, you will see how almost all of them come into play in the steps that follow. And do not stress if you are not a Python expert: the tutorial walks through every line of the code!
Approach
In this blood cell classification project, we first collected the dataset from Kaggle. We then load the labeled dataset of blood cell images, each tagged with its respective cell type. After exploring the dataset, we preprocess the images with resizing, normalization, and augmentation techniques to improve model performance. We then build three different deep learning models to classify the blood cells from images.
After training, we evaluate each model's performance using metrics such as precision, recall, and the confusion matrix to ensure the models work well on unseen data.
Finally, we test the models on new images to confirm their ability to classify unseen samples accurately, showcasing their real-world potential in medical diagnostics.
Workflow and Methodology
This project can be divided into the following basic steps:
- Data Collection: We collected the blood cell dataset labeled with different cell types from Kaggle.
- Data Preprocessing: To improve model performance and achieve higher accuracy, we applied several preprocessing techniques. First, we augmented the dataset to balance the classes. Then we resized the images and normalized their pixel values to the range 0 to 1.
- Model Selection: In this project, there are three models used (Custom CNN, EfficientNetB4, and VGG16).
- Training and Testing: Each model was trained on the preprocessed dataset and later tested on data that was not used during training.
- Model Evaluation: Model performance is assessed with accuracy, precision, recall, the confusion matrix, and related metrics.
- Prediction and Testing: Test the models on new images to confirm their effectiveness in classifying unseen samples accurately.
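To make the model-selection step concrete, here is a minimal sketch of what the custom CNN could look like in Keras. The layer sizes and the number of classes are illustrative assumptions, not the project's exact architecture:

```python
import tensorflow as tf
from tensorflow.keras import layers, models

NUM_CLASSES = 4        # assumption: one output per blood cell class
IMG_SIZE = (128, 128)  # matches the resizing used in preprocessing

def build_custom_cnn(num_classes=NUM_CLASSES):
    """A small illustrative CNN; the project's actual model may differ."""
    model = models.Sequential([
        layers.Input(shape=(*IMG_SIZE, 3)),
        layers.Conv2D(32, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Conv2D(128, 3, activation="relu"),
        layers.GlobalAveragePooling2D(),
        layers.Dense(128, activation="relu"),
        layers.Dense(num_classes, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="categorical_crossentropy",
                  metrics=["accuracy"])
    return model

model = build_custom_cnn()
```

The two pretrained models (EfficientNetB4 and VGG16) can be loaded similarly from `tf.keras.applications` with `include_top=False` and a new classification head on top.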
The methodology includes:
- Data Preprocessing: Images are resized, normalized, and augmented to improve model performance.
- Model Training: Each model is trained for 100 epochs to maximize performance.
- Evaluation: Standard metrics (accuracy, precision, recall, and the confusion matrix) are applied to assess the models.
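As a sketch of the evaluation step, the confusion matrix and the per-class precision and recall can be computed directly from true and predicted labels. Libraries such as scikit-learn provide the same numbers; this NumPy version just shows the arithmetic, and the toy labels are illustrative:

```python
import numpy as np

def confusion_matrix(y_true, y_pred, num_classes):
    """cm[i, j] = number of samples with true class i predicted as class j."""
    cm = np.zeros((num_classes, num_classes), dtype=int)
    for t, p in zip(y_true, y_pred):
        cm[t, p] += 1
    return cm

def precision_recall(cm):
    """Per-class precision and recall from a confusion matrix."""
    tp = np.diag(cm).astype(float)
    precision = tp / np.maximum(cm.sum(axis=0), 1)  # column sums = predicted counts
    recall = tp / np.maximum(cm.sum(axis=1), 1)     # row sums = actual counts
    return precision, recall

# Toy example with 3 classes (illustrative, not the real dataset)
y_true = np.array([0, 0, 1, 1, 2, 2])
y_pred = np.array([0, 1, 1, 1, 2, 0])
cm = confusion_matrix(y_true, y_pred, 3)
prec, rec = precision_recall(cm)
```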
Dataset Collection
The dataset for this project was sourced from Kaggle, a popular repository of datasets for machine learning projects. It consists of blood cell images, each labeled with a specific cell type. These labels are critical for supervised learning: they let the model learn the distinguishing characteristics of each cell type, which is essential for accurate classification.
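Kaggle image datasets of this kind are typically organized with one subfolder per class, so the label comes from the folder name. A minimal sketch of collecting paths and labels from such a layout (the directory structure and `.jpg` extension are assumptions about the dataset):

```python
from pathlib import Path

def collect_image_paths(root):
    """Walk a directory laid out as root/<class_name>/<image>.jpg and
    return parallel lists of image file paths and string labels."""
    paths, labels = [], []
    for class_dir in sorted(Path(root).iterdir()):
        if not class_dir.is_dir():
            continue  # skip stray files at the top level
        for img in sorted(class_dir.glob("*.jpg")):
            paths.append(str(img))
            labels.append(class_dir.name)  # folder name is the class label
    return paths, labels
```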
Data Preparation
The dataset was pre-processed by resizing the images to 128 x 128 pixels and scaling the pixel values from the original 0 to 255 range down to 0 to 1. To increase the variability of the dataset, data augmentation techniques were also applied.
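The resizing and scaling described above can be sketched as follows. In practice `cv2.resize` or `tf.image.resize` would be used; the nearest-neighbour version here is a plain-NumPy stand-in to show the idea:

```python
import numpy as np

IMG_SIZE = 128  # target width/height used throughout the project

def resize_nearest(img, size=IMG_SIZE):
    """Nearest-neighbour resize with plain NumPy (a stand-in for
    cv2.resize / tf.image.resize)."""
    h, w = img.shape[:2]
    rows = np.arange(size) * h // size  # source row for each target row
    cols = np.arange(size) * w // size  # source column for each target column
    return img[rows][:, cols]

def normalize(img):
    """Scale uint8 pixel values from [0, 255] to floats in [0, 1]."""
    return img.astype(np.float32) / 255.0

# Toy 64x64 "image" scaled up to 128x128 and normalized
img = np.random.randint(0, 256, (64, 64, 3), dtype=np.uint8)
out = normalize(resize_nearest(img))
```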
Data Preparation Workflow
- Load Dataset from Google Drive
- Apply augmentations such as rotation, flipping, and contrast changes to increase the diversity of the dataset.
- Resize and preprocess the images to the input size each model expects, standardizing the model input.
- Split the dataset into training and validation sets.
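The augment-then-split workflow above can be sketched with NumPy. Flips and a 90-degree rotation are shown here; contrast jitter and arbitrary rotations would normally come from a library such as Keras' `ImageDataGenerator`, and the function names below are illustrative:

```python
import numpy as np

def augment(img):
    """Return simple augmented variants of one image: horizontal flip,
    vertical flip, and a 90-degree rotation (contrast changes omitted)."""
    return [np.fliplr(img), np.flipud(img), np.rot90(img)]

def train_val_split(images, labels, val_fraction=0.2, seed=0):
    """Shuffle and split parallel arrays into training and validation sets."""
    idx = np.random.default_rng(seed).permutation(len(images))
    n_val = int(len(images) * val_fraction)
    val_idx, train_idx = idx[:n_val], idx[n_val:]
    return (images[train_idx], labels[train_idx],
            images[val_idx], labels[val_idx])

# Toy batch: 10 square "images" with binary labels (illustrative only)
rng = np.random.default_rng(42)
images = rng.integers(0, 256, (10, 128, 128, 3), dtype=np.uint8)
labels = rng.integers(0, 2, 10)
x_tr, y_tr, x_val, y_val = train_val_split(images, labels)
```

Shuffling before the split matters here: images from a class-per-folder dataset arrive grouped by label, and an unshuffled split would leave some classes out of the validation set.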