We trained the model on a large dataset of annotated cassava leaf images, divided into cropped and uncropped sets. We also filtered the dataset to remove images that were not cropped properly.
The classification module uses transfer learning, which enabled us to achieve higher accuracy with an adapted VGG16 model.
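For illustration, a minimal sketch of this transfer-learning setup is given below, assuming a TensorFlow/Keras workflow. The directory layout, image size, classification head, and hyperparameters are illustrative assumptions, not the exact configuration used in our experiments.

```python
# Sketch of VGG16 transfer learning for cassava leaf classification.
# Paths, image size, and hyperparameters are illustrative placeholders.
import tensorflow as tf
from tensorflow.keras.applications import VGG16
from tensorflow.keras import layers, models

IMG_SIZE = (224, 224)   # VGG16's native input resolution
NUM_CLASSES = 5         # e.g. CBB, CMD, CBSD, CGM, Healthy

# Load the cropped (or uncropped) leaf images from a class-per-folder layout.
train_ds = tf.keras.utils.image_dataset_from_directory(
    "data/cropped/train", image_size=IMG_SIZE, batch_size=32,
    label_mode="categorical")
val_ds = tf.keras.utils.image_dataset_from_directory(
    "data/cropped/val", image_size=IMG_SIZE, batch_size=32,
    label_mode="categorical")

# VGG16 convolutional base pre-trained on ImageNet; the base is frozen so
# only the new classification head is trained.
base = VGG16(weights="imagenet", include_top=False,
             input_shape=IMG_SIZE + (3,))
base.trainable = False

model = models.Sequential([
    layers.Rescaling(1.0 / 255),            # simple [0, 1] pixel scaling
    base,
    layers.Flatten(),
    layers.Dense(256, activation="relu"),
    layers.Dropout(0.5),
    layers.Dense(NUM_CLASSES, activation="softmax"),
])

# Categorical cross-entropy loss and categorical accuracy correspond to the
# validation metrics reported in the results below.
model.compile(optimizer="adam",
              loss="categorical_crossentropy",
              metrics=["categorical_accuracy"])

model.fit(train_ds, validation_data=val_ds, epochs=10)
```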
The results from the experiments are as follows:
CBB vs CMD vs CBSD vs CGM vs Healthy
Number of cassava images: Healthy 100, CBSD 100, CGM 92, CBB 75, CMD 150
Cropped: validation categorical accuracy 71.59%, validation loss 0.81
Uncropped: validation categorical accuracy 69.89%, validation loss 0.79
Healthy vs CBSD
Number of cassava images: Healthy 1474, CBSD 1754
Cropped: validation categorical accuracy 75.87%, validation loss 0.4903
Uncropped: validation categorical accuracy 83.44%, validation loss 0.3702
Healthy vs CMD
Number of cassava images: Healthy 1474, CMD 3018
Cropped: validation categorical accuracy 85.62%, validation loss 0.3291
Uncropped: validation categorical accuracy 90.21%, validation loss 0.2399
Healthy vs CBB
Number of cassava images: Healthy 1474, CBB 455
Cropped: validation categorical accuracy 80.81%, validation loss 0.4176
Uncropped: validation categorical accuracy 84.97%, validation loss 0.3914
Healthy vs CGM
Number of cassava images: Healthy 1474, CGM 722
Cropped: validation categorical accuracy 73.82%, validation loss 0.5056
Uncropped: validation categorical accuracy 81.37%, validation loss 0.4272
Diseased leaves vs Healthy
Number of cassava images: Healthy 100, Diseased 417
Cropped: validation categorical accuracy 82.95%, validation loss 0.2848
Uncropped: validation categorical accuracy 86.02%, validation loss 0.4606