Electronic nose system

Advances in artificial intelligence (AI) and electronic nose (eNose) technology offer a promising embedded-system approach to replicating human olfactory function. This project aimed to translate the multichannel eNose signal array into a single 2-dimensional image representation, allowing a pre-trained convolutional neural network (CNN) to discern the features present in odor signatures with an accuracy of over 90%.
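The report does not detail how the signal array was rendered as an image, but a common approach is to plot the multichannel readings as a heatmap and save the figure as a CNN-sized image. The sketch below illustrates this under assumed parameters: the sensor count, time window, colormap, and output file name are placeholders, not values from the project.

```python
# Hypothetical sketch: render a multichannel eNose reading as a 2-D image.
# Assumes `signals` is an (n_sensors x n_timesteps) NumPy array of sensor
# readings; the array shape and file name are illustrative only.
import numpy as np
import seaborn as sns
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
signals = rng.random((8, 120))           # placeholder for real sensor data

fig, ax = plt.subplots(figsize=(2.24, 2.24), dpi=100)  # ~224x224 px, a typical CNN input size
sns.heatmap(signals, cmap="viridis", cbar=False, ax=ax)
ax.set_axis_off()                        # drop ticks/labels; the CNN sees only pixels
fig.savefig("sample_odor.png", bbox_inches="tight", pad_inches=0)
plt.close(fig)
```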
An image representation of the eNose multichannel sensor signals was successfully produced using the Seaborn plotting library in a JupyterLab environment, as well as in MATLAB. Transfer learning was then carried out with GoogLeNet, a pre-trained image classifier, yielding a final training accuracy of 95.8%. The model predicted unseen jasmine samples with a high prediction probability of 92.8 ± 3.5% and oolong samples at 99.6 ± 1.3% (95% confidence interval).
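The framework used for the transfer-learning step is not specified in the report (GoogLeNet retraining is equally available in MATLAB and Python). Purely as an illustration, the following is a minimal PyTorch sketch of the standard recipe: load pretrained GoogLeNet, replace the final fully connected layer to match the odor classes, and train only that new head. The dataset path, batch size, and learning rate are assumptions.

```python
# Hedged sketch of GoogLeNet transfer learning, assuming a PyTorch workflow;
# "odor_images/train" and the hyperparameters are illustrative, not from the report.
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),       # GoogLeNet's expected input size
    transforms.ToTensor(),
])
train_set = datasets.ImageFolder("odor_images/train", transform=preprocess)
loader = torch.utils.data.DataLoader(train_set, batch_size=16, shuffle=True)

model = models.googlenet(weights=models.GoogLeNet_Weights.IMAGENET1K_V1)
model.fc = nn.Linear(model.fc.in_features, len(train_set.classes))  # new classifier head

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)  # update the head only
criterion = nn.CrossEntropyLoss()
model.train()
for images, labels in loader:
    optimizer.zero_grad()
    outputs = model(images)
    # In train mode GoogLeNet returns a namedtuple with auxiliary logits
    logits = outputs if torch.is_tensor(outputs) else outputs.logits
    loss = criterion(logits, labels)
    loss.backward()
    optimizer.step()
```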
On the testing dataset the model achieved a precision of 0.94, a recall of 1.0, and an F1-score of 0.97, indicating a highly accurate and reliable classifier. For comparison, the data were also classified with traditional machine learning techniques, including Support Vector Machine (SVM), K-Nearest Neighbors (KNN), and Ensemble classification, all of which produced markedly lower accuracies.
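As a consistency check, the F1-score is the harmonic mean of precision and recall: 2 × 0.94 × 1.0 / (0.94 + 1.0) ≈ 0.97, matching the reported figure. The sketch below shows how such metrics and the classical baselines could be computed with scikit-learn; the feature matrix, labels, and the choice of a random forest to stand in for the unspecified "Ensemble" classifier are all assumptions.

```python
# Illustrative sketch of the evaluation metrics and classical baselines.
# X and y are synthetic placeholders for flattened eNose signal arrays and labels.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.ensemble import RandomForestClassifier   # stand-in for "Ensemble"
from sklearn.metrics import precision_score, recall_score, f1_score

rng = np.random.default_rng(0)
X = rng.random((60, 8 * 120))            # placeholder feature matrix
y = rng.integers(0, 2, size=60)          # placeholder binary odor labels
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

for name, clf in [("SVM", SVC()),
                  ("KNN", KNeighborsClassifier(n_neighbors=5)),
                  ("Ensemble", RandomForestClassifier(random_state=0))]:
    y_pred = clf.fit(X_tr, y_tr).predict(X_te)
    print(name,
          f"precision={precision_score(y_te, y_pred):.2f}",
          f"recall={recall_score(y_te, y_pred):.2f}",
          f"F1={f1_score(y_te, y_pred):.2f}")
```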