Evaluation
(Note: the code for this page is in "collection.py" and "evaluation.py" in our submitted code.)
Which one is better, KNN or Nearest Centroid?
We compared the accuracy of the two by running about twenty test data points through each. The Nearest Centroid model reached roughly 45% accuracy, while the KNN model was a bit higher at 60%. Therefore, we adopted KNN for our final user evaluation model.
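The two classifiers compared above can be sketched in a few lines of plain Python. This is a minimal illustration, not the submitted implementation: the feature layout (temperature, air quality, dust) and the sample values are assumptions for demonstration.

```python
from collections import Counter
import math

def knn_predict(train, labels, x, k=3):
    # KNN: vote among the k training points nearest to x
    nearest = sorted(range(len(train)), key=lambda i: math.dist(train[i], x))[:k]
    return Counter(labels[i] for i in nearest).most_common(1)[0][0]

def centroid_predict(train, labels, x):
    # Nearest Centroid: assign x to the class whose mean point is closest
    centroids = {}
    for lab in set(labels):
        pts = [p for p, l in zip(train, labels) if l == lab]
        centroids[lab] = [sum(c) / len(pts) for c in zip(*pts)]
    return min(centroids, key=lambda lab: math.dist(centroids[lab], x))

# Hypothetical samples: (temperature, air quality, dust)
train = [(20, 40, 10), (21, 42, 12), (30, 60, 50), (31, 58, 55)]
labels = ["good", "good", "bad", "bad"]
print(knn_predict(train, labels, (20, 41, 11)))       # classifies the new point
print(centroid_predict(train, labels, (20, 41, 11)))  # same point, centroid rule
```

On a toy set like this both rules agree; the ~15% accuracy gap we measured only shows up on the real, noisier sensor data.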
To make evaluation efficient, there is another database table named "Airinfo". You can reach it by logging into the database mentioned on the RESULT root page and typing "SELECT * FROM Airinfo;" at the command line; all the data points used for user evaluation are stored there.
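The same query can be issued from Python so that evaluation.py can load the stored points directly. The sketch below uses the standard-library sqlite3 module purely for illustration; the actual database engine, file path, and column layout of "Airinfo" may differ in our deployment.

```python
import sqlite3

def load_airinfo(db_path):
    # Fetch every evaluation data point from the Airinfo table,
    # mirroring the manual "SELECT * FROM Airinfo;" query.
    conn = sqlite3.connect(db_path)
    try:
        return conn.execute("SELECT * FROM Airinfo;").fetchall()
    finally:
        conn.close()
```

Loading the whole table once and classifying in memory avoids a round trip to the database for every new sample.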
Here's the flow chart of our evaluation model:
Of course, to use this model the user needs to have the project hardware. After running collect.py on the Edison board and evaluation.py on the user's local machine, it will look like this:
In the example above, I collected the data from the kitchen without cooking anything, so the temperature, air quality, and dust readings were much like those in my reading room. Even though the model classified it correctly, it was not very certain about this output, giving only 40% confidence.
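One natural way a KNN model produces a confidence figure like 40% is to report the fraction of the k nearest neighbours that voted for the winning label. The sketch below illustrates that idea under assumed data; it is not necessarily how evaluation.py computes its percentage.

```python
from collections import Counter
import math

def knn_with_confidence(train, labels, x, k=5):
    # Predict the label of x and report what share of the
    # k nearest neighbours voted for that label.
    nearest = sorted(range(len(train)), key=lambda i: math.dist(train[i], x))[:k]
    votes = Counter(labels[i] for i in nearest)
    label, count = votes.most_common(1)[0]
    return label, count / k

# Hypothetical 1-D readings whose labels are split across several rooms:
train = [(1,), (2,), (3,), (4,), (5,), (10,)]
labels = ["kitchen", "kitchen", "room", "bath", "hall", "far"]
print(knn_with_confidence(train, labels, (0,)))  # winner with a weak 2-of-5 vote
```

With k=5, a 2-of-5 split among the neighbours yields exactly the kind of correct-but-uncertain 40% answer seen in the kitchen example.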