Augmenting artifact images to completely balance out the classes worsens performance. The same goes for per_image_standardization.
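For reference, a minimal NumPy sketch of what per_image_standardization does, assuming it mirrors TensorFlow's documented behavior (zero mean, unit variance, with the divisor capped below by 1/sqrt(N) so uniform images aren't amplified):

```python
import numpy as np

def per_image_standardization(image):
    # Standardize a single image to roughly zero mean and unit variance.
    # Mirrors tf.image.per_image_standardization: the divisor is
    # max(stddev, 1/sqrt(num_elements)) to avoid dividing by ~0.
    image = image.astype(np.float64)
    mean = image.mean()
    adjusted_std = max(image.std(), 1.0 / np.sqrt(image.size))
    return (image - mean) / adjusted_std
```

A constant image maps to all zeros instead of blowing up, which is the point of the capped divisor.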
Recall is a better indicator of the proportion of correctly identified positives than the raw number of false negatives.
Instead of recall alone, the F-beta score can be used: it adjusts the F1 score in favor of recall while still taking precision into account, so precision also has to be relatively high, not just recall.
When class balancing is applied, the model learns much faster.
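An alternative to balancing via augmentation is loss weighting. A sketch of inverse-frequency weights (the same formula as sklearn's compute_class_weight('balanced'); the resulting dict is the shape Keras expects for model.fit(..., class_weight=...)):

```python
import numpy as np

def balanced_class_weights(labels):
    # w_c = n_samples / (n_classes * count_c), i.e. rarer classes
    # get proportionally larger weight in the loss.
    labels = np.asarray(labels)
    classes, counts = np.unique(labels, return_counts=True)
    weights = len(labels) / (len(classes) * counts)
    return dict(zip(classes.tolist(), weights.tolist()))

# e.g. 90 negatives, 10 positives:
print(balanced_class_weights([0] * 90 + [1] * 10))
# {0: 0.555..., 1: 5.0}
```

This weights the loss instead of duplicating rare-class images, so it may sidestep the performance drop seen with augmentation-based balancing.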
The LeNet model reaches peak performance in 2 epochs, after which training and validation metrics diverge. Maybe with layer parameter optimization it can improve.
KerasTuner's prebuilt hypermodels won't be used for now: they are computationally expensive to tune, which isn't worth it when the goal here is binary classification, while those models are more suitable for complex classification tasks.
Hyperband finds slightly better models than Bayesian optimization, but it takes about twice as long as BayesOpt.
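The extra cost is easier to reason about from Hyperband's bracket schedule. A sketch of the schedule from the original Hyperband paper (Li et al.), which is roughly what KerasTuner implements; max_resource and eta correspond to KerasTuner's max_epochs and factor, and the values here are illustrative:

```python
import math

def hyperband_brackets(max_resource=27, eta=3):
    # Enumerate Hyperband brackets. Each bracket is a list of
    # successive-halving rounds (num_configs, epochs_per_config):
    # many configs trained briefly, survivors trained longer.
    s_max = int(math.log(max_resource, eta))
    brackets = []
    for s in range(s_max, -1, -1):
        n = math.ceil((s_max + 1) * eta ** s / (s + 1))
        r = max_resource // eta ** s
        rounds = [(n // eta ** i, r * eta ** i) for i in range(s + 1)]
        brackets.append(rounds)
    return brackets

for rounds in hyperband_brackets():
    print(rounds)
# [(27, 1), (9, 3), (3, 9), (1, 27)]
# [(12, 3), (4, 9), (1, 27)]
# [(6, 9), (2, 27)]
# [(4, 27)]
```

With max_resource=27 and eta=3 it starts 49 distinct configurations across 4 brackets, versus one full-length trial at a time for Bayesian optimization, which is where the wall-clock difference comes from.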
Commented out line 88 in .venv/lib/python3.10/site-packages/tensorboard/plugins/scalar/summary_v2.py, as it prevented TensorBoard from working with KerasTuner.
The F-beta score varies depending on the image padding and on whether the newest dataset images were used or images were picked at random. The performance data should therefore be formatted in some kind of table in the future, to better analyze the best model.
The F-beta score is currently used with beta=2, which encodes the approximation that recall is twice as important as precision.
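For reference, the metric itself is just a weighted harmonic mean of precision and recall; a minimal sketch:

```python
def fbeta(precision, recall, beta=2.0):
    # F-beta = (1 + b^2) * P * R / (b^2 * P + R).
    # beta > 1 weights recall more heavily; beta = 1 gives F1.
    # Assumes P and R are not both zero (denominator would be 0).
    b2 = beta ** 2
    return (1 + b2) * precision * recall / (b2 * precision + recall)

print(fbeta(0.8, 0.9))  # ~0.878: pulled toward recall
```

With beta=2 a drop in recall hurts the score roughly four times as much as the same drop in precision, which matches the intent stated above.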
For the current hypermodel configuration, Hyperband peaks at around a 0.993 F-beta score even when given additional resources. Further changes to the configuration will be required in order to increase performance.
When taking the newest images from the dataset instead of picking them at random as was done up until now, recall stays about the same, but precision drops. This is probably because of the higher presence of insects in the camera footage.
An attempt was made to replace the current detector with a YOLO model, but it was scrapped because of poor results. When given an image containing a meteor, the model either doesn't detect it or detects it with low confidence. This might be because meteors are too similar to other artifacts in the images for the model to discern what is what. Since meteors are very small compared to the entire image, this also creates an imbalance between meteor and non-meteor regions.