Description
In the `assign_labels` function in `evaluation.py`, we have the following code snippet:
```python
# Compute proportions of spike activity per class.
proportions = rates / rates.sum(1, keepdim=True)
proportions[proportions != proportions] = 0  # Set NaNs to 0

# Neuron assignments are the labels they fire most for.
assignments = torch.max(proportions, 1)[1]
```
Does this introduce a first-index bias? If a neuron hasn't fired for any class, its row of `rates` sums to zero, so the division produces NaNs, which are then set to 0, leaving a row of all zeros in `proportions`. `torch.max` doesn't throw an error on such a row; it identifies the maximum value as 0 and returns the index of its first occurrence, which is always index 0.
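For illustration, here's a minimal reproduction (the `rates` tensor below is a made-up example with three neurons and two classes, where the last neuron has never fired):

```python
import torch

# Made-up per-class firing rates: 3 neurons x 2 classes.
# The last neuron has never fired for any class.
rates = torch.tensor([[2.0, 6.0],
                      [0.0, 4.0],
                      [0.0, 0.0]])

proportions = rates / rates.sum(1, keepdim=True)
proportions[proportions != proportions] = 0  # 0 / 0 -> NaN -> 0

assignments = torch.max(proportions, 1)[1]
print(assignments)  # tensor([1, 1, 0]): the silent neuron is assigned class 0
```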
This potentially creates a synthetic over-representation of the first class. Specifically:
- Skewed Voting: inactive neurons contribute "votes" for Label 0 during inference, artificially boosting accuracy for the first class and decreasing it for the others.
- Training Misinterpretation: early in training, before neurons have specialized, the model will appear to have a strong preference for Label 0, misleading anyone monitoring learning progress.
A potential fix: maybe mask out the neurons with zero activity? A rough sketch of that idea follows.
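As a sketch only (assuming `rates` has shape `(n_neurons, n_classes)`; the function name `assign_labels_masked` and the `-1` sentinel are hypothetical, not the library's API):

```python
import torch

def assign_labels_masked(rates: torch.Tensor) -> torch.Tensor:
    # Hypothetical variant of assign_labels: same proportion computation,
    # but silent neurons get the sentinel label -1 instead of class 0.
    proportions = rates / rates.sum(1, keepdim=True)
    proportions[proportions != proportions] = 0  # Set NaNs to 0
    assignments = torch.max(proportions, 1)[1]
    assignments[rates.sum(1) == 0] = -1  # Mask out neurons with no activity
    return assignments
```

Downstream voting would then need to skip neurons whose assignment is `-1` (or equivalently weight them by zero), so they no longer inflate the vote count for Label 0.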