Non-negative predictions
Hi all,
from your experience, what is the best way to ensure non-negative predictions from a regression model?
1. _post-processing_: replacing predicted values below zero with zero,
2. _loss function_: applying a loss function that penalizes negative predictions,
3. _non-negative activation_: adding a final layer that enforces non-negative outputs (e.g., ReLU),
4. _log-transformation_: transforming the target variable to log scale before fitting and transforming predictions back to the original scale afterwards,
5. or something entirely different?
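For concreteness, here is a small NumPy sketch of what I mean by options 1–4. The arrays and the penalty weight `lam` are purely illustrative, not from any real model:

```python
import numpy as np

y_pred = np.array([2.5, -0.3, 1.1, -0.01])  # raw model outputs, some negative

# 1. Post-processing: clip negatives to zero after prediction.
clipped = np.clip(y_pred, 0.0, None)

# 2. Loss penalty: add a hinge-style term to the training loss that
#    punishes negative outputs (lam is a made-up penalty weight).
def penalized_mse(y_true, y_hat, lam=10.0):
    mse = np.mean((y_true - y_hat) ** 2)
    return mse + lam * np.mean(np.maximum(-y_hat, 0.0) ** 2)

# 3. Non-negative activation: softplus = log(1 + exp(x)) is a smooth
#    stand-in for ReLU with strictly positive output; logaddexp(0, x)
#    computes it in a numerically stable way.
softplus = np.logaddexp(0.0, y_pred)

# 4. Log transformation: fit the model on log(y) (targets must be > 0),
#    then exponentiate predictions; exp() is positive by construction.
y_true = np.array([3.0, 0.5, 10.0])
z = np.log(y_true)   # train on z instead of y
y_back = np.exp(z)   # invert after predicting in log space
```

My rough understanding of the trade-offs: clipping is simple but the model never learns the constraint, while the activation and log-transform approaches bake it into the model itself.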
Eager to hear your thoughts.
Best,
Steffen