# neural-forecast
**Non-negative predictions**

Hi all, in your experience, what is the best method to ensure non-negative predictions?

1. _Post-processing_: replace predicted values below zero with zero.
2. _Loss function_: apply a loss function that penalizes predicted values below zero.
3. _Non-negative activation_: add a final layer that enforces non-negative outputs (e.g., ReLU).
4. _Log transformation_: transform the target variable to log scale before training, then transform predictions back to the original scale.
5. Or something else entirely?

Eager to hear your thoughts. Best, Steffen
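For reference, options 1 and 4 can be sketched in a few lines of NumPy. This is a generic illustration, not any particular library's API; the array values are made up, and the log transform assumes strictly positive targets (use `log` / `exp` rather than `log1p` / `expm1`, since `expm1` of a negative prediction is still negative):

```python
import numpy as np

preds = np.array([3.2, -0.7, 1.5, -0.1])  # hypothetical raw model outputs

# Option 1: post-processing -- clip negative predictions to zero.
clipped = np.clip(preds, 0.0, None)

# Option 4: log-transformation -- train against log(y) (targets must be
# strictly positive), then invert with exp, which is always positive.
y = np.array([10.0, 0.5, 3.0])        # hypothetical strictly positive targets
log_target = np.log(y)                # the model would be trained on this
back_transformed = np.exp(log_target) # inverse transform; always > 0
```

Note that with option 4, any real-valued prediction in log space maps back to a strictly positive value, so non-negativity is guaranteed by construction rather than by clipping.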
My order of trying would be:

1. Simply clip to zero. This usually works quite well already, especially with neural networks.
2. Use a loss that only allows positive predictions (not a penalty term; it should enforce positivity by construction).
3. Enforce positivity in the architecture. This is more involved, so start with the options above.
4. Log-transform the data. I don't like this, as it creates all sorts of other issues down the line.
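One common way to enforce positivity in the architecture (point 3 above) is a softplus output head. A minimal NumPy sketch, assuming the network's final raw scores are unconstrained reals (the values here are illustrative):

```python
import numpy as np

def softplus(z):
    """Numerically stable softplus: log(1 + exp(z)); output is always > 0."""
    return np.logaddexp(0.0, z)

raw_scores = np.array([-5.0, 0.0, 2.0])  # hypothetical unconstrained outputs
positive_preds = softplus(raw_scores)    # strictly positive predictions
```

Softplus is smooth everywhere, unlike ReLU, whose zero gradient for negative inputs can stall training; that is one reason it is often chosen when outputs must be strictly positive.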