# neural-forecast

Martin Bel

02/23/2023, 1:38 PM
Hi all,
Here is a takeaway from some experiments I ran with NHiTS.
Of all the parameters I experimented with, `scaler_type` was the one that had the largest effect on the results.
With the default (`scaler_type="identity"`) I get this kind of result:

If I use any of the other `scaler_type` options, such as `"robust"` or `"invariant"`, I get very different results.
This is with `scaler_type="robust"`:

To me the first one looks off, but the MAE and MAPE are actually similar.
Perhaps a different `scaler_type` should be set as the default. Any thoughts on this?
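For anyone following along: in neuralforecast the scaler is chosen via the model constructor, e.g. `NHITS(..., scaler_type="robust")`. Below is a minimal sketch of what a robust scaler does, centering by the median and dividing by the IQR; the helper name is mine and the library's actual implementation may use different robust statistics.

```python
import numpy as np

def robust_scale(y):
    """Sketch of 'robust' scaling: center by the median and divide
    by the interquartile range (IQR). Both statistics are far less
    sensitive to outliers than the mean/std used by standard scaling.
    Hypothetical helper, not neuralforecast's actual code."""
    median = np.median(y)
    q75, q25 = np.percentile(y, [75, 25])
    return (y - median) / (q75 - q25)

y = np.array([1.0, 2.0, 3.0, 4.0, 100.0])  # series with one outlier
scaled = robust_scale(y)
# The bulk of the data lands in a small range around 0; the outlier
# stays large but no longer distorts the scale of the other points.
```

The design point is that with `"identity"` no scaling happens at all, so series with large or shifting levels reach the network raw, which is one plausible reason the two runs above look so different while MAE/MAPE stay similar.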

Farzad E

02/23/2023, 2:32 PM

Cristian (Nixtla)

02/23/2023, 2:57 PM
`robust` as the default.
👍 1


Martin Bel

02/24/2023, 6:45 PM
I was reading the Neural Prophet paper and found an interesting paragraph, relevant to this discussion.
It seems their default normalization "scales the minimum value to 0.0 and the 95th quantile to 1.0", which seems super hacky but might work OK if the data has outliers.
Clipping the scale at the 95th percentile is quite aggressive, but perhaps the 99th would be OK.
I guess you could do this preprocessing manually anyway.
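That quoted min-to-0 / 95th-quantile-to-1 scheme is easy to sketch, e.g. as a manual preprocessing step. The helper below is hypothetical, written only to make the quoted description concrete:

```python
import numpy as np

def quantile_minmax_scale(y, upper_q=0.95):
    """Scale so the minimum maps to 0.0 and the upper_q quantile
    maps to 1.0, per the normalization described for Neural Prophet.
    Values above that quantile (the outliers) simply land above 1.0
    instead of compressing the rest of the series toward 0."""
    lo = np.min(y)
    hi = np.quantile(y, upper_q)
    return (y - lo) / (hi - lo)

y = np.arange(100.0)             # 0, 1, ..., 99
scaled = quantile_minmax_scale(y)
# scaled[0] is exactly 0.0; the top ~5% of values exceed 1.0
```

Compared with plain min-max scaling, a single huge outlier no longer defines the whole [0, 1] range, which is presumably the point of the 95th-quantile anchor.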


Farzad E

02/24/2023, 6:55 PM

Cristian (Nixtla)

02/24/2023, 6:56 PM
Thanks for sharing that **@Martin Bel**. It's definitely a valid approach. Our `robust` scaler already deals with outliers, for example by using the median instead of the mean for the scale. One key difference with our normalization is that for window-based models (MLP, NBEATS, NHITS, TFT) we normalize each input window separately, instead of the whole time series.
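A sketch of the per-window idea described above: each input window is normalized with its own statistics, so level shifts and trend between windows don't leak into the scale. Median/IQR is used here for illustration; the library's exact per-window statistics may differ.

```python
import numpy as np

def scale_each_window(windows):
    """Normalize every input window independently, instead of
    scaling the whole series once. Each row is one window; its own
    median/IQR are used, so a window at level 10 and a window at
    level 10,000 end up on the same scale. Hypothetical helper."""
    med = np.median(windows, axis=1, keepdims=True)
    q75 = np.percentile(windows, 75, axis=1, keepdims=True)
    q25 = np.percentile(windows, 25, axis=1, keepdims=True)
    return (windows - med) / (q75 - q25)

windows = np.array([[ 1.0,  2.0,  3.0,  4.0],    # low-level window
                    [10.0, 20.0, 30.0, 40.0]])   # 10x the level
scaled = scale_each_window(windows)
# Both rows normalize to the same values: per-window scaling has
# removed the level difference entirely.
```

Scaling the whole series once would instead preserve that level gap, which is exactly the behavior being contrasted here.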

Martin Bel

02/24/2023, 7:04 PM
I see, the robust one makes sense to me. I think this soft method can work better when you have sparse data; I'm just surprised it's their default method.
I guess normalizing the entire series is just wrong, right? I'm not sure if NP is doing this, but it's good to know this is how Nixtla handles it!
I haven't used NP much, **@Farzad E**, but Prophet is generally not amazing, it's just easy to use.