This package focuses on providing case forecasts by first forecasting Rt values. An outstanding question is what value this adds over a simple case-only forecast. It would be interesting to add tooling so that case-only forecasts can easily be compared to Rt-based forecasts, but this may require a complex rewrite. Opening this issue as a starting point for thinking about it.
So I wrote some functions to do direct case forecasts on the branch add_direct_case_fc (you will very much hate the function names :D). That was easy, but ultimately it is not too helpful, because we can't yet do any iterative evaluation or ensembling on these within the framework.
My general idea for a rewrite is to separate the following steps:

- forecasting / iterative forecasting for evaluation purposes
- ensembling / iterative ensembling for evaluation purposes
- scoring for evaluation purposes
My suggestions would therefore be the following:

- rewrite the forecast_rt function (or alternatively, make a wrapper around it) so that it generates forecasts for several models
- rewrite the forecast_cases function (or make a wrapper) so that it can forecast cases based on either Rt or a direct case prediction model
- create a 'form_ensemble' function that takes all the Rt or case predictions as input and creates an ensemble from these predictions
- only score after all predictions / ensembles have been generated. This means rewriting the compare_timeseries / compare_models / evaluate_model functions so that scoring is done as the last step in the outermost function call, instead of in evaluate_model in the innermost function call
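The proposed pipeline could be sketched roughly as below. This is a toy illustration only: the function names, signatures, and the mean-ensemble and MAE scoring are all placeholder assumptions, not the EpiSoon API. The point is the ordering — generate all model forecasts first, then ensemble, and score only at the end.

```r
# Toy per-model forecasting step: each "model" here is just a function
# that returns a numeric vector of predictions for a given horizon.
forecast_all_models <- function(models, horizon) {
  lapply(models, function(model) model(horizon))
}

# Hypothetical 'form_ensemble': here a simple mean across model predictions.
form_ensemble <- function(predictions) {
  Reduce(`+`, predictions) / length(predictions)
}

# Scoring happens last (mean absolute error against observed values).
score_forecast <- function(forecast, observed) {
  mean(abs(forecast - observed))
}

models <- list(
  constant = function(h) rep(100, h),
  growing  = function(h) 100 * 1.05^seq_len(h)
)
predictions <- forecast_all_models(models, horizon = 7)
ensemble <- form_ensemble(predictions)
observed <- rep(105, 7)
score_forecast(ensemble, observed)
```

In a real implementation the prediction objects would carry model and date metadata so that scoring in the outermost call can still be broken down by model.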
So I am afraid I forgot about this and just implemented some case-only forecasts on master as simple wrappers around Rt forecasts. Definitely my bad - take a look and see what you think.
I don't think I have the time for a rewrite at the moment, so my interest would be adding an optional logical to evaluate_model that allows a case-only forecast to be run with the same model and scored.
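Such an optional logical might look something like the sketch below. Everything here is hypothetical — the argument name `case_only` and the two placeholder forecasting functions are illustrations, not EpiSoon code — but it shows the idea of switching between the direct and Rt-based paths within one evaluation call so both can be scored the same way.

```r
# Placeholder Rt-based path: estimate Rt from the last two case counts
# and carry it forward (purely illustrative).
forecast_via_rt <- function(cases, horizon) {
  rt <- tail(cases, 1) / tail(cases, 2)[1]
  tail(cases, 1) * rt^seq_len(horizon)
}

# Placeholder direct case forecast: last observed value carried forward.
forecast_direct <- function(cases, horizon) {
  rep(tail(cases, 1), horizon)
}

# Hypothetical 'case_only' logical switching between the two paths.
evaluate_model <- function(cases, horizon, case_only = FALSE) {
  if (case_only) {
    forecast_direct(cases, horizon)
  } else {
    forecast_via_rt(cases, horizon)
  }
}

cases <- c(100, 110, 121)
evaluate_model(cases, horizon = 3, case_only = TRUE)  # direct case path
evaluate_model(cases, horizon = 3)                    # Rt-based path
```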
For now I think we should be testing Rt-based against case-only forecasting in EpiSoon and ensembling them outside of EpiSoon, but I definitely agree that at some point ensembles of case-only + Rt forecasts make sense.
All this is up for grabs if you make a working prototype though!