Infrastructure for comparing case only forecasts to case forecasts via Rt #68

Open
seabbs opened this issue May 15, 2020 · 2 comments
Labels: enhancement (New feature or request), question (Further information is requested)

Comments

seabbs (Contributor) commented May 15, 2020

This package focuses on providing case forecasts by first forecasting Rt values. An outstanding question is how much value this adds over a simple case-only forecast. It would be useful to add tooling so that case-only forecasts can easily be compared to Rt-based forecasts, but this may require a complex rewrite. Opening this as a starting point for thinking about it.
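For reference, the two approaches look roughly like this as a self-contained toy sketch in base R (this is not EpiSoon code, and the renewal-style mapping from Rt to cases is deliberately crude):

```r
## Toy illustration only (not EpiSoon code): an Rt-based case forecast versus a
## direct case-only forecast on simulated data.
set.seed(1)
cases   <- round(100 * exp(cumsum(rnorm(30, 0.02, 0.05))))  # simulated daily cases
gi_mean <- 5                                                # assumed mean generation interval (days)

## Rt-based route: forecast Rt first (here: carry a crude latest estimate forward),
## then map it back to cases with a renewal-style daily growth factor Rt^(1 / gi_mean).
rt_latest      <- (cases[30] / cases[23])^(gi_mean / 7)     # weekly growth converted to Rt
rt_forecast    <- rep(rt_latest, 7)                         # naive 7-day-ahead Rt forecast
rt_based_cases <- cases[30] * cumprod(rt_forecast^(1 / gi_mean))

## Case-only route: fit a time-series model directly to the case counts.
direct_fit   <- arima(log(cases), order = c(1, 1, 0))
direct_cases <- exp(predict(direct_fit, n.ahead = 7)$pred)

round(rbind(rt_based = rt_based_cases, direct = as.numeric(direct_cases)))
```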

seabbs added the enhancement and question labels on May 15, 2020
nikosbosse (Contributor) commented

So I wrote some functions to do direct case forecasts on the branch add_direct_case_fc (you will very much hate the function names :D). That was easy, but ultimately it is not too helpful, because we can't really do any iterative evaluation and ensembling on these yet within the framework.
My general idea for a rewrite is to separate the following steps:

  1. forecasting / iterative forecasting for evaluation purposes
  2. ensembling / iterative ensembling for evaluation purposes
  3. scoring for evaluation purposes

My suggestions would therefore be the following:

  • rewrite the forecast_rt function (or alternatively, make a wrapper around it) so that it generates forecasts for several models
  • rewrite the forecast_cases function (or make a wrapper) so that it can forecast cases based on either Rt or a direct case prediction model
  • create a 'form_ensemble' function that takes all the Rt or case predictions as input and creates an ensemble based on these predictions
  • only score after all predictions / ensembles have been generated.
  • This means rewriting the compare_timeseries / compare_models / evaluate_model functions so that scoring is done as the last step in the outermost function call, instead of inside evaluate_model in the innermost call (a toy sketch of this ordering follows the list).
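To make that ordering concrete, here is a self-contained toy sketch (the helpers and model functions are made up for illustration and are not EpiSoon functions; the real pipeline would work on sample-based forecasts rather than point forecasts):

```r
## Toy sketch of the proposed split: forecast everything, then ensemble, then score last.
cases  <- c(120, 132, 150, 141, 160, 172, 168, 181, 190, 205)     # observed cases
models <- list(
  mean_model  = function(y, h) rep(mean(tail(y, 7)), h),
  drift_model = function(y, h) tail(y, 1) + (1:h) * mean(diff(tail(y, 7)))
)

## 1. Forecasting: every model produces a 7-day-ahead case forecast.
forecasts <- lapply(models, function(m) m(cases, h = 7))

## 2. Ensembling: combine all model forecasts (here a simple mean ensemble).
form_ensemble <- function(fc_list) Reduce(`+`, fc_list) / length(fc_list)
forecasts$ensemble <- form_ensemble(forecasts)

## 3. Scoring: only once all forecasts and ensembles exist (toy absolute error).
observed_future <- c(210, 215, 230, 228, 240, 251, 260)
sapply(forecasts, function(f) mean(abs(f - observed_future)))
```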

Happy to chat about it anytime

seabbs (Contributor, Author) commented May 27, 2020

So I am afraid I forgot about this and just implemented some case-only forecasts in master as simple wrappers around the Rt forecasts. Definitely my bad - take a look and see what you think.

I don't think I have the time for a rewrite at the moment, so my interest would be adding an optional logical to evaluate_model that allows a case-only forecast to be made with the same model and scored (a toy sketch of that switch is below).
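Roughly, the idea is a single logical along these lines (a toy sketch only, not EpiSoon's actual evaluate_model, and no such argument exists yet):

```r
## Toy sketch only: a single case_only logical picks between the Rt-based route and a
## direct case-only forecast from the same model; scoring stays unchanged downstream.
evaluate_model_toy <- function(cases, rts, model, horizon = 7, case_only = FALSE) {
  if (case_only) {
    model(cases, horizon)                      # forecast cases directly
  } else {
    rt_fc <- model(rts, horizon)               # forecast Rt first
    tail(cases, 1) * cumprod(rt_fc^(1 / 5))    # crude renewal-style mapping, gi ~ 5 days
  }
}

## The same naive model used for both routes:
naive <- function(y, h) rep(tail(y, 1), h)
evaluate_model_toy(cases = c(100, 120, 150), rts = c(1.2, 1.1, 1.05),
                   model = naive, case_only = TRUE)
```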

For now I think we should be testing Rt against case forecasting in EpiSoon and ensembling them outside of EpiSoon, but I definitely agree that at some point ensembles of case-only + Rt forecasts make sense.

All this is up for grabs if you make a working prototype though!
