Profile and optimize the codebase
The goal is to profile the codebase against an exhaustive set of simple examples, to get a good sense of where compute resources are spent and which parts are the most resource-heavy. Poorly optimized parts should then stand out.
Good to do this before the public release of the package.
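As a starting point, profiling an example script can be done with the standard-library `cProfile`/`pstats` modules. The helper below is a minimal sketch (the function name `profile_callable` and the callable-based interface are illustrative choices, not part of the package):

```python
import cProfile
import io
import pstats


def profile_callable(fn, *args, top=10, **kwargs):
    """Run fn under cProfile and return (fn's result, report of the
    `top` functions sorted by cumulative time)."""
    profiler = cProfile.Profile()
    result = profiler.runcall(fn, *args, **kwargs)
    stream = io.StringIO()
    pstats.Stats(profiler, stream=stream).sort_stats("cumulative").print_stats(top)
    return result, stream.getvalue()


# Example: profile a stand-in workload instead of a real example script.
result, report = profile_callable(sum, range(1_000))
print(report)
```

In a scheduled CI job, each example's entry point would be passed in place of the stand-in workload and the reports archived as job artifacts.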
- benchmark and profile all examples in a new daily scheduled job -> https://gitlab.aicrowd.com/flatland/flatland/merge_requests/69/diffs
- benchmark and profile all notebooks (since examples will be moved to notebooks)
- coverage of running examples/notebooks/baselines separate from unit test coverage
- run baselines in a new daily scheduled job for integration testing (in the same way as benchmarks and profiling)
- define acceptance limits on the benchmarks, or use them for further reference only?
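If acceptance limits are adopted, the check could be as simple as comparing a fresh timing against a stored reference with a tolerance. A minimal sketch, assuming a hypothetical per-benchmark `reference_seconds` baseline kept from an earlier run:

```python
import timeit


def check_benchmark(fn, reference_seconds, tolerance=0.2, repeat=5):
    """Time fn and flag a regression if its best run is more than
    `tolerance` (fractional) slower than the stored reference."""
    best = min(timeit.repeat(fn, number=1, repeat=repeat))
    limit = reference_seconds * (1 + tolerance)
    return best <= limit, best


# Example: a trivial workload against a generous 1-second reference.
ok, seconds = check_benchmark(lambda: sum(range(10_000)), reference_seconds=1.0)
```

Whether the scheduled job should fail on a regression or merely report it is exactly the open question above; the helper supports both by returning the flag rather than raising.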