@@ -102,7 +102,7 @@ The scores returned by `AIcrowdEvaluator(...).evaluate()` are updated on the lea
 You can find more information on how the evaluation scripts are invoked at [[Evaluation Flow]](https://gitlab.aicrowd.com/aicrowd/evaluator-templates/-/tree/master/predictions-evaluator#evaluation-flow).
-The `AIcrowdEvaluator(...).render_status_update()` is invoked in a new process and will keep running as long as the evaluation is in progress. You can use this method to return some markdown content that we will use to display on the GitLab issue page for the participants. You can show the evaluation progress, live scores, and other interesting information that can improve the submission experience for the participant. You can also display images, videos and audios. You can upload the media files to s3 and insert the link in your markdown content. If you need help with uploading the files to s3 (or any file hosting provider), please reach out to us.
+The `AIcrowdEvaluator(...).render_current_status_as_markdown()` method is invoked in a new process and keeps running as long as the evaluation is in progress. Use it to return markdown content that we will display on the submission's GitLab issue page for the participants. You can show the evaluation progress, live scores, and other information that improves the submission experience for the participant. You can also embed images, videos, and audio: upload the media files to S3 (or any file hosting provider) and insert the links in your markdown content. If you need help with uploading the files, please reach out to us.
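To illustrate the renamed hook described above, here is a minimal sketch of what an evaluator might return from `render_current_status_as_markdown()`. The class and method names follow the documentation; the `progress` and `live_score` attributes, their values, and the S3 image URL are assumptions made up for this example, not part of the real template.

```python
# Hypothetical sketch of the status-rendering hook, NOT the actual
# AIcrowd evaluator template. Attribute names and values are assumed.

class AIcrowdEvaluator:
    def __init__(self, **kwargs):
        # In a real evaluator these would be updated as grading proceeds.
        self.progress = 0.42    # fraction of the test set evaluated (assumed)
        self.live_score = 0.87  # running metric value (assumed)

    def render_current_status_as_markdown(self) -> str:
        """Return markdown to show on the submission's GitLab issue page.

        Per the docs, this runs in a separate process for as long as the
        evaluation is in progress, so it should only read current state.
        """
        return "\n".join([
            "## Evaluation progress",
            f"- Completed: {self.progress:.0%}",
            f"- Live score: {self.live_score:.2f}",
            "",
            # Media can be uploaded to S3 (or any file host) and embedded
            # as ordinary markdown links; this URL is a placeholder.
            "![confusion-matrix](https://example-bucket.s3.amazonaws.com/cm.png)",
        ])
```

The returned string is plain markdown, so anything the GitLab issue renderer supports (tables, images, links) can appear in it.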