# MNIST evaluator
This repository uses the `predictions-evaluator` template to evaluate code submissions.
For more information on how the evaluator works, please refer to the [template's reference page](https://gitlab.aicrowd.com/aicrowd/evaluator-templates/tree/master/predictions-evaluator).
The starter kit for this template is available at [https://gitlab.aicrowd.com/aicrowd/mnist-starter-kit](https://gitlab.aicrowd.com/aicrowd/mnist-starter-kit).
# Table of contents
- [How does this work?](#how-does-this-work)
- [How to write evaluators?](#how-to-write-evaluators)
* [Submission starter kit](#submission-starter-kit)
* [Evaluation code](#evaluation-code)
+ [Evaluation script](#evaluation-script)
+ [Launcher scripts](#launcher-scripts)
+ [Evaluator configuration](#evaluator-configuration)
+ [Dataset configuration](#dataset-configuration)
- [File structure](#file-structure)
- [Specifying dataset configuration](#specifying-dataset-configuration)
# How does this work?
## We have curated frequently asked questions and common mistakes on Discourse; you can read them in the FAQ and Common Mistakes thread on the challenge's Discourse forum.
1. Participant forks the [starter kit](https://gitlab.aicrowd.com/aicrowd/mnist-starter-kit).
2. Participant adds their code to the [starter kit](https://gitlab.aicrowd.com/aicrowd/mnist-starter-kit), commits it, and creates a git tag (manually or using the helper scripts).
3. AIcrowd triggers the evaluation pipeline.
4. AIcrowd sets up the software runtime needed to run the participant's code. For more information on how participants can specify their software runtime please refer to [this post](https://discourse.aicrowd.com/t/how-to-specify-runtime-environment-for-your-submission/2274).
5. AIcrowd runs the participant's code on a validation dataset. The logs during this phase are exposed to the participants so that they can debug any code errors.
a. The information on how the participant's code should run comes from the evaluator code.
6. AIcrowd runs the participant's code on the test dataset. The logs during this phase are generally not exposed to the participants to prevent data leaks. However, the logs can be exposed by setting `logs: true` in `aicrowd.yaml` (file inside the evaluator code repo).
7. AIcrowd runs evaluator code (written by the organizer) that will aggregate the results generated by the participant's code and return a score.
# How to write evaluators?
The evaluation pipeline has two code components:
1. Submission [starter kit](https://gitlab.aicrowd.com/aicrowd/mnist-starter-kit) (submitted by the participant).
2. Evaluator code (set up by the organizers).
## Submission starter kit
The [starter kit](https://gitlab.aicrowd.com/aicrowd/mnist-starter-kit) should contain an `aicrowd.json` file with `challenge_id` attribute pointing to the challenge. For example, if the challenge page is https://www.aicrowd.com/challenges/my-challenge, then the contents of `aicrowd.json` should look similar to this.
```json
{
"challenge_id": "my-challenge"
}
```
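As a quick sanity check before tagging a submission, participants can verify the file locally with a few lines of Python; this is only a sketch, and `my-challenge` is just the placeholder slug from the example above.
```python
import json

# Minimal check (illustrative): the challenge_id must match the challenge slug on AIcrowd.
with open("aicrowd.json") as f:
    config = json.load(f)

assert config.get("challenge_id") == "my-challenge", "Update challenge_id in aicrowd.json"
print("aicrowd.json looks good:", config["challenge_id"])
```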
For the remainder of the [starter kit](https://gitlab.aicrowd.com/aicrowd/mnist-starter-kit), we recommend structuring it similarly to the example [starter kit](https://gitlab.aicrowd.com/aicrowd/mnist-starter-kit), containing:
1. An [`evaluation_utils`](https://gitlab.aicrowd.com/aicrowd/mnist-starter-kit/-/tree/master/evaluation_utils) directory with scripts for local evaluation. The files in the [starter kit](https://gitlab.aicrowd.com/aicrowd/mnist-starter-kit) are only for participants' reference. These files can be ignored/replaced during evaluation. Explained in [writing launcher scripts](#launcher-scripts).
2. An optional `models` directory where participants upload their model weights.
3. A [`local_evaluation.py`](https://gitlab.aicrowd.com/aicrowd/mnist-starter-kit/-/blob/master/local_evaluation.py) that participants can use to run their code locally.
4. A [`run.py`](https://gitlab.aicrowd.com/aicrowd/mnist-starter-kit/-/blob/master/run.py) where participants can add their code in a well-defined class interface.
a. It is recommended that organizers add a class interface in this file with as much inline documentation as possible for the participants (an illustrative sketch follows this list).
b. It is recommended to avoid adding evaluation logic from the organizer's side to this file. Any evaluation-related code or utility functions can remain part of the [`evaluation_utils`](https://gitlab.aicrowd.com/aicrowd/mnist-starter-kit/-/tree/master/evaluation_utils) package.
5. Runtime setup files to run the starter kit code. For a list of supported runtime configurations, please refer to [this post](https://discourse.aicrowd.com/t/how-to-specify-runtime-environment-for-your-submission/2274).
Please use the [example starter kit](https://gitlab.aicrowd.com/aicrowd/mnist-starter-kit) as a starting point when creating your own starter kit.
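For illustration, the class interface in `run.py` could look like the sketch below. The class and method names are placeholders, not the ones mandated by the starter kit; the actual interface is defined by the starter kit's `base_predictor.py`, discussed later.
```python
class MNISTPredictor:
    """Participants fill in this interface. Names here are illustrative only."""

    def prediction_setup(self):
        """Called once before inference starts.

        Load model weights (for example, from the optional `models` directory) here.
        """
        raise NotImplementedError

    def prediction(self, image):
        """Return the predicted digit (0-9) for a single input image."""
        raise NotImplementedError
```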
## Evaluation code
The repo can be split into three components.
1. Evaluation script (`AIcrowdEvaluator` class in [`evaluator.py`](evaluator.py#L15)).
2. Launcher scripts (scripts that will start the participant code; placed in the [`data`](data) directory).
3. Evaluator configuration ([`aicrowd.yaml`](aicrowd.yaml)).
### Evaluation script
`AIcrowdEvaluator` can be a simple wrapper around your existing evaluation scripts. The class needs to implement an `evaluate` method.
```python
class AIcrowdEvaluator:
    def __init__(self, **kwargs):
        pass

    def render_current_status_as_markdown(self) -> str:
        return ""

    def live_evaluate(self) -> dict:
        # Not compulsory but encouraged.
        # For long-running evaluations, this helps participants understand the performance
        # of their models before the evaluation completes.
        # Refer to the example evaluator.py at https://gitlab.aicrowd.com/aicrowd/mnist-code-evaluator/-/tree/master/evaluator.py
        return {}

    def evaluate(self):
        scores = {
            "score": 0,
            "score_secondary": 0,
        }
        return scores
```
The scores returned by `AIcrowdEvaluator(...).evaluate()` are updated on the leaderboard.
**Note:** Please refer to [this article](https://wiki.aicrowd.com/share/efa47829-ef4e-452e-b75e-c5ac3fb24290) for how to use the `scores` dictionary. You can add multiple metrics, private scores, and media using the `scores` dictionary.
You can find more information on how the evaluation scripts are invoked at [[Evaluation Flow]](https://gitlab.aicrowd.com/aicrowd/evaluator-templates/-/tree/master/predictions-evaluator#evaluation-flow).
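For concreteness, here is a minimal sketch of how `evaluate()` might be fleshed out for a digit-classification task, comparing a predictions CSV produced by the participant's code against the ground truth. The predictions path, file names, and CSV layout are assumptions for illustration, not part of the template.
```python
import csv
import os


class AIcrowdEvaluator:
    def __init__(self, predictions_path="predictions.csv", **kwargs):
        # Where the launcher script wrote the participant's predictions (assumed path).
        self.predictions_path = predictions_path
        # Ground truth mount point; documented in the dataset configuration section below.
        self.ground_truth_dir = os.getenv("AICROWD_GROUND_TRUTH_DIR", "data/ground_truth_data")

    @staticmethod
    def _read_labels(path):
        # Assumed CSV layout: image_id,label
        with open(path) as f:
            return {row["image_id"]: row["label"] for row in csv.DictReader(f)}

    def evaluate(self):
        truth = self._read_labels(os.path.join(self.ground_truth_dir, "labels.csv"))
        preds = self._read_labels(self.predictions_path)
        correct = sum(1 for image_id, label in truth.items() if preds.get(image_id) == label)
        accuracy = correct / max(len(truth), 1)
        return {"score": accuracy, "score_secondary": accuracy}
```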
The `AIcrowdEvaluator(...).render_current_status_as_markdown()` method is invoked in a new process and keeps running as long as the evaluation is in progress. You can use this method to return markdown content that we display on the GitLab issue page for the participants. You can show the evaluation progress, live scores, and other information that improves the submission experience. You can also display images, videos, and audio: upload the media files to S3 and insert the links in your markdown content. If you need help uploading files to S3 (or any other file hosting provider), please reach out to us.
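As an example, a status renderer could return a short progress report. The counters referenced below are hypothetical attributes that your own bookkeeping would update while the evaluation runs.
```python
class AIcrowdEvaluator:
    # ... __init__, evaluate, etc. as above ...

    def render_current_status_as_markdown(self) -> str:
        # _images_processed and _total_images are hypothetical counters maintained
        # by your evaluator while the evaluation is in progress.
        processed = getattr(self, "_images_processed", 0)
        total = getattr(self, "_total_images", 0) or 1
        return (
            "## Evaluation progress\n\n"
            f"- Images processed: {processed}/{total}\n"
            f"- Progress: {100 * processed / total:.1f}%\n"
        )
```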
### Launcher scripts
These are the scripts used to start the evaluation using the participants' code. Typically, these include:
1. Entrypoint scripts (a Python script that imports the participant's prediction class and starts the evaluation loop, plus a bash script to call the Python script)
2. Evaluation utilities
For example, let us consider the [MNIST starter kit](https://gitlab.aicrowd.com/aicrowd/mnist-starter-kit). It has a [`local_evaluation.py`](https://gitlab.aicrowd.com/aicrowd/mnist-starter-kit/-/blob/master/local_evaluation.py) and an [`evaluation_utils`](https://gitlab.aicrowd.com/aicrowd/mnist-starter-kit/-/tree/master/evaluation_utils) directory. During the evaluation, we use a [`predict.py`](data/predict.py) that comes from the evaluator repo to start the evaluation instead of the `local_evaluation.py`. We also replace the files in the [`evaluation_utils` directory in the starter kit](https://gitlab.aicrowd.com/aicrowd/mnist-starter-kit/-/tree/master/evaluation_utils) with the [files from the evaluator repo](data/evaluation_utils). This drops any changes that participants might have made to these files and also gives you the flexibility to add hidden functions as needed.
**Note:** Only the files placed in the [`data`](data) directory of the evaluator repo can be placed in the participant's code during evaluations. Even among these files, the files that need to be mounted should be explicitly defined in `aicrowd.yaml`. [[refer]](aicrowd.yaml#L46)
In this repository, the flow looks somewhat like this during an evaluation:
- AIcrowd systems read [`aicrowd.yaml`](aicrowd.yaml#L56) for mountable files (defined under `evaluation.global.files`).
- AIcrowd systems place the following files in the participant's code.
+ File structure
* [`predict.py`](data/predict.py) at `/home/aicrowd/predict.py`
* [`run.sh`](data/run.sh) at `/home/aicrowd/run.sh`
* [`evaluation_utils/base_predictor.py`](data/evaluation_utils/base_predictor.py) at `/home/aicrowd/evaluation_utils/base_predictor.py`
* [`evaluation_utils/mnist_evaluator.py`](data/evaluation_utils/mnist_evaluator.py) at `/home/aicrowd/evaluation_utils/mnist_evaluator.py`
+ Please note that these files should exist inside [`data`](data) directory of the evaluator repository.
+ **The participant code is available at `/home/aicrowd` (`/home/aicrowd` acts as the participant's repo root).**
- AIcrowd systems run `/home/aicrowd/run.sh`.
- Once the participant's code executes successfully, we invoke `AIcrowdEvaluator(...).evaluate()`.
**Note:** The `base_predictor.py` from the [starter kit](https://gitlab.aicrowd.com/aicrowd/mnist-starter-kit/-/tree/master/evaluation_utils/base_predictor.py) and the [evaluator repo](data/evaluation_utils/base_predictor.py) showcase a simple use case of needing only two methods to be filled by the participants -- a setup method and a prediction method. We encourage you to modify this class as per your needs.
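To make the flow concrete, a `predict.py` launcher might look roughly like the sketch below. The predictor class name, the image format, and the output file are assumptions for illustration; the actual launcher lives in this repo's [`data`](data) directory.
```python
import glob
import os

# During evaluation the participant's repo root is /home/aicrowd, so their run.py
# is importable directly from the working directory.
from run import MNISTPredictor  # illustrative class name


def main():
    dataset_dir = os.getenv("AICROWD_DATASET_DIR", "./data/test_data")

    predictor = MNISTPredictor()
    predictor.prediction_setup()

    rows = ["image_id,label"]
    for image_path in sorted(glob.glob(os.path.join(dataset_dir, "*.png"))):
        label = predictor.prediction(image_path)
        rows.append(f"{os.path.basename(image_path)},{label}")

    # Written wherever your AIcrowdEvaluator expects to read predictions from (assumed path).
    with open("predictions.csv", "w") as f:
        f.write("\n".join(rows) + "\n")


if __name__ == "__main__":
    main()
```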
### Evaluator configuration
The orchestration for the evaluation is handled using the values defined in your `aicrowd.yaml` file. The file has several inline comments to guide you through the different options available. For more details on how the orchestration works, please refer to [https://gitlab.aicrowd.com/aicrowd/evaluator-templates/-/tree/master/predictions-evaluator](https://gitlab.aicrowd.com/aicrowd/evaluator-templates/-/tree/master/predictions-evaluator).
### Dataset configuration
#### File structure
You can specify the dataset in `aicrowd.yaml` under the `dataset` section. We recommend that you upload your dataset as a zip file having the following file structure.
```
data.zip
├── debug_ground_truth_data
│   └── ...
├── debug_test_data
│   └── ...
├── ground_truth_data
│   └── ...
└── test_data
    └── ...
```
Directory | Mount point (env) | Phase | Exposed to participant | Description
--- | --- | --- | --- | ---
`debug_test_data` | `AICROWD_DATASET_DIR` | `evaluation.debug_run` | **Yes** | Ideally some validation data to check that the participant's code is bug free. We generally expose the logs for the `debug_run` phase so that it is easier for participants to debug their code.
`ground_truth_data` | `AICROWD_GROUND_TRUTH_DIR` | `evaluation.scoring` | No | Ground truth data needed for the evaluator code to score the submissions.
`test_data` | `AICROWD_DATASET_DIR` | `evaluation.runs[]` | **Yes** | Test data consumed by the participant's code to generate predictions.
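In practice, both sides read these mount points from the environment. A minimal sketch, assuming the directory layout above:
```python
import os

# Participant code (debug_run / runs[] phases) reads the test data mount point.
test_data_dir = os.getenv("AICROWD_DATASET_DIR", "./data/test_data")

# Evaluator code (scoring phase) reads the ground truth mount point.
ground_truth_dir = os.getenv("AICROWD_GROUND_TRUTH_DIR", "./data/ground_truth_data")

print("Test data directory:", test_data_dir)
print("Ground truth directory:", ground_truth_dir)
```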
With this configuration, the evaluation looks similar to this.
![](https://i.imgur.com/llqs5LY.png)
In some cases, you might want to split the dataset into multiple sections and run multiple instances of the inference to speed up the predictions. For example, let's say you have a dataset containing 1,000,000 images and the evaluation is expected to take 4 hours. We recommend that you split your dataset into subsets so that each set can be evaluated in ~1 hour. We can run multiple instances of the participant's code in parallel to speed up the evaluation time. In this case, you can restructure your dataset as follows:
**Note:** If your end-to-end evaluation takes more than 8 hours, please reach out to us.
```
data.zip
├── debug_ground_truth_data
│   └── ...
├── debug_test_data (contains a few public images to validate participant's code)
│   └── ...
├── ground_truth_data (ground truth data needed for evaluator code)
│   └── ...
├── test_data_1 (contains 250,000 images)
│   └── ...
├── test_data_2 (contains 250,000 images)
│   └── ...
├── test_data_3 (contains 250,000 images)
│   └── ...
└── test_data_4 (contains 250,000 images)
    └── ...
```
In your `aicrowd.yaml`, you should specify the following:
```yaml
evaluation:
  debug_run:
    dataset_path: debug_test_data
  runs:
    - name: predictions-for-set-1
      dataset_path: test_data_1
    - name: predictions-for-set-2
      dataset_path: test_data_2
    - name: predictions-for-set-3
      dataset_path: test_data_3
    - name: predictions-for-set-4
      dataset_path: test_data_4
```
This configuration mounts the respective directories inside `data.zip` at `AICROWD_DATASET_PATH` during the inference.
An example implementation is available in the [`dataset-split`](https://gitlab.aicrowd.com/aicrowd/mnist-code-evaluator/-/tree/dataset-split) branch.
With this setup, the evaluation will look similar to this.
![](https://i.imgur.com/mqLzcLv.png)
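One consequence of splitting the test set is that the evaluator typically has to merge the predictions produced by the parallel runs before scoring. A minimal sketch, assuming each run writes its own `predictions.csv`; the file layout is an assumption, not part of the template:
```python
import csv
import glob
import os


def collect_predictions(predictions_root):
    """Merge per-run prediction files into a single {image_id: label} mapping.

    Assumes each parallel run wrote <predictions_root>/<run-name>/predictions.csv;
    the actual layout depends on how your launcher scripts store their outputs.
    """
    merged = {}
    for path in sorted(glob.glob(os.path.join(predictions_root, "*", "predictions.csv"))):
        with open(path) as f:
            for row in csv.DictReader(f):
                merged[row["image_id"]] = row["label"]
    return merged
```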
#### Specifying dataset configuration
Let's consider two cases,
1. You have the dataset at https://something.domain/data.zip
2. You have the dataset hosted on s3.
For case 1, the configuration in `aicrowd.yaml` should look like this:
```yaml
dataset:
  url: https://something.domain/data.zip
  # If your dataset is 500MB (uncompressed), capacity should be 1GB.
  capacity: 1Gi
  # Extract the dataset.
  extract:
    enabled: true
    command: unzip data.zip
```
For case 2, the configuration in `aicrowd.yaml` should look like this:
```yaml
dataset:
  url: s3://<bucket>/<path>
  # If your dataset is 500MB (uncompressed), capacity should be 1GB.
  capacity: 1Gi
  # Extract the dataset.
  extract:
    enabled: true
    command: unzip data.zip
  # S3 bucket configuration
  s3:
    access_key: <key>
    secret_key: <key>
    region: <region>
    endpoint: <s3 endpoint>
```
 
[![Discord](https://img.shields.io/discord/565639094860775436.svg)](https://discord.gg/hAuevqx9Tj)