# MNIST evaluator
This repository uses the `predictions-evaluator` template to evaluate code submissions.
For more information on how the evaluator works, please refer to the [template's reference page](https://gitlab.aicrowd.com/aicrowd/evaluator-templates/tree/master/predictions-evaluator).
The starter kit for this template is available at [https://gitlab.aicrowd.com/aicrowd/mnist-starter-kit](https://gitlab.aicrowd.com/aicrowd/mnist-starter-kit).
# Table of contents
- [How does this work?](#how-does-this-work)
- [How to write evaluators?](#how-to-write-evaluators)
* [Submission starter kit](#submission-starter-kit)
* [Evaluation code](#evaluation-code)
+ [Evaluation script](#evaluation-script)
+ [Launcher scripts](#launcher-scripts)
+ [Evaluator configuration](#evaluator-configuration)
+ [Dataset configuration](#dataset-configuration)
- [File structure](#file-structure)
- [Specifying dataset configuration](#specifying-dataset-configuration)
# How does this work?
1. Participant forks the [starter kit](https://gitlab.aicrowd.com/aicrowd/mnist-starter-kit).
2. Participant adds their code to the [starter kit](https://gitlab.aicrowd.com/aicrowd/mnist-starter-kit), commits it and creates a git tag (manually or using the helper scripts).
3. AIcrowd triggers the evaluation pipeline.
4. AIcrowd sets up the software runtime needed to run the participant's code. For more information on how participants can specify their software runtime please refer to [this post](https://discourse.aicrowd.com/t/how-to-specify-runtime-environment-for-your-submission/2274).
5. AIcrowd runs the participant's code on a validation dataset. The logs during this phase are exposed to the participants so that they can debug any code errors.
a. The information on how the participant's code should run comes from the evaluator code.
6. AIcrowd runs the participant's code on the test dataset. The logs during this phase are generally not exposed to the participants to prevent data leaks. However, the logs can be exposed by setting `logs: true` in `aicrowd.yaml` (a file inside the evaluator code repo).
7. AIcrowd runs evaluator code (written by the organizer) that will aggregate the results generated by the participant's code and return a score.
# How to write evaluators?
The evaluation pipeline has two code components:
1. Submission [starter kit](https://gitlab.aicrowd.com/aicrowd/mnist-starter-kit) (submitted by the participant).
2. Evaluator code (set up by the organizers).
## Submission starter kit
The [starter kit](https://gitlab.aicrowd.com/aicrowd/mnist-starter-kit) should contain an `aicrowd.json` file with `challenge_id` attribute pointing to the challenge. For example, if the challenge page is https://www.aicrowd.com/challenges/my-challenge, then the contents of `aicrowd.json` should look similar to this.
```json
{
"challenge_id": "my-challenge"
}
```
For the remainder of the [starter kit](https://gitlab.aicrowd.com/aicrowd/mnist-starter-kit), we recommend structuring it similarly to the example [starter kit](https://gitlab.aicrowd.com/aicrowd/mnist-starter-kit), which contains:
1. An [`evaluation_utils`](https://gitlab.aicrowd.com/aicrowd/mnist-starter-kit/-/tree/master/evaluation_utils) directory with scripts for local evaluation. These files are only for the participants' reference and can be ignored or replaced during evaluation, as explained in [writing launcher scripts](#launcher-scripts).
2. An optional `models` directory where participants upload their model weights.
3. A [`local_evaluation.py`](https://gitlab.aicrowd.com/aicrowd/mnist-starter-kit/-/blob/master/local_evaluation.py) that participants can use to run their code locally.
4. A [`run.py`](https://gitlab.aicrowd.com/aicrowd/mnist-starter-kit/-/blob/master/run.py) where participants can add their code in a well-defined class interface.
a. It is recommended that organizers add a class interface in this file with as much inline documentation as possible for the participants (a sketch follows after this list).
b. It is recommended that organizers avoid adding their own logic to this file. Any evaluation-related code or utility functions can remain part of the [`evaluation_utils`](https://gitlab.aicrowd.com/aicrowd/mnist-starter-kit/-/tree/master/evaluation_utils) package.
5. Runtime setup files to run the starter kit code. For a list of supported runtime configurations, please refer to [this post](https://discourse.aicrowd.com/t/how-to-specify-runtime-environment-for-your-submission/2274).
Please use the [example starter kit](https://gitlab.aicrowd.com/aicrowd/mnist-starter-kit) as a starting point when creating your own starter kit.
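As a rough illustration of such an interface, the sketch below shows what a `run.py` class could look like. The class and method names (`MNISTPredictor`, `prediction_setup`, `prediction`) are placeholders for illustration; refer to the actual [starter kit](https://gitlab.aicrowd.com/aicrowd/mnist-starter-kit) for the real interface.
```python
# Hypothetical run.py interface; names are illustrative, not the starter kit's actual API.
import numpy as np


class MNISTPredictor:
    def prediction_setup(self):
        """Called once before inference starts.

        Load model weights (for example, from the `models/` directory) here.
        """
        self.model = None  # e.g. load a trained classifier

    def prediction(self, image: np.ndarray) -> int:
        """Return the predicted digit (0-9) for a single 28x28 grayscale image."""
        # Replace this constant baseline with real model inference.
        return 0
```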
## Evaluation code
The repo can be split into three components.
1. Evaluation script (`AIcrowdEvaluator` class in [`evaluator.py`](evaluator.py#L15)).
2. Launcher scripts (scripts that will start the participant code; placed in the [`data`](data) directory).
3. Evaluator configuration ([`aicrowd.yaml`](aicrowd.yaml)).
### Evaluation script
`AIcrowdEvaluator` can be a simple wrapper around your existing evaluation scripts. The class needs to implement an `evaluate` method.
```python
class AIcrowdEvaluator:
    def __init__(self, **kwargs):
        pass

    def render_current_status_as_markdown(self) -> str:
        return ""

    def live_evaluate(self) -> dict:
        # Not compulsory but encouraged.
        # For long-running evaluations, this helps participants understand the
        # performance of their models before the evaluation completes.
        # Refer to the example evaluator.py at
        # https://gitlab.aicrowd.com/aicrowd/mnist-code-evaluator/-/tree/master/evaluator.py
        return {}

    def evaluate(self):
        scores = {
            "score": 0,
            "score_secondary": 0,
        }
        return scores
```
The scores returned by `AIcrowdEvaluator(...).evaluate()` are updated on the leaderboard.
**Note:** Please refer to [this article](https://wiki.aicrowd.com/share/efa47829-ef4e-452e-b75e-c5ac3fb24290) for how to use the `scores` dictionary. You can add multiple metrics, private scores, and media using the `scores` dictionary.
You can find more information on how the evaluation scripts are invoked at [[Evaluation Flow]](https://gitlab.aicrowd.com/aicrowd/evaluator-templates/-/tree/master/predictions-evaluator#evaluation-flow).
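For a more concrete (though simplified) picture, the sketch below shows an `evaluate` implementation that compares a predictions CSV produced by the participant's code against the ground truth labels. The file names, paths, and CSV layout are assumptions made for illustration, not the template's actual conventions.
```python
import csv


class AIcrowdEvaluator:
    def __init__(self, ground_truth_path="data/labels.csv",
                 predictions_path="predictions.csv", **kwargs):
        # Hypothetical file locations; adjust them to match your own setup.
        self.ground_truth_path = ground_truth_path
        self.predictions_path = predictions_path

    @staticmethod
    def _read_labels(path):
        # Assumes rows of the form: image_id,label
        with open(path) as f:
            return {row[0]: row[1] for row in csv.reader(f)}

    def evaluate(self):
        truth = self._read_labels(self.ground_truth_path)
        preds = self._read_labels(self.predictions_path)
        correct = sum(1 for key, label in truth.items() if preds.get(key) == label)
        accuracy = correct / max(len(truth), 1)
        return {
            "score": accuracy,            # primary leaderboard metric
            "score_secondary": accuracy,  # secondary metric (same here for simplicity)
        }
```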
The `AIcrowdEvaluator(...).render_current_status_as_markdown()` method is invoked in a new process and keeps running as long as the evaluation is in progress. You can use this method to return markdown content that is displayed on the GitLab issue page for the participants. You can show the evaluation progress, live scores, and other useful information that improves the submission experience. You can also display images, videos, and audio: upload the media files to S3 (or any file hosting provider) and insert the links in your markdown content. If you need help with uploading the files, please reach out to us.
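As a sketch of what this can look like, the method below reads an intermediate progress file and formats it as markdown. The `progress.json` file and its keys are a made-up convention for illustration; your evaluation loop would need to write such a file itself.
```python
import json
import os


class AIcrowdEvaluator:
    # ... __init__ and evaluate() as before ...

    def render_current_status_as_markdown(self) -> str:
        # "progress.json" is a hypothetical file written periodically by the
        # evaluation loop; it is not part of the template.
        if not os.path.exists("progress.json"):
            return "Evaluation starting..."
        with open("progress.json") as f:
            progress = json.load(f)
        return (
            "## Evaluation progress\n\n"
            f"- Images processed: {progress.get('processed', 0)}\n"
            f"- Running accuracy: {progress.get('accuracy', 0.0):.4f}\n"
        )
```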
### Launcher scripts
These are the scripts that are used to start the evaluation using participants' code. Typically, these include
1. Entrypoint scripts (a Python script that imports the participant's prediction class and starts the evaluation loop, and a Bash script that calls the Python script)
2. Evaluation utilities
For example, let us consider the [MNIST starter kit](https://gitlab.aicrowd.com/aicrowd/mnist-starter-kit). It has a [`local_evaluation.py`](https://gitlab.aicrowd.com/aicrowd/mnist-starter-kit/-/blob/master/local_evaluation.py) and an [`evaluation_utils`](https://gitlab.aicrowd.com/aicrowd/mnist-starter-kit/-/tree/master/evaluation_utils) directory. During the evaluation, we use a [`predict.py`](data/predict.py) that comes from the evaluator repo to start the evaluation instead of `local_evaluation.py`. We also replace the files in the [`evaluation_utils` directory in the starter kit](https://gitlab.aicrowd.com/aicrowd/mnist-starter-kit/-/tree/master/evaluation_utils) with the [files from the evaluator repo](data/evaluation_utils). This drops any changes that participants might have made and also gives organizers the flexibility to add hidden functions as needed.
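To make the idea concrete, a minimal entrypoint could look like the sketch below. The predictor class name, the `.npy` image format, and the output CSV are assumptions for illustration; the actual [`predict.py`](data/predict.py) in this repository may differ.
```python
# Hypothetical entrypoint sketch; see data/predict.py for the real one.
import os

import numpy as np

# Assumes the participant exposes a predictor class in run.py at the repo root.
from run import MNISTPredictor


def main():
    predictor = MNISTPredictor()
    predictor.prediction_setup()

    dataset_dir = os.getenv("AICROWD_DATASET_DIR", "./data")
    with open("predictions.csv", "w") as out:
        for file_name in sorted(os.listdir(dataset_dir)):
            image = np.load(os.path.join(dataset_dir, file_name))  # assumed .npy images
            out.write(f"{file_name},{predictor.prediction(image)}\n")


if __name__ == "__main__":
    main()
```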
**Note:** Only the files placed in the [`data`](data) directory of the evaluator repo can be placed in the participant's code during evaluations. Even among these files, the files that need to be mounted should be explicitly defined in `aicrowd.yaml`. [[refer]](aicrowd.yaml#L46)
In this repository, the flow looks somewhat like this during an evaluation:
- AIcrowd systems read [`aicrowd.yaml`](aicrowd.yaml#L56) for mountable files (defined under `evaluation.global.files`).
- AIcrowd systems place the following files in the participant's code.
+ File structure
* [`predict.py`](data/predict.py) at `/home/aicrowd/predict.py`
* [`run.sh`](data/run.sh) at `/home/aicrowd/run.sh`
* [`evaluation_utils/base_predictor.py`](data/evaluation_utils/base_predictor.py) at `/home/aicrowd/evaluation_utils/base_predictor.py`
* [`evaluation_utils/mnist_evaluator.py`](data/evaluation_utils/mnist_evaluator.py) at `/home/aicrowd/evaluation_utils/mnist_evaluator.py`
  + Please note that these files should exist inside the [`data`](data) directory of the evaluator repository.
+ **The participant code is available at `/home/aicrowd` (`/home/aicrowd` acts as the participant's repo root).**
- AIcrowd systems run `/home/aicrowd/run.sh`.
- Once the participant's code executes successfully, we invoke `AIcrowdEvaluator(...).evaluate()`.
**Note:** The `base_predictor.py` from the [starter kit](https://gitlab.aicrowd.com/aicrowd/mnist-starter-kit/-/tree/master/evaluation_utils/base_predictor.py) and the [evaluator repo](data/evaluation_utils/base_predictor.py) showcase a simple use case of needing only two methods to be filled by the participants -- a setup method and a prediction method. We encourage you to modify this class as per your needs.
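In spirit, such a base class boils down to a two-method contract, as in the sketch below (method names are illustrative; see the actual `base_predictor.py` for the real interface):
```python
# Illustrative sketch of a two-method contract, not the actual base_predictor.py.
class BasePredictor:
    def prediction_setup(self):
        """Called once before inference; load models and other assets here."""
        raise NotImplementedError

    def prediction(self, image):
        """Return the prediction for a single input sample."""
        raise NotImplementedError
```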
### Evaluator configuration
The orchestration for the evaluation is handled using the values defined in your `aicrowd.yaml` file. The file has several inline comments to guide you through the different options available. For more details on how the orchestration works, please refer to [https://gitlab.aicrowd.com/aicrowd/evaluator-templates/-/tree/master/predictions-evaluator](https://gitlab.aicrowd.com/aicrowd/evaluator-templates/-/tree/master/predictions-evaluator).
### Dataset configuration
#### File structure
You can specify the dataset in `aicrowd.yaml` under the `dataset` section. We recommend that you upload your dataset as a zip file having the following file structure.
```
data.zip
├── debug_ground_truth_data
│   └── ...
├── debug_test_data
│   └── ...
├── ground_truth_data
│   └── ...
└── test_data
    └── ...
```
Directory | Mount point (env) | Phase | Exposed to participant | Description
--- | --- | --- | --- | ---
`debug_test_data` | `AICROWD_DATASET_DIR` | `evaluation.debug_run` | **Yes** | Ideally some validation data to check that the participant's code is bug-free. We generally expose the logs for the `debug_run` phase so that it is easier for participants to debug their code.
`ground_truth_data` | `AICROWD_GROUND_TRUTH_DIR` | `evaluation.scoring` | No | Ground truth data needed for the evaluator code to score the submissions.
`test_data` | `AICROWD_DATASET_DIR` | `evaluation.runs[]` | **Yes** | Test data consumed by the participant's code to generate predictions.
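For reference, the sketch below shows how these mount points could be read from the environment in Python; the fallback paths are placeholders for local runs.
```python
import os

# The participant's inference code reads test images from here.
test_data_dir = os.getenv("AICROWD_DATASET_DIR", "./data/test_data")

# The evaluator code (organizer side) reads labels from here during scoring.
ground_truth_dir = os.getenv("AICROWD_GROUND_TRUTH_DIR", "./data/ground_truth_data")
```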
With this configuration, the evaluation looks similar to this.
![](https://i.imgur.com/llqs5LY.png)
In some cases, you might want to split the dataset into multiple sections and run multiple instances of the inference to speed up the predictions. For example, let's say you have a dataset containing 100,000 images and the evaluation is expected to take 4 hours. We recommend that you split your dataset into subsets so that each subset can be evaluated in ~1 hour. We can run multiple instances of the participant's code in parallel to reduce the evaluation time. In this case, you can restructure your dataset as follows:
**Note:** If your end-to-end evaluation takes over 8 hours, please reach out to us.
```
data.zip
├── debug_ground_truth_data
│   └── ...
├── debug_test_data (contains a few public images to validate the participant's code)
│   └── ...
├── ground_truth_data (ground truth data needed for the evaluator code)
│   └── ...
├── test_data_1 (contains 25,000 images)
│   └── ...
├── test_data_2 (contains 25,000 images)
│   └── ...
├── test_data_3 (contains 25,000 images)
│   └── ...
└── test_data_4 (contains 25,000 images)
    └── ...
```
In your `aicrowd.yaml`, you should specify the following:
```yaml
evaluation:
  debug_run:
    dataset_path: debug_test_data
  runs:
    - name: predictions-for-set-1
      dataset_path: test_data_1
    - name: predictions-for-set-2
      dataset_path: test_data_2
    - name: predictions-for-set-3
      dataset_path: test_data_3
    - name: predictions-for-set-4
      dataset_path: test_data_4
```
This configuration mounts the respective directories inside `data.zip` at `AICROWD_DATASET_PATH` during the inference.
An example implementation is available in the [`dataset-split`](https://gitlab.aicrowd.com/aicrowd/mnist-code-evaluator/-/tree/dataset-split) branch.
With this setup, the evaluation will look similar to this.
![](https://i.imgur.com/mqLzcLv.png)
#### Specifying dataset configuration
Let's consider two cases:
1. You have the dataset at https://something.domain/data.zip
2. You have the dataset hosted on S3.
For case 1, the configuration in `aicrowd.yaml` should look like this:
```yaml
dataset:
  url: https://something.domain/data.zip
  # If your dataset is 500MB (uncompressed), capacity should be 1GB.
  capacity: 1Gi
  # Extract the dataset.
  extract:
    enabled: true
    command: unzip data.zip
```
For case 2, the configuration in `aicrowd.yaml` should look like this:
```yaml
dataset:
  url: s3://<bucket>/<path>
  # If your dataset is 500MB (uncompressed), capacity should be 1GB.
  capacity: 1Gi
  # Extract the dataset.
  extract:
    enabled: true
    command: unzip data.zip
  # S3 bucket configuration
  s3:
    access_key: <key>
    secret_key: <key>
    region: <region>
    endpoint: <s3 endpoint>
```