1.[About the Music Demixing Challenge 2023](#about-the-music-demixing-challenge-2023)
2.[Evaluation](#evaluation)
3.[Baselines](#baselines)
4.[How to test and debug locally](#how-to-test-and-debug-locally)
5.[How to submit](#how-to-submit)
6.[Dataset](#dataset)
7.[Setting up your codebase](#setting-up-your-codebase)
8.[FAQs](#faqs)
# 🎶 About the Music Demixing Challenge 2023
Have you ever sung using a karaoke machine or made a DJ music mix of your favourite song? Have you wondered how hearing aids help people listen more clearly or how video conference software reduces background noise?
The Music Demixing Challenge 2023 (MDX23) is an opportunity for researchers and machine learning enthusiasts to test their skills by creating a system able to perform audio source separation.
Given an **audio signal as input** (referred to as a "mixture"), you must **decompose it into its different parts**.
This task will focus on music source separation. Participants will submit systems that separate a song into four instruments: vocals, bass, drums, and other (the instrument "other" contains signals of all instruments other than the first three, e.g., guitar or piano).
Karaoke systems can benefit from audio source separation technology, as users can mute the vocals and sing along to the remaining accompaniment.
Similar to [Music Demixing Challenge 2021](https://www.aicrowd.com/challenges/music-demixing-challenge-ismir-2021), this task will have two leaderboards.
### **Leaderboard A (MUSDB18)**
Participants in Leaderboard A will be allowed to train their system exclusively on the training part of the MUSDB18-HQ dataset. This dataset has become the standard in literature as it is free to use and allows anyone to start training source separation models.
The label swaps are included in the dataset for this leaderboard.
### **Leaderboard B (No bars held)**
Leaderboard B will allow bleeding/mixtures in training data. You can train on any data that you like.
For both leaderboards, the winning teams will be required to publish their training code (to receive a prize), as the challenge is about the training method.
### 🚨 **NOTE**: To participate in Leaderboard B, you need to set `"external_dataset_used": true` in the [`aicrowd.json`](aicrowd.json) file.
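For example, a minimal `aicrowd.json` opting into Leaderboard B might look like the sketch below; all fields other than `external_dataset_used` are illustrative placeholders:

```json
{
  "challenge_id": "music-demixing-challenge-2023",
  "authors": ["your-aicrowd-username"],
  "description": "Music source separation model",
  "external_dataset_used": true
}
```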
# ✅ Evaluation
As an evaluation metric, we are using the signal-to-distortion ratio (SDR), which is defined as

$$SDR_{instr} = 10 \log_{10} \frac{\sum_n S_{instr}(n)^2}{\sum_n \left(S_{instr}(n) - \hat{S}_{instr}(n)\right)^2},$$

where $S_{instr}(n)$ is the waveform of the ground truth and $\hat{S}_{instr}(n)$ denotes the waveform of the estimate. The higher the SDR score, the better the output of the system is.

In order to rank systems, we will use the average SDR computed by

$$SDR_{song} = \frac{1}{4}\left(SDR_{bass} + SDR_{drums} + SDR_{vocals} + SDR_{other}\right)$$

for each song. Finally, the overall score is obtained by averaging $SDR_{song}$ over all songs in the hidden test set.
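For reference, here is a minimal NumPy sketch of this metric; variable names and the epsilon guard are our own, and the official evaluator may differ in such details:

```python
import numpy as np

def sdr(reference: np.ndarray, estimate: np.ndarray, eps: float = 1e-8) -> float:
    """Signal-to-distortion ratio in dB for a single stem.

    reference, estimate: float arrays of shape (num_samples, num_channels).
    The epsilon guards against division by zero on silent stems (an assumption).
    """
    num = np.sum(reference ** 2) + eps
    den = np.sum((reference - estimate) ** 2) + eps
    return 10.0 * float(np.log10(num / den))

def sdr_song(references: dict, estimates: dict) -> float:
    """Average the per-stem SDRs of one song over the four instruments."""
    stems = ["bass", "drums", "vocals", "other"]
    return float(np.mean([sdr(references[s], estimates[s]) for s in stems]))
```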
# 🤖 Baselines
We use the [Open-Unmix](https://github.com/sigsep/open-unmix-pytorch) library for the baseline. Specifically, we provide trained checkpoints for the UMXL model. You can use the baseline by switching to the `openunmix-baseline` [branch](https://gitlab.aicrowd.com/aicrowd/challenges/music-demixing-challenge-2023/mdx-2023-robust-music-separation-starter-kit/-/blob/openunmix-baseline/) on this repository. To test the models locally, you need to install `git-lfs`.

When submitting your own models, you need to submit the checkpoints using `git-lfs`. Check the instructions shared in the inference file [here](https://gitlab.aicrowd.com/aicrowd/challenges/music-demixing-challenge-2023/mdx-2023-robust-music-separation-starter-kit/-/blob/openunmix-baseline/my_submission/openunmix_separation_model.py).
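For instance, tracking PyTorch checkpoints with `git-lfs` might look like this; the `*.pth` pattern and the file path are assumptions, so adjust them to your setup:

```bash
# One-time setup of git-lfs for this repository
git lfs install

# Track large checkpoint files via lfs (pattern assumed)
git lfs track "*.pth"
git add .gitattributes

# Commit the checkpoint like any other file; lfs stores the blob out of band
git add my_submission/model.pth
git commit -m "Add model checkpoint via git-lfs"
```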
# 💻 How to Test and Debug Locally
The best way to test your models is to run your submission locally.
You can do this by simply running `python evaluate_locally.py`. **Note that your local setup and the server evaluation runtime may vary.** Make sure you set up your runtime according to the section: [How do I specify my dependencies?](#how-do-i-specify-my-dependencies)
# 🚀 How to Submit
You can use the submission script `source submit.sh <submission_text>`
More information on submissions can be found in [SUBMISSION.md](/docs/submission.md).
#### A high-level description of the challenge procedure:
1.**Sign up** to join the competition [on the AIcrowd website](https://www.aicrowd.com/challenges/music-demixing-challenge-2023).
2.**Clone** this repo and start developing your solution.
3.**Train** your model to separate mixtures into the four stems, and ensure your inference code runs with `evaluate_locally.py`.
4.**Submit** your trained models to [AIcrowd Gitlab](https://gitlab.aicrowd.com)
5.**Evaluate**: once submitted, your model is run on the hidden test set to compute and report the metrics on the leaderboard of the competition.
# 💽 Dataset
Download the public dataset for this task using this [link](https://www.aicrowd.com/challenges/music-demixing-challenge-2023/problems/robust-music-separation/dataset_files); you'll need to accept the rules of the competition to access the data. The data is the same as the well-known MUSDB18-HQ dataset and its compressed version.
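If you work with the MUSDB18-HQ release, the [`musdb`](https://github.com/sigsep/sigsep-mus-db) package is a convenient way to iterate over tracks and stems. A minimal sketch, assuming `pip install musdb` and a placeholder local path:

```python
import musdb

# Point root at wherever you extracted MUSDB18-HQ (placeholder path)
mus = musdb.DB(root="./data/musdb18hq", is_wav=True, subsets="train")

for track in mus:
    mixture = track.audio                    # stereo mixture, shape (num_samples, 2)
    vocals = track.targets["vocals"].audio   # ground-truth vocal stem, same shape
    # ...train on (mixture, stem) pairs; likewise for "bass", "drums", "other"
    print(track.name, mixture.shape)
```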
# 📑 Setting Up Your Codebase
AIcrowd provides great flexibility in the details of your submission!
Find the answers to FAQs about submission structure below, followed by the structure of this starter kit.

The different files and directories have the following meaning:
```
.
├── aicrowd.json # Add any descriptions about your model, and set `external_dataset_used`
├── apt.txt # Linux packages to be installed inside docker image
├── requirements.txt # Python packages to be installed
├── evaluate_locally.py # Use this to check your model evaluation flow locally
└── ... # (remaining files elided)
```

Once you are ready, test your implementation with `python evaluate_locally.py`.
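Your separation model itself lives under `my_submission/`; follow the interface used by the baseline's [inference file](https://gitlab.aicrowd.com/aicrowd/challenges/music-demixing-challenge-2023/mdx-2023-robust-music-separation-starter-kit/-/blob/openunmix-baseline/my_submission/openunmix_separation_model.py). As a purely hypothetical sketch of the shape such a wrapper takes (class and method names here are illustrative, not the starter kit's actual API):

```python
import numpy as np

class MySeparationModel:
    """Hypothetical wrapper; mirror the baseline's actual interface instead."""

    def __init__(self):
        # Load your trained weights here, e.g. from a git-lfs tracked checkpoint.
        pass

    def separate(self, mixture: np.ndarray, sample_rate: int) -> dict:
        """Split a stereo mixture of shape (num_samples, 2) into four stems.

        Returns a dict mapping stem name to an array of the same shape.
        """
        # Trivial placeholder: echo the mixture back for every stem.
        return {stem: mixture.copy() for stem in ["bass", "drums", "vocals", "other"]}
```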
### How do I actually make a submission?
You can use the submission script `source submit.sh <submission_text>`
The submission is made by adding everything including the model to git, tagging the submission with a git tag that starts with `submission-`, and pushing to AIcrowd's GitLab. The rest is done for you!
For large model weight files, you'll need to use `git-lfs`.
More details are available at [docs/submission.md](/docs/submission.md).
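Concretely, a manual submission (without the helper script) boils down to the steps below; branch and tag names are only examples, so see [docs/submission.md](/docs/submission.md) for the authoritative flow:

```bash
git add .
git commit -m "my submission"

# Tags starting with `submission-` trigger an evaluation
git tag submission-v0.1
git push aicrowd master
git push aicrowd submission-v0.1
```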
When you make a submission, browse to the `issues` page on your repository; a successful submission will show up there with its evaluation status.
You may also like the new [Cinematic Sound Separation track](https://www.aicrowd.com/challenges/music-demixing-challenge-2023/problems/cinematic-sound-separation).
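For reference, here is an excerpt of the `submit.sh` helper script (the `print_usage` function name wrapping the usage text is an assumption):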
```bash
# (Function name assumed; the heredoc opener is reconstructed around the usage text.)
print_usage() {
cat << USAGE
It adds current working directory files, sets up lfs, commits, and uploads them to AIcrowd GitLab.

Usage: ./submit.sh <unique-submission-name>

Example:
./submit.sh "bayes-v0.1"

Prerequisite:
Install aicrowd-cli and login on AIcrowd
#> pip install -U aicrowd-cli
#> aicrowd login
USAGE
}

bad_remote_message() {
    log_error "AIcrowd remote not found"
    log_error "It should have been automatically set, but given it isn't. Please run \`git remote add aicrowd git@gitlab.aicrowd.com:<username>/mdx-2023-robust-music-separation-starter-kit.git\` manually."
    exit 1
}

get_submission_remote() {
    echo "aicrowd"
}

get_submission_remote_url() {
    git remote get-url aicrowd
}

check_remote() {
    log_info Checking git remote settings...
    get_submission_remote > /dev/null
    log_success Using $(get_submission_remote_url | awk -F '@' '{print $NF}' | sed 's|\.git||g') as the submission repository
}
```