Commit 4ff15860 authored by Dipam Chakraborty

Merge branch 'master' into openunmix-baseline

parents 8e699f3b 8053c398
![Banner image](https://images.aicrowd.com/uploads/ckeditor/pictures/1040/content_Desktop_Banner.png)
# **[Music Demixing Challenge 2023 - Robust Music Separation](https://www.aicrowd.com/challenges/music-demixing-challenge-2023/problems/robust-music-separation)** - Starter Kit
[![Discord](https://img.shields.io/discord/565639094860775436.svg)](https://discord.gg/fNRrSvZkry)
This repository is the Music Demixing Challenge 2023 - Robust Music Separation **Starter Kit**.
Quick Links:

* [Music Demixing Challenge 2023 - Robust Music Separation - Competition Page](https://www.aicrowd.com/challenges/music-demixing-challenge-2023/problems/robust-music-separation)
* [Discussion Forum](https://www.aicrowd.com/challenges/music-demixing-challenge-2023/discussion)
* [Music Demixing 2023 Challenge Overview](https://www.aicrowd.com/challenges/music-demixing-challenge-2023/)

# 📝 Table of Contents

1. [About the Music Demixing Challenge 2023](#about-the-music-demixing-challenge-2023)
2. [Evaluation](#evaluation)
3. [Baselines](#baselines)
4. [How to test and debug locally](#how-to-test-and-debug-locally)
5. [How to submit](#how-to-submit)
6. [Dataset](#dataset)
7. [Setting up your codebase](#setting-up-your-codebase)
8. [FAQs](#faqs)

# 🎶 About the Music Demixing Challenge 2023

Have you ever sung using a karaoke machine or made a DJ music mix of your favourite song? Have you wondered how hearing aids help people listen more clearly or how video conference software reduces background noise?
The Music Demixing Challenge 2023 (MDX23) is an opportunity for researchers and machine learning enthusiasts to test their skills by building a system that performs automatic music source separation.
Given an **audio signal as input** (referred to as a "mixture"), you must **decompose it into its different parts**.
![separation image](https://images.aicrowd.com/uploads/ckeditor/pictures/401/content_image.png)
## 🎻 ROBUST MUSIC SEPARATION
This task will focus on music source separation. Participants will submit systems that separate a song into four instruments: vocals, bass, drums, and other (the instrument "other" contains signals of all instruments other than the first three, e.g., guitar or piano).
Karaoke systems can benefit from audio source separation technology, as users can sing along to just the instrumental parts of a song.
Similar to [Music Demixing Challenge 2021](https://www.aicrowd.com/challenges/music-demixing-challenge-ismir-2021), this task will have two leaderboards.

### **Leaderboard A (MUSDB18)**
Participants in Leaderboard A will be allowed to train their systems exclusively on the training part of the MUSDB18-HQ dataset. This dataset has become the standard in the literature, as it is free to use and allows anyone to start training source separation models.
The label swaps are included in the dataset for this leaderboard.
### **Leaderboard B (No bars held)**
Leaderboard B allows training data that contains bleeding/mixtures; you can train on any data you like.
For both leaderboards, the winning teams will be required to publish their training code in order to receive a prize, since the challenge is about the training method.
### 🚨 **NOTE**: To participate in Leaderboard B, you need to set `"external_dataset_used": true` in the [`aicrowd.json`](aicrowd.json) file.
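You can simply edit the file by hand. If you prefer to toggle the flag from code, a minimal sketch is shown below; it assumes you run it from the repository root, and only the `external_dataset_used` key is taken from this README — any other keys in your `aicrowd.json` are left untouched.

```python
# Minimal sketch (not part of the starter kit): flip the Leaderboard B flag in aicrowd.json.
import json
from pathlib import Path

config_path = Path("aicrowd.json")
config = json.loads(config_path.read_text())
config["external_dataset_used"] = True  # required for Leaderboard B submissions
config_path.write_text(json.dumps(config, indent=2) + "\n")
```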
# ✅ Evaluation
As an evaluation metric, we are using the signal-to-distortion ratio (SDR), which is defined as

$`SDR_{instr} = 10 \log_{10} \frac{\sum_n (s_{instr,left\ channel}(n))^2 + \sum_n (s_{instr,right\ channel}(n))^2}{\sum_n (s_{instr,left\ channel}(n) - \hat{s}_{instr,left\ channel}(n))^2 + \sum_n (s_{instr,right\ channel}(n) - \hat{s}_{instr,right\ channel}(n))^2}`$

where $`s_{instr}(n)`$ is the waveform of the ground truth and $`\hat{s}_{instr}(n)`$ denotes the waveform of the estimate. The higher the SDR score, the better the output of the system is.

In order to rank systems, we will use the average SDR computed by

$`SDR_{song} = \frac{1}{4} (SDR_{bass} + SDR_{drums} + SDR_{vocals} + SDR_{other})`$

for each song. Finally, the overall score is obtained by averaging $`SDR_{song}`$ over all songs in the hidden test set.
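For local sanity checks, the metric above can be written in a few lines of NumPy. The sketch below follows the definitions given here; the small `eps` term is our addition to avoid division by zero, and the official evaluator may handle such details differently.

```python
import numpy as np

def sdr(reference: np.ndarray, estimate: np.ndarray, eps: float = 1e-8) -> float:
    """SDR for one stem. reference, estimate: float arrays of shape (num_samples, 2),
    so the sums cover both channels as in the formula above."""
    num = np.sum(reference ** 2)
    den = np.sum((reference - estimate) ** 2)
    return float(10 * np.log10((num + eps) / (den + eps)))

def sdr_song(references: dict, estimates: dict) -> float:
    """Average SDR over the four stems, as in SDR_song above."""
    stems = ("bass", "drums", "vocals", "other")
    return float(np.mean([sdr(references[s], estimates[s]) for s in stems]))
```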
# 🤖 Baselines
We use the [Open-Unmix](https://github.com/sigsep/open-unmix-pytorch) library for the baseline. Specifically, we provide trained checkpoints for the UMXL model. You can use the baseline by switching to the `openunmix-baseline` [branch](https://gitlab.aicrowd.com/aicrowd/challenges/music-demixing-challenge-2023/mdx-2023-robust-music-separation-starter-kit/-/blob/openunmix-baseline/) on this repository. To test the models locally, you need to install `git-lfs`.
When submitting your own models, you need to submit the checkpoints using `git-lfs`. Check the instructions shared in the inference file [here](https://gitlab.aicrowd.com/aicrowd/challenges/music-demixing-challenge-2023/mdx-2023-robust-music-separation-starter-kit/-/blob/openunmix-baseline/my_submission/openunmix_separation_model.py).
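If you just want to try the pretrained UMXL model before digging into the baseline branch, the sketch below loads it through the torch.hub entry point published by the Open-Unmix project. The entry-point name and tensor shapes are assumptions based on that project, not the baseline code in this repository.

```python
import torch

# Download and load the pretrained UMXL separator (all four targets).
separator = torch.hub.load("sigsep/open-unmix-pytorch", "umxl")
separator.eval()

# A 10-second stereo "mixture" of random noise, shaped (batch, channels, samples).
mixture = torch.rand(1, 2, 44100 * 10)
with torch.no_grad():
    estimates = separator(mixture)  # expected shape: (batch, targets, channels, samples)
print(estimates.shape)
```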
# 💻 How to Test and Debug Locally

The best way to test your models is to run your submission locally.
You can do this by simply running `python evaluate_locally.py`. **Note that your local setup and the server evaluation runtime may vary.** Make sure you set up your runtime according to the section: [How do I specify my dependencies?](#how-do-i-specify-my-dependencies)
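As a mental model of what such a local check does, the sketch below loads a mixture, runs a separation model, and writes out the estimated stems. This is NOT the actual `evaluate_locally.py`; the file path, the dummy model, and the `separate_music_file` method name are hypothetical placeholders — follow `evaluate_locally.py` and the baseline branch for the real interface.

```python
import numpy as np
import soundfile as sf

class DummySeparationModel:
    """Stand-in for your real model: 'separates' by copying the mixture into every stem."""
    def separate_music_file(self, mixture: np.ndarray, rate: int) -> dict:
        return {s: mixture.copy() for s in ("bass", "drums", "vocals", "other")}

model = DummySeparationModel()
mixture, rate = sf.read("data/some_song/mixture.wav")   # hypothetical path, shape (num_samples, 2)
estimates = model.separate_music_file(mixture, rate)     # hypothetical interface
for stem, audio in estimates.items():
    sf.write(f"{stem}_estimate.wav", audio, rate)
```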
# 🚀 How to Submit

You can use the submission script `source submit.sh <submission_text>`

More information on submissions can be found in [SUBMISSION.md](/docs/submission.md).

#### A high level description of the Challenge Procedure:

1. **Sign up** to join the competition [on the AIcrowd website](https://www.aicrowd.com/challenges/music-demixing-challenge-2023).
2. **Clone** this repo and start developing your solution.
3. **Train** your music separation models and ensure the evaluation flow runs end to end with `python evaluate_locally.py`.
4. **Submit** your trained models to [AIcrowd Gitlab](https://gitlab.aicrowd.com)
5. Once submitted, your model is evaluated on the hidden test set to compute and report the metrics on the leaderboard of the competition.
# 💽 Dataset
Download the public dataset for this task using this [link](https://www.aicrowd.com/challenges/music-demixing-challenge-2023/problems/robust-music-separation/dataset_files); you'll need to accept the rules of the competition to access the data. The data is the same as the well-known MUSDB18-HQ dataset and its compressed version.
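Once downloaded and extracted, the uncompressed (wav) version of the data can be browsed with the `musdb` package (`pip install musdb`). This is a minimal sketch; the `root` path is a placeholder for wherever you put the dataset.

```python
import musdb

# Point `root` at your extracted MUSDB18-HQ folder; is_wav=True selects the HQ wav version.
mus = musdb.DB(root="path/to/MUSDB18-HQ", subsets="train", is_wav=True)

track = mus.tracks[0]
print(track.name, track.rate, track.audio.shape)   # mixture audio: (num_samples, 2)
print(track.targets["vocals"].audio.shape)         # ground-truth vocals stem
```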
# 📑 Setting Up Your Codebase

AIcrowd provides great flexibility in the details of your submission!
Find the answers to FAQs about submission structure below, followed by an overview of the repository structure.
The different files and directories have the following meaning:
```
.
├── aicrowd.json            # Add any descriptions about your model, and set `external_dataset_used`
├── apt.txt                 # Linux packages to be installed inside docker image
├── requirements.txt        # Python packages to be installed
├── evaluate_locally.py     # Use this to check your model evaluation flow locally
```

Once you are ready, test your implementation with `python evaluate_locally.py`.
### How do I actually make a submission?
You can use the submission script `source submit.sh <submission_text>`
The submission is made by adding everything including the model to git,
tagging the submission with a git tag that starts with `submission-`, and
pushing to AIcrowd's GitLab. The rest is done for you!

For large model weight files, you'll need to use `git-lfs`.

More details are available at [docs/submission.md](/docs/submission.md).
When you make a submission, browse to the `issues` page of your repository; a successful submission should look like this.
![submission image](https://images.aicrowd.com/uploads/ckeditor/pictures/1041/content_Screenshot_from_2022-12-01_17-16-12.png)
### Are there any hardware or time constraints?
Your submission will need to complete predictions on all the **sound tracks** under **120 minutes**. Make sure you take full advantage of the available hardware; one simple way to keep runtime and memory bounded is chunked inference, sketched after the list below.
The machine where the submission will run will have the following specifications:
* 4 vCPUs
* 16GB RAM
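One way to stay inside these limits on long tracks is to run your separator on fixed-length chunks and stitch the results back together. Below is a minimal sketch with a dummy separator standing in for your model; it is not a requirement of the evaluator. Note that naive chunking can introduce audible seams, so overlap-add at chunk boundaries is a common refinement.

```python
import numpy as np

def separate_in_chunks(mixture: np.ndarray, rate: int, separate_fn, chunk_seconds: float = 30.0) -> dict:
    """mixture: (num_samples, 2); separate_fn(chunk, rate) -> dict of stem -> (chunk_samples, 2)."""
    stems = ("bass", "drums", "vocals", "other")
    chunk = int(chunk_seconds * rate)
    parts = {s: [] for s in stems}
    for start in range(0, mixture.shape[0], chunk):
        estimates = separate_fn(mixture[start:start + chunk], rate)
        for s in stems:
            parts[s].append(estimates[s])
    return {s: np.concatenate(p, axis=0) for s, p in parts.items()}

# Dummy "separator" that just copies the mixture into every stem, to show the flow.
dummy_separator = lambda chunk, rate: {s: chunk for s in ("bass", "drums", "vocals", "other")}
song = np.random.randn(44100 * 120, 2).astype(np.float32)   # two minutes of noise
stems = separate_in_chunks(song, 44100, dummy_separator)
print({name: audio.shape for name, audio in stems.items()})
```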
## Contributors
- [Dipam Chakraborty](https://www.aicrowd.com/participants/dipam)
# 📎 Important links

- 💪 [Challenge Page](https://www.aicrowd.com/challenges/music-demixing-challenge-2023/problems/robust-music-separation)
- 🗣️ [Discussion Forum](https://www.aicrowd.com/challenges/music-demixing-challenge-2023/discussion)
- 🏆 [Leaderboard](https://www.aicrowd.com/challenges/music-demixing-challenge-2023/problems/robust-music-separation/leaderboards)
- 🎵 [Music Demixing Challenge 2021](https://www.aicrowd.com/challenges/music-demixing-challenge-ismir-2021)
You may also like the new [Cinematic Sound Separation track](https://www.aicrowd.com/challenges/music-demixing-challenge-2023/problems/cinematic-sound-separation).

**Best of Luck** 🎉 🎉
submit.sh:

#!/bin/bash
set -e

REPO_ROOT_DIR=$(git rev-parse --show-toplevel)

log_info() {
  echo -e "\033[0;36m$@\033[0m"
}

log_success() {
  echo -e "\033[0;32m$@\033[0m"
}

log_normal() {
  echo -e "$@"
}

log_error() {
  >&2 echo -e "\033[0;31m$@\033[0m"
}

print_usage() {
cat << USAGE
Utility script to make submissions.
It adds the current working directory files, sets up git-lfs, commits, and uploads the submission to AIcrowd GitLab.

Usage: ./submit.sh <unique-submission-name>

Example:
./submit.sh "bayes-v0.1"

Prerequisite:
Install aicrowd-cli and login on AIcrowd
#> pip install -U aicrowd-cli
#> aicrowd login
USAGE
}

bad_remote_message() {
  log_error "AIcrowd remote not found"
  log_error "It should have been set automatically, but it isn't. Please run \`git remote add aicrowd git@gitlab.aicrowd.com:<username>/mdx-2023-robust-music-separation-starter-kit.git\` manually."
  exit 1
}

get_submission_remote() {
  echo "aicrowd"
}

get_submission_remote_url() {
  git remote get-url aicrowd
}

check_remote() {
  log_info Checking git remote settings...
  get_submission_remote > /dev/null
  log_success Using $(get_submission_remote_url | awk -F'@' '{print $NF}' | sed 's|\.git||g') as the submission repository
}

setup_lfs() {
  git lfs install
  find * -type f -size +5M -exec git lfs track {} &> /dev/null \;
  # ...
    && git commit -m "auto: tracking new files via LFS" &> /dev/null
  set -e
}

setup_commits() {
  REMOTE=$(get_submission_remote)
  TAG=$(echo "$@" | sed 's/ /-/g')
  # ...
  git push -f $REMOTE "submission-$TAG"
  log_success "Check the submission progress in your repository: $(get_submission_remote_url | awk -F'@' '{print $NF}' | sed 's|\.git||g')/issues"
}

check_cli_install() {
  set +e
  which aicrowd > /dev/null
  # ...
    log_error 'You might be on older AIcrowd CLI version. Please upgrade using `pip install -U aicrowd-cli` and login again.';exit 1
  fi
  export OAUTH=$(python -c 'from aicrowd.contexts.config import CLIConfig;c=CLIConfig();c.load(None);print(c.get("gitlab")["oauth_token"])')
  git remote add aicrowd https://oauth2:$OAUTH@gitlab.aicrowd.com/$USERNAME/mdx-2023-robust-music-separation-starter-kit.git 2> /dev/null
  git config lfs.https://oauth2:$OAUTH@gitlab.aicrowd.com/$USERNAME/mdx-2023-robust-music-separation-starter-kit.git/info/lfs.locksverify true
  git config lfs.https://gitlab.aicrowd.com/$USERNAME/mdx-2023-robust-music-separation-starter-kit.git/info/lfs.locksverify true
  retval=$?
  if [ $retval -ne 0 ]; then
    log_normal "Remote already exists, repository location: $(get_submission_remote_url | awk -F'@' '{print $NF}' | sed 's|\.git||g')";
  # ...
  log_success "Making submission as \"${USERNAME}\""
  set -e
}

submit() {
  check_cli_install
  check_remote
  setup_lfs "$@"
  setup_commits "$@"
}

if [[ $# -lt 1 ]]; then
  print_usage
  exit 1
fi

submit "$@"