diff --git a/README.md b/README.md
index d9a9aea10e3d301e10ebc3c8a03517b112a0ce8f..36de0ba810f9783bd7580ba2fcfe7844441e283e 100644
--- a/README.md
+++ b/README.md
@@ -1,4 +1,3 @@
-# TODO: Add banner
 
 # **[Music Demixing Challenge 2023 - Robust Music Separation](https://www.aicrowd.com/challenges/music-demixing-challenge-2023/problems/robust-music-separation)** - Starter Kit
 
@@ -12,8 +11,8 @@ This repository is the Music Demixing Challenge 2023 - Robust Music Separation
 
 * Quick Links:
 * [Music Demixing Challenge 2023 - Robust Music Separation - Competition Page](https://www.aicrowd.com/challenges/music-demixing-challenge-2023/problems/robust-music-separation)
-* [Discussion Forum](https://www.aicrowd.com/challenges/suadd-23-scene-understanding-for-autonomous-drone-delivery/discussion)
-* [SUADD 2023 Challenge Overview](https://www.aicrowd.com/challenges/suadd-23-scene-understanding-for-autonomous-drone-delivery)
+* [Discussion Forum](https://www.aicrowd.com/challenges/music-demixing-challenge-2023/discussion)
+* [Music Demixing 2023 Challenge Overview](https://www.aicrowd.com/challenges/music-demixing-challenge-2023/)
 
 # Table of Contents
@@ -40,7 +39,7 @@ Given an **audio signal as input** (referred to as a "mixture"), you must **deco
 
 
-🎻 ROBUST MUSIC SEPARATION
+## 🎻 ROBUST MUSIC SEPARATION
 
 This task will focus on music source separation. Participants will submit systems that separate a song into four instruments: vocals, bass, drums, and other (the instrument "other" contains signals of all instruments other than the first three, e.g., guitar or piano).
@@ -48,17 +47,19 @@ Karaoke systems can benefit from the audio source separation technology as users
 
 Similar to [Music Demixing Challenge 2021](https://www.aicrowd.com/challenges/music-demixing-challenge-ismir-2021), this task will have two leaderboards.
 
-**MUSDB18 Leaderboard**
+### **Leaderboard A (MUSDB18)**
 
-Participants in Leaderboard will be allowed to train their system exclusively on the training part of the MUSDB18-HQ dataset. This dataset has become the standard in literature as it is free to use and allows anyone to start training source separation models.
+Participants in Leaderboard A will be allowed to train their system exclusively on the training part of the MUSDB18-HQ dataset. This dataset has become the standard in literature as it is free to use and allows anyone to start training source separation models. The label swaps are included in the dataset for this leaderboard.
 
-**No bars held Leaderboard**
+### **Leaderboard B (No bars held)**
 
-This leaderboard will allow bleeding/mixtures in training data. You can train on any data that you like.
+Leaderboard B will allow bleeding/mixtures in training data. You can train on any data that you like.
 
 For both the leaderboards, the winning teams will be required to publish their training code (to receive a prize) as it is about the training method.
 
+### **NOTE**: To participate in Leaderboard B, you need to set `"external_dataset_used": true` in the [`aicrowd.json`](aicrowd.json) file.
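+
+For example, an `aicrowd.json` opting into Leaderboard B could look like the sketch below. Only the `external_dataset_used` flag is the point here; the other fields are illustrative placeholders, so keep whatever your kit's `aicrowd.json` already contains:
+
+```json
+{
+  "challenge_id": "mdx-2023-robust-music-separation",
+  "authors": ["your-aicrowd-username"],
+  "description": "Music separation model trained with external data",
+  "external_dataset_used": true
+}
+```
+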
 # Evaluation
 
 As an evaluation metric, we are using the signal-to-distortion ratio (SDR), which is defined as,
@@ -69,15 +70,15 @@ where $S_{instr}(n)$ is the waveform of the ground truth and Ŝ𝑖𝑛𝑠𝑡
 
 In order to rank systems, we will use the average SDR computed by
 
-$SDR_{song} = \frac{1}{4}(SDR_{bass} + SDR_{drums} + SDR_{vocals} + SDR_{other})$
+$`SDR_{song} = \frac{1}{4}(SDR_{bass} + SDR_{drums} + SDR_{vocals} + SDR_{other})`$
 
 for each song.
 
 Finally, the overall score is obtained by averaging $SDR_{song}$ over all songs in the hidden test set.
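+
+For reference, the snippet below is a minimal NumPy sketch of this scoring scheme. The per-instrument SDR follows the usual sample-wise definition; the official evaluator's code (including any stabilizing constant such as the `eps` used here) may differ in detail:
+
+```python
+import numpy as np
+
+def sdr(reference, estimate, eps=1e-7):
+    # Per-instrument SDR; eps (an assumption here) guards against division by zero.
+    num = np.sum(reference ** 2) + eps
+    den = np.sum((reference - estimate) ** 2) + eps
+    return 10.0 * np.log10(num / den)
+
+def sdr_song(references, estimates):
+    # Average the four per-instrument SDRs of one song, as in the formula above.
+    stems = ["bass", "drums", "vocals", "other"]
+    return float(np.mean([sdr(references[s], estimates[s]) for s in stems]))
+```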
 
 # Baselines
 
-We use the [Open-Unmix](https://github.com/sigsep/open-unmix-pytorch) library for the baseline. Specifically, we provide trained checkpoints for the UMXL model. You can use the baseline by switching to the `openunmix-baseline` branch on this repository. To test the models locally, you need to install `git-lfs`.
+We use the [Open-Unmix](https://github.com/sigsep/open-unmix-pytorch) library for the baseline. Specifically, we provide trained checkpoints for the UMXL model. You can use the baseline by switching to the `openunmix-baseline` [branch](https://gitlab.aicrowd.com/aicrowd/challenges/music-demixing-challenge-2023/mdx-2023-robust-music-separation-starter-kit/-/blob/openunmix-baseline/) on this repository. To test the models locally, you need to install `git-lfs`.
 
-When submitting your own models, you need to submit the checkpoints using `git-lfs`. Check the instructions shared in the inference file [here](https://gitlab.aicrowd.com/music-demixing-challenge-2023/mdx-2023-robust-music-separation-starter-kit/-/blob/openunmix-baseline/my_submission/openunmix_separation_model.py)
+When submitting your own models, you need to submit the checkpoints using `git-lfs`. Check the instructions shared in the inference file [here](https://gitlab.aicrowd.com/aicrowd/challenges/music-demixing-challenge-2023/mdx-2023-robust-music-separation-starter-kit/-/blob/openunmix-baseline/my_submission/openunmix_separation_model.py).
 
 # How to Test and Debug Locally
@@ -92,7 +93,7 @@ You can use the submission script `source submit.sh <submission_text>`
 
 More information on submissions can be found in [SUBMISSION.md](/docs/submission.md).
 
 #### A high level description of the Challenge Procedure:
-1. **Sign up** to join the competition [on the AIcrowd website](https://www.aicrowd.com/challenges/neurips-2022-iglu-challenge).
+1. **Sign up** to join the competition [on the AIcrowd website](https://www.aicrowd.com/challenges/music-demixing-challenge-2023).
 2. **Clone** this repo and start developing your solution.
-3. **Train** your models on IGLU, and ensure run.sh will generate rollouts.
+3. **Train** your separation model and check that the evaluation flow runs end to end with `python evaluate_locally.py`.
 4. **Submit** your trained models to [AIcrowd Gitlab](https://gitlab.aicrowd.com)
@@ -135,7 +136,7 @@ The different files and directories have following meaning:
 
 ```
 .
-├── aicrowd.json            # Add any descriptions about your model
+├── aicrowd.json            # Add any descriptions about your model, and set `external_dataset_used`
 ├── apt.txt                 # Linux packages to be installed inside docker image
 ├── requirements.txt        # Python packages to be installed
 ├── evaluate_locally.py     # Use this to check your model evaluation flow locally
@@ -153,14 +154,20 @@ Once you are ready, test your implementation `python evaluate_locally.py`
 
 ### How do I actually make a submission?
 
+You can use the submission script `source submit.sh <submission_text>`
+
 The submission is made by adding everything including the model to git,
-tagging the submission with a git tag that starts with `submission-`, and
+tagging the submission with a git tag that starts with `submission-`, and
 pushing to AIcrowd's GitLab. The rest is done for you!
 
 For large model weight files, you'll need to use `git-lfs`
 
 More details are available at [docs/submission.md](/docs/submission.md).
 
+When you make a submission, browse to the `issues` page on your repository; a successful submission should look like this.
+
+
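+
+Under the hood, a submission is roughly the git flow sketched below. This is a simplified illustration; `submit.sh` automates these steps for you, and the LFS pattern and tag name are only examples:
+
+```bash
+git lfs install
+git lfs track "*.ckpt"            # track large model weights via LFS (example pattern)
+git add .
+git commit -m "my submission"
+git tag submission-v0.1           # the tag must start with `submission-`
+git push aicrowd submission-v0.1  # pushing the tag to the AIcrowd remote triggers evaluation
+```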
 
 ### Are there any hardware or time constraints?
 
 Your submission will need to complete predictions on all the **sound tracks** under **120 minutes**. Make sure you take advantage
@@ -170,10 +177,6 @@ The machine where the submission will run will have following specifications:
 * 4 vCPUs
 * 16GB RAM
 
-## Contributors
-
-- [Dipam Chakraborty](https://www.aicrowd.com/participants/dipam)
-
 # 📎 Important links
 - 💪 [Challenge Page](https://www.aicrowd.com/challenges/music-demixing-challenge-2023/problems/robust-music-separation)
 - 🗣️ [Discussion Forum](https://www.aicrowd.com/challenges/music-demixing-challenge-2023/discussion)
diff --git a/submit.sh b/submit.sh
index 755365b0be7e7fa75af728419f059b0f38114c43..9dbef8ecb7d5257a53a8d467a269b49b8d5a54e8 100644
--- a/submit.sh
+++ b/submit.sh
@@ -1,47 +1,65 @@
 #!/bin/bash
+
 set -e
+
 REPO_ROOT_DIR=$(git rev-parse --show-toplevel)
+
 log_info() {
   echo -e "\033[0;36m$@\033[0m"
 }
+
 log_success() {
   echo -e "\033[0;32m$@\033[0m"
 }
+
 log_normal() {
   echo -e "$@"
 }
+
 log_error() {
   >&2 echo -e "\033[0;31m$@\033[0m"
 }
+
 print_usage() {
 cat << USAGE
 Utility script to make submissions.
 It adds current working directory files, setup lfs, commit
 and upload it to AIcrowd GitLab.
+
 Usage: ./submit.sh <unique-submission-name>
+
 Example:
 ./submit.sh "bayes-v0.1"
+
 Prerequisite:
 Install aicrowd-cli and login on AIcrowd
 #> pip install -U aicrowd-cli
 #> aicrowd login
+
 USAGE
 }
+
+
 bad_remote_message() {
   log_error "AIcrowd remote not found"
-  log_error "It should have been automatically set, but given it isn't. Please run \`git remote add aicrowd git@gitlab.aicrowd.com:<username>/suadd-2023-semantic-segmentation-starter-kit.git\` manually."
+  log_error "It should have been set automatically, but it wasn't. Please run \`git remote add aicrowd git@gitlab.aicrowd.com:<username>/mdx-2023-robust-music-separation-starter-kit.git\` manually."
   exit 1
 }
+
 get_submission_remote() {
   echo "aicrowd"
 }
+
 get_submission_remote_url() {
   git remote get-url aicrowd
 }
+
 check_remote() {
   log_info Checking git remote settings...
   get_submission_remote > /dev/null
   log_success Using $(get_submission_remote_url | awk -F'@' '{print $NF}' | sed 's|\.git||g') as the submission repository
 }
+
+
 setup_lfs() {
   git lfs install
   find * -type f -size +5M -exec git lfs track {} &> /dev/null \;
@@ -51,6 +69,8 @@ setup_lfs() {
     && git commit -m "auto: tracking new files via LFS" &> /dev/null
   set -e
 }
+
+
 setup_commits() {
   REMOTE=$(get_submission_remote)
   TAG=$(echo "$@" | sed 's/ /-/g')
@@ -61,6 +81,7 @@ setup_commits() {
   git push -f $REMOTE "submission-$TAG"
   log_success "Check the submission progress in your repository: $(get_submission_remote_url | awk -F'@' '{print $NF}' | sed 's|\.git||g')/issues"
 }
+
 check_cli_install() {
   set +e
   which aicrowd > /dev/null
@@ -79,9 +100,9 @@ check_cli_install() {
   log_error 'You might be on older AIcrowd CLI version.
 Please upgrade using `pip install -U aicrowd-cli` and login again.';exit 1
   fi
   export OAUTH=$(python -c 'from aicrowd.contexts.config import CLIConfig;c=CLIConfig();c.load(None);print(c.get("gitlab")["oauth_token"])')
-  git remote add aicrowd https://oauth2:$OAUTH@gitlab.aicrowd.com/$USERNAME/suadd-2023-semantic-segmentation-starter-kit.git 2> /dev/null
-  git config lfs.https://oauth2:$OAUTH@gitlab.aicrowd.com/$USERNAME/suadd-2023-semantic-segmentation-starter-kit.git/info/lfs.locksverify true
-  git config lfs.https://gitlab.aicrowd.com/$USERNAME/suadd-2023-semantic-segmentation-starter-kit.git/info/lfs.locksverify true
+  git remote add aicrowd https://oauth2:$OAUTH@gitlab.aicrowd.com/$USERNAME/mdx-2023-robust-music-separation-starter-kit.git 2> /dev/null
+  git config lfs.https://oauth2:$OAUTH@gitlab.aicrowd.com/$USERNAME/mdx-2023-robust-music-separation-starter-kit.git/info/lfs.locksverify true
+  git config lfs.https://gitlab.aicrowd.com/$USERNAME/mdx-2023-robust-music-separation-starter-kit.git/info/lfs.locksverify true
   retval=$?
   if [ $retval -ne 0 ]; then
-    log_normal "Remote already exit, repository location: $(get_submission_remote | awk -F'@' '{print $NF}' | sed 's|\.git||g')";
+    log_normal "Remote already exists, repository location: $(get_submission_remote_url | awk -F'@' '{print $NF}' | sed 's|\.git||g')";
@@ -97,14 +118,17 @@ check_cli_install() {
   log_success "Making submission as \"${USERNAME}\""
   set -e
 }
+
 submit() {
   check_cli_install
   check_remote
   setup_lfs "$@"
   setup_commits "$@"
 }
+
 if [[ $# -lt 1 ]]; then
   print_usage
   exit 1
 fi
+
 submit "$@"
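+
+# Example end-to-end usage (an illustrative sketch, assuming aicrowd-cli is
+# installed and you are logged in, per the Prerequisite section above):
+#
+#   pip install -U aicrowd-cli
+#   aicrowd login
+#   ./submit.sh "bayes-v0.1"
+#
+# Progress can then be tracked on the `issues` page of your repository on AIcrowd GitLab.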