diff --git a/README.md b/README.md index 10318e28162c22963e59a3d7ceeb78891b522cd5..9be60525deab3f652f09bad035c068404066eb64 100644 --- a/README.md +++ b/README.md @@ -1,5 +1,4 @@ -# TODO: Add banner -![Banner image]() +# **[Music Demixing Challenge 2023 - Robust Music Separation](https://www.aicrowd.com/challenges/music-demixing-challenge-2023/problems/robust-music-separation)** - Starter Kit [](https://discord.gg/fNRrSvZkry) @@ -12,12 +11,12 @@ This repository is the Music Demixing Challenge 2023 - Robust Music Separation * Quick Links: * [Music Demixing Challenge 2023 - Robust Music Separation - Competition Page](https://www.aicrowd.com/challenges/music-demixing-challenge-2023/problems/robust-music-separation) -* [Discussion Forum](https://www.aicrowd.com/challenges/suadd-23-scene-understanding-for-autonomous-drone-delivery/discussion) -* [SUADD 2023 Challenge Overview](https://www.aicrowd.com/challenges/suadd-23-scene-understanding-for-autonomous-drone-delivery) +* [Discussion Forum](https://www.aicrowd.com/challenges/music-demixing-challenge-2023/discussion) +* [Music Demixing 2023 Challenge Overview](https://www.aicrowd.com/challenges/music-demixing-challenge-2023/) -# Table of Contents -1. [About the Scene Understanding for Autonomous Drone Delivery Challenge](#about-the-scene-understanding-for-autonomous-drone-delivery-challenge) +# 📠 Table of Contents +1. [About the Music Demixing Challenge 2023](#about-the-music-demixing-challenge-2023) 2. [Evaluation](#evaluation) 3. [Baselines](#baselines) 4. [How to test and debug locally](#how-to-test-and-debug-locally) @@ -26,7 +25,7 @@ Quick Links: 7. [Setting up your codebase](#setting-up-your-codebase) 8. [FAQs](#faqs) -# About the Music Demixing Challenge 2023 +# 🎶 About the Music Demixing Challenge 2023 Have you ever sung using a karaoke machine or made a DJ music mix of your favourite song?
Have you wondered how hearing aids help people listen more clearly or how video conference software reduces background noise? @@ -38,7 +37,9 @@ The Music Demixing Challenge 2023 (MDX23) is an opportunity for researchers and Given an **audio signal as input** (referred to as a "mixture"), you must **decompose it into its different parts**. -🎻 ROBUST MUSIC SEPARATION + + +## 🎻 ROBUST MUSIC SEPARATION This task will focus on music source separation. Participants will submit systems that separate a song into four instruments: vocals, bass, drums, and other (the instrument "other" contains signals of all instruments other than the first three, e.g., guitar or piano). @@ -46,49 +47,53 @@ Karaoke systems can benefit from the audio source separation technology as users Similar to [Music Demixing Challenge 2021](https://www.aicrowd.com/challenges/music-demixing-challenge-ismir-2021), this task will have two leaderboards. -**MUSDB18 Leaderboard** +### **Leaderboard A (MUSDB18)** -Participants in Leaderboard will be allowed to train their system exclusively on the training part of the MUSDB18-HQ dataset. This dataset has become the standard in literature as it is free to use and allows anyone to start training source separation models. +Participants in Leaderboard A will be allowed to train their system exclusively on the training part of the MUSDB18-HQ dataset. This dataset has become the standard in literature as it is free to use and allows anyone to start training source separation models. The label swaps are included in the dataset for this leaderboard. -**No bars held Leaderboard** +### **Leaderboard B (No bars held)** -This leaderboard will allow bleeding/mixtures in training data. You can train on any data that you like. +Leaderboard B will allow bleeding/mixtures in training data. You can train on any data that you like. For both leaderboards, the winning teams will be required to publish their training code (to receive a prize), since the challenge focuses on the training method.
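For Leaderboard B, external training data must be declared in `aicrowd.json`. A minimal sketch of the relevant flag (only `external_dataset_used` is confirmed by this kit; the real file also carries your model description and other metadata, so check the `aicrowd.json` shipped in the repository for the exact schema):

```json
{
  "external_dataset_used": true
}
```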
-# Evaluation +### 🚨 **NOTE**: To participate in Leaderboard B, you need to set `"external_dataset_used": true` in the [`aicrowd.json`](aicrowd.json) file. + +# ✅ Evaluation As an evaluation metric, we are using the signal-to-distortion ratio (SDR), which is defined as, - +$`SDR_{instr} = 10\log_{10}\frac{\sum_n(s_{instr,left\ channel}(n))^2 + \sum_n(s_{instr,right\ channel}(n))^2}{\sum_n(s_{instr,left\ channel}(n) - \hat{s}_{instr,left\ channel}(n))^2 + \sum_n(s_{instr,right\ channel}(n) - \hat{s}_{instr,right\ channel}(n))^2}`$ -where Sð‘–ð‘›ð‘ ð‘¡ð‘Ÿ(n) is the waveform of the ground truth and Åœð‘–ð‘›ð‘ ð‘¡ð‘Ÿ(ð‘›) denotes the waveform of the estimate. The higher the SDR score, the better the output of the system is. +where $s_{instr}(n)$ is the waveform of the ground truth and $\hat{s}_{instr}(n)$ denotes the waveform of the estimate. The higher the SDR score, the better the output of the system is. In order to rank systems, we will use the average SDR computed by - +$`SDR_{song} = \frac{1}{4}(SDR_{bass} + SDR_{drums} + SDR_{vocals} + SDR_{other})`$ for each song. Finally, the overall score is obtained by averaging $SDR_{song}$ over all songs in the hidden test set. -# Baselines +# 🤖 Baselines + +We use the [Open-Unmix](https://github.com/sigsep/open-unmix-pytorch) library for the baseline. Specifically, we provide trained checkpoints for the UMXL model. You can use the baseline by switching to the `openunmix-baseline` [branch](https://gitlab.aicrowd.com/aicrowd/challenges/music-demixing-challenge-2023/mdx-2023-robust-music-separation-starter-kit/-/blob/openunmix-baseline/) on this repository. To test the models locally, you need to install `git-lfs`. -TODO: To be added +When submitting your own models, you need to submit the checkpoints using `git-lfs`.
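The SDR metric defined in the Evaluation section can be sketched in a few lines of Python. This is a minimal illustration, not the official evaluator (which may, e.g., handle silent sources differently); the `eps` guard against division by zero is an assumption of this sketch:

```python
import numpy as np

def sdr(ref: np.ndarray, est: np.ndarray, eps: float = 1e-10) -> float:
    """SDR for one stem; ref/est are stereo arrays of shape (2, n_samples).

    Numerator: squared ground-truth samples summed over both channels.
    Denominator: squared estimation error summed over both channels.
    eps avoids division by zero (an assumption of this sketch).
    """
    num = np.sum(ref ** 2)
    den = np.sum((ref - est) ** 2)
    return 10.0 * np.log10((num + eps) / (den + eps))

def song_sdr(refs: dict, ests: dict) -> float:
    """Average SDR over the four stems, as in the ranking formula."""
    stems = ("bass", "drums", "vocals", "other")
    return sum(sdr(refs[s], ests[s]) for s in stems) / 4.0
```

For intuition: an estimate that is exactly half the reference has an error of half the reference, so the ratio is 4 and the score is 10·log10(4) ≈ 6.02 dB per stem.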
Check the instructions shared in the inference file [here](https://gitlab.aicrowd.com/aicrowd/challenges/music-demixing-challenge-2023/mdx-2023-robust-music-separation-starter-kit/-/blob/openunmix-baseline/my_submission/openunmix_separation_model.py) -# How to Test and Debug Locally +# 💻 How to Test and Debug Locally The best way to test your models is to run your submission locally. You can do this by simply running `python evaluate_locally.py`. **Note that your local setup and the server evaluation runtime may vary.** Make sure you set up your runtime according to the section: [How do I specify my dependencies?](#how-do-i-specify-my-dependencies) -# How to Submit +# 🚀 How to Submit You can use the submission script `source submit.sh <submission_text>` More information on submissions can be found in [SUBMISSION.md](/docs/submission.md). #### A high-level description of the Challenge Procedure: -1. **Sign up** to join the competition [on the AIcrowd website](https://www.aicrowd.com/challenges/neurips-2022-iglu-challenge). +1. **Sign up** to join the competition [on the AIcrowd website](https://www.aicrowd.com/challenges/music-demixing-challenge-2023). 2. **Clone** this repo and start developing your solution. 3. **Train** your source separation models, and ensure `python evaluate_locally.py` runs your submission end-to-end. 4. **Submit** your trained models to [AIcrowd Gitlab](https://gitlab.aicrowd.com) @@ -98,14 +103,12 @@ number of rollouts to compute and report the metrics on the leaderboard of the competition. -# Dataset - -Download the public dataset for this Task using the link below, you'll need to accept the rules of the competition to access the data. The data is same as the well known MUSDB18-HQ dataset and its compressed version.
+# 💽 Dataset -https://www.aicrowd.com/challenges/music-demixing-challenge-2023/problems/robust-music-separation/dataset_files +Download the public dataset for this task using this [link](https://www.aicrowd.com/challenges/music-demixing-challenge-2023/problems/robust-music-separation/dataset_files). You'll need to accept the rules of the competition to access the data. The data is the same as the well-known MUSDB18-HQ dataset and its compressed version. -# Setting Up Your Codebase +# 📑 Setting Up Your Codebase AIcrowd provides great flexibility in the details of your submission! Find the answers to FAQs about submission structure below, followed by @@ -133,7 +136,7 @@ The different files and directories have the following meaning: ``` . -├── aicrowd.json # Add any descriptions about your model +├── aicrowd.json # Add any descriptions about your model, and set `external_dataset_used` ├── apt.txt # Linux packages to be installed inside docker image ├── requirements.txt # Python packages to be installed ├── evaluate_locally.py # Use this to check your model evaluation flow locally @@ -151,14 +154,20 @@ Once you are ready, test your implementation `python evaluate_locally.py` ### How do I actually make a submission? +You can use the submission script `source submit.sh <submission_text>` + The submission is made by adding everything including the model to git, -tagging the submission with a git tag that starts with `submission-`, and +tagging the submission with a git tag that starts with `submission-`, and pushing to AIcrowd's GitLab. The rest is done for you! For large model weight files, you'll need to use `git-lfs`. More details are available at [docs/submission.md](/docs/submission.md). +When you make a submission, browse to the `issues` page of your repository; a successful submission should look like this. + + + ### Are there any hardware or time constraints? Your submission will need to complete predictions on all the **sound tracks** under **120 minutes**.
Make sure you take advantage @@ -168,16 +177,12 @@ The machine where the submission will run will have the following specifications: * 4 vCPUs * 16GB RAM -## Contributors - -- [Dipam Chakraborty](https://www.aicrowd.com/participants/dipam) - # 📎 Important links - 💪 [Challenge Page](https://www.aicrowd.com/challenges/music-demixing-challenge-2023/problems/robust-music-separation) - 🗣️ [Discussion Forum](https://www.aicrowd.com/challenges/music-demixing-challenge-2023/discussion) - 🏆 [Leaderboard](https://www.aicrowd.com/challenges/music-demixing-challenge-2023/problems/robust-music-separation/leaderboards) - 🎵 [Music Demixing Challenge 2021](https://www.aicrowd.com/challenges/music-demixing-challenge-ismir-2021) -You may also like the new **Cinematic Sound Separation track** +You may also like the new [Cinematic Sound Separation track](https://www.aicrowd.com/challenges/music-demixing-challenge-2023/problems/cinematic-sound-separation) **Best of Luck** 🎉 🎉 diff --git a/submit.sh b/submit.sh index 755365b0be7e7fa75af728419f059b0f38114c43..9dbef8ecb7d5257a53a8d467a269b49b8d5a54e8 100644 --- a/submit.sh +++ b/submit.sh @@ -1,47 +1,65 @@ #!/bin/bash + set -e + REPO_ROOT_DIR=$(git rev-parse --show-toplevel) + log_info() { echo -e "\033[0;36m$@\033[0m" } + log_success() { echo -e "\033[0;32m$@\033[0m" } + log_normal() { echo -e "$@" } + log_error() { >&2 echo -e "\033[0;31m$@\033[0m" } + print_usage() { cat << USAGE Utility script to make submissions. It adds the current working directory files, sets up lfs, commits, and uploads them to AIcrowd GitLab. + Usage: ./submit.sh <unique-submission-name> + Example: ./submit.sh "bayes-v0.1" + Prerequisite: Install aicrowd-cli and login on AIcrowd #> pip install -U aicrowd-cli #> aicrowd login + USAGE } + + bad_remote_message() { log_error "AIcrowd remote not found" - log_error "It should have been automatically set, but given it isn't.
Please run \`git remote add aicrowd git@gitlab.aicrowd.com:<username>/suadd-2023-semantic-segmentation-starter-kit.git\` manually." + log_error "It should have been set automatically, but it appears it wasn't. Please run \`git remote add aicrowd git@gitlab.aicrowd.com:<username>/mdx-2023-robust-music-separation-starter-kit.git\` manually." exit 1 } + get_submission_remote() { echo "aicrowd" } + get_submission_remote_url() { git remote get-url aicrowd } + check_remote() { log_info Checking git remote settings... get_submission_remote > /dev/null log_success Using $(get_submission_remote_url | awk -F'@' '{print $NF}' | sed 's|\.git||g') as the submission repository } + + setup_lfs() { git lfs install find * -type f -size +5M -exec git lfs track {} &> /dev/null \; @@ -51,6 +69,8 @@ setup_lfs() { && git commit -m "auto: tracking new files via LFS" &> /dev/null set -e } + + setup_commits() { REMOTE=$(get_submission_remote) TAG=$(echo "$@" | sed 's/ /-/g') @@ -61,6 +81,7 @@ setup_commits() { git push -f $REMOTE "submission-$TAG" log_success "Check the submission progress in your repository: $(get_submission_remote_url | awk -F'@' '{print $NF}' | sed 's|\.git||g')/issues" } + check_cli_install() { set +e which aicrowd > /dev/null @@ -79,9 +100,9 @@ check_cli_install() { log_error 'You might be on older AIcrowd CLI version.
Please upgrade using `pip install -U aicrowd-cli` and login again.';exit 1 fi export OAUTH=$(python -c 'from aicrowd.contexts.config import CLIConfig;c=CLIConfig();c.load(None);print(c.get("gitlab")["oauth_token"])') - git remote add aicrowd https://oauth2:$OAUTH@gitlab.aicrowd.com/$USERNAME/suadd-2023-semantic-segmentation-starter-kit.git 2> /dev/null - git config lfs.https://oauth2:$OAUTH@gitlab.aicrowd.com/$USERNAME/suadd-2023-semantic-segmentation-starter-kit.git/info/lfs.locksverify true - git config lfs.https://gitlab.aicrowd.com/$USERNAME/suadd-2023-semantic-segmentation-starter-kit.git/info/lfs.locksverify true + git remote add aicrowd https://oauth2:$OAUTH@gitlab.aicrowd.com/$USERNAME/mdx-2023-robust-music-separation-starter-kit.git 2> /dev/null + git config lfs.https://oauth2:$OAUTH@gitlab.aicrowd.com/$USERNAME/mdx-2023-robust-music-separation-starter-kit.git/info/lfs.locksverify true + git config lfs.https://gitlab.aicrowd.com/$USERNAME/mdx-2023-robust-music-separation-starter-kit.git/info/lfs.locksverify true retval=$? if [ $retval -ne 0 ]; then log_normal "Remote already exists; repository location: $(get_submission_remote | awk -F'@' '{print $NF}' | sed 's|\.git||g')"; @@ -97,14 +118,17 @@ check_cli_install() { log_success "Making submission as \"${USERNAME}\"" set -e } + submit() { check_cli_install check_remote setup_lfs "$@" setup_commits "$@" } + if [[ $# -lt 1 ]]; then print_usage exit 1 fi + submit "$@"
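As a usage note on `submit.sh`: `setup_commits` derives the git tag from your submission text by replacing spaces with hyphens, so `source submit.sh "bayes v0.1"` pushes the tag `submission-bayes-v0.1`. The transformation can be checked in isolation:

```shell
#!/bin/bash
# Mirrors the TAG derivation in setup_commits (illustration only;
# DESC stands in for the script's "$@" arguments).
DESC="bayes v0.1"
TAG=$(echo "$DESC" | sed 's/ /-/g')
echo "submission-$TAG"   # prints: submission-bayes-v0.1
```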