diff --git a/README.md b/README.md
index 2ebdd5d96cec1734ab7c24e972171853e09b5511..dda27bd270bc85289036c4e2616bf915b5780730 100644
--- a/README.md
+++ b/README.md
@@ -61,19 +61,19 @@ For both the leaderboards, the winning teams will be required to publish their t
 
 As an evaluation metric, we are using the signal-to-distortion ratio (SDR), which is defined as,
 
-![](https://images.aicrowd.com/uploads/ckeditor/pictures/404/content_SDR_instr.png)
+$SDR_{instr} = 10\log_{10}\frac{\sum_n(s_{instr,\text{left channel}}(n))^2 + \sum_n(s_{instr,\text{right channel}}(n))^2}{\sum_n(s_{instr,\text{left channel}}(n) - \hat{s}_{instr,\text{left channel}}(n))^2 + \sum_n(s_{instr,\text{right channel}}(n) - \hat{s}_{instr,\text{right channel}}(n))^2}$
 
-where S𝑖𝑛𝑠𝑡𝑟(n) is the waveform of the ground truth and Ŝ𝑖𝑛𝑠𝑡𝑟(𝑛) denotes the waveform of the estimate. The higher the SDR score, the better the output of the system is.
+where $s_{instr}(n)$ is the waveform of the ground truth and $\hat{s}_{instr}(n)$ is the waveform of the estimate. The higher the SDR score, the better the output of the system.
 
 In order to rank systems, we will use the average SDR computed by
 
-![](https://images.aicrowd.com/uploads/ckeditor/pictures/405/content_SDR_song.png)
+$SDR_{song} = \frac{1}{4}(SDR_{bass} + SDR_{drums} + SDR_{vocals} + SDR_{other})$
 
-for each song. Finally, the overall score is obtained by averaging SDRsong over all songs in the hidden test set.
+for each song. Finally, the overall score is obtained by averaging $SDR_{song}$ over all songs in the hidden test set.
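+
+To make the metric concrete, here is a minimal sketch of how $SDR_{instr}$ and $SDR_{song}$ could be computed with NumPy. The function names, the `(num_samples, 2)` stereo array layout, and the small `eps` guard against division by zero are illustrative assumptions, not the official evaluation code.
+
+```python
+import numpy as np
+
+def sdr_instr(reference: np.ndarray, estimate: np.ndarray, eps: float = 1e-12) -> float:
+    """Stereo SDR for one instrument.
+
+    `reference` and `estimate` have shape (num_samples, 2):
+    column 0 is the left channel, column 1 the right channel.
+    """
+    # Numerator: energy of the ground-truth signal, summed over both channels.
+    num = np.sum(reference ** 2)
+    # Denominator: energy of the estimation error, summed over both channels.
+    den = np.sum((reference - estimate) ** 2)
+    # `eps` avoids division by zero; it is not part of the formula above.
+    return 10.0 * np.log10((num + eps) / (den + eps))
+
+def sdr_song(references: dict, estimates: dict) -> float:
+    """Average the four per-instrument SDRs of one song."""
+    stems = ("bass", "drums", "vocals", "other")
+    return float(np.mean([sdr_instr(references[s], estimates[s]) for s in stems]))
+```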
 
 # Baselines
 
-TODO: To be added
+We use Open-Unmix as the baseline. Specifically, we provide trained checkpoints for the UMXL model. To use the baseline, switch to the `openunmix-baseline` branch of this repository. To test the models locally, you will need to install `git-lfs`.
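+
+As a quick sanity check, separating a mixture with the upstream Open-Unmix package could look roughly like the sketch below. The `torch.hub` entry point (`umxl`), the tensor shapes, and the dummy input are assumptions based on the public `sigsep/open-unmix-pytorch` repository rather than on the baseline branch itself; see the branch for the exact inference code.
+
+```python
+import torch
+
+# Load the pretrained UMXL separator from torch.hub (downloads weights on first use).
+separator = torch.hub.load("sigsep/open-unmix-pytorch", "umxl")
+separator.eval()
+
+# Dummy stereo mixture: (batch, channels, samples) at 44.1 kHz, 10 seconds long.
+mixture = torch.rand(1, 2, 44100 * 10)
+
+with torch.no_grad():
+    # Estimates have shape (batch, targets, channels, samples).
+    estimates = separator(mixture)
+
+print(estimates.shape)
+```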
 
 # How to Test and Debug Locally