diff --git a/README.md b/README.md
index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..f39a18438c31bd20e64297693e94127d034321b1 100644
--- a/README.md
+++ b/README.md
@@ -0,0 +1,110 @@
+![banner image](https://images.aicrowd.com/raw_images/challenges/banner_file/1135/6c90e052800987c29365.png)
+[![Discord](https://img.shields.io/discord/565639094860775436.svg)](https://discord.gg/yWurtB2huX)
+
+# 🛒 [CRAG: Comprehensive RAG Benchmark](https://www.aicrowd.com/challenges/meta-comprehensive-rag-benchmark-kdd-cup-2024) Starter Kit
+
+
+This repository is the CRAG: Comprehensive RAG Benchmark **Submission template and Starter kit**! Clone the repository to compete now!
+
+**This repository contains**:
+*  **Documentation** on how to submit your models to the leaderboard
+*  **Best practices** and information on how we evaluate your model, etc.
+*  **Starter code** for you to get started!
+
+# Table of Contents
+
+1. [Competition Overview](#-competition-overview)
+2. [Dataset](#-dataset)
+3. [Tasks](#-tasks)
+4. [Evaluation Metrics](#-evaluation-metrics)
+5. [Getting Started](#-getting-started)
+   - [How to write your own model?](#️-how-to-write-your-own-model)
+   - [How to start participating?](#-how-to-start-participating)
+      - [Setup](#setup)
+      - [How to make a submission?](#-how-to-make-a-submission)
+      - [What hardware does my code run on?](#-what-hardware-does-my-code-run-on-)
+      - [How are my model responses parsed by the evaluators?](#-how-are-my-model-responses-parsed-by-the-evaluators-)
+6. [Frequently Asked Questions](#-frequently-asked-questions)
+7. [Important Links](#-important-links)
+
+
+# 📖 Competition Overview
+
+
+# 📊 Dataset
+
+
+# 👨‍💻👩‍💻 Tasks  
+
+
+# 📏 Evaluation Metrics
+
+
+Please refer to [local_evaluation.py](local_evaluation.py) for more details on how we will evaluate your submissions.
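+
+For intuition only, a minimal sketch of what such a local evaluation loop might look like is shown below. It is not the actual script: the field names assumed in `data/development.json` and the keyword arguments of `generate_answer` are illustrative assumptions, and `UserModel` stands for whatever class you expose from `models/user_config.py`.
+
+```python
+# Illustrative sketch only -- see local_evaluation.py for the authoritative logic.
+import json
+
+from models.user_config import UserModel  # the model class you configure in models/user_config.py
+
+
+def run_local_evaluation(dataset_path: str = "data/development.json") -> None:
+    model = UserModel()
+    with open(dataset_path) as f:
+        examples = json.load(f)  # assumed to be a list of {"query": ..., "search_results": ..., ...}
+    for example in examples:
+        answer = model.generate_answer(
+            query=example["query"],
+            search_results=example["search_results"],
+            character_limit=example["character_limit"],
+        )
+        print(answer)  # the real script scores answers with metrics.py instead of printing them
+```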
+
+# 🏁 Getting Started
+1. **Sign up** to join the competition [on the AIcrowd website](https://www.aicrowd.com/challenges/meta-comprehensive-rag-benchmark-kdd-cup-2024).
+2. **Fork** this starter kit repository. You can use [this link](https://gitlab.aicrowd.com/aicrowd/challenges/meta-comprehensive-rag-benchmark-kdd-cup-2024/meta-comphrehensive-rag-benchmark-starter-kit/-/forks/new) to create a fork.
+3. **Clone** your forked repo and start developing your model.
+4. **Develop** your model(s) following the template in the [How to write your own model](#how-to-write-your-own-model) section.
+5. [**Submit**](#-how-to-make-a-submission) your trained models to [AIcrowd GitLab](https://gitlab.aicrowd.com) for evaluation [(full instructions below)](#-how-to-make-a-submission). The automated evaluation setup will evaluate the submissions on the private datasets and report the metrics on the competition leaderboard.
+
+# ✍️ How to write your own model?
+
+Please refer to [models/README.md](models/README.md) for instructions and examples on how to write your own models for this competition.
+
+# 🚴 How to start participating?
+
+## Setup
+
+1. **Add your SSH key** to AIcrowd GitLab
+
+You can add your SSH Keys to your GitLab account by going to your profile settings [here](https://gitlab.aicrowd.com/profile/keys). If you do not have SSH Keys, you will first need to [generate one](https://docs.gitlab.com/ee/ssh/README.html#generating-a-new-ssh-key-pair).
+
+2. **Fork the repository**. You can use [this link](https://gitlab.aicrowd.com/aicrowd/challenges/meta-comprehensive-rag-benchmark-kdd-cup-2024/meta-comphrehensive-rag-benchmark-starter-kit/-/forks/new) to create a fork.
+
+3. **Clone the repository**
+
+    ```bash
+    git clone git@gitlab.aicrowd.com:aicrowd/challenges/meta-comprehensive-rag-benchmark-kdd-cup-2024/meta-comphrehensive-rag-benchmark-starter-kit.git
+    cd meta-comphrehensive-rag-benchmark-starter-kit
+    ```
+
+4. **Install** competition-specific dependencies!
+    ```bash
+    pip install -r requirements.txt
+    ```
+
+5. Write your own model as described in the [How to write your own model](#how-to-write-your-own-model) section.
+
+6. Test your model locally using `python local_evaluation.py`.
+
+7. Accept the Challenge Rules on the main [challenge page](https://www.aicrowd.com/challenges/meta-comprehensive-rag-benchmark-kdd-cup-2024) by clicking on the **Participate** button. Also accept the Challenge Rules on the task-specific page (linked from the challenge page) that you want to submit to.
+
+8. Make a submission as described in the [How to make a submission](#-how-to-make-a-submission) section.
+
+
+## 📮 How to make a submission?
+
+Please follow the instructions in [docs/submission.md](docs/submission.md) to make your first submission. 
+This also includes instructions on [specifying your software runtime](docs/submission.md#specifying-software-runtime-and-dependencies), [code structure](docs/submission.md#code-structure-guidelines), and [submitting to different tracks](docs/submission.md#submitting-to-different-tracks).
+
+**Note**: **Remember to accept the Challenge Rules** on the challenge page, **and** the task page before making your first submission.
+
+## 💻 What hardware does my code run on ?
+You can find more details about the hardware and system configuration in [docs/hardware-and-system-config.md](docs/hardware-and-system-config.md).
+In summary, we provide you with `4` x [NVIDIA T4 GPUs](https://www.nvidia.com/en-us/data-center/tesla-t4/).
+
+
+# ❓ Frequently Asked Questions
+## Which track is this starter kit for ?
+This starter kit can be used to submit to any of the tracks. You can find more information in [docs/submission.md#submitting-to-different-tracks](docs/submission.md#submitting-to-different-tracks).
+
+**Best of Luck** :tada: :tada:
+
+# 📎 Important links
+
+- 💪 Challenge Page: https://www.aicrowd.com/challenges/meta-comprehensive-rag-benchmark-kdd-cup-2024
+- 🗣 Discussion Forum: https://www.aicrowd.com/challenges/meta-comprehensive-rag-benchmark-kdd-cup-2024/discussion
+- 🏆 Leaderboard: https://www.aicrowd.com/challenges/meta-comprehensive-rag-benchmark-kdd-cup-2024/leaderboards
diff --git a/docs/hardware-and-system-config.md b/docs/hardware-and-system-config.md
new file mode 100644
index 0000000000000000000000000000000000000000..9c555d4066a0c53b9499c0e9a95ee06a0fad8d70
--- /dev/null
+++ b/docs/hardware-and-system-config.md
@@ -0,0 +1,20 @@
+## Hardware and System Configuration
+We apply a limit on the hardware available to each participant to run their solutions.
+
+- All solutions will be run on [AWS g4dn.12xlarge](https://aws.amazon.com/ec2/instance-types/g4/) instances equipped with [NVIDIA T4 GPUs](https://www.nvidia.com/en-us/data-center/tesla-t4/). 
+- The hardware available is: 
+    - `4` x [NVIDIA T4 GPUs](https://www.nvidia.com/en-us/data-center/tesla-t4/)
+    - `40` x vCPU (`20` physical CPU cores)
+    - `180GB` RAM
+
+
+Please note that the NVIDIA T4 uses the older Turing architecture and is therefore not compatible with certain acceleration toolkits (e.g., FlashAttention), so please check compatibility carefully.
+
+In addition, the following restrictions will be imposed: 
+
+- Network connection will be disabled.
+- Each submission will be assigned a certain amount of time to run. Submissions that exceed the time limits will be killed and will not be evaluated. The tentative time limit is set as follows **[TO BE ADDED AND TESTED WITH AICROWD SUBMISSION SYSTEM]**. 
+
+- Each team will be able to make up to **1 submission per day**, with a maximum of **[TO BE ADDED AND TESTED WITH AICROWD SUBMISSION SYSTEM]**. 
+
+Based on the hardware and system configuration, we recommend that participants begin with 7B and 13B models. According to our experiments, models like Llama-2 13B can perform inference smoothly on 4 NVIDIA T4 GPUs, while larger models will result in out-of-memory (OOM) errors.
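+
+As a rough illustration of how a mid-sized model can be sharded across the four T4s, the sketch below uses Hugging Face `transformers` with `accelerate`'s `device_map="auto"`. The local weights path and half-precision choice are assumptions rather than a prescribed setup; remember that the evaluation machines have no network access, so weights must already be present in your repository (see the Git LFS notes in `docs/submission.md`).
+
+```python
+# Illustrative sketch: shard a ~13B model across the 4 x NVIDIA T4 GPUs using
+# Hugging Face transformers + accelerate. Paths and dtype are assumptions.
+import torch
+from transformers import AutoModelForCausalLM, AutoTokenizer
+
+MODEL_PATH = "models/weights/llama-2-13b-chat"  # hypothetical local path; no network at evaluation time
+
+tokenizer = AutoTokenizer.from_pretrained(MODEL_PATH)
+model = AutoModelForCausalLM.from_pretrained(
+    MODEL_PATH,
+    device_map="auto",          # let accelerate split the layers across the 4 T4s
+    torch_dtype=torch.float16,  # T4s support fp16; bf16 is not available on this architecture
+)
+
+inputs = tokenizer("What is the capital of France?", return_tensors="pt").to(model.device)
+outputs = model.generate(**inputs, max_new_tokens=32)
+print(tokenizer.decode(outputs[0], skip_special_tokens=True))
+```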
diff --git a/docs/runtime.md b/docs/runtime.md
new file mode 100644
index 0000000000000000000000000000000000000000..60dd631ecf991cbbc4736631fc974c3295c54967
--- /dev/null
+++ b/docs/runtime.md
@@ -0,0 +1,28 @@
+## Adding your runtime
+
+This repository is itself a valid submission (and a valid submission structure). 
+You can simply add your dependencies on top of this repository.
+
+A few of the most common ways are as follows:
+
+* `requirements.txt` -- The `pip3` packages used by your inference code. As you add new `pip3` packages to your inference procedure, either add them to `requirements.txt` manually or, if your software runtime is simple, run:
+    ```
+    # Put ALL of the current pip3 packages on your system in the submission
+    >> pip3 freeze >> requirements.txt
+    >> cat requirements.txt
+    aicrowd_api
+    coloredlogs
+    matplotlib
+    pandas
+    [...]
+    ```
+
+* `apt.txt` -- The Debian packages (installed via `apt`) used by your inference code!
+
+These files are used to construct your **AIcrowd submission docker containers** in which your code will run.
+
+* `Dockerfile` -- **For advanced users only**. `Dockerfile` gives you more flexibility on defining the software runtime used during evaluations.
+
+----
+
+To test your image builds locally, you can use [repo2docker](https://github.com/jupyterhub/repo2docker).
diff --git a/docs/submission.md b/docs/submission.md
new file mode 100644
index 0000000000000000000000000000000000000000..3097c749cae74a5eb6e1bdf6bcecbacce9172eee
--- /dev/null
+++ b/docs/submission.md
@@ -0,0 +1,117 @@
+# Guide to Making Your First Submission
+
+This document is designed to help you make your first submission. Below, you'll find step-by-step instructions on specifying your software runtime and dependencies, structuring your code, and finally submitting your project. Follow these guidelines to ensure a smooth submission process.
+
+# Table of Contents
+
+1. [Specifying Software Runtime and Dependencies](#specifying-software-runtime-and-dependencies)
+2. [Code Structure Guidelines](#code-structure-guidelines)
+3. [Submitting to Different Tracks](#submitting-to-different-tracks)
+4. [Submission Entry Point](#submission-entry-point)
+5. [Setting Up SSH Keys](#setting-up-ssh-keys)
+6. [Managing Large Model Files with Git LFS](#managing-large-model-files-with-git-lfs)
+    - [Why Use Git LFS?](#why-use-git-lfs)
+    - [Steps to Use Git LFS](#steps-to-use-git-lfs)
+    - [Handling Previously Committed Large Files](#handling-previously-committed-large-files)
+7. [How to Submit Your Code](#how-to-submit-your-code)
+
+
+## Specifying Software Runtime and Dependencies
+
+Our platform supports custom runtime environments. This means you have the flexibility to choose any libraries or frameworks necessary for your project. Here’s how you can specify your runtime and dependencies:
+
+- **`requirements.txt`**: List any PyPI packages your project needs.
+- **`apt.txt`**: Include any apt packages required.
+- **`Dockerfile`**: Optionally, you can provide your own Dockerfile. An example is located at `utilities/_Dockerfile`, which can serve as a helpful starting point.
+
+For detailed setup instructions regarding runtime dependencies, refer to the documentation in the `docs/runtime.md` file.
+
+## Code Structure Guidelines
+
+Your project should follow the structure outlined in the starter kit. Here’s a brief overview of what each component represents:
+
+```
+.
+├── README.md                       # Project documentation and setup instructions
+├── aicrowd.json                    # Submission meta information - like your username, track name
+├── data
+│   └── development.json            # Development dataset for local testing
+├── docs
+│   └── runtime.md                  # Documentation on the runtime environment setup, dependency configs
+├── local_evaluation.py             # Use this to check your model evaluation flow locally
+├── metrics.py                      # Scripts to calculate evaluation metrics for your model's performance
+├── models
+│   ├── README.md                   # Documentation specific to the implementation of model interfaces
+│   ├── base_model.py               # Base model class 
+│   ├── dummy_model.py              # A simple or placeholder model for demonstration or testing
+│   └── user_config.py              # IMPORTANT: Configuration file to specify your model 
+├── parsers.py                      # Model output parser
+├── requirements.txt                # Python packages to be installed for model development
+└── Dockerfile                      # Example Dockerfile for specifying runtime via Docker
+```
+
+Remember, **your submission metadata JSON (`aicrowd.json`)** is crucial for mapping your submission to the challenge. Ensure it contains the correct `challenge_id`, `authors`, and other necessary information. To utilize GPUs, set the `"gpu": true` flag in your `aicrowd.json`.
+
+## Submitting to Different Tracks
+
+Specify the track by setting the appropriate `challenge_id` in your [aicrowd.json](aicrowd.json). Here are the challenge IDs for various tracks:
+
+| Track Name                        | Challenge ID                                        |
+|-----------------------------------|-----------------------------------------------------|
+| Retrieval Summarization   | `retrieval-summarization` |
+| Knowledge Graph and Web Retrieval      | `knowledge-graph-and-web-retrieval`    |
+| End-to-end Retrieval Augmented Generation           | `end-to-end-retrieval-augmented-generation`         |
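+
+For illustration, an `aicrowd.json` targeting the Retrieval Summarization track with GPUs enabled might contain entries like the snippet below. Treat it only as a sketch of the two fields discussed here (`challenge_id` and `gpu`); keep the schema of the `aicrowd.json` already shipped in this repository and fill in the remaining metadata (e.g. `authors`) as required.
+
+```json
+{
+    "challenge_id": "retrieval-summarization",
+    "gpu": true,
+    "authors": ["your-aicrowd-username"]
+}
+```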
+
+## Submission Entry Point
+
+The evaluation process will instantiate a model from `models/user_config.py` for evaluation. Ensure this configuration is set correctly.
+
+## Setting Up SSH Keys
+
+You will have to add your SSH Keys to your GitLab account by going to your profile settings [here](https://gitlab.aicrowd.com/profile/keys). If you do not have SSH Keys, you will first need to [generate one](https://docs.gitlab.com/ee/ssh/README.html#generating-a-new-ssh-key-pair).
+
+
+## Managing Large Model Files with Git LFS
+
+When preparing your submission, it's crucial to ensure all necessary models and files required by your inference code are properly saved and included. Due to the potentially large size of model weight files, we highly recommend using Git Large File Storage (Git LFS) to manage these files efficiently.
+
+### Why Use Git LFS?
+
+Git LFS is designed to handle large files more effectively than Git's default storage mechanism. This ensures smoother operations and avoids common errors associated with large files, such as:
+
+- `fatal: the remote end hung up unexpectedly`
+- `remote: fatal: pack exceeds maximum allowed size`
+
+These errors typically occur when large files are directly checked into the Git repository without Git LFS, leading to challenges in handling and transferring those files.
+
+### Steps to Use Git LFS
+
+1. **Install Git LFS**: If you haven't already, install Git LFS on your machine and enable it by running `git lfs install` once. Detailed instructions can be found [here](https://git-lfs.github.com/).
+
+2. **Track Large Files**: Use Git LFS to track the large files within your project. You can do this by running `git lfs track "*.model"` (replace `*.model` with your file type).
+
+3. **Add and Commit**: After tracking the large files with Git LFS, add and commit them as you would with any other file. Git LFS will automatically handle these files differently to optimize their storage and transfer.
+
+4. **Push to Repository**: When you push your changes to the repository, Git LFS will manage the large files, ensuring a smooth push process.
+
+### Handling Previously Committed Large Files
+
+If you have already committed large files directly to your Git repository without using Git LFS, you may encounter issues. These files, even if not present in the current working directory, could still be in the Git history, leading to errors.
+
+To resolve this, ensure that the large files are removed from the Git history and then re-add and commit them using Git LFS. This process cleans up the repository's history and avoids the aforementioned errors.
+
+For more information on how to upload large files to your submission and detailed guidance on using Git LFS, please refer to [this detailed guide](https://discourse.aicrowd.com/t/how-to-upload-large-files-size-to-your-submission/2304).
+
+**Note**: Properly managing large files not only facilitates smoother operations for you but also ensures that the evaluation process can proceed without hindrances.
+
+## How to Submit Your Code
+
+To submit your code, push a tag beginning with "submission-" to your repository on [GitLab](https://gitlab.aicrowd.com/). Follow these steps to make a submission:
+
+1. Commit your changes with `git commit -am "Your commit message"`.
+2. Tag your submission (e.g., `git tag -am "submission-v0.1" submission-v0.1`).
+3. Push your changes and tags to your repository on AIcrowd GitLab, e.g. `git push origin submission-v0.1` (assuming your AIcrowd repository is the `origin` remote).
+
+After pushing your tag, you can view your submission details at `https://gitlab.aicrowd.com/<YOUR_AICROWD_USER_NAME>/meta-comphrehensive-rag-benchmark-starter-kit/issues`.
+
+Ensure your `aicrowd.json` is correctly filled with the necessary metadata, and you've replaced `<YOUR_AICROWD_USER_NAME>` with your GitLab username in the provided URL.
diff --git a/models/README.md b/models/README.md
index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..e85d82fc7578726b4a5cd4155208667ee455cf6e 100644
--- a/models/README.md
+++ b/models/README.md
@@ -0,0 +1,24 @@
+# Guide to Writing Your Own Models
+
+## Model Code Organization
+For a streamlined experience, we suggest placing the code for all your models within the `models` directory. This is a recommendation for organizational purposes, but it's not a strict requirement.
+
+## Model Base Class
+Your models should follow the format of the `DummyModel` class found in [dummy_model.py](dummy_model.py). We provide this example model to illustrate the structure of your own model. Crucially, your model class must implement the `generate_answer` method.
+
+## Selecting which model to use
+To ensure your model is recognized and utilized correctly, please specify your model class name in the [`user_config.py`](user_config.py) file, by following the instructions in the inline comments.
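+
+As a minimal sketch, assuming the exported name is the one the inline comments in `user_config.py` ask for (shown here as `UserModel` purely for illustration), the file could look like:
+
+```python
+# models/user_config.py -- illustrative sketch; follow the inline comments in the
+# actual file for the exact variable name the evaluator expects.
+from models.dummy_model import DummyModel
+# from models.my_model import MyRAGModel   # hypothetical module/class containing your own model
+
+UserModel = DummyModel  # replace with your own class, e.g. MyRAGModel
+```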
+
+## Model Inputs and Outputs
+
+### Inputs
+Your model will receive the following information for every task:
+- `query`: String representing the input query
+- `search_results`: List of strings, each containing the scraped HTML text of a web search result for the query
+- `character_limit`: The maximum character limit for the answer (can vary per query)
+
+### Outputs
+The output from your model's `generate_answer` function should always be a string.
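+
+Putting the interface together, a minimal sketch of a model might look like the following. It mirrors the shape of `DummyModel`; the class name, file name, and keyword argument names below are illustrative assumptions, and `base_model.py` / `dummy_model.py` remain the authoritative reference.
+
+```python
+# models/my_model.py -- illustrative sketch of the expected interface.
+from typing import List
+
+
+class MyRAGModel:
+    def generate_answer(
+        self, query: str, search_results: List[str], character_limit: int
+    ) -> str:
+        # A trivial baseline: ignore the search results and respect the character limit.
+        # A real model would extract evidence from `search_results` and prompt an LLM.
+        answer = f"I don't know the answer to: {query}"
+        return answer[:character_limit]
+```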
+
+## Internet Access
+Your model will not have access to the internet during evaluation. Your model will have access to LLaMa 2 weights. **[TO BE ADDED]**.
\ No newline at end of file