Commit 69e837a9 authored by pfrl_rainbow's avatar pfrl_rainbow


parent 1bc3ec17
@@ -8,7 +8,9 @@ For detailed & latest documentation about the competition/template, see the orig
This repository is a sample of the "Round 1" submission, i.e., the agents are trained locally.
`` is the entrypoint script for Round 1.
Please ignore ``, which will be used in Round 2.
The ``, which is the entrypoint for Round 2, has not yet been checked to work on the MineRL Competition's submission system.
To train this baseline agent, see "How to Train Baseline Agent on your own" section below.
The `train/` directory contains the baseline agent's model weight files trained on `MineRLObtainDiamondDenseVectorObf-v0`.
# Simple env test.
import json
import select
import time
import logging
import os
import aicrowd_helper
import gym
import minerl
from utility.parser import Parser
import coloredlogs
import sys
sys.path.append(os.path.abspath(os.path.join(__file__, os.pardir, 'mod')))
from dqn_family import main as dqn_family_main
# All submissions will be evaluated on the MineRLObtainDiamond-v0 environment
# MINERL_GYM_ENV = os.getenv('MINERL_GYM_ENV', 'MineRLObtainDiamondVectorObf-v0')
MINERL_GYM_ENV = os.getenv('MINERL_GYM_ENV', 'MineRLObtainDiamondDenseVectorObf-v0')
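The environment id resolution above follows the usual environment-variable fallback pattern. A minimal, self-contained sketch (the `resolve_env_id` helper is made up for illustration; the script itself just calls `os.getenv` inline):

```python
import os

# Sketch of the fallback pattern: when MINERL_GYM_ENV is set in the
# environment, it overrides the hard-coded default id.
def resolve_env_id(default='MineRLObtainDiamondDenseVectorObf-v0'):
    return os.getenv('MINERL_GYM_ENV', default)

os.environ.pop('MINERL_GYM_ENV', None)
print(resolve_env_id())  # falls back to the default id

os.environ['MINERL_GYM_ENV'] = 'MineRLObtainDiamondVectorObf-v0'
print(resolve_env_id())  # the variable takes precedence
```

This is why the second assignment in the script is the effective one: both `os.getenv` calls read the same variable, so only the default differs.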
# You need to ensure that your submission is trained in under MINERL_TRAINING_MAX_STEPS steps
# You need to ensure that your submission is trained by launching less than MINERL_TRAINING_MAX_INSTANCES instances
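The two constraints above (a step budget and an instance limit) imply the training loop must track its own step counter. A hedged sketch, not the baseline's actual loop — the `8000000` default and `run_training` helper are assumptions for illustration; check the competition rules for the real budget:

```python
import os

# Stop launching episodes once the MINERL_TRAINING_MAX_STEPS budget
# would be exceeded. The default value here is illustrative only.
MAX_STEPS = int(os.getenv('MINERL_TRAINING_MAX_STEPS', '8000000'))

def run_training(steps_per_episode, max_steps=MAX_STEPS):
    steps, episodes = 0, 0
    # Only start an episode if it cannot push us over the budget.
    while steps + steps_per_episode <= max_steps:
        steps += steps_per_episode
        episodes += 1
    return steps, episodes

print(run_training(3000, max_steps=10000))  # (9000, 3)
```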
@@ -40,39 +41,20 @@ parser = Parser('performance/',
def main():
    """This function will be called for the training phase."""
    # How to sample MineRL data is documented here:
    data = minerl.data.make(MINERL_GYM_ENV, data_dir=MINERL_DATA_ROOT)
    # Sample code for illustration, add your training code below
    env = gym.make(MINERL_GYM_ENV)

    # actions = [env.action_space.sample() for _ in range(10)]  # Just doing 10 samples in this example
    # xposes = []
    # for _ in range(1):
    #     obs = env.reset()
    #     done = False
    #     netr = 0
    # The trained weights in the `train/` directory of this repository were
    # generated by the `mod/` entry point, not by this script.
    # This script (``) has not been checked to work on the MineRL Competition's
    # submission system.
    # (In Round 1, participants submit pre-trained agents; in Round 2, your
    # training script must work on the competition's submission system.)
    # # Limiting our code to 1024 steps in this example, you can do "while not done" to run till end
    # while not done:
    # To get a better view of your training phase, it is suggested that you
    # register progress continuously, e.g. when 54% is completed:
    # aicrowd_helper.register_progress(0.54)
    # To fetch the latest information from the instance manager, run the
    # following whenever you want to know the state:
    # >> parser.update_information()
    # >> print(parser.payload)
    # .payload: provides the AIcrowd-generated JSON
    # Example: {'state': 'RUNNING', 'score': {'score': 0.0, 'score_secondary': 0.0}, 'instances': {'1': {'totalNumberSteps': 2001, 'totalNumberEpisodes': 0, 'currentEnvironment': 'MineRLObtainDiamond-v0', 'state': 'IN_PROGRESS', 'episodes': [{'numTicks': 2001, 'environment': 'MineRLObtainDiamond-v0', 'rewards': 0.0, 'state': 'IN_PROGRESS'}], 'score': {'score': 0.0, 'score_secondary': 0.0}}}}
    # .current_state: provides in-depth state information, available as a dictionary (key: instance id)
    # For details of the options of `dqn_family_main` called below, see the
    # "How to Train Baseline Agent on your own" section of the README.
    dqn_family_main()

    # Save trained model to train/ directory
    # Training 100% Completed
if __name__ == "__main__":
    main()
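The `parser.payload` example in the comments above can be consumed with plain `json`. A small self-contained sketch — the payload structure is copied from the example, and `total_steps` is a made-up helper, not part of `aicrowd_helper` or `utility.parser`:

```python
import json

# Payload copied from the example comment in the script above.
payload_json = """
{"state": "RUNNING",
 "score": {"score": 0.0, "score_secondary": 0.0},
 "instances": {"1": {"totalNumberSteps": 2001,
                     "totalNumberEpisodes": 0,
                     "currentEnvironment": "MineRLObtainDiamond-v0",
                     "state": "IN_PROGRESS",
                     "episodes": [{"numTicks": 2001,
                                   "environment": "MineRLObtainDiamond-v0",
                                   "rewards": 0.0,
                                   "state": "IN_PROGRESS"}],
                     "score": {"score": 0.0, "score_secondary": 0.0}}}}
"""

def total_steps(payload):
    """Sum the step counters across all instances in the payload."""
    return sum(inst["totalNumberSteps"] for inst in payload["instances"].values())

payload = json.loads(payload_json)
print(payload["state"])      # RUNNING
print(total_steps(payload))  # 2001
```

Summing `totalNumberSteps` across instances is one way to feed a fraction into `aicrowd_helper.register_progress` during training.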