from typing import List, Dict


class DummyResponseAgent(object):
    def __init__(self):
        """ Load your model(s) here """
        pass

    def generate_responses(self, test_data: List[Dict], api_responses: List[str]) -> Dict:
"""
You will be provided with a batch of upto 50 independent conversations
Input 1
[
{"persona A": ..., "persona B": ... "dialogue": ... }, # conversation 1 Turn 1
...
{"persona A": ..., "persona B": ... "dialogue": ... } # conversation 50 Turn 1
]
Model should return 50 responses for Turn 1
...
Input 7
[
{"persona A": ..., "persona B": ... "dialogue": ... }, # conversation 1 Turn 7
...
{"persona A": ..., "persona B": ... "dialogue": ... } # conversation 50 Turn 7
]
Model should return 50 responses for Turn 7
api_responses - A list of strings output by the api call for each previous prompt response,
Will be a list of blank strings on the first call
Note: Turn numbers will NOT be provided as input
Return a dictionary with the following format
"use_api": True/False - Note that this cannot be used when using GPU
"prompts": [ <list of the prompts that go as "content" to the api > ] - Note that every call is independent and we don't use threads
"max_generated_tokens": [ list of ints for the max generation limit on each call] - Note that the submission will fail if the total generation limit is exceeded
"final_responses: [ <list of strings with the final responses> ] - Only used when use_api is set to False
        # print(f"{len(test_data)=}, {test_data[0].keys()=}, {len(test_data[-1]['dialogue'])}")

        response = {
            "use_api": False,  # Cannot use the API if "gpu": true is set in aicrowd.json
            "prompts": ["" for _ in test_data],  # Ignored when use_api is False
            "max_generated_tokens": [0 for _ in test_data],
            "final_responses": ["THIS IS A TEST REPLY" for _ in test_data]
        }
        return response
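

# --------------------------------------------------------------------------
# Illustrative sketch (not part of the starter agent): the docstring above
# also describes an API-backed flow, where the agent returns "use_api": True
# with one prompt and a token limit per conversation and reads the model
# outputs back through `api_responses` on the next call. The class below is
# a minimal sketch of that flow under those assumptions; the class name, the
# two-step pass-through, and the prompt layout are illustrative choices, not
# part of the challenge specification.
class DummyPromptAgent(object):
    def __init__(self):
        """ Load prompt templates or other lightweight resources here """
        pass

    def generate_responses(self, test_data: List[Dict], api_responses: List[str]) -> Dict:
        # Second call of the assumed two-step flow: the harness has already
        # run the prompts from the previous call, so pass its outputs through.
        if any(api_responses):
            return {
                "use_api": False,
                "prompts": ["" for _ in test_data],
                "max_generated_tokens": [0 for _ in test_data],
                "final_responses": [text.strip() for text in api_responses],
            }

        # First call: build one prompt per conversation. The layout below
        # (persona text plus the flattened dialogue) is an assumption.
        prompts = []
        for conversation in test_data:
            dialogue = " ".join(str(turn) for turn in conversation.get("dialogue", []))
            prompts.append(
                f"Persona B: {conversation.get('persona B', '')}\n"
                f"Dialogue so far: {dialogue}\n"
                "Reply as persona B:"
            )

        return {
            "use_api": True,  # Only valid when the GPU is not enabled in aicrowd.json
            "prompts": prompts,
            "max_generated_tokens": [64 for _ in test_data],  # keep within the total budget
            "final_responses": ["" for _ in test_data],  # unused while use_api is True
        }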
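

# --------------------------------------------------------------------------
# Quick local check (assumed harness, for illustration only): a tiny driver
# that batches a few hand-made conversations and calls the agent once, the
# way the evaluator is described as doing per turn. The field values and the
# dialogue structure are placeholders; the real harness and data format are
# defined by the challenge, not by this snippet.
if __name__ == "__main__":
    dummy_batch = [
        {
            "persona A": "I love hiking.",
            "persona B": "I collect vintage records.",
            "dialogue": ["Hi! Have you been outdoors lately?"],  # placeholder structure
        }
        for _ in range(3)  # stand-in for a batch of up to 50 conversations
    ]

    agent = DummyResponseAgent()
    result = agent.generate_responses(dummy_batch, api_responses=["" for _ in dummy_batch])
    assert len(result["final_responses"]) == len(dummy_batch)
    print(result["final_responses"])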