API for solution

This is a more in-depth look at the API for the solutions.

Each template already includes this glue code, so you do not need to understand this section unless you want to take advantage of the advanced options.

Examples

We have several example repositories that show different aspects of the API.

ChallengeInterfaceSolution API

Each solution is passed a ChallengeInterfaceSolution object as a parameter to run().

This section describes the most important methods of the API. The full API is described in the source code here.
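
As a minimal sketch, the entry point looks like this (the class name MySolution is hypothetical; the actual boilerplate comes from the solution templates):

class MySolution(object):
    def run(self, cis):
        # cis is the ChallengeInterfaceSolution instance
        cis.info('solution started')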

Logging

Use the following methods to log messages:

def run(self, cis):
    cis.info('Information')
    cis.debug('Debug message')
    cis.error('Error message')

Temporary files

Modified 2018-10-09 by Andrea Censi

If you need a temporary directory, you can use the method get_tmp_dir():

import os

d = cis.get_tmp_dir()
fn = os.path.join(d, 'tmp')
with open(fn, 'w') as f:
    f.write(data)  # 'data' is whatever contents you need to persist

Declaring failure

There is no shame in declaring failure early:

cis.declare_failure("I give up")
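
For instance, a solution can validate its inputs up front and give up cleanly instead of crashing later. This sketch uses get_challenge_parameters(), described below; the parameter name is hypothetical:

def run(self, cis):
    params = cis.get_challenge_parameters()
    if 'param1' not in params:
        # nothing useful we can do without this parameter
        cis.declare_failure('missing parameter "param1"')
        return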

Getting challenge parameters and files

Some challenges require that the solution have access to certain files and parameters.

The semantics of these vary from challenge to challenge.

For parameters, the API allows the evaluator to pass a dictionary, which can then be recovered using the get_challenge_parameters() function:

# assuming that the parameter "param1" has been set by the evaluator
params_from_evaluator = cis.get_challenge_parameters()
param1 = params_from_evaluator['param1']

For files, the API has the function get_challenge_file():

# assuming that the file "log.bag" has been passed by the evaluator
from rosbag import Bag  # ROS bag reader

full_path = cis.get_challenge_file("log.bag")
bag = Bag(full_path)

Producing output

Symmetrically, the API allows the solution to produce output that can be read by the evaluator.

There is a function set_solution_output_dict() that allows the solution to pass a dictionary back to the evaluator:

response = {'guess': 42}
cis.set_solution_output_dict(response)

There is a function set_solution_output_file(basename, path) that allows the solution to create a file that can be read by the evaluator as well as by the user:

output_file = ... # produced output
cis.set_solution_output_file("output.bag", output_file)

The method set_solution_output_file_from_data() allows the solution to pass the contents of the file directly.
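
For instance, to record a small text artifact without first writing a file to disk (a sketch assuming the method mirrors set_solution_output_file(), taking the basename and the raw contents):

summary = 'guess: 42\n'
cis.set_solution_output_file_from_data("summary.txt", summary)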

(Advanced) Reading output of previous steps

Some challenges have more than one evaluation step.

In this case, part of the API allows the solution to read the output of the previous steps.

The function get_current_step() returns the name of the current step.

The function get_completed_steps() returns the names of the completed steps.

The function get_completed_step_solution_file(step_name, basename) returns a file created in a previous step.

Suppose a challenge has steps step1 and step2 between which we need to pass some data. For example, step1 might be a learning step, and step2 might be an evaluation step.

Putting these functions together, we can write the following logic, in which the second step reads the output of the first step.

step_name = cis.get_current_step()

if step_name == 'step1':
    # learning step: save the learned model for later steps
    learned_model_filename = ...
    cis.set_solution_output_file('model', learned_model_filename)

if step_name == 'step2':
    # we know that step1 must have been successful
    assert cis.get_completed_steps() == ['step1']
    learned_model_filename = cis.get_completed_step_solution_file('step1', 'model')

Note that the two steps might have run on different machines. Behind the scenes, the evaluator for step1 saves the data to S3, the challenge server keeps track of the artefacts, and the evaluator for step2 downloads the data from S3. All of this is transparent to the user.

See here for an example of a multi-step challenge.
