module ml.competitions

Short summary

module ensae_teaching_cs.ml.competitions

Compute metrics for a competition

source on GitHub

Functions

  • AUC – Computes the AUC.

  • main_codalab_wrapper – Adapts the template available at evaluate.py.

  • private_codalab_wrapper – Wraps the function following the guidelines Building a Scoring Program for a Competition. …

Documentation

Compute metrics for a competition


ensae_teaching_cs.ml.competitions.AUC(answers, scores)[source]

Computes the AUC.

Parameters
  • answers – expected answers 0 (false), 1 (true)

  • scores – score obtained for class 1

Returns

number

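The quantity this function returns can be illustrated with a small self-contained sketch. The name `auc` below is hypothetical and the implementation is a rank-based reformulation, not the module's actual code: the AUC is the probability that a randomly chosen positive example (answer 1) receives a higher score than a randomly chosen negative one (answer 0), counting ties as one half.

```python
def auc(answers, scores):
    # Rank-based AUC: fraction of (positive, negative) pairs where the
    # positive example is scored above the negative one; ties count 0.5.
    pos = [s for a, s in zip(answers, scores) if a == 1]
    neg = [s for a, s in zip(answers, scores) if a == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Example: two positives (scores 0.35, 0.8), two negatives (0.1, 0.4);
# three of the four pairs are correctly ordered, so the AUC is 0.75.
value = auc([0, 0, 1, 1], [0.1, 0.4, 0.35, 0.8])
```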

ensae_teaching_cs.ml.competitions.main_codalab_wrapper(fct, metric_name, argv, truth_file='truth.txt', submission_file='answer.txt', output_file='scores.txt')[source]

Adapts the template available at evaluate.py

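The codalab evaluate.py template that this wrapper adapts reads two directories from the command line: an input folder whose ref/ and res/ subfolders hold the truth and the submission, and an output folder that receives the scores file. The sketch below (function name `main_scoring_sketch` and the one-value-per-line file format are assumptions, not the module's actual code) mimics that flow:

```python
import os
import tempfile

def main_scoring_sketch(fct, metric_name, argv,
                        truth_file="truth.txt",
                        submission_file="answer.txt",
                        output_file="scores.txt"):
    # argv[1] = input folder (with ref/ and res/ subfolders),
    # argv[2] = output folder where the scores file is written.
    input_dir, output_dir = argv[1], argv[2]
    with open(os.path.join(input_dir, "ref", truth_file)) as f:
        truth = [float(line) for line in f if line.strip()]
    with open(os.path.join(input_dir, "res", submission_file)) as f:
        answers = [float(line) for line in f if line.strip()]
    value = fct(truth, answers)
    with open(os.path.join(output_dir, output_file), "w") as f:
        f.write("%s: %f\n" % (metric_name, value))
    return value

# Demo with an accuracy-like metric on a throwaway directory layout.
def accuracy(truth, ans):
    return sum(t == a for t, a in zip(truth, ans)) / len(truth)

root = tempfile.mkdtemp()
for sub in (os.path.join("input", "ref"), os.path.join("input", "res"), "output"):
    os.makedirs(os.path.join(root, sub))
with open(os.path.join(root, "input", "ref", "truth.txt"), "w") as f:
    f.write("1\n0\n1\n")
with open(os.path.join(root, "input", "res", "answer.txt"), "w") as f:
    f.write("1\n1\n1\n")
score = main_scoring_sketch(accuracy, "accuracy",
                            ["evaluate.py",
                             os.path.join(root, "input"),
                             os.path.join(root, "output")])
```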

ensae_teaching_cs.ml.competitions.private_codalab_wrapper(fct, metric_name, fold1, fold2, f1='answer.txt', f2='answer.txt', output='scores.txt', use_print=False)[source]

Wraps the function following the guidelines Building a Scoring Program for a Competition. It replicates the example available at competition-examples/hello_world.

Parameters
  • fct – function to wrap

  • metric_name – metric name

  • fold1 – folder containing the truth

  • fold2 – folder containing the produced answers

  • f1 – filename for the truth

  • f2 – filename for the produced answers

  • output – output file to which the computed score is written

  • use_print – display intermediate results

Returns

metric

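The scoring flow described above can be sketched as follows. Everything here is an illustration under stated assumptions, not the module's actual code: the name `private_scoring_sketch` is hypothetical, the one-value-per-line file format is assumed, and the output file is assumed to be written next to the produced answers.

```python
import os
import tempfile

def private_scoring_sketch(fct, metric_name, fold1, fold2,
                           f1="answer.txt", f2="answer.txt",
                           output="scores.txt", use_print=False):
    # Read the truth from fold1/f1 and the produced answers from
    # fold2/f2, apply the metric, then write "metric_name: value"
    # to the output file placed in fold2 (an assumption).
    with open(os.path.join(fold1, f1)) as f:
        truth = [float(line) for line in f if line.strip()]
    with open(os.path.join(fold2, f2)) as f:
        answers = [float(line) for line in f if line.strip()]
    value = fct(truth, answers)
    if use_print:
        print(metric_name, value)
    with open(os.path.join(fold2, output), "w") as f:
        f.write("%s: %f\n" % (metric_name, value))
    return value

# Demo on throwaway folders with an accuracy-like metric.
def accuracy(truth, ans):
    return sum(t == a for t, a in zip(truth, ans)) / len(truth)

fold1 = tempfile.mkdtemp()
fold2 = tempfile.mkdtemp()
with open(os.path.join(fold1, "answer.txt"), "w") as f:
    f.write("0\n1\n1\n0\n")
with open(os.path.join(fold2, "answer.txt"), "w") as f:
    f.write("0\n1\n0\n0\n")
metric = private_scoring_sketch(accuracy, "accuracy", fold1, fold2)
```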