API

This part of the documentation covers the full API reference for all public classes and functions.

Core

class minibench.Benchmark(times=None, prefix=u'bench_', debug=False, before=None, before_each=None, after=None, after_each=None, **kwargs)[source]

Base class for all benchmark suites

times

The number of iterations to run each method
after()[source]

Hook called once after each method

after_class()[source]

Hook called once after each class

after_each()[source]

Hook called after each method call

before()[source]

Hook called once before each method

before_class()[source]

Hook called once before each class

before_each()[source]

Hook called before each method call

label

A human-readable label

label_for(name)[source]

Get a human-readable label for a method, given its name

run()[source]

Collect all benchmark methods and run them.

Each method will be run Benchmark.times times.
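
For example, a minimal suite might look like the sketch below. The class and method names are illustrative; only the default bench_ prefix, the times attribute, and the before() hook are taken from this reference.

    from minibench import Benchmark

    class SquareSum(Benchmark):
        '''Compare list comprehension and generator consumption.'''
        times = 1000  # run each bench_ method 1000 times

        def before(self):
            # Called once before each method's iterations
            self.data = list(range(100))

        def bench_list(self):
            sum([x * x for x in self.data])

        def bench_generator(self):
            sum(x * x for x in self.data)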

class minibench.RunResult(duration, success, result)

Store a single method execution result
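
Its signature suggests a plain record with three fields. A minimal sketch of consuming one, assuming the fields are exposed as attributes (the values here are hypothetical):

    from minibench import RunResult

    run = RunResult(0.0042, True, None)
    if run.success:
        print('took {0:.4f}s'.format(run.duration))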

class minibench.BenchmarkRunner(*filenames, **kwargs)[source]

Collect all benchmarks and run them

Parameters:
  • filenames (string) – the benchmark file names
  • reporters (list) – the reporter classes or instances to use
  • debug (bool) – Run in debug mode if True
load_from_module(module)[source]

Load all benchmarks from a given module

load_module(filename)[source]

Load a benchmark module from a file

run(**kwargs)[source]

Run all benchmarks.

Extra kwargs are passed to the benchmark constructors.
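
Putting it together, a complete run might look like the following sketch. The file name is illustrative, and the extra times keyword relies on the Benchmark constructor documented above:

    from minibench import BenchmarkRunner, CsvReporter

    runner = BenchmarkRunner('benchmarks.py',
                             reporters=[CsvReporter('results.csv')],
                             debug=False)
    # Extra kwargs are forwarded to each benchmark constructor
    runner.run(times=500)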

Reporters

class minibench.BaseReporter[source]

Base class for all reporters

after_class(bench)[source]

Hook called once after each benchmark class

after_method(bench, method)[source]

Hook called once after each benchmark method

before_class(bench)[source]

Hook called once before each benchmark class

before_method(bench, method)[source]

Hook called once before each benchmark method

end()[source]

Hook called once on run end

key(bench)[source]

Generate a report key from a benchmark instance

progress(bench, method, times)[source]

Hook called after each benchmark method call

start()[source]

Hook called once on run start

summary()[source]

Compute the execution summary
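
A custom reporter only needs to override the hooks it cares about. A minimal sketch, assuming the method argument is the method name, as label_for(name) suggests:

    from minibench import BaseReporter

    class PrintReporter(BaseReporter):
        '''Print a line per benchmark class and method.'''

        def before_class(self, bench):
            print('Running {0}'.format(bench.label))

        def after_method(self, bench, method):
            print('  finished {0}'.format(bench.label_for(method)))

        def end(self):
            print('Done.')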

class minibench.FileReporter(filename)[source]

A reporter dumping results into a file

Parameters: filename (string) – the output file name

end()[source]

Dump the report into the output file.

If the file's directory does not exist, it will be created. The open file is then given as a parameter to output().

line(text=u'')[source]

A simple helper to write a line of text terminated with \n

output(out)[source]

Serialize the report into the open file.

Child classes should implement this method.

Parameters: out (file) – an open file object to serialize into.

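A minimal child class might look like this sketch; it only writes a fixed header, since the structure of the collected results is not part of this reference:

    from minibench import FileReporter

    class TextReporter(FileReporter):
        '''Dump results as plain text.'''

        def output(self, out):
            # `out` is the open file provided by end()
            out.write('Benchmark results\n')
            out.write('=================\n')
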
class minibench.JsonReporter(filename)[source]

A reporter dumping results into a JSON file

Parameters: filename (string) – the output file name

class minibench.CsvReporter(filename)[source]

A reporter dumping results into a CSV file

The CSV will have the following format:

Benchmark | Method | Times | Total (s) | Average (s)

It uses the ; character as the delimiter and " as the quote character. All strings are quoted.

Parameters: filename (string) – the output file name

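Given that format, the resulting file can be read back with the standard csv module (a sketch; results.csv is assumed to come from the reporter above):

    import csv

    with open('results.csv') as f:
        reader = csv.reader(f, delimiter=';', quotechar='"')
        for row in reader:
            benchmark, method, times, total, average = row
            print(benchmark, method, times, total, average)
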
class minibench.MarkdownReporter(filename)[source]

A reporter rendering results as a Markdown table.

Each benchmark will be rendered as a table with the following format:

Method | Times | Total (s) | Average (s)

Parameters: filename (string) – the output file name

class minibench.RstReporter(filename)[source]

A reporter rendering results as a reStructuredText table

Each benchmark will be rendered as a table with the following format:

Method | Times | Total (s) | Average (s)

Parameters: filename (string) – the output file name

class minibench.cli.CliReporter(ref=None, debug=False)[source]

A reporter that displays running benchmarks with ANSI colors
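
A sketch of using it programmatically (the file name is illustrative; ref is left at its default since its semantics are not documented here):

    from minibench import BenchmarkRunner
    from minibench.cli import CliReporter

    BenchmarkRunner('benchmarks.py', reporters=[CliReporter()]).run()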