faknow.evaluate

faknow.evaluate.evaluator

class faknow.evaluate.evaluator.Evaluator(metrics: List[str | Callable[[Tensor, Tensor], float]] | None = None)[source]

Bases: object

__init__(metrics: List[str | Callable[[Tensor, Tensor], float]] | None = None)[source]

Initialize the Evaluator.

Parameters:

metrics (List[Union[str, Callable[[Tensor, Tensor], float]]]) – A list of metrics, given either as strings or as callables. If a metric is a string, the built-in metric function of that name (accuracy, precision, recall, f1, or auc) is used. If a metric is a callable with the signature metric_func(outputs: Tensor, y: Tensor) -> float, it is used directly as the metric function. If None, the default metrics are used. Defaults to None.
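
For instance, the evaluator below mixes built-in metric names with a custom callable. The callable name and tensor shapes are illustrative assumptions, not part of the API above:

    import torch
    from faknow.evaluate.evaluator import Evaluator

    def error_rate(outputs: torch.Tensor, y: torch.Tensor) -> float:
        # hypothetical custom metric following the documented signature;
        # assumes outputs are class logits of shape (N, C)
        return (outputs.argmax(dim=-1) != y).float().mean().item()

    evaluator = Evaluator(metrics=['accuracy', 'f1', error_rate])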

evaluate(outputs: Tensor, y: Tensor) → Dict[str, float][source]

Evaluate the model’s performance using the provided metrics.

Parameters:
  • outputs (torch.Tensor) – Model’s predictions.

  • y (torch.Tensor) – Ground truth labels.

Returns:

A dictionary mapping metric names (as keys) to their corresponding values as floats.

Return type:

Dict[str, float]
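
Continuing the example above, a minimal usage sketch with mock tensors (the (N, 2) logits shape is an assumption; the exact expected form of outputs depends on the model):

    outputs = torch.randn(8, 2)            # mock predictions for 8 samples
    y = torch.randint(0, 2, (8,))          # mock binary ground-truth labels
    result = evaluator.evaluate(outputs, y)
    # result is a dict such as {'accuracy': ..., 'f1': ..., ...}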

faknow.evaluate.metrics

faknow.evaluate.metrics.calculate_accuracy(outputs: Tensor, y: Tensor) → float[source]

Calculate the accuracy metric.

Parameters:
  • outputs (torch.Tensor) – Model’s predictions.

  • y (torch.Tensor) – Ground truth labels.

Returns:

The accuracy value.

Return type:

float
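
The exact implementation is not shown here, but an equivalent computation, assuming outputs are per-class scores of shape (N, C), might look like:

    import torch

    def accuracy_sketch(outputs: torch.Tensor, y: torch.Tensor) -> float:
        # fraction of samples whose highest-scoring class matches the label
        preds = outputs.argmax(dim=-1)
        return (preds == y).float().mean().item()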

faknow.evaluate.metrics.calculate_auc(outputs: Tensor, y: Tensor) → float[source]

Calculate the AUC score metric.

Parameters:
  • outputs (torch.Tensor) – Model’s predictions.

  • y (torch.Tensor) – Ground truth labels.

Returns:

The AUC score value.

Return type:

float
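
A sketch of an equivalent binary-classification computation via scikit-learn; treating column 1 of a softmax over (N, 2) outputs as the positive-class score is an assumption about the internals:

    import torch
    from sklearn.metrics import roc_auc_score

    def auc_sketch(outputs: torch.Tensor, y: torch.Tensor) -> float:
        # positive-class probability serves as the ranking score
        scores = torch.softmax(outputs, dim=-1)[:, 1]
        return float(roc_auc_score(y.cpu().numpy(), scores.cpu().numpy()))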

faknow.evaluate.metrics.calculate_f1(outputs: Tensor, y: Tensor) → float[source]

Calculate the F1 score metric.

Parameters:
  • outputs (torch.Tensor) – Model’s predictions.

  • y (torch.Tensor) – Ground truth labels.

Returns:

The F1 score value.

Return type:

float
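
Illustrative call with mock tensors (shapes are assumptions); F1 is the harmonic mean of precision and recall:

    import torch
    from faknow.evaluate.metrics import calculate_f1

    outputs = torch.randn(16, 2)       # mock logits; shape (N, 2) assumed
    y = torch.randint(0, 2, (16,))     # mock binary labels
    f1 = calculate_f1(outputs, y)      # a float in [0, 1]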

faknow.evaluate.metrics.calculate_precision(outputs: Tensor, y: Tensor) → float[source]

Calculate the precision metric.

Parameters:
  • outputs (torch.Tensor) – Model’s predictions.

  • y (torch.Tensor) – Ground truth labels.

Returns:

The precision value.

Return type:

float
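
Precision is TP / (TP + FP). A hedged equivalent using scikit-learn, assuming hard predictions come from an argmax over class scores:

    import torch
    from sklearn.metrics import precision_score

    def precision_sketch(outputs: torch.Tensor, y: torch.Tensor) -> float:
        # argmax converts scores to hard class predictions (assumption)
        preds = outputs.argmax(dim=-1)
        return float(precision_score(y.cpu().numpy(), preds.cpu().numpy()))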

faknow.evaluate.metrics.calculate_recall(outputs: Tensor, y: Tensor) → float[source]

Calculate the recall metric.

Parameters:
  • outputs (torch.Tensor) – Model’s predictions.

  • y (torch.Tensor) – Ground truth labels.

Returns:

The recall value.

Return type:

float
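
Recall is TP / (TP + FN). The same pattern applies, swapping in scikit-learn's recall_score (again an assumption about the internals, not a confirmed implementation):

    import torch
    from sklearn.metrics import recall_score

    def recall_sketch(outputs: torch.Tensor, y: torch.Tensor) -> float:
        preds = outputs.argmax(dim=-1)
        return float(recall_score(y.cpu().numpy(), preds.cpu().numpy()))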

faknow.evaluate.metrics.get_metric_func(name: str) → Callable[source]

Get the appropriate metric function based on the given name.

Parameters:

name (str) – The name of the metric function.

Returns:

The corresponding metric function.

Return type:

Callable

Raises:

RuntimeError – If no metric function with the provided name is found.
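
Usage follows directly from the contract above:

    from faknow.evaluate.metrics import get_metric_func

    acc_func = get_metric_func('accuracy')   # presumably calculate_accuracy
    value = acc_func(outputs, y)             # same signature as the metrics above

    get_metric_func('not_a_metric')          # raises RuntimeError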