# GyriGauge: evaluate your segmentation
## Introduction
With an increasing number of model predictions, the need for quality measures grows, since the evaluation of predictions is crucial.
This package provides the functionality to evaluate segmentation quality.
The code base has been tested extensively, with a focus on brain MR imaging.
## Description
For segmentation quality evaluation, we calculate metrics based on three different types of information, with many metrics included:
- information theory
  - contingency table
  - marginal entropy
  - joint entropy
  - mutual information
  - variation of information
- spatial distance of point clouds
  - (percentile) Hausdorff distance with different distance functions
- spatial overlap of specific regions
  - accuracy
  - Dice dissimilarity
  - continuous Dice
  - generalized Dice
  - intersection over union
  - boundary intersection over union
Feel free to experiment with them.
## Visuals
Depending on what you are making, it can be a good idea to include screenshots or even a video (you'll frequently see GIFs rather than actual videos). Tools like ttygif can help, but check out Asciinema for a more sophisticated method.
## Installation
Within a particular ecosystem, there may be a common way of installing things, such as using Yarn, NuGet, or Homebrew. However, consider the possibility that whoever is reading your README is a novice and would like more guidance. Listing specific steps helps remove ambiguity and gets people to using your project as quickly as possible. If it only runs in a specific context like a particular programming language version or operating system or has dependencies that have to be installed manually, also add a Requirements subsection.
**TODO** Include a figure showing which metrics are computed and how the input should look.
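As a starting point for experimenting, individual metrics can also be called directly. Below is a minimal sketch based on the calls made in the package's own evaluation helpers; the tensor sizes and the number of classes are illustrative, and defaults are assumed for any omitted parameters:
```python
import torch

from segmentation_quality_measures.prepare_tensors import (
    encode_to_batch_times_class_times_spatial,
)
from segmentation_quality_measures.spatial_overlap_based_metrics import (
    intersection_over_union,
)

# Placeholder label maps with integer types {0, 1, 2}; replace with your data.
prediction = torch.randint(0, 3, (64, 64, 64))
reference = torch.randint(0, 3, (64, 64, 64))

# One-hot encode to shape 1 x C x spatial, as expected by the metric functions.
prediction_one_hot = encode_to_batch_times_class_times_spatial(
    prediction, num_classes=3)
reference_one_hot = encode_to_batch_times_class_times_spatial(
    reference, num_classes=3)

iou = intersection_over_union(
    output=prediction_one_hot,
    target=reference_one_hot,
    labels=torch.tensor([1, 2]),  # evaluate only the non-background types
    reduction="none",
)
```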
## Project setup
The code is designed to run on a GPU but can in principle also run on a CPU. Please consider that the computation time increases in that case.
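A minimal sketch of the usual device selection, assuming the resulting `device` is then passed on to, for example, `evaluate_prediction_from_array`:
```python
import torch

# Prefer the GPU when available; otherwise fall back to the (slower) CPU.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
```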
### Hardware requirements
The code has been tested with 3D tensors of size `[352, 352, 352]` containing integer values in the range 0-5. The hardware in use was:
- GPU RAM: `7.0 GB`
### Software requirements
The main package requires `numpy` and `pytorch`. `pytorch` needs to be installed according to the hardware in use. The code is tested with `python=3.10`, `numpy=1.26.4`, and `pytorch=2.1.2`, using a GPU with `cuda=12.1` and `cudnn=8.9.2`.
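One possible way to install the tested versions with pip; the CUDA wheel index is an assumption and depends on your hardware, so follow the official `pytorch` installation instructions for your setup:
```bash
# Tested versions; pick the PyTorch build that matches your CUDA installation.
pip install numpy==1.26.4
pip install torch==2.1.2 --index-url https://download.pytorch.org/whl/cu121
```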
### Installation
The easiest way to get this running is to install it in "editable" mode by cloning the repository or adding it as a git submodule:
```bash
cd <path_to_your_repository_src>
git submodule add https://gitlab.tuebingen.mpg.de/jsteiglechner/gyrigauge gyrigauge
git submodule update --init gyrigauge
git submodule update --remote gyrigauge
```
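If the repository ships packaging metadata, an editable install is one option; otherwise the submodule directory can simply be put on the Python path. Both commands below are assumptions about your local layout rather than part of this repository:
```bash
# Option 1: editable install (assumes a pyproject.toml or setup.py is present).
pip install -e ./gyrigauge

# Option 2: make the submodule importable without installing it.
export PYTHONPATH="$PWD/gyrigauge:$PYTHONPATH"
```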
### Experiences
This code has been tested with:
- OS: Ubuntu 24
- CPU: Intel(R) Xeon(R) W-2255 CPU @ 3.70GHz
- GPU: NVIDIA RTX A4000
We evaluated 10 brain MRI segmentations, given as 3D tensors of size `[352, 352, 352]` with integer values in the range 0-5 (types 1-4 being the relevant ones), by running the standard `evaluate_prediction`. Here we present mean results:
| Metric          | Mean runtime | Mean memory |
| --------------- | ------------ | ----------- |
| IoU             | `0.025 s`    | `3.608 GB`  |
| boundary IoU    | `0.132 s`    | `6.234 GB`  |
| 95th-Hausdorff  | `346.8 s`    | `5.346 GB`  |
| VoI             | `0.976 s`    | `1.930 GB`  |
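A sketch of how such numbers can be measured on your own data with standard PyTorch utilities; the benchmarking approach is an assumption, not the exact code used for the table above, and the input variables are placeholders prepared as in the Basic usage section below:
```python
import time

import torch

from evaluate_segmentation_quality import evaluate_prediction

# prepared_output, target_tensor, predicted_types, considered_types:
# placeholders, prepared as described in "Basic usage" below.
torch.cuda.reset_peak_memory_stats()
start = time.perf_counter()
metrics = evaluate_prediction(
    output=prepared_output,
    target=target_tensor,
    predicted_types=predicted_types,
    considered_types=considered_types,
)
runtime_s = time.perf_counter() - start
peak_memory_gb = torch.cuda.max_memory_allocated() / 1e9
```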
## Basic usage
To evaluate the most important metrics (IoU, boundary IoU, 95th-percentile Hausdorff distance, VoI) of a prepared segmentation tensor `prepared_output` against its reference `target_tensor`, both containing 3 types, while excluding the background from the calculation:
```python
import torch

from evaluate_segmentation_quality import evaluate_prediction

evaluate_prediction(
    output=prepared_output,
    target=target_tensor,
    predicted_types={0: 'bg', 1: 'left', 2: 'right'},
    # Passed as a tensor, matching the docstring of `evaluate_prediction`.
    considered_types=torch.tensor([1, 2]),
)
```
The expected output is a dictionary mapping metric labels to values.
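For the labels above, the keys follow the naming pattern used in `evaluate_prediction` (`iou_*`, `iou_boundary_*`, `hd_*`, plus `voi`); the values shown here are placeholders, not measured results:
```python
{
    "iou_mean": 0.91,           # mean IoU over the considered types
    "iou_boundary_mean": 0.78,  # mean boundary IoU
    "hd_mean": 1.4,             # mean 95th-percentile Hausdorff distance
    "voi": 0.12,                # variation of information
    "iou_left": 0.92, "iou_boundary_left": 0.79, "hd_left": 1.3,
    "iou_right": 0.90, "iou_boundary_right": 0.77, "hd_right": 1.5,
}
```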
## Support and contributing
Having trouble or want to contribute? Do not hesitate to contact [Julius](mailto:julius.steiglechner@tuebingen.mpg.de). If there is no response within 7 days, contact him again.
## Authors and acknowledgment
- Julius
- Lucas
- FLEXseg
## License
This project is licensed under the [MIT License](https://gitlab.tuebingen.mpg.de/jsteiglechner/gyrigauge/-/blob/main/LICENSE).
## Project status
There are open tasks:
- Fix the information theory metrics; they sometimes return NaN.
- Write tests.
- Include metrics that are important for loss calculation during training.

Project development has slowed down because the main functionality that was needed has been reached.
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
"""
The aim of this module is to provide a method to evaluate a model on a dataset.

Created on Wed Jan 29 10:10:38 2025

@author: jsteiglechner
"""
from typing import Callable, Dict, List

import numpy as np
import torch

from segmentation_quality_measures.information_theoretic_based_metrics import (
    variation_of_information,
)
from segmentation_quality_measures.prepare_tensors import (
    encode_to_batch_times_class_times_spatial,
)
from segmentation_quality_measures.spatial_distance_based_metrics import (
    hausdorff_distance_metric,
)
from segmentation_quality_measures.spatial_overlap_based_metrics import (
    intersection_over_union,
    boundary_intersection_over_union,
)


def calculate_batch_intersection_over_union(
        output: torch.Tensor,
        target: torch.Tensor,
        num_types: int,
        considered_types: torch.Tensor,
):
    """Calculate the mean intersection over union per batch element."""
    output_one_hot = encode_to_batch_times_class_times_spatial(
        output, num_classes=num_types)
    target_one_hot = encode_to_batch_times_class_times_spatial(
        target, num_classes=num_types)

    scores = intersection_over_union(
        output_one_hot,
        target_one_hot,
        labels=considered_types,
        batch_iou=True,
        reduction='none',
    )

    return torch.mean(scores, dim=1)


def evaluate_prediction_iou(
        output: torch.Tensor,
        target: torch.Tensor,
        predicted_types: Dict[int, str],
        considered_types: List[int] = None,
) -> Dict[str, float]:
    """
    Calculate Intersection over Union for multiple labels.

    Parameters
    ----------
    output : torch.Tensor
        Prediction.
    target : torch.Tensor
        Reference.
    predicted_types : Dict[int, str]
        All values that can occur in output and target, with their names.
    considered_types : List[int], optional
        All values that should be evaluated. The default is None.

    Returns
    -------
    Dict[str, float]
        Mapping from metric name to value.

    """
    iou = intersection_over_union(
        output=output,
        target=target,
        labels=considered_types,
        non_presence_threshold=torch.prod(
            torch.div(
                torch.tensor(output.shape[2:]),
                100,
                rounding_mode="floor",
            )
        ),
        reduction="none",
    )
    mean_iou = torch.nanmean(iou)

    metrics = {}
    metrics["iou_mean"] = mean_iou.cpu().tolist()
    for i, label in enumerate(considered_types):
        metrics["iou_" + predicted_types[label.item()]] = iou[i].cpu().tolist()

    return metrics


def evaluate_prediction_overlap_based(
        output: torch.Tensor,
        target: torch.Tensor,
        predicted_types: Dict[int, str],
        considered_types: List[int] = None,
) -> Dict[str, float]:
    """
    Evaluate prediction with overlap-based segmentation metrics.

    Notes
    -----
    Metrics that were used:
        - Intersection over union
        - Boundary intersection over union

    Parameters
    ----------
    output : torch.Tensor
        Tensor with labels one-hot encoded, with shape 1 x C x SP.
    target : torch.Tensor
        Tensor with labels one-hot encoded, with shape 1 x C x SP.
    predicted_types : Dict[int, str]
        Types that were predicted, with their names.
    considered_types : torch.Tensor, optional
        Types that should be selected if the metric allows for selection.
        The default is None.

    Returns
    -------
    metrics : Dict[str, float]
        Dictionary that maps metrics and considered labels to values.

    """
    metrics = {}

    iou = intersection_over_union(
        output=output,
        target=target,
        labels=considered_types,
        non_presence_threshold=torch.prod(
            torch.div(
                torch.tensor(output.shape[2:]),
                100,
                rounding_mode="floor",
            )
        ),
        reduction="none",
    )
    mean_iou = torch.nanmean(iou)

    boundary_iou = boundary_intersection_over_union(
        output=output,
        target=target,
        boundary_width=1,
        kernel_type="box",
        labels=considered_types,
        non_presence_threshold=8,
        reduction="none",
    )
    mean_boundary_iou = torch.nanmean(boundary_iou)

    metrics["iou_mean"] = mean_iou.cpu().tolist()
    metrics["iou_boundary_mean"] = mean_boundary_iou.cpu().tolist()
    for i, label in enumerate(considered_types):
        metrics["iou_" + predicted_types[label.item()]] = iou[i].cpu().tolist()
        metrics["iou_boundary_" + predicted_types[label.item()]
                ] = boundary_iou[i].cpu().tolist()

    return metrics


def evaluate_prediction(
        output: torch.Tensor,
        target: torch.Tensor,
        predicted_types: Dict[int, str],
        considered_types: List[int] = None,
) -> Dict[str, float]:
    """
    Evaluate prediction with relevant segmentation metrics.

    Notes
    -----
    Metrics that were used:
        - Intersection over union
        - Boundary intersection over union
        - 95th percentile of Hausdorff distance (surface distance)
        - Variation of information

    Parameters
    ----------
    output : torch.Tensor
        Tensor with labels one-hot encoded, with shape 1 x C x SP.
    target : torch.Tensor
        Tensor with labels one-hot encoded, with shape 1 x C x SP.
    predicted_types : Dict[int, str]
        Types that were predicted, with their names.
    considered_types : torch.Tensor, optional
        Types that should be selected if the metric allows for selection.
        The default is None.

    Returns
    -------
    metrics : Dict[str, float]
        Dictionary that maps metrics and considered labels to values.

    """
    metrics = {}

    iou = intersection_over_union(
        output=output,
        target=target,
        labels=considered_types,
        non_presence_threshold=torch.prod(
            torch.div(
                torch.tensor(output.shape[2:]),
                100,
                rounding_mode="floor",
            )
        ),
        reduction="none",
    )
    mean_iou = torch.nanmean(iou)

    boundary_iou = boundary_intersection_over_union(
        output=output,
        target=target,
        boundary_width=1,
        kernel_type="box",
        labels=considered_types,
        non_presence_threshold=8,
        reduction="none",
    )
    mean_boundary_iou = torch.nanmean(boundary_iou)

    hd = hausdorff_distance_metric(
        output=output,
        target=target,
        percentile=95,
        labels=considered_types,
        label_reduction="none",
        reduction="none",
        directed=False,
    ).squeeze()
    mean_hd = torch.nanmean(hd)

    voi = variation_of_information(
        output=output,
        target=target,
        labels=None,  # not implemented yet
        normalization=False,
        reduction="none",
    )

    metrics["iou_mean"] = mean_iou.cpu().tolist()
    metrics["iou_boundary_mean"] = mean_boundary_iou.cpu().tolist()
    metrics["hd_mean"] = mean_hd.cpu().tolist()
    metrics["voi"] = voi.cpu().tolist()
    for i, label in enumerate(considered_types):
        metrics["iou_" + predicted_types[label.item()]] = iou[i].cpu().tolist()
        metrics["iou_boundary_" + predicted_types[label.item()]
                ] = boundary_iou[i].cpu().tolist()
        metrics["hd_" + predicted_types[label.item()]
                ] = hd[i].cpu().tolist()

    return metrics


def evaluate_prediction_from_array(
        y_pred: np.ndarray,
        target: np.ndarray,
        device: torch.device,
        predicted_types: Dict[int, str],
        considered_types: List[int] = None,
        accuracy_fn: Callable = None,
) -> Dict[str, float]:
    """
    Evaluate a predicted result against a reference.

    Parameters
    ----------
    y_pred : np.ndarray
        Array of the prediction.
    target : np.ndarray
        Array of the reference.
    device : torch.device
        Device which computes.
    predicted_types : Dict[int, str]
        Types that were predicted, with their names.
    considered_types : List[int], optional
        Types that should be selected if the metric allows for selection.
        The default is None.
    accuracy_fn : Callable, optional
        Performance metric. If None, the standard evaluation
        (`evaluate_prediction`) is used. The default is None.

    Returns
    -------
    metrics : Dict[str, float]
        Dictionary that maps metrics and considered labels to values.

    """
    with torch.no_grad():
        y_pred = torch.from_numpy(y_pred).to(torch.long).to(device)
        target = torch.from_numpy(target).to(torch.long).to(device)

        y_pred = encode_to_batch_times_class_times_spatial(
            y_pred,
            len(predicted_types),
        )
        target = encode_to_batch_times_class_times_spatial(
            target,
            len(predicted_types),
        )

        if accuracy_fn is None:
            metrics = evaluate_prediction(
                y_pred,
                target,
                predicted_types,
                considered_types,
            )
        else:
            metrics = accuracy_fn(output=y_pred, target=target)

    return metrics
@@ -325,7 +325,7 @@ def variation_of_information(
     if labels is not None:
         raise NotImplementedError(
-            f"Reduction to certain labels is not implemented yet.")
+            "Reduction to certain labels is not implemented yet.")
         # output = torch.index_select(output, dim=1, index=labels)
         # target = torch.index_select(target, dim=1, index=labels)