Commit 118ae705 by 靓靓

upload files

parent 2877f1a6
[flake8]
ignore = E203, E266, E501, W503
max-line-length = 88
max-complexity = 18
select = B,C,E,F,W,T4,B9
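For context on the `ignore` list above: E203 and W503 are the checks that conflict with black's formatting. A minimal, runnable illustration of the E203 case:

```python
# black formats slices with complex bounds using spaces around ":", which
# flake8's default E203 check ("whitespace before ':'") would flag; ignoring
# E203, as in the config above, keeps the two tools compatible.
items = list(range(10))
middle = items[2 : len(items) - 2]  # black-style slice; E203 would fire here
print(middle)  # [2, 3, 4, 5, 6, 7]
```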
---
name: 🐜 Bug report
about: Something isn't working.
---
## Subject of the issue
Describe your issue here.
If the bug is confirmed, would you be willing to submit a PR? _(Help can be provided if
you need assistance submitting a PR)_
Yes / No
## Your environment
- DeepReg version (commit hash)
Please use `git rev-parse HEAD` to get the hash of the current commit (a sketch for
recording it follows this section). `pip list` only reports the fixed tag version
from `setup.py`, so it is not accurate.
We recommend installing DeepReg using
[Anaconda](https://docs.anaconda.com/anaconda/install/) /
[Miniconda](https://docs.conda.io/en/latest/miniconda.html) in a separate virtual
environment.
- OS (e.g. Ubuntu 20.04, macOS 10.15, etc.)
We do not officially support Windows.
- Python Version (3.7, 3.8, etc.)
We support only Python 3.7 officially.
- TensorFlow
- TensorFlow Version (2.2, 2.3, etc.)
- CUDA Version (10.1, etc.) if available.
- cuDNN Version
We support only TensorFlow 2.3 officially.
If using GPU, please check https://www.tensorflow.org/install/source#gpu to verify
GPU support.
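To make the commit-hash item above concrete, here is a minimal sketch, assuming it is run from inside a DeepReg checkout:

```python
import subprocess

# Minimal sketch: record the exact commit hash to paste into the report.
# Assumes this runs inside a clone of the DeepReg repository.
commit = subprocess.check_output(["git", "rev-parse", "HEAD"], text=True).strip()
print(f"DeepReg commit: {commit}")
```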
## Steps to reproduce
Tell us how to reproduce this issue. Please provide a working demo.
## Expected behaviour
Tell us what should happen.
## Actual behaviour
Tell us what happens instead.
---
name: 💡 Feature request
about: It would be nice to have some new feature.
---
## Subject of the feature
What are you trying to do and how would you want to do it differently?
Is it something you currently cannot do?
Is this related to an issue/problem?
Has the feature been requested before? If yes, please provide a link to the issue.
If the feature request is approved, would you be willing to submit a PR? _(Help can be
provided if you need assistance submitting a PR)_
Yes / No
---
name: 📝 Documentation request
about: Some documentation is missing or incorrect.
---
## Subject of the documentation
What are you trying to do, and which part of the code is hard to understand,
incorrect, or missing? Please provide a link to the readthedocs page and code if available.
If the documentation request is approved, would you be willing to submit a PR? _(Help
can be provided if you need assistance submitting a PR)_
Yes / No
---
name: 🧪 Test request
about: Some code is not covered by tests.
---
## Subject of the code / test
What are you trying to do and which part of the code is not tested? Please provide a
link to the code if available.
If the test request is approved, would you be willing to submit a PR? _(Help can be
provided if you need assistance submitting a PR)_
Yes / No
---
name: 😀 Other
about: Anything not covered by previous options.
---
## Subject of the issue
Please describe the issue; it can be a question or any other discussion.
blank_issues_enabled: false
contact_links:
- name: Documentation
url: https://deepreg.readthedocs.io/en/latest/
about: Please check the documentation here.
- name: Contribution Guidelines
url: https://deepreg.readthedocs.io/en/latest/contributing/guide.html
about: Please check the guidelines for contributing here.
# Description
Please include a summary of the change and which issue is fixed. Please also include
relevant motivation and context. List any dependencies that are required for this
change.
Fixes #<issue_number>
## Type of change
What types of changes does your code introduce to DeepReg?
_Please check the boxes that apply after submitting the pull request._
- [ ] Bugfix (non-breaking change which fixes an issue)
- [ ] Code style update (formatting, renaming)
- [ ] Refactoring (no functional changes, no API changes)
- [ ] Documentation update (fix or improvement to the documentation)
- [ ] New feature (non-breaking change which adds functionality)
- [ ] Other (if none of the other choices apply)
## Checklist
_Please check the boxes that apply after submitting the pull request._
_If you're unsure about any of them, don't hesitate to ask. We're here to help! This is
simply a reminder of what we are going to look for before merging your code._
- [ ] I have
[installed pre-commit](https://deepreg.readthedocs.io/en/latest/contributing/setup.html)
using `pre-commit install` and formatted all changed files. If you are not
certain, run `pre-commit run --all-files`.
- [ ] My commit message style matches
[our requested structure](https://deepreg.readthedocs.io/en/latest/contributing/commit.html),
e.g. `Issue #<issue number>: detailed message`.
- [ ] I have updated the
[change log file](https://github.com/DeepRegNet/DeepReg/blob/main/CHANGELOG.md)
regarding my changes.
- [ ] I have added tests that prove my fix is effective or that my feature works.
- [ ] I have added the necessary documentation (if appropriate).
# Number of days of inactivity before an issue becomes stale
daysUntilStale: 60
# Number of days of inactivity before a stale issue is closed
daysUntilClose: 14
# Issues with these labels will never be considered stale
exemptLabels:
- pinned
- security
# Label to use when marking an issue as stale
staleLabel: stale
# Comment to post when marking an issue as stale. Set to `false` to disable
markComment: >
This issue has been automatically marked as stale because it has not had recent
activity. It will be closed if no further activity occurs. Thank you for your
contributions.
# Comment to post when closing a stale issue. Set to `false` to disable
closeComment: false
name: pre-commit
on:
# Trigger the workflow on push or pull request,
# but only for the main branch
push:
branches:
- main
pull_request:
branches:
- main
jobs:
pre-commit:
runs-on: ubuntu-latest
strategy:
matrix:
python-version: [3.7]
steps:
- uses: actions/checkout@v2
- name: Set up Python ${{ matrix.python-version }}
uses: actions/setup-python@v2
with:
python-version: ${{ matrix.python-version }}
- uses: pre-commit/action@v2.0.0
name: Integration Test
on:
workflow_dispatch:
# Trigger the workflow on push or pull request,
# but only for the main branch
push:
branches:
- main
pull_request:
branches:
- main
jobs:
integration-test:
runs-on: ${{ matrix.os }}
strategy:
matrix:
os: [ubuntu-latest]
python-version: ["3.7"]
steps:
- uses: actions/checkout@v2
- name: Set up Python ${{ matrix.python-version }}
uses: actions/setup-python@v2
with:
python-version: ${{ matrix.python-version }}
- name: Install Ubuntu system dependencies
if: ${{ matrix.os == 'ubuntu-latest' }}
run: |
sudo apt-get update
sudo apt-get install graphviz
- name: Install Mac OS system dependencies
if: ${{ matrix.os == 'macos-latest' }}
run: |
brew install graphviz
- name: Install python dependencies
run: |
python -m pip install --upgrade pip
pip install codecov==2.1.11
pip install -e .
- name: Test examples
run: |
pytest test/integration/test_examples.py
- name: Test demos
run: |
pytest test/integration/test_demos.py
# This workflow will upload a Python Package using Twine when a release is created
# For more information see: https://help.github.com/en/actions/language-and-framework-guides/using-python-with-github-actions#publishing-to-package-registries
name: Upload Python Package
on:
release:
types: [published]
jobs:
deploy:
runs-on: ubuntu-latest
steps:
- name: Checkout
uses: actions/checkout@v2
- name: Set up Python
uses: actions/setup-python@v2
with:
python-version: "3.7"
- name: Install dependencies
run: |
python -m pip install --upgrade pip
pip install setuptools wheel twine
- name: Bump setup.py Version
run: perl -pi -e "s/0\.0\.0/${GITHUB_REF##*/v}/g" ./setup.py
- name: Build and publish
env:
TWINE_USERNAME: ${{ secrets.PYPI_USERNAME }}
TWINE_PASSWORD: ${{ secrets.PYPI_PASSWORD }}
run: |
python setup.py sdist bdist_wheel
twine upload dist/*
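The `Bump setup.py Version` step above leans on shell parameter expansion; the following Python sketch mirrors the same string operation (the ref value is a hypothetical example of `GITHUB_REF` on a release):

```python
# What the perl one-liner in the workflow above achieves: drop everything up
# to the last "/v" in GITHUB_REF to recover the tag version, then substitute
# it for the 0.0.0 placeholder in setup.py.
ref = "refs/tags/v1.2.3"  # hypothetical GITHUB_REF value
version = ref.rsplit("/v", 1)[-1]  # mirrors ${GITHUB_REF##*/v}
print('version="0.0.0",'.replace("0.0.0", version))  # version="1.2.3",
```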
name: Unit Test
on:
# Trigger the workflow on push or pull request,
# but only for the main branch
push:
branches:
- main
pull_request:
branches:
- main
jobs:
unit-test:
runs-on: ${{ matrix.os }}
strategy:
matrix:
os: [ubuntu-latest, macos-latest]
python-version: ["3.7", "3.6"]
exclude:
# excludes py3.6 on MacOS to save time/energy
- os: macos-latest
python-version: "3.6"
steps:
- uses: actions/checkout@v2
with:
fetch-depth: 2
- name: Set up Python ${{ matrix.python-version }}
uses: actions/setup-python@v2
with:
python-version: ${{ matrix.python-version }}
- name: Install Ubuntu system dependencies
if: ${{ matrix.os == 'ubuntu-latest' }}
run: |
sudo apt-get update
sudo apt-get install graphviz
- name: Install Mac OS system dependencies
if: ${{ matrix.os == 'macos-latest' }}
run: |
brew install graphviz
- name: Install python dependencies
run: |
python -m pip install --upgrade pip
pip install codecov==2.1.11
pip install -e .
- name: Linting
# W9006 can't be checked like this as we need to ignore ValueError
run: |
pylint --disable=all --enable=C0103,C0301,R1725,W0107,W9012,W9015 *
pylint --exit-zero *
timeout-minutes: 10
- name: Unit test with pytest
run: |
pytest --cov-report=xml --cov=deepreg --durations=20 --durations-min=1.0 ./test/unit/
timeout-minutes: 20
- name: Upload coverage to Codecov
uses: codecov/codecov-action@v1
if: ${{ matrix.os == 'ubuntu-latest' && matrix.python-version == '3.7' }}
with:
token: ${{ secrets.CODECOV_TOKEN }}
file: ./coverage.xml
# Byte-compiled / optimized / DLL files
__pycache__/
*.py[cod]
*$py.class
# C extensions
*.so
# Distribution / packaging
.Python
build/
develop-eggs/
dist/
downloads/
eggs/
.eggs/
lib/
lib64/
parts/
sdist/
var/
wheels/
pip-wheel-metadata/
share/python-wheels/
*.egg-info/
.installed.cfg
*.egg
MANIFEST
# PyInstaller
# Usually these files are written by a python script from a template
# before PyInstaller builds the exe, so as to inject date/other infos into it.
*.manifest
*.spec
# Installer logs
pip-log.txt
pip-delete-this-directory.txt
# Unit test / coverage reports
htmlcov/
.tox/
.nox/
.coverage
.coverage.*
.cache
nosetests.xml
coverage.xml
*.cover
*.py,cover
.hypothesis/
.pytest_cache/
# Translations
*.mo
*.pot
# Django stuff:
*.log
local_settings.py
db.sqlite3
db.sqlite3-journal
# Flask stuff:
instance/
.webassets-cache
# Scrapy stuff:
.scrapy
# Sphinx documentation
docs/_build/
# PyBuilder
target/
# Jupyter Notebook
.ipynb_checkpoints
# IPython
profile_default/
ipython_config.py
# pyenv
.python-version
# pipenv
# According to pypa/pipenv#598, it is recommended to include Pipfile.lock in version control.
# However, in case of collaboration, if having platform-specific dependencies or dependencies
# having no cross-platform support, pipenv may install dependencies that don't work, or not
# install all needed dependencies.
#Pipfile.lock
# PEP 582; used by e.g. github.com/David-OConnor/pyflow
__pypackages__/
# Celery stuff
celerybeat-schedule
celerybeat.pid
# SageMath parsed files
*.sage.py
# Environments
.env
.venv
env/
venv/
ENV/
env.bak/
venv.bak/
# Spyder project settings
.spyderproject
.spyproject
# Rope project settings
.ropeproject
# mkdocs documentation
/site
docs/_site/*
# mypy
.mypy_cache/
.dmypy.json
dmypy.json
# Pyre type checker
.pyre/
# intellij
.idea/
# logs
logs/
# data folders
*.tfrecords
# demo dataset folders and zip files
demos/*/dataset
demos/*/*.zip
# demo test folders
demos/*/logs_reg
demos/*/logs_train
demos/*/logs_predict
# test folders
deepreg_download_temp_dir/
[settings]
known_third_party = git,h5py,matplotlib,nibabel,numpy,pandas,pytest,scipy,setuptools,sphinx_rtd_theme,tensorflow,testfixtures,tqdm,yaml
multi_line_output = 3
include_trailing_comma = True
force_grid_wrap = 0
use_parentheses = True
line_length = 88
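As an illustration of the settings above, `multi_line_output = 3` with `include_trailing_comma` and `use_parentheses` makes isort wrap imports that exceed `line_length` in the vertical-hanging-indent style below (stdlib names are used so the sketch runs as-is):

```python
# Vertical hanging indent (multi_line_output = 3) with a trailing comma and
# parentheses, as isort would emit under the settings above:
from collections import (
    OrderedDict,
    defaultdict,
    namedtuple,
)

print(OrderedDict, defaultdict, namedtuple)
```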
repos:
- repo: https://github.com/pre-commit/pre-commit-hooks
rev: v3.4.0
hooks:
- id: check-ast # Simply check whether the files parse as valid python
- id: check-case-conflict # Check for files that would conflict in case-insensitive filesystems
- id: check-builtin-literals # Require literal syntax when initializing empty or zero Python builtin types
- id: check-docstring-first # Check a common error of defining a docstring after code
- id: check-merge-conflict # Check for files that contain merge conflict strings
- id: check-yaml # Check yaml files
- id: check-vcs-permalinks # Ensure that links to vcs websites are permalinks
- id: debug-statements # Check for debugger imports and py37+ `breakpoint()` calls in python source
- id: detect-private-key # Detect the presence of private keys
- id: end-of-file-fixer # Ensure that a file is either empty, or ends with one newline
- id: mixed-line-ending # Replace or checks mixed line ending
- id: trailing-whitespace # This hook trims trailing whitespace
- repo: https://github.com/asottile/seed-isort-config
rev: v2.2.0
hooks:
- id: seed-isort-config
- repo: https://github.com/timothycrosley/isort
rev: 5.8.0
hooks:
- id: isort
- repo: https://github.com/psf/black
rev: 20.8b1
hooks:
- id: black
language_version: python3.7
- repo: https://github.com/pre-commit/mirrors-mypy
rev: v0.812
hooks:
- id: mypy
- repo: https://github.com/pre-commit/mirrors-prettier
rev: v2.2.1
hooks:
- id: prettier
- repo: https://gitlab.com/pycqa/flake8
rev: 3.9.0
hooks:
- id: flake8
# - repo: https://github.com/pycqa/pydocstyle
# rev: 5.1.1 # pick a git hash / tag to point to
# hooks:
# - id: pydocstyle
docs/joss_paper/paper.md
{
"printWidth": 88,
"proseWrap": "always",
"useTabs": false,
"tabWidth": 2
}
# .readthedocs.yml
# Read the Docs configuration file
# See https://docs.readthedocs.io/en/stable/config-file/v2.html for details
# Required
version: 2
# Build documentation in the docs/ directory with Sphinx
sphinx:
configuration: docs/source/conf.py
# Build documentation with MkDocs
#mkdocs:
# configuration: mkdocs.yml
# Optionally build your docs in additional formats such as PDF
formats:
- pdf
# Optionally set the version of Python and requirements required to build your docs
python:
version: 3.7
install:
- method: pip
path: .
{
"description": "DeepReg is a freely available, community-supported open-source toolkit for research and education in medical image registration using deep learning.",
"license": "Apache-2.0",
"title": "DeepReg: a deep learning toolkit for medical image registration",
"upload_type": "software",
"keywords": [
"Deep Learning",
"Image Fusion",
"Medical Image Registration",
"Neural Networks"
],
"creators": [
{
"orcid": "0000-0002-1184-7421",
"name": "Fu, Yunguan"
},
{
"orcid": "0000-0001-5685-971X",
"name": "Monta\u00f1a-Brown, Nina"
},
{
"orcid": "0000-0002-5004-0663",
"name": "Saeed, Shaheer U."
},
{
"orcid": "0000-0002-0539-3638",
"name": "Casamitjana, Adri\u00e0"
},
{
"orcid": "0000-0001-6838-335X",
"name": "Baum, Zachary M. C."
},
{
"orcid": "0000-0002-0398-4995",
"name": "Delaunay, R\u00e9mi"
},
{
"orcid": "0000-0003-4401-5311",
"name": "Yang, Qianye"
},
{
"orcid": "0000-0002-2608-2580",
"name": "Grimwood, Alexander"
},
{
"orcid": "0000-0002-8903-1561",
"name": "Min, Zhe"
},
{
"orcid": "0000-0002-7150-9918",
"name": "Blumberg, Stefano B."
},
{
"orcid": "0000-0001-7569-173X",
"name": "Iglesias, Juan Eugenio"
},
{
"orcid": "0000-0003-2916-655X",
"name": "Barratt, Dean C."
},
{
"orcid": "0000-0001-9217-5438",
"name": "Bonmati, Ester"
},
{
"orcid": "0000-0003-2439-350X",
"name": "Alexander, Daniel C."
},
{
"orcid": "0000-0002-5565-1252",
"name": "Clarkson, Matthew J."
},
{
"orcid": "0000-0003-1794-0456",
"name": "Vercauteren, Tom"
},
{
"orcid": "0000-0003-4902-0486",
"name": "Hu, Yipeng"
}
],
"access_right": "open"
}
# Change Log
All notable changes to this project will be documented in this file. It's a team effort
to make them as straightforward as possible.
The format is based on [Keep a Changelog](http://keepachangelog.com/) and this project
adheres to [Semantic Versioning](http://semver.org/).
## [1.0.0-rc1] - In Progress
Release comment: refactoring of models means that old checkpoint files are no longer
compatible with the updates.
### Added
- Added `num_parallel_calls` option in config for data preprocessing.
- Added tests for Dice score, Jaccard Index, and cross entropy losses.
- Added statistics on inputs, DDF and TRE into tensorboard.
- Added example for using custom loss.
- Added tests on Mac OS.
- Added tests for python 3.6 and 3.7.
- Added support to custom layer channels in U-Net.
- Added support to multiple loss functions for each loss type: "image", "label" and
"regularization".
- Added LNCC computation using separable 1-D filters for all available kernels (see
  the sketch below).
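The separable-filter item deserves a sketch: for a separable kernel, the local means inside LNCC can be computed as three 1-D convolutions instead of one dense 3-D convolution. A minimal numpy/scipy illustration of the idea, not DeepReg's implementation:

```python
import numpy as np
from scipy.ndimage import convolve, convolve1d

# Three 1-D passes (one per axis) equal a single dense 3-D convolution for a
# separable kernel, at much lower cost: O(3k) vs O(k^3) per voxel.
vol = np.random.rand(16, 16, 16)
k1d = np.ones(5) / 5.0  # 1-D box kernel

sep = vol
for axis in range(3):
    sep = convolve1d(sep, k1d, axis=axis, mode="constant")

dense = np.ones((5, 5, 5)) / 5.0**3  # equivalent dense 3-D box kernel
assert np.allclose(sep, convolve(vol, dense, mode="constant"))
```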
### Changed
- Updated pre-trained models for the unpaired_ct_abdomen demo to the new version.
- Changed dataset config so that `format` and `labeled` are defined per split.
- Reduced TensorFlow logging level.
- Used `DEEPREG_LOG_LEVEL` to control logging in DeepReg.
- Increased all EPS to 1e-5.
- Clarified the suggestion in the docs to use all-zero masks for missing labels.
- Moved contributor list to a separate page.
- Changed `no-test` flag to `full` for demo scripts.
- Renamed `neg_weight` to `background_weight`.
- Renamed `log_dir` to `exp_name` and `log_root` to `log_dir` respectively.
- Unified local-net, global-net, and u-net under a single u-net structure.
- Simplified custom layer definitions.
- Removed multiple unnecessary custom layers and used tf.keras.layers whenever possible.
- Refactored B-spline interpolation to be independent of the backbone network; it is
  available only for DDF and DVF models.
### Fixed
- Fixed GPU usage when running remotely.
- Fixed LNCC loss regarding INF values.
- Removed loss weight checks to be more robust.
- Fixed import error under python 3.6.
- Fixed the residual module in the local-net architecture, remaining compatible with
  previous checkpoints.
- Fixed the broken link to the seminar video in the README.
## [0.1.2] - 2021-01-31
Release comment: This is mainly a bugfix release, although some of the tasks in
1.0.0-rc1 have been included in this release, with or without public-facing
accessibility (see details below).
### Added
- Added global NCC loss.
- Added the docs on registry for backbone models.
- Added backward compatible config parser.
- Added tests so that test coverage is 100%.
- Added config file docs with details on how new config works.
- Added DDF data augmentation.
- Added the registry for backbone models and losses.
- Added pylint with partial check (C0103,C0301,R1725,W0107,W9012,W9015) to CI.
- Added badges for code quality and maintainability.
- Added additional links (CoC, PyPI) and information (contributing, citing) to project
README.md.
- Added the CMIC seminar introducing DeepReg to the project README.md.
- Added deepreg_download entry point to access non-release folders required for Quick
Start.
### Changed
- Refactored optimizer configuration.
- Refactored affine transform data augmentation.
- Modified the implementation of the resampler to support a zero boundary condition.
- Refactored loss functions into classes.
- Used the CheckpointManager callback for saving and supported training restore.
- Changed distribute strategy to default for <= 1 GPU.
- Migrated from Travis-CI to GitHub Actions.
- Simplified configuration for backbone models and losses.
- Simplified contributing documentation.
- Made the kernel size uniform for the LNCC loss.
- Improved demo configurations with the updated pre-trained models for:
grouped_mask_prostate_longitudinal, paired_mrus_prostate, unpaired_us_prostate_cv,
grouped_mr_heart, unpaired_ct_lung, paired_ct_lung.
### Fixed
- Fixed several dead links in the documentation.
- Fixed a bug caused by a typo: when the image loss weight was zero, the label loss
  was not applied.
- Fixed warp CLI tool by saving outputs in Nifti1 format.
- Fixed optimiser storage and loading from checkpoints.
- Fixed bias initialization for theta in GlobalNet.
- Removed invalid `first` argument in DataLoader for sample_index generator.
- Fixed build error when downloading data from the private repository.
- Fixed typos for CLI tools in the documentation.
## [0.1.0] - 2020-11-02
### Added
- Added option to change the kernel size and type for LNCC image similarity loss.
- Added visualization tool for generating gifs from model outputs.
- Added the max_epochs argument for training to override the configuration.
- Added the log_root argument for training and prediction to customize the log file
location.
- Added more meaningful error messages for data loading.
- Added integration tests for all demos.
- Added environment.yml file for Conda environment creation.
- Added Dockerfile.
- Added the documentation about using UCL cluster with DeepReg.
### Changed
- Updated TensorFlow version to 2.3.1.
- Updated the pre-trained models in MR brain demo.
- Updated instruction on Conda environment creation.
- Updated the documentation regarding pre-commit and unit-testing.
- Updated the issue and pull-request templates.
- Updated the instructions for all demos.
- Updated pre-commit hooks version.
- Updated JOSS paper to address reviewers' comments.
- Migrated from travis-ci.org to travis-ci.com.
### Fixed
- Fixed a prediction error when the number of samples is not exactly divisible by the
  batch size.
- Fixed division by zero handling in multiple image/label losses.
- Fixed tensor comparison in unit tests and impacted tests.
- Removed normalization of DDF/DVF when saving in Nifti formats.
- Fixed invalid link in the quick start page.
## [0.1.0b1] - 2020-09-01
Initial beta release.
FROM tensorflow/tensorflow:2.3.1-gpu
# install miniconda
ENV CONDA_DIR=/root/miniconda3
ENV PATH=${CONDA_DIR}/bin:${PATH}
ARG PATH=${CONDA_DIR}/bin:${PATH}
RUN apt-get update
RUN apt-get install -y wget git && rm -rf /var/lib/apt/lists/*
RUN wget \
https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh \
&& mkdir /root/.conda \
&& bash Miniconda3-latest-Linux-x86_64.sh -b \
&& rm -f Miniconda3-latest-Linux-x86_64.sh
# directory for following operations
WORKDIR /app
# clone DeepReg
RUN git clone https://github.com/DeepRegNet/DeepReg.git
WORKDIR DeepReg
RUN git pull
# install conda env
RUN conda env create -f environment.yml \
&& conda init bash \
&& echo "conda activate deepreg" >> /root/.bashrc
# install deepreg
ENV CONDA_PIP="${CONDA_DIR}/envs/deepreg/bin/pip"
RUN ${CONDA_PIP} install -e .
include README.md
include requirements.txt
<p align="center">
<img src="https://raw.githubusercontent.com/DeepRegNet/DeepReg/main/docs/asset/deepreg_logo_purple.svg"
alt="deepreg_logo" title="DeepReg" width="200"/>
</p>
<table align="center">
<tr>
<td>
<b>Package</b>
</td>
<td>
<a href="https://opensource.org/licenses/Apache-2.0">
<img src="https://img.shields.io/badge/License-Apache%202.0-blue.svg" alt="License">
</a>
<a href="https://pypi.python.org/pypi/DeepReg/">
<img src="https://img.shields.io/pypi/v/deepreg.svg" alt="PyPI Version">
</a>
<a href="https://pypi.python.org/pypi/DeepReg/">
<img alt="PyPI - Python Version" src="https://img.shields.io/pypi/pyversions/deepreg">
</a>
<a href="https://pepy.tech/project/deepreg">
<img src="https://static.pepy.tech/personalized-badge/deepreg?period=total&units=none&left_color=grey&right_color=orange&left_text=Downloads"
alt="PyPI downloads">
</a>
</td>
</tr>
<tr>
<td>
<b>Documentation</b>
</td>
<td>
<a href="https://deepreg.readthedocs.io/en/latest/?badge=latest">
<img src="https://readthedocs.org/projects/deepreg/badge/?version=latest" alt="Documentation Status">
</a>
</td>
</tr>
<tr>
<td>
<b>Code</b>
</td>
<td>
<a href="https://github.com/DeepRegNet/DeepReg/actions?query=workflow%3A%22Unit+Test%22">
<img src="https://github.com/deepregnet/deepreg/workflows/Unit%20Test/badge.svg?branch=main" alt="Unit Test">
</a>
<a href="https://github.com/DeepRegNet/DeepReg/actions?query=workflow%3A%22Integration+Test%22">
<img src="https://github.com/deepregnet/deepreg/workflows/Integration%20Test/badge.svg?branch=main" alt="Integration Test">
</a>
<a href="https://codecov.io/github/DeepRegNet/DeepReg">
<img src="https://codecov.io/gh/DeepRegNet/DeepReg/branch/main/graph/badge.svg" alt="Coverage Status">
</a>
<a href="https://github.com/psf/black">
<img src="https://img.shields.io/badge/code%20style-black-000000.svg" alt="Code Style">
</a>
<a href="https://scrutinizer-ci.com/g/DeepRegNet/DeepReg/">
<img src="https://scrutinizer-ci.com/g/DeepRegNet/DeepReg/badges/quality-score.png" alt="Code Quality">
</a>
<a href="https://codeclimate.com/github/DeepRegNet/DeepReg/maintainability">
<img src="https://api.codeclimate.com/v1/badges/65245e28aa8f2cd7c6b6/maintainability" alt="Code Maintainability">
</a>
</td>
</tr>
<tr>
<td>
<b>Papers</b>
</td>
<td>
<a href="https://joss.theoj.org/papers/7e6de472bc82a70d7618e23f618960b3"><img
src="https://joss.theoj.org/papers/7e6de472bc82a70d7618e23f618960b3/status.svg"
alt="JOSS Paper"></a>
<a href="https://zenodo.org/badge/latestdoi/269365590"><img src="https://zenodo.org/badge/269365590.svg"
alt="DOI"></a>
</td>
</tr>
</table>
# DeepReg
**DeepReg is a freely available, community-supported open-source toolkit for research
and education in medical image registration using deep learning.**
- TensorFlow 2-based for efficient training and rapid deployment;
- Implementing major unsupervised and weakly-supervised algorithms, with their
combinations and variants;
- Focusing on growing and diverse clinical applications, with all DeepReg Demos using
open-accessible data;
- Simple built-in command line tools requiring minimal programming and scripting;
- Open, permissive, and research-and-education-driven, under the Apache 2.0 license.
---
## Getting Started
- [DeepReg.Net](http://deepreg.net/)
- [Documentation, Tutorials, and Quick Start](https://deepreg.readthedocs.io/)
- [Medical Image Registration Demos using DeepReg](https://deepreg.readthedocs.io/en/latest/demo/introduction.html)
- [Issue Tracker](https://github.com/DeepRegNet/DeepReg/issues/new/choose)
## Contributing
Get involved, and help make DeepReg better! We want your help - **Really**.
**Being a contributor doesn't just mean writing code.** Equally important to the
open-source process is writing or proof-reading documentation, suggesting or
implementing tests, or giving feedback about the project. You might see the errors and
assumptions that have been glossed over. If you can write any code at all, you can
contribute code to open-source. We are constantly trying out new skills, making
mistakes, and learning from those mistakes. That's how we all improve, and we are happy
to help others learn with us.
### Code of Conduct
This project is released with a
[Code of Conduct](https://github.com/DeepRegNet/DeepReg/blob/main/docs/CODE_OF_CONDUCT.md).
By participating in this project, you agree to abide by its terms.
### Where Should I Start?
For guidance on making a contribution to DeepReg, see our
[Contribution Guidelines](https://deepreg.readthedocs.io/en/latest/contributing/guide.html).
Have a registration application with openly accessible data? Consider
[contributing a DeepReg Demo](https://deepreg.readthedocs.io/en/latest/contributing/demo.html).
## MICCAI 2020 Educational Challenge
Our [MICCAI Educational Challenge](https://miccai-sb.github.io/materials.html)
submission on DeepReg is an Award Winner!
Check it out
[here](https://github.com/DeepRegNet/DeepReg/blob/main/docs/Intro_to_Medical_Image_Registration.ipynb) -
you can also
[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/DeepRegNet/DeepReg/blob/main/docs/Intro_to_Medical_Image_Registration.ipynb)
## Overview Video
Members of the DeepReg dev team presented "The Road to DeepReg" at the Centre for
Medical Image Computing (CMIC) seminar series at University College London on 4
November 2020. You can access the talk
[here](https://www.youtube.com/watch?v=jDEyWXZM3CE&feature=youtu.be).
## Citing DeepReg
DeepReg is research software, made by a
[team of academic researchers](https://deepreg.readthedocs.io/en/latest/#contributors).
Citations and use of our software help us justify the effort which has gone into, and
will keep going into, maintaining and growing this project.
If you have used DeepReg in your research, please consider citing us:
> Fu _et al._, (2020). DeepReg: a deep learning toolkit for medical image registration.
> _Journal of Open Source Software_, **5**(55), 2705,
> https://doi.org/10.21105/joss.02705
Or with BibTeX:
```
@article{Fu2020,
doi = {10.21105/joss.02705},
url = {https://doi.org/10.21105/joss.02705},
year = {2020},
publisher = {The Open Journal},
volume = {5},
number = {55},
pages = {2705},
author = {Yunguan Fu and Nina Montaña Brown and Shaheer U. Saeed and Adrià Casamitjana and Zachary M. C. Baum and Rémi Delaunay and Qianye Yang and Alexander Grimwood and Zhe Min and Stefano B. Blumberg and Juan Eugenio Iglesias and Dean C. Barratt and Ester Bonmati and Daniel C. Alexander and Matthew J. Clarkson and Tom Vercauteren and Yipeng Hu},
title = {DeepReg: a deep learning toolkit for medical image registration},
journal = {Journal of Open Source Software}
}
```
train:
method: "ddf" # ddf / dvf / conditional
backbone:
name: "global"
num_channel_initial: 1
depth: 4
loss:
image:
name: "lncc"
weight: 0.1
label:
weight: 1.0
name: "dice"
scales: [0, 1, 2, 4, 8, 16, 32]
regularization:
weight: 0.5
name: "bending"
preprocess:
batch_size: 2
shuffle_buffer_num_batch: 1
num_parallel_calls: -1 # number of elements to process asynchronously in parallel during preprocessing; -1 means unlimited; heuristically, set it to the number of available CPU cores
optimizer:
name: "Adam"
learning_rate: 1.0e-5
epochs: 2
save_period: 2
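As a reading aid for the config above: the image, label, and regularization sections each contribute one weighted term to the training objective. A schematic sketch of the combination (illustrative only, not DeepReg's training code):

```python
# Weights mirror the YAML above; the loss values are made-up placeholders.
image_weight, label_weight, reg_weight = 0.1, 1.0, 0.5

def total_loss(image_loss: float, label_loss: float, reg_loss: float) -> float:
    # total = weighted sum of the three terms defined in the config
    return image_weight * image_loss + label_weight * label_loss + reg_weight * reg_loss

print(total_loss(image_loss=0.8, label_loss=0.3, reg_loss=0.05))  # ~0.405
```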
train:
method: "conditional" # ddf / dvf / conditional
train:
method: "ddf" # ddf / dvf / conditional
backbone:
name: "local"
num_channel_initial: 1
extract_levels: [0, 1, 2, 3, 4]
loss:
image:
name: "lncc"
weight: 0.1
label:
weight: 1.0
name: "dice"
scales: [0, 1, 2, 4, 8, 16, 32]
regularization:
weight: 0.5
name: "bending"
preprocess:
data_augmentation:
name: "affine"
batch_size: 2
shuffle_buffer_num_batch: 1
num_parallel_calls: -1 # number of elements to process asynchronously in parallel during preprocessing; -1 means unlimited; heuristically, set it to the number of available CPU cores
optimizer:
name: "Adam"
learning_rate: 1.0e-5
epochs: 2
save_period: 2
dataset:
moving_image_shape: [16, 16, 16]
fixed_image_shape: [8, 8, 8]
train:
epochs: 1
save_period: 1
dataset:
image_shape: [16, 16, 16]
train:
epochs: 1
save_period: 1
train:
method: "dvf" # ddf / dvf / conditional
dataset:
train:
dir: "data/test/h5/grouped/train"
format: "h5"
labeled: true
valid:
dir: "data/test/h5/grouped/test"
format: "h5"
labeled: true
test:
dir: "data/test/h5/grouped/test"
format: "h5"
labeled: true
type: "grouped" # paired / unpaired / grouped
image_shape: [16, 16, 16]
intra_group_prob: 1
intra_group_option: "forward"
sample_image_in_group: true
dataset:
dir:
train: "demos/grouped_mr_heart/dataset/train"
valid: "demos/grouped_mr_heart/dataset/val"
test: "demos/grouped_mr_heart/dataset/test"
format: "nifti"
type: "grouped" # paired / unpaired / grouped
labeled: false
intra_group_prob: 1
intra_group_option: "unconstrained" # forward / backward / unconstrained
sample_image_in_group: true
image_shape: [128, 128, 28]
train:
# define neural network structure
model:
method: "ddf" # the registration method, value should be ddf / dvf / conditional
backbone: "local" # value should be local / global / unet
local:
num_channel_initial: 32 # number of initial channels in the local net; controls the size of the network
extract_levels: [0, 1, 2, 3, 4]
# define the loss function for training
loss:
dissimilarity:
image:
name: "gmi"
weight: 1.0
label:
weight: 0.0
name: "multi_scale"
multi_scale:
loss_type: "dice"
loss_scales: [0, 1, 2, 4, 8, 16]
single_scale:
loss_type: "cross-entropy"
regularization:
weight: 0.25 # weight of regularization loss
energy_type: "gradient-l2" # value should be bending / gradient-l1 / gradient-l2
# define the optimizer
optimizer:
name: "adam" # value should be adam / sgd / rms
adam:
learning_rate: 1.0e-4
preprocess:
batch_size: 4
shuffle_buffer_num_batch: 1 # shuffle_buffer_size = batch_size * shuffle_buffer_num_batch
# other training hyper-parameters
epochs: 4000 # number of training epochs
save_period: 1000 # the model will be saved every `save_period` epochs.
dataset:
train:
dir: "data/test/nifti/grouped/train"
format: "nifti"
labeled: true
valid:
dir: "data/test/nifti/grouped/test"
format: "nifti"
labeled: true
test:
dir: "data/test/nifti/grouped/test"
format: "nifti"
labeled: true
type: "grouped" # paired / unpaired / grouped
image_shape: [16, 16, 16]
intra_group_prob: 1
intra_group_option: "forward"
sample_image_in_group: true
dataset:
train:
labeled: true
valid:
labeled: true
test:
labeled: true
dataset:
train:
dir: "data/test/h5/paired/train"
format: "h5"
labeled: true
valid:
dir: "data/test/h5/paired/test"
format: "h5"
labeled: true
test:
dir: "data/test/h5/paired/test"
format: "h5"
labeled: true
type: "paired" # paired / unpaired / grouped
moving_image_shape: [16, 16, 16]
fixed_image_shape: [8, 8, 8]
dataset:
train:
dir: "data/test/nifti/paired/train"
format: "nifti"
labeled: true
valid:
dir: "data/test/nifti/paired/test"
format: "nifti"
labeled: true
test:
dir: "data/test/nifti/paired/test"
format: "nifti"
labeled: true
type: "paired" # paired / unpaired / grouped
moving_image_shape: [16, 16, 16]
fixed_image_shape: [8, 8, 8]
dataset:
train:
labeled: false
valid:
labeled: false
test:
labeled: false
dataset:
train:
dir: "data/test/h5/unpaired/train"
format: "h5"
labeled: true
valid:
dir: "data/test/h5/unpaired/test"
format: "h5"
labeled: true
test:
dir: "data/test/h5/unpaired/test"
format: "h5"
labeled: true
type: "unpaired" # paired / unpaired / grouped
image_shape: [16, 16, 16]
dataset:
train:
dir: "data/test/nifti/unpaired/train"
format: "nifti"
labeled: true
valid:
dir: "data/test/nifti/unpaired/test"
format: "nifti"
labeled: true
test:
dir: "data/test/nifti/unpaired/test"
format: "nifti"
labeled: true
type: "unpaired" # paired / unpaired / grouped
image_shape: [16, 16, 16]
dataset:
train:
dir:
- "data/test/nifti/unpaired/train"
- "data/test/nifti/unpaired/test"
format: "nifti"
labeled: true
valid:
dir: "data/test/nifti/unpaired/test"
format: "nifti"
labeled: true
test:
dir: "data/test/nifti/unpaired/test"
format: "nifti"
labeled: true
type: "unpaired" # paired / unpaired / grouped
image_shape: [16, 16, 16]
dataset:
train:
dir: "data/test/nifti/unpaired/train"
format: "nifti"
labeled: true
valid:
test:
dir: "data/test/nifti/unpaired/test"
format: "nifti"
labeled: true
type: "unpaired" # paired / unpaired / grouped
image_shape: [16, 16, 16]
dataset:
train:
dir: "data/test/nifti/unpaired/train"
format: "nifti"
labeled: true
valid:
dir: "data/test/nifti/unpaired/test"
format: "nifti"
labeled: true
test:
dir: "data/test/nifti/unpaired/test"
format: "nifti"
labeled: true
type: "unpaired" # paired / unpaired / grouped
image_shape: [16, 16, 16]
train:
# define neural network structure
method: "ddf" # options include "ddf", "dvf", "conditional"
backbone:
name: "local" # options include "local", "unet" and "global"
num_channel_initial: 1 # number of initial channels in the local net; controls the size of the network
extract_levels: [0, 1, 2, 3, 4] # resolution levels used to compose the output DDF; e.g. skipping the highest resolution level can be specified as [0, 1, 2, 3]
# define the loss function for training
loss:
image:
name: "lncc" # other options include "lncc", "ssd" and "gmi", for local normalised cross correlation,
weight: 0.1
label:
weight: 1.0
name: "dice" # options include "dice", "cross-entropy", "mean-squared", "generalised_dice" and "jaccard"
scales: [0, 1, 2, 4, 8, 16, 32]
regularization:
weight: 0.5 # weight of regularization loss
name: "bending" # options include "bending", "gradient"
# define the optimizer
optimizer:
name: "Adam"
learning_rate: 1.0e-5
# define the hyper-parameters for preprocessing
preprocess:
data_augmentation:
name: "affine"
batch_size: 2
shuffle_buffer_num_batch: 1 # shuffle_buffer_size = batch_size * shuffle_buffer_num_batch
num_parallel_calls: -1 # number of elements to process asynchronously in parallel during preprocessing; -1 means unlimited; heuristically, set it to the number of available CPU cores
# other training hyper-parameters
epochs: 2 # number of training epochs
save_period: 2 # the model will be saved every `save_period` epochs.
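One note on `num_parallel_calls: -1` in the preprocess section above: in `tf.data`, -1 is the AUTOTUNE sentinel, which lets the runtime pick the parallelism level; whether DeepReg forwards the value straight to `tf.data` is an assumption here. A small sketch, assuming TensorFlow is installed:

```python
import tensorflow as tf

# -1 is tf.data's AUTOTUNE sentinel (standard TensorFlow behaviour).
assert tf.data.experimental.AUTOTUNE == -1

ds = tf.data.Dataset.range(8).map(
    lambda x: x + 1, num_parallel_calls=tf.data.experimental.AUTOTUNE
)
print(list(ds.as_numpy_iterator()))  # [1, 2, 3, 4, 5, 6, 7, 8]
```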