
Releases: albumentations-team/albumentations

Albumentations 1.4.14 Release Notes

16 Aug 00:22
fa2a6d1
  • Support Our Work
  • Transforms
  • Improvements and Bug Fixes

Support Our Work

  1. Love the library? You can contribute to its development by becoming a sponsor for the library. Your support is invaluable, and every contribution makes a difference.
  2. Haven't starred our repo yet? Show your support with a ⭐! It's just one mouse click away.
  3. Got ideas or facing issues? We'd love to hear from you. Share your thoughts in our issues or join the conversation on our Discord server

Transforms

Added GridElasticDeform transform


A grid-based elastic deformation implementation for Albumentations.

This class applies elastic transformations using a grid-based approach.
The granularity and intensity of the distortions can be controlled using
the dimensions of the overlaying distortion grid and the magnitude parameter.
Larger grid sizes result in finer, less severe distortions.

Args:
    num_grid_xy (tuple[int, int]): Number of grid cells along the width and height.
        Specified as (grid_width, grid_height). Each value must be greater than 1.
    magnitude (int): Maximum pixel-wise displacement for distortion. Must be greater than 0.
    interpolation (int): Interpolation method to be used for the image transformation.
        Default: cv2.INTER_LINEAR
    mask_interpolation (int): Interpolation method to be used for mask transformation.
        Default: cv2.INTER_NEAREST
    p (float): Probability of applying the transform. Default: 1.0.

Targets:
    image, mask

Image types:
    uint8, float32

Example:
    >>> transform = GridElasticDeform(num_grid_xy=(4, 4), magnitude=10, p=1.0)
    >>> result = transform(image=image, mask=mask)
    >>> transformed_image, transformed_mask = result['image'], result['mask']

Note:
    This transformation is particularly useful for data augmentation in medical imaging
    and other domains where elastic deformations can simulate realistic variations.

by @4pygmalion

PadIfNeeded

Reflection padding now works correctly with bounding boxes and keypoints.

by @ternaus

RandomShadow

  • Works with any number of channels
  • The shadow intensity is no longer a hardcoded constant and can now be sampled

Simulates shadows for the image by reducing the brightness of the image in shadow regions.

Args:
    shadow_roi (tuple): region of the image where shadows
        will appear (x_min, y_min, x_max, y_max). All values should be in range [0, 1].
    num_shadows_limit (tuple): Lower and upper limits for the possible number of shadows.
        Default: (1, 2).
    shadow_dimension (int): number of edges in the shadow polygons. Default: 5.
    shadow_intensity_range (tuple): Range for the shadow intensity.
        Should be two float values between 0 and 1. Default: (0.5, 0.5).
    p (float): probability of applying the transform. Default: 0.5.

Targets:
    image

Image types:
    uint8, float32

Reference:
    https://github.com/UjjwalSaxena/Automold--Road-Augmentation-Library
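The core idea, darkening a region by a sampled intensity, can be sketched in plain NumPy (an illustration only, not the library's implementation; a rectangle stands in for the sampled shadow polygons, and the function name is hypothetical):

```python
import numpy as np

def apply_shadow(image: np.ndarray, region: tuple, intensity: float) -> np.ndarray:
    """Darken a rectangular region of `image` by `intensity` (0..1)."""
    x_min, y_min, x_max, y_max = region
    out = image.astype(np.float32).copy()
    # intensity=0.5 halves the brightness inside the shadow region
    out[y_min:y_max, x_min:x_max] *= 1.0 - intensity
    return out.astype(image.dtype)

image = np.full((100, 100, 3), 200, dtype=np.uint8)
shadowed = apply_shadow(image, region=(10, 10, 50, 50), intensity=0.5)
print(shadowed[20, 20, 0], shadowed[80, 80, 0])  # 100 200
```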

by @JonasKlotz

Improvements and Bug Fixes

  • BugFix in Affine. Now fit_output=True works correctly with bounding boxes. by @ternaus
  • BugFix in ColorJitter. By @maremun
  • Speedup in CoarseDropout. By @thomaoc1
  • Check for updates does not use logger anymore. by @ternaus
  • Bugfix in HistogramMatching. Previously it returned an array of ones; now it works as expected. by @ternaus

1.4.13

05 Aug 23:00
39895a8

What's Changed

Full Changelog: 1.4.12...1.4.13

Albumentations 1.4.12 Release Notes

27 Jul 00:28
bc42cc7
  • Support Our Work
  • Transforms
  • Core Functionality
  • Deprecations
  • Improvements and Bug Fixes

Support Our Work

  1. Love the library? You can contribute to its development by becoming a sponsor for the library. Your support is invaluable, and every contribution makes a difference.
  2. Haven't starred our repo yet? Show your support with a ⭐! It's just one mouse click away.
  3. Got ideas or facing issues? We'd love to hear from you. Share your thoughts in our issues or join the conversation on our Discord server

Transforms

Added TextImage transform

Allows adding text on top of images. Works with np.uint8 and np.float32 images with any number of channels.

Additional functionalities:

  • Insert random stopwords
  • Delete random words
  • Swap word order

Example notebook


Core functionality

Added images target

You can now apply the same transform to a list of images of the same shape, not just one image.

Use cases:

  • Video: Split video into frames and apply the transform.
  • Slices of 3D volumes: For example, in medical imaging.
import albumentations as A

transform = A.Compose([A.Affine(p=1)])

transformed = transform(images=<list of images>)

transformed_images = transformed["images"]

Note:
You can apply the same transform to any number of images, masks, bounding boxes, and sets of keypoints using the additional_targets functionality (see the notebook with examples).

Contributors @ternaus, @ayasyrev

get_params_dependent_on_data

Relevant for those who build custom transforms.

Old way

@property
def targets_as_params(self) -> list[str]:
    return <list of targets>

def get_params_dependent_on_targets(self, params: dict[str, Any]) -> dict[str, np.ndarray]:
    image = params["image"]
    ...

New way

def get_params_dependent_on_data(self, params: dict[str, Any], data: dict[str, Any]) -> dict[str, np.ndarray]:
    image = data["image"]

Contributor @ayasyrev

Added shape to params

Old way:

def get_params_dependent_on_targets(self, params: dict[str, Any]) -> dict[str, np.ndarray]:
    image = params["image"]
    shape = image.shape

New way:

def get_params_dependent_on_data(self, params: dict[str, Any], data: dict[str, Any]) -> dict[str, np.ndarray]:
    shape = params["shape"]

Contributor @ayasyrev
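As an illustration, here is a stand-in class (not the real albumentations base class; the class name and returned keys are hypothetical) showing how the new hook separates shape metadata in params from image-dependent values in data:

```python
from typing import Any

import numpy as np

class MeanEstimator:
    """Stand-in for a custom transform; only the new hook is shown."""

    def get_params_dependent_on_data(
        self, params: dict[str, Any], data: dict[str, Any]
    ) -> dict[str, Any]:
        # Shape metadata now arrives via params["shape"] ...
        height, width = params["shape"][:2]
        # ... while image-dependent values are read from data.
        return {"mean": float(data["image"].mean()), "height": height, "width": width}

image = np.arange(12, dtype=np.float32).reshape(3, 4)
out = MeanEstimator().get_params_dependent_on_data(
    params={"shape": image.shape}, data={"image": image}
)
print(out)  # {'mean': 5.5, 'height': 3, 'width': 4}
```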

Deprecations

Elastic Transform

Deprecated parameter alpha_affine in ElasticTransform. To have Affine effects on your image, use the Affine transform.

Contributor @ternaus

Improvements and Bug Fixes

  • Removed dependency on scikit-learn. Contributor: @ternaus
  • Added instructions on how to disable the new version availability message. Contributor: @ternaus
  • Bugfix in constant padding with nonzero values in CropAndPad, Affine, PadIfNeeded, and Rotate. Contributor: @ternaus

Albumentations 1.4.11 Release Notes

05 Jul 02:10
bd644a3
  • Support our work
  • Transforms
  • Core functionality
  • Deprecations
  • Improvements and bug fixes

Support Our Work

  1. Love the library? You can contribute to its development by becoming a sponsor for the library. Your support is invaluable, and every contribution makes a difference.
  2. Haven't starred our repo yet? Show your support with a ⭐! It's just one mouse click away.
  3. Got ideas or facing issues? We'd love to hear from you. Share your thoughts in our issues or join the conversation on our Discord server for Albumentations

Transforms

Added OverlayElements transform

Allows pasting a set of images and corresponding masks onto an image.
It is not a full CopyAndPaste, as masks, bounding boxes, and keypoints are not yet supported, but it is a step in that direction.

Example Notebook


Affine

Added balanced sampling for scale_limit

From FAQ:

The default scaling logic in RandomScale, ShiftScaleRotate, and Affine transformations is biased towards upscaling.

For example, if scale_limit = (0.5, 2), a user might expect that the image will be scaled down in half of the cases and scaled up in the other half. However, in reality, the image will be scaled up in 75% of the cases and scaled down in only 25% of the cases. This is because the default behavior samples uniformly from the interval [0.5, 2], and the interval [0.5, 1] is three times smaller than [1, 2].

To achieve balanced scaling, you can use Affine with balanced_scale=True, which ensures that the probability of scaling up and scaling down is equal.

balanced_scale_transform = A.Compose([A.Affine(scale=(0.5, 2), balanced_scale=True)])

by @ternaus

RandomSizedBBoxSafeCrop

Added support for keypoints

by @ternaus

BBoxSafeRandomCrop

Added support for keypoints

by @ternaus

RandomToneCurve

  1. Now can sample noise per channel
  2. Works with any number of channels
  3. Now works not just with uint8, but with float32 images as well

by @zakajd

ISONoise

  1. BugFix
  2. Now works not just with uint8, but with float32 images as well

by @ternaus

Core

Added strict parameter to Compose

If strict=True, only expected targets can be passed.
If strict=False, the user can pass data with extra keys; such data will not be affected by the transforms.

The request came from users with pipelines of the form:

transform = A.Compose([....])

data = transform(**data)

by @ayasyrev

Refactoring

The Crop module was heavily refactored; all tests and checks pass, but we will see.

Deprecations

Grid Dropout

Old way:

GridDropout(
    holes_number_x=XXX,
    holes_number_y=YYY,
    unit_size_min=ZZZ,
    unit_size_max=PPP
)

New way:

GridDropout(
    holes_number_xy=(XXX, YYY),
    unit_size_range=(ZZZ, PPP)
)

by @ternaus

RandomSunFlare

Old way:

RandomSunFlare(
    num_flare_circles_lower=XXX,
    num_flare_circles_upper=YYY
)

New way:

RandomSunFlare(num_flare_circles_range=(XXX, YYY))

Bugfixes

  • Bugfix in ISONoise, as it returned zeros. by @ternaus
  • Bugfix in Affine: during rotation, the image, mask, and keypoints shared one rotation center while the bounding boxes used another, so we need to create two separate affine matrices. by @ternaus
  • Small fix in an error message by @philipp-fischer
  • Bugfix affecting many transforms where users specified the probability positionally rather than as p=number. For VerticalFlip(0.5) you might expect a 50% chance, but 0.5 was assigned not to p but to always_apply, which meant the transform was always applied. by @ayasyrev

Hotfix release with fixes for GaussNoise

19 Jun 22:13
a07f249

Hotfix release that addresses issues introduced in 1.4.9

There were two issues in GaussNoise that this release addresses:

  • The default value of noise_scale_factor was 0.5, which differs from the behavior before version 1.4.9. The default is now 1, which means random noise is created for every pixel independently.
  • Noise was truncated to gauss >= 0 before being added to the image. Fixed.
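The truncation issue is easy to see in plain Python (illustrative only, not the library code): clipping Gaussian noise at zero shifts its mean from 0 to roughly sigma/sqrt(2*pi), brightening images on average:

```python
import random

random.seed(0)
samples = [random.gauss(0.0, 10.0) for _ in range(10_000)]

# The 1.4.9 bug effectively kept only the non-negative part of the noise.
truncated = [max(0.0, g) for g in samples]

mean_symmetric = sum(samples) / len(samples)      # close to 0
mean_truncated = sum(truncated) / len(truncated)  # close to 10 / sqrt(2*pi) ≈ 4
print(mean_symmetric, mean_truncated)
```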

Albumentations 1.4.9 Release Notes

18 Jun 20:15
ec8cb70
  • Support our work
  • New transforms
  • Integrations
  • Speedups
  • Deprecations
  • Improvements and bug fixes

Support Our Work

  1. Love the library? You can contribute to its development by becoming a sponsor for the library. Your support is invaluable, and every contribution makes a difference.
  2. Haven't starred our repo yet? Show your support with a ⭐! It's just one mouse click away.
  3. Got ideas or facing issues? We'd love to hear from you. Share your thoughts in our issues or join the conversation on our Discord server for Albumentations

Transforms

PlanckianJitter

A new transform based on a research paper.

Statements from the paper on why PlanckianJitter is superior to ColorJitter:

  1. Realistic Color Variations: PlanckianJitter applies physically realistic illuminant variations based on Planck's Law for black-body radiation. This leads to more natural and realistic variations in chromaticity compared to the arbitrary changes in hue, saturation, brightness, and contrast applied by ColorJitter.

  2. Improved Representation for Color-Sensitive Tasks: The transformations in PlanckianJitter maintain the ability to discriminate image content based on color information, making it particularly beneficial for tasks where color is a crucial feature, such as classifying natural objects like birds or flowers. ColorJitter, on the other hand, can significantly alter colors, potentially degrading the quality of learned color features.

  3. Robustness to Illumination Changes: PlanckianJitter produces models that are robust to illumination changes commonly observed in real-world images. This robustness is advantageous for applications where lighting conditions can vary widely.

  4. Enhanced Color Sensitivity: Models trained with PlanckianJitter show a higher number of color-sensitive neurons, indicating that these models retain more color information compared to those trained with ColorJitter, which tends to induce color invariance.

by @zakajd

GaussNoise

Added option to approximate GaussNoise.

Generating random noise for large images is slow, so we added a scaling factor for noise generation. The value should be in the range (0, 1]. When set to 1, noise is sampled for each pixel independently; for smaller values, noise is sampled at a reduced size and resized to fit the shape of the image. Smaller values make the transform much faster. Default: 0.5
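The approximation can be sketched as follows (NumPy only; nearest-neighbour repetition stands in for the resize the library performs, and the function name is illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

def approx_gauss_noise(shape: tuple, sigma: float, scale_factor: float) -> np.ndarray:
    """Sample noise at a reduced resolution and upscale it to `shape`."""
    h, w = shape
    small_h = max(1, int(h * scale_factor))
    small_w = max(1, int(w * scale_factor))
    # Fewer random draws: about (h * w * scale_factor**2) instead of (h * w).
    noise = rng.normal(0.0, sigma, size=(small_h, small_w))
    # Upscale back to the full image shape (ceil division keeps coverage).
    noise = np.repeat(noise, -(-h // small_h), axis=0)[:h]
    noise = np.repeat(noise, -(-w // small_w), axis=1)[:w]
    return noise

noise = approx_gauss_noise((100, 100), sigma=10.0, scale_factor=0.5)
print(noise.shape)  # (100, 100)
```

With scale_factor=1 this reduces to one independent sample per pixel; smaller values reuse each draw for a block of neighbouring pixels, which is why the transform gets faster.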

Integrations

Added integration with the Hugging Face Hub. Now you can save an augmentation pipeline to the Hub and load it later to reuse or share with others.

Notebook with documentation

import albumentations as A
import numpy as np

transform = A.Compose([
    A.RandomCrop(256, 256),
    A.HorizontalFlip(),
    A.RandomBrightnessContrast(),
    A.RGBShift(),
    A.Normalize(),
])

evaluation_transform = A.Compose([
    A.PadIfNeeded(256, 256),
    A.Normalize(),
])

transform.save_pretrained("qubvel-hf/albu", key="train")
# ^ this will save the transform to a directory "qubvel-hf/albu" with filename "albumentations_config_train.json"

transform.save_pretrained("qubvel-hf/albu", key="train", push_to_hub=True)
# ^ this will save the transform to a directory "qubvel-hf/albu" with filename "albumentations_config_train.json"
# + push the transform to the Hub to the repository "qubvel-hf/albu"

transform.push_to_hub("qubvel-hf/albu", key="train")
# ^ this will push the transform to the Hub to the repository "qubvel-hf/albu" (without saving it locally)

loaded_transform = A.Compose.from_pretrained("qubvel-hf/albu", key="train")
# ^ this will load the transform from the local folder if it exists, or from the Hub repository "qubvel-hf/albu"

evaluation_transform.save_pretrained("qubvel-hf/albu", key="eval", push_to_hub=True)
# ^ this will save the transform to a directory "qubvel-hf/albu" with filename "albumentations_config_eval.json"

loaded_evaluation_transform = A.Compose.from_pretrained("qubvel-hf/albu", key="eval")
# ^ this will load the transform from the Hub repository "qubvel-hf/albu"

by @qubvel

Speedups

These transforms should be faster for all image types, but we benchmarked only three-channel uint8 images.

Full updated benchmark

Deprecations

Deprecated always_apply

For years we had two parameters in constructors: probability and always_apply. The interplay between them is not always obvious, and intuitively always_apply=True should be equivalent to p=1.

always_apply is now deprecated. always_apply=True still works for now, but it will be removed in the future. Use p=1 instead.

by @ayasyrev

RandomFog

Updated interface for RandomFog

Old way:

RandomFog(fog_coef_lower=0.3, fog_coef_upper=1)

New way:

RandomFog(fog_coef_range=(0.3, 1))

by @ternaus

Improvements and bugfixes

Disable check for updates

When you import the Albumentations library, it checks whether the latest version is installed.

To disable this check, set the environment variable NO_ALBUMENTATIONS_UPDATE to 1.

by @lerignoux
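For example, in a shell session or a job script:

```shell
# Disable the Albumentations update check before importing the library
export NO_ALBUMENTATIONS_UPDATE=1
python train.py
```

(`train.py` is a placeholder for your own entry point.)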

Fix for deprecation warnings

For a set of transforms we were throwing deprecation warnings even when the modern version of the interface was used. Fixed. by @ternaus

Albucore

We moved low-level operations like add, multiply, normalize, etc. to a separate library: https://github.com/albumentations-team/albucore

There are numerous ways to perform such operations in OpenCV and NumPy, and there is no clear winner: the results depend on the image type.

A separate library gives us confidence that we picked the fastest version for each image type.

by @ternaus

Bugfixes

Various bugfixes by @ayasyrev @immortalCO

Albumentations 1.4.8 Release Notes

28 May 23:52
24a654c
  • Support our work
  • Documentation
  • Deprecations
  • Improvements and bug fixes

Support Our Work

  1. Love the library? You can contribute to its development by becoming a sponsor for the library. Your support is invaluable, and every contribution makes a difference.
  2. Haven't starred our repo yet? Show your support with a ⭐! It's just one mouse click away.
  3. Got ideas or facing issues? We'd love to hear from you. Share your thoughts in our issues or join the conversation on our Discord server for Albumentations

Documentation

Added links in the documentation to the HuggingFace UI for exploring hyperparameters visually.


Deprecations

RandomSnow

Updated interface:

Old way:

transform = A.Compose([A.RandomSnow(
  snow_point_lower=0.1,
  snow_point_upper=0.3,
  p=0.5
)])

New way:

transform = A.Compose([A.RandomSnow(
  snow_point_range=(0.1, 0.3),
  p=0.5
)])

by @MarognaLorenzo

RandomRain

Old way

transform = A.Compose([A.RandomRain(
  slant_lower=-10,
  slant_upper=10,
  p=0.5
)])

New way:

transform = A.Compose([A.RandomRain(
  slant_range=(-10, 10),
  p=0.5
)])

by @MarognaLorenzo

Improvements

Created a library with core functions, albucore, and moved a few helper functions there.
We need this library to ensure that our low-level functions:

  1. Are at least as fast as NumPy and OpenCV. For some functions it is possible to be faster than both.
  2. Are easier to debug.
  3. Can be used in other projects not related to Albumentations.

Bugfixes

  • Bugfix in check_for_updates. The pipeline no longer throws an error, regardless of why the update check fails.
  • Bugfix in RandomShadow. It no longer creates unexpected purple colors on bright white regions under the shadow overlay.
  • Bugfix in Compose. Compose([]) no longer throws an error and works as a NoOp by @ayasyrev
  • Bugfix in min_max normalization. It now returns 0 and not NaN on constant images. by @ternaus
  • Bugfix in CropAndPad. We can now sample pad/crop values for all sides with an interface like ((-0.1, -0.2), (-0.2, -0.3), (0.3, 0.4), (0.4, 0.5)) by @christian-steinmeyer
  • Small refactoring to decrease tech debt by @ternaus and @ayasyrev

Albumentations 1.4.7 Release Notes

14 May 00:43
f7f9596
  • Support our work
  • Documentation
  • Deprecations
  • Improvements and bug fixes

Support Our Work

  1. Love the library? You can contribute to its development by becoming a sponsor for the library. Your support is invaluable, and every contribution makes a difference.
  2. Haven't starred our repo yet? Show your support with a ⭐! It's just one mouse click away.
  3. Got ideas or facing issues? We'd love to hear from you. Share your thoughts in our issues or join the conversation on our Discord server for Albumentations

Documentation

Deprecations

ImageCompression

Old way:

transform = A.Compose([A.ImageCompression(
  quality_lower=75,
  quality_upper=100,
  p=0.5
)])

New way:

transform = A.Compose([A.ImageCompression(
  quality_range=(75, 100),  
  p=0.5
)])

by @MarognaLorenzo

Downscale

Old way:

transform = A.Compose([A.Downscale(
  scale_min=0.25,
  scale_max=1,
  interpolation= {"downscale": cv2.INTER_AREA, "upscale": cv2.INTER_CUBIC},
  p=0.5
)])

New way:

transform = A.Compose([A.Downscale(
  scale_range=(0.25, 1),
  interpolation_pair={"downscale": cv2.INTER_AREA, "upscale": cv2.INTER_CUBIC},
  p=0.5
)])

As of now both ways work and will provide the same result, but old functionality will be removed in later releases.

by @ternaus

Improvements

  • Bugfix in Blur.
  • Bugfix in bbox clipping: it may be unintuitive, but boxes should be clipped by height, width and not height - 1, width - 1 by @ternaus
  • Compose now accepts only the keys that are required; any extra key raises an error by @ayasyrev
  • In PadIfNeeded, if the value parameter is not None but the border mode is reflection, the border mode is changed to cv2.BORDER_CONSTANT by @ternaus

Albumentations 1.4.6 Release Notes

04 May 05:33
16a55ae

This is an out-of-schedule release with a fix for a bug introduced in version 1.4.5.

In version 1.4.5 a bug went unnoticed: if you used a pipeline consisting only of ImageOnly transforms but passed bounding boxes into it, you got an error.

If such a pipeline contained at least one non-ImageOnly transform, say HorizontalFlip or Crop, everything worked as expected.

We fixed the issue and added tests to be sure that it will not happen in the future.

Albumentations 1.4.5 Release Notes

03 May 01:40
8c86616
  • Support our work
  • Highlights
  • Deprecations
  • Improvements and bug fixes

Support Our Work

  1. Love the library? You can contribute to its development by becoming a sponsor for the library. Your support is invaluable, and every contribution makes a difference.
  2. Haven't starred our repo yet? Show your support with a ⭐! It's just one mouse click away.
  3. Got ideas or facing issues? We'd love to hear from you. Share your thoughts in our issues or join the conversation on our Discord server for Albumentations

Highlights

Bbox clipping

Before version 1.4.5 it was assumed that bounding boxes fed into the augmentation pipeline did not extend outside the image.

We have now added an option to clip boxes to the image size before augmenting them. This makes the pipeline more robust to inaccurate labeling.

Example:

This will fail if boxes extend outside of the image:

transform = A.Compose([
    A.HorizontalFlip(p=0.5)
], bbox_params=A.BboxParams(format='coco'))

Clipping bounding boxes to the image size:

transform = A.Compose([
    A.HorizontalFlip(p=0.5)
], bbox_params=A.BboxParams(format='coco', clip=True))
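The clipping itself amounts to intersecting each box with the image rectangle. A minimal sketch for COCO-format boxes (illustrative only, not the library's implementation; the function name is hypothetical):

```python
def clip_coco_bboxes(bboxes: list, height: int, width: int) -> list:
    """Clip (x_min, y_min, box_width, box_height) boxes to the image."""
    clipped = []
    for x, y, w, h in bboxes:
        x_min = min(max(x, 0), width)
        y_min = min(max(y, 0), height)
        x_max = min(max(x + w, 0), width)
        y_max = min(max(y + h, 0), height)
        clipped.append((x_min, y_min, x_max - x_min, y_max - y_min))
    return clipped

# A box hanging off the left and bottom edges of a 100x100 image:
print(clip_coco_bboxes([(-10, 5, 50, 200)], height=100, width=100))
# [(0, 5, 40, 95)]
```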

by @ternaus

SelectiveChannelTransform

Added SelectiveChannelTransform, which allows applying transforms to a selected set of channels.

For example, it can be helpful when working with multispectral images, where RGB is a subset of the overall multispectral stack, as is common with satellite imagery.

Example:

aug = A.Compose(
    [
        A.HorizontalFlip(p=0.5),
        A.SelectiveChannelTransform(
            transforms=[A.ColorJitter(p=0.5), A.ChromaticAberration(p=0.5)],
            channels=[1, 2, 18],
            p=1,
        ),
    ]
)

Here HorizontalFlip is applied to the whole multispectral image, while the pipeline of ColorJitter and ChromaticAberration is applied only to channels [1, 2, 18].
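The idea of applying a sub-pipeline to selected channels can be sketched with NumPy fancy indexing (illustrative only; the function name is hypothetical, and a 4-channel stack stands in for a multispectral image):

```python
import numpy as np

def apply_to_channels(image: np.ndarray, fn, channels: list) -> np.ndarray:
    """Apply `fn` only to the listed channels, leaving the rest untouched."""
    out = image.copy()
    # Fancy indexing extracts the selected channels as one sub-stack ...
    selected = out[..., channels]
    # ... which is transformed and written back in place.
    out[..., channels] = fn(selected)
    return out

stack = np.ones((8, 8, 4), dtype=np.float32)
result = apply_to_channels(stack, lambda x: x * 2, channels=[1, 3])
print(result[0, 0])  # [1. 2. 1. 2.]
```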

by @ternaus

Deprecations

CoarseDropout

Old way:

transform = A.Compose([A.CoarseDropout(
  min_holes=5,
  max_holes=8,
  min_width=3,
  max_width=12,
  min_height=4,
  max_height=5
)])

New way:

transform = A.Compose([A.CoarseDropout(
  num_holes_range=(5, 8),
  hole_width_range=(3, 12),
  hole_height_range=(4, 5)
)])

As of now both ways work and will provide the same result, but old functionality will be removed in later releases.

@ternaus

Improvements and bug fixes