Module saev.app.data

Functions

def get_datasets()
def get_img_v_raw(key: str, i: int) ‑> tuple[pyvips.vimage.Image, str]

Get a raw image and its processed class label from a dataset.

Returns

Tuple of the raw pyvips.Image and its class name.
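
A hedged usage sketch; the dataset key "imagenet" and index 0 are illustrative assumptions, not taken from the source:

img_v, label = get_img_v_raw("imagenet", 0)
print(label, img_v.width, img_v.height)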

def pil_to_vips(img_p: PIL.Image.Image) ‑> pyvips.vimage.Image

Convert a PIL Image to a pyvips Image.
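
A minimal sketch of one common PIL-to-vips conversion route, assuming an RGB input (the actual helper may handle other modes differently):

import pyvips
import PIL.Image

def pil_to_vips_sketch(img_p: PIL.Image.Image) -> pyvips.Image:
    # Ensure 3-band RGB, then wrap the raw bytes as an 8-bit (uchar) vips image.
    img_p = img_p.convert("RGB")
    return pyvips.Image.new_from_memory(
        img_p.tobytes(), img_p.width, img_p.height, 3, "uchar"
    )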

def to_sized(img_v_raw: pyvips.vimage.Image, min_px: int, crop_px: tuple[int, int]) ‑> pyvips.vimage.Image

Convert raw vips image to standard model input size (resize + crop).
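
A hedged sketch of the resize-then-center-crop logic, assuming min_px is the target short side and crop_px is the (width, height) of the final crop; the real implementation may differ:

import pyvips

def to_sized_sketch(img_v_raw: pyvips.Image, min_px: int, crop_px: tuple[int, int]) -> pyvips.Image:
    # Scale so the short side equals min_px, preserving aspect ratio.
    scale = min_px / min(img_v_raw.width, img_v_raw.height)
    img_v = img_v_raw.resize(scale)
    # Center-crop to the requested size.
    crop_w, crop_h = crop_px
    left = (img_v.width - crop_w) // 2
    top = (img_v.height - crop_h) // 2
    return img_v.crop(left, top, crop_w, crop_h)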

def vips_to_base64(img_v: pyvips.vimage.Image) ‑> str
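
No docstring is given; the name suggests it encodes a vips image as a base64 string for inline display. A hedged sketch, with the WebP format chosen arbitrarily here (the real encoder may use PNG or JPEG):

import base64

import pyvips

def vips_to_base64_sketch(img_v: pyvips.Image) -> str:
    # Encode to an in-memory buffer, then base64 so it can be embedded in HTML or JSON.
    buf = img_v.write_to_buffer(".webp")
    return base64.b64encode(buf).decode("ascii")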

Classes

class VipsImageFolder (root: str,
transform: Callable | None = None,
target_transform: Callable | None = None)

Clone of ImageFolder that returns pyvips.Image instead of PIL.Image.Image.

Source code
class VipsImageFolder(torchvision.datasets.ImageFolder):
    """
    Clone of ImageFolder that returns pyvips.Image instead of PIL.Image.Image.
    """

    def __init__(
        self,
        root: str,
        transform: typing.Callable | None = None,
        target_transform: typing.Callable | None = None,
    ):
        super().__init__(
            root,
            transform=transform,
            target_transform=target_transform,
            loader=self._vips_loader,
        )

    @staticmethod
    def _vips_loader(path: str) -> pyvips.Image:
        """Load an image from disk as a pyvips Image."""
        image = pyvips.Image.new_from_file(path, access="random")
        return image

    def __getitem__(self, index: int) -> dict[str, object]:
        """
        Args:
            index: Index

        Returns:
            dict with keys 'image', 'index', 'target' and 'label'.
        """
        path, target = self.samples[index]
        sample = self.loader(path)
        if self.transform is not None:
            sample = self.transform(sample)
        if self.target_transform is not None:
            target = self.target_transform(target)

        return {
            "image": sample,
            "target": target,
            "label": self.classes[target],
            "index": index,
        }

Ancestors

  • torchvision.datasets.folder.ImageFolder
  • torchvision.datasets.folder.DatasetFolder
  • torchvision.datasets.vision.VisionDataset
  • torch.utils.data.dataset.Dataset
  • typing.Generic
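
Usage sketch; the dataset root is a hypothetical path and the to_sized parameters are illustrative:

ds = VipsImageFolder(
    root="/data/imagenet/val",  # hypothetical path
    transform=lambda im: to_sized(im, 256, (224, 224)),  # illustrative sizes
)
sample = ds[0]
print(sample["index"], sample["target"], sample["label"], sample["image"].width)
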
class VipsImagenet (cfg: ImagenetDataset,
*,
img_transform=None)

Clone of Imagenet that returns pyvips.Image instead of PIL.Image.Image.

Inherited torch.utils.data.Dataset documentation: an abstract class representing a map-style dataset. All datasets that map keys to data samples should subclass it and overwrite __getitem__ to fetch a sample for a given key. Subclasses may also overwrite __len__, which many torch.utils.data.Sampler implementations and the default torch.utils.data.DataLoader options expect to return the dataset size, and may implement __getitems__, which accepts a list of batch indices and returns a list of samples to speed up batched loading.

Note

torch.utils.data.DataLoader by default constructs an index sampler that yields integral indices. To make it work with a map-style dataset with non-integral indices/keys, a custom sampler must be provided.

Source code
@beartype.beartype
class VipsImagenet(activations.Imagenet):
    def __getitem__(self, i):
        example = self.hf_dataset[i]
        example["index"] = i

        example["image"] = example["image"].convert("RGB")
        # Convert to pyvips
        example["image"] = pyvips.Image.new_from_memory(
            example["image"].tobytes(),
            example["image"].width,
            example["image"].height,
            3,  # bands (RGB)
            "uchar",
        )
        if self.img_transform:
            example["image"] = self.img_transform(example["image"])
        example["target"] = example.pop("label")
        example["label"] = self.labels[example["target"]]

        return example

Ancestors

  • Imagenet
  • torch.utils.data.dataset.Dataset
  • typing.Generic
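
Usage sketch, assuming ImagenetDataset is a default-constructible config from the activations module (an assumption) and reusing to_sized with illustrative sizes:

cfg = ImagenetDataset()  # assumed: default-constructible dataset config
ds = VipsImagenet(cfg, img_transform=lambda im: to_sized(im, 256, (224, 224)))
example = ds[0]
print(example["label"], example["image"].width, example["image"].height)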