Module brevettiai.model.metadata.image_segmentation

import cv2
from pydantic import constr, root_validator, Field
from typing import List, Tuple, Optional
from .metadata import ModelMetadata
from brevettiai.data.image import ImagePipeline, ImageLoader, AnnotationLoader
from brevettiai.data.image.annotation_pooling import AnnotationPooling
from brevettiai.data.image.multi_frame_imager import MultiFrameImager

import numpy as np
from base64 import b64encode, b64decode


class Base64Image(np.ndarray):
    @classmethod
    def __get_validators__(cls):
        yield cls.validate_type

    @classmethod
    def validate_type(cls, val):
        if isinstance(val, str):
            return cv2.imdecode(np.frombuffer(b64decode(val), np.uint8), -1).view(Base64Image)
        return val.view(Base64Image)

    def __repr__(self):
        status, buf = cv2.imencode(".png", self)
        assert status
        return b64encode(buf).decode()


class ImageSegmentationModelMetadata(ModelMetadata):
    """
    Metadata for an Image segmentation model
    """
    producer: constr(regex="^ImageSegmentation.*$") = "ImageSegmentation"

    # Info
    classes: List[str]
    suggested_input_shape: Tuple[int, int] = Field(description="height, width of image suggested for input")

    # Training
    image_loader: ImageLoader
    multi_frame_imager: Optional[MultiFrameImager]

    annotation_loader: AnnotationLoader

    # augmentation: Optional[ImageAugmenter]

    annotation_pooling: Optional[AnnotationPooling]

    # Documentation
    example_image: Optional[Base64Image] = Field(description="Base64 encoded image file containing example image")

    class Config:
        json_encoders = {
            Base64Image: repr
        }

    @root_validator(pre=True, allow_reuse=True)
    def prepare_input(cls, values):
        if values.get("producer") == "ImageSegmentation":
            if "classes" not in values:
                values["classes"] = values["image_pipeline"]["segmentation"]["classes"]
            if "suggested_input_shape" not in values:
                values["suggested_input_shape"] = values["tile_size"]
            if "image_pipeline" in values:
                ip = ImagePipeline.from_config(values.pop("image_pipeline"))

                values["image_loader"] = ip.to_image_loader()
                if ip.segmentation is not None:
                    values["annotation_loader"] = AnnotationLoader(
                        mapping=ip.segmentation.mapping,
                        classes=ip.segmentation.classes)
        return values
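The `Config.json_encoders` entry maps `Base64Image` to `repr`, so serialized metadata carries the example image as a plain base64 string. A minimal dependency-free sketch of that round trip (`Base64Blob` is a hypothetical stand-in for `Base64Image`, holding raw bytes instead of a cv2-decoded array):

```python
import json
from base64 import b64encode

class Base64Blob(bytes):
    def __repr__(self):
        # Mirrors Base64Image.__repr__: repr *is* the base64 string
        return b64encode(self).decode()

metadata = {"producer": "ImageSegmentation",
            "example_image": Base64Blob(b"\x89PNG...")}

# pydantic's json_encoders={Base64Image: repr} amounts to this
# json.dumps fallback: bytes are not JSON-serializable, so the
# encoder calls default(), which returns the base64 string.
serialized = json.dumps(metadata, default=repr)
```

The same idea applies to any binary payload you want embedded in JSON metadata: make `repr` (or the registered encoder) produce the text form, and decode it back on validation.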

Classes

class Base64Image (...)

A numpy.ndarray subclass usable directly as a pydantic field type. Validation accepts either an ndarray or a base64-encoded image string, which is decoded with cv2.imdecode; repr re-encodes the array as a base64 PNG string, so the type round-trips through JSON serialization.

Ancestors

  • numpy.ndarray

Static methods

def validate_type(val)
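The `__get_validators__`/`validate_type` pair is the pydantic v1 custom-type hook: pydantic calls each yielded validator on any value assigned to a field of this type. The same pattern can be sketched without cv2/numpy, using a hypothetical `Base64Bytes` that stores raw bytes rather than a decoded image:

```python
from base64 import b64encode, b64decode

class Base64Bytes(bytes):
    """Dependency-free sketch of the Base64Image pattern; the name and
    payload type (bytes, not ndarray) are illustrative, not brevettiai API."""

    @classmethod
    def __get_validators__(cls):
        # pydantic v1 collects field validators from this hook
        yield cls.validate_type

    @classmethod
    def validate_type(cls, val):
        if isinstance(val, str):
            # base64 string in -> decoded payload out
            return cls(b64decode(val))
        return cls(val)

    def __repr__(self):
        # serialization direction: payload -> base64 string
        return b64encode(self).decode()
```

Note the asymmetry the real class shares: validation tolerates both encoded strings and already-decoded values, while `repr` always emits the encoded form.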
class ImageSegmentationModelMetadata (**data: Any)

Metadata for an Image segmentation model

Create a new model by parsing and validating input data from keyword arguments.

Raises ValidationError if the input data cannot be parsed to form a valid model.


Ancestors

  • ModelMetadata
  • pydantic.main.BaseModel
  • pydantic.utils.Representation

Class variables

var Config
var annotation_loader : AnnotationLoader
var annotation_pooling : Optional[AnnotationPooling]
var classes : List[str]
var example_image : Optional[Base64Image]
var image_loader : ImageLoader
var multi_frame_imager : Optional[MultiFrameImager]
var producer : brevettiai.model.metadata.image_segmentation.ConstrainedStrValue
var suggested_input_shape : Tuple[int, int]

Static methods

def prepare_input(values)
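prepare_input normalizes legacy configurations before field validation: older configs carry `image_pipeline` and `tile_size`, which the validator maps onto `classes`, `suggested_input_shape`, and the new loader fields. A dependency-free sketch of just the key-mapping step (the real validator additionally builds ImageLoader and AnnotationLoader instances from the pipeline):

```python
def prepare_input(values):
    """Sketch of the root validator's legacy-config handling;
    loader construction from the pipeline is omitted here."""
    if values.get("producer") == "ImageSegmentation":
        if "classes" not in values:
            # lift the class list out of the nested legacy pipeline config
            values["classes"] = values["image_pipeline"]["segmentation"]["classes"]
        if "suggested_input_shape" not in values:
            # legacy tile_size doubles as the suggested (height, width)
            values["suggested_input_shape"] = values["tile_size"]
    return values

legacy = {
    "producer": "ImageSegmentation",
    "tile_size": (256, 256),
    "image_pipeline": {"segmentation": {"classes": ["background", "defect"]}},
}
values = prepare_input(legacy)
```

Because the validator runs with `pre=True`, this mapping happens before pydantic checks required fields, which is what lets old serialized metadata still parse into the new model.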