Proposal Summary
I am working with a dataset that contains both color and depth images. Color images are typical 3-channel RGB images, but depth is encoded as a single-channel uint16 image with a maximum value far beyond what I care about. In addition, these depth images are not perfectly dense: invalid pixels are encoded as 0.
This makes viewing details that are close, far, or within a certain band of the image very difficult. I propose that images, single- or multi-channel, should have an option in the app to set a minimum value, a maximum value, or some sort of color filter to aid dataset reviews. For example, a user could specify that the depth image be rendered with a minimum value of 0 and a maximum value of 6500, saturating far-off pixels to enhance the visualization of closer objects.
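As a sketch of the proposed min/max behavior (the function name and the 6500 cutoff are illustrative, not part of any FiftyOne API):

```python
import numpy as np

def render_depth(depth, vmin=0, vmax=6500):
    """Map a single-channel uint16 depth image to an 8-bit grayscale
    visualization, clipping to [vmin, vmax] and masking invalid (0) pixels."""
    depth = depth.astype(np.float32)
    valid = depth > 0  # 0 encodes "no measurement" in this dataset
    scaled = np.clip((depth - vmin) / (vmax - vmin), 0.0, 1.0)
    out = (scaled * 255).astype(np.uint8)
    out[~valid] = 0  # render invalid pixels as black
    return out
```

Everything at or beyond `vmax` saturates to white, so the full 8-bit range is spent on the near field that actually matters.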
Motivation
What is the use case for this feature?
This feature allows users to customize the rendering of images, much like what one would do without FiftyOne. Packages like matplotlib offer functions like `imshow`, whose `vmin` and `vmax` scaling parameters make visualizations more readable. CVAT allows users to adjust brightness and contrast for color images, which is useful for images taken in dark environments or with certain types of noise.
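For reference, the notebook workflow this feature would replace looks roughly like the following (the depth array here is random placeholder data):

```python
import matplotlib
matplotlib.use("Agg")  # headless backend for scripted use
import matplotlib.pyplot as plt
import numpy as np

# Placeholder for a real uint16 depth image from the dataset
depth = np.random.randint(0, 20000, (48, 64)).astype(np.uint16)

# vmin/vmax clip the display range; cmap picks the colormap
im = plt.imshow(depth, cmap="viridis", vmin=0, vmax=6500)
plt.colorbar(im)
plt.savefig("depth_preview.png")
```

Having equivalent `vmin`/`vmax`/colormap controls in the App would remove this round trip to a notebook.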
Why is this use case valuable to support for FiftyOne users in general?
FiftyOne is a multi-modal tool and should render each data modality in a way that users can examine visually. Customized renderings are essential for datasets that combine RGB + Depth + LiDAR, as each sensing system brings its own strengths to a solution.
Why is this use case valuable to support for your project(s) or organization?
See above - the depth image case is essential because depth images contain fine detail for small objects in the scene, and may be noisy due to stereo-matching artifacts. Customized renderings let me see all the detail present and may change my modelling approach.
Why is it currently difficult to achieve this use case?
The app renders images with a single default mode, forcing users to go back to a notebook to inspect finer details.
What areas of FiftyOne does this feature affect?
App: FiftyOne application
Core: Core fiftyone Python library
Server: FiftyOne server
Details
I think that customized renderings would benefit from:
Min/max defaults for single channel images
Colormap selection for single channel images
Brightness, contrast, and saturation controls for RGB images
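For the RGB controls, a minimal sketch of what a linear brightness/contrast adjustment could look like (the function and parameter names are illustrative, not an existing API):

```python
import numpy as np

def adjust_rgb(img, brightness=0.0, contrast=1.0):
    """Apply a simple linear brightness/contrast adjustment to an
    8-bit RGB image: out = (in - 128) * contrast + 128 + brightness."""
    out = (img.astype(np.float32) - 128.0) * contrast + 128.0 + brightness
    return np.clip(out, 0, 255).astype(np.uint8)
```

Centering the contrast scale on mid-gray (128) keeps the adjustment intuitive: `contrast=1.0, brightness=0.0` is the identity.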
Willingness to contribute
The FiftyOne Community welcomes contributions! Would you or another member of your organization be willing to contribute an implementation of this feature?
Yes. I can contribute this feature independently
Yes. I would be willing to contribute this feature with guidance from the FiftyOne community
No. I cannot contribute this feature at this time