Alpha channel should be 1 by default, rather than 0 #2197
pennelee wants to merge 8 commits into AcademySoftwareFoundation:main
Conversation
Signed-off-by: pylee <penne.y.lee@intel.com>
Thanks for the contribution @pennelee! Added the devdays25 label.
src/OpenColorIO/ImagePacking.cpp (Outdated)

        inBitDepthBuffer[4*pixelsCopied+1] = *gPtr;
        inBitDepthBuffer[4*pixelsCopied+2] = *bPtr;
    -   inBitDepthBuffer[4*pixelsCopied+3] = aPtr ? *aPtr : (Type)0.0f;
    +   inBitDepthBuffer[4*pixelsCopied+3] = aPtr ? *aPtr : (Type)1.0f;
Thanks for your contribution.
The idea is that if the input image doesn't have an alpha channel, we need to consider the pixels as fully opaque instead of fully transparent (alpha=0). For floating-point types, fully opaque is represented as 1.0f, but for integer types it needs to be the max value the type can hold (e.g. for 8-bit types it's 255).
Please check BitDepthUtils.h; the templated struct BitDepthInfo will be useful.
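As a minimal sketch of that idea (the enum below is a simplified stand-in for OCIO's real BitDepth, and this helper is hypothetical; the actual fix should rely on BitDepthInfo from BitDepthUtils.h):

    // Simplified stand-in for OCIO's BitDepth enum.
    enum BitDepth { BIT_DEPTH_UINT8, BIT_DEPTH_UINT16, BIT_DEPTH_F16, BIT_DEPTH_F32 };

    // Hypothetical helper: fully opaque alpha is 1.0 for float types and the
    // max representable value for integer types.
    double GetOpaqueAlpha(BitDepth bd)
    {
        switch (bd)
        {
            case BIT_DEPTH_UINT8:  return 255.0;    // 2^8  - 1
            case BIT_DEPTH_UINT16: return 65535.0;  // 2^16 - 1
            case BIT_DEPTH_F16:
            case BIT_DEPTH_F32:
            default:               return 1.0;      // opaque for float types
        }
    }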
Ahh, I got you, thanks @cozdas! Let me take a look at that file and fix it; I'll comment if I have any questions.
Signed-off-by: pylee <penne.y.lee@intel.com>
…alpha_channel_one
Hi @cozdas,
cozdas left a comment
Hi @pennelee,
The unit test expected values are now making much more sense; thanks for the fix. However, the value that needs to be used should be derived from the input bit depth, not the output. I added comments to some of the relevant lines.
I believe the tests will still pass when you use the input depth instead of the output.
src/OpenColorIO/ImagePacking.cpp (Outdated)

        int outputBufferSize,
    -   long imagePixelStartIndex)
    +   long imagePixelStartIndex,
    +   BitDepth outputBitDepth)
This function packs the input image (3 or 4 channels, of type 'Type') into a 4-channel f32 buffer, so the output bit depth should not be part of the conversion. What needs to be done is to fill the missing input alpha value with the max value the "input" type can represent.
Since the GenericImageDesc class is a simplified version of ImageDesc, it doesn't carry the bitDepth information like ImageDesc does, so you are right to pass it as a separate parameter, but it needs to be the input image bit depth, not the output.
Thank you @cozdas for the helpful explanations!
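To make the suggestion concrete, here is a hedged sketch of what the corrected signature could look like; the function name, pointer parameters, and everything not shown in the diff above are assumptions, not the actual OpenColorIO source:

    enum BitDepth : int;  // stand-in declaration for OCIO's public BitDepth enum

    // Hypothetical packing signature: the bit depth parameter describes the
    // *input* image being packed into the 4-channel f32 buffer, so it is the
    // input bit depth that must be passed, not the output one.
    template <typename Type>
    void PackRGBA(const Type * inPixels,      // assumed input pointer
                  float * outF32Buffer,       // assumed f32 destination
                  int outputBufferSize,
                  long imagePixelStartIndex,
                  BitDepth inputBitDepth);    // was: outputBitDepth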
src/OpenColorIO/ImagePacking.cpp (Outdated)

        inBitDepthBuffer[4*pixelsCopied+1] = *gPtr;
        inBitDepthBuffer[4*pixelsCopied+2] = *bPtr;
    -   inBitDepthBuffer[4*pixelsCopied+3] = aPtr ? *aPtr : (Type)0.0f;
    +   inBitDepthBuffer[4 * pixelsCopied + 3] = aPtr ? *aPtr : (Type)(maxValue);
Please keep the formatting consistent with the lines above.
src/OpenColorIO/ImagePacking.cpp (Outdated)

        int outputBufferSize,
    -   long imagePixelStartIndex)
    +   long imagePixelStartIndex,
    +   BitDepth outputBitDepth)
This is a specialization of the template function for the float input type, so you can simplify this by using a hard-coded 1.0f.
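In other words, a minimal sketch of the simplification (the buffer layout and pointer names are assumed from the diff above):

    // Float specialization: fully opaque is simply 1.0f, so no bit-depth
    // lookup is needed when the alpha pointer is null.
    void FillAlpha(float * inBitDepthBuffer, long pixelsCopied, const float * aPtr)
    {
        inBitDepthBuffer[4*pixelsCopied+3] = aPtr ? *aPtr : 1.0f;
    }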
src/OpenColorIO/ScanlineHelper.cpp (Outdated)

        m_dstImg.m_width,
    -   m_yIndex * m_dstImg.m_width);
    +   m_yIndex * m_dstImg.m_width,
    +   m_outputBitDepth);
As explained above, this needs to be m_inputBitDepth.
Signed-off-by: pylee <penne.y.lee@intel.com>
Signed-off-by: pylee <penne.y.lee@intel.com>
…alpha_channel_one
Signed-off-by: pylee <penne.y.lee@intel.com>
cozdas left a comment
Looks good to me, just added a minor suggestion.
Thanks for the fixes.
Co-authored-by: Cuneyt Ozdas <github@cuneytozdas.com>
Signed-off-by: PenneLee <pennelee@gmail.com>
Related to issue #1781.
For the CPU processor, when the source has no alpha channel (RGB) but the destination does (RGBA), make the alpha default to 1.
The GPU processors (new and legacy) did not seem to have this issue; alpha was already set to 1. Let me know if this is not the case.
Tested the changes with ocioconvert, changing the output to force RGBA since the default is an in-place conversion (did not check in those changes).
Tested with all the ocioconvert test cases (basic, --lut, --view, --invertview, --namedtransform, --invnamedtransform) and various EXR file sizes; a sample invocation is sketched below.
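For example, the basic case might be invoked like this (file and color-space names are placeholders that depend on the OCIO config in use):

    # Placeholder names; writing to an RGBA output exercises the new
    # default-alpha path when the source EXR is RGB-only.
    ocioconvert in_rgb.exr lin_rec709 out_rgba.exr srgb_display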
Before the changes the alpha would be 0, so the image was not viewable. After the changes the images can be viewed, and from inspection the alpha channel is now 1.0.
Also updated the related CPU tests to reflect the non-zero alpha values.