[RFC,wayland-protocols,v2,1/1] Add the color-management protocol

Submitted by Niels Ole Salscheider on Jan. 22, 2017, 12:31 p.m.

Details

Message ID 20170122123135.22842-2-niels_ole@salscheider-online.de
State New
Series "Color management protocol" ( rev: 1 ) in Wayland


Commit Message

Niels Ole Salscheider Jan. 22, 2017, 12:31 p.m.
Signed-off-by: Niels Ole Salscheider <niels_ole@salscheider-online.de>
---
 Makefile.am                                        |   1 +
 unstable/color-management/README                   |   4 +
 .../color-management-unstable-v1.xml               | 224 +++++++++++++++++++++
 3 files changed, 229 insertions(+)
 create mode 100644 unstable/color-management/README
 create mode 100644 unstable/color-management/color-management-unstable-v1.xml


diff --git a/Makefile.am b/Makefile.am
index e693afa..ff435d5 100644
--- a/Makefile.am
+++ b/Makefile.am
@@ -12,6 +12,7 @@  unstable_protocols =								\
 	unstable/tablet/tablet-unstable-v2.xml			                \
 	unstable/xdg-foreign/xdg-foreign-unstable-v1.xml			\
 	unstable/idle-inhibit/idle-inhibit-unstable-v1.xml			\
+	unstable/color-management/color-management-unstable-v1.xml		\
 	$(NULL)
 
 stable_protocols =								\
diff --git a/unstable/color-management/README b/unstable/color-management/README
new file mode 100644
index 0000000..3bd3e6c
--- /dev/null
+++ b/unstable/color-management/README
@@ -0,0 +1,4 @@ 
+Color management protocol
+
+Maintainers:
+Niels Ole Salscheider <niels_ole@salscheider-online.de>
diff --git a/unstable/color-management/color-management-unstable-v1.xml b/unstable/color-management/color-management-unstable-v1.xml
new file mode 100644
index 0000000..3fe6c93
--- /dev/null
+++ b/unstable/color-management/color-management-unstable-v1.xml
@@ -0,0 +1,224 @@ 
+<?xml version="1.0" encoding="UTF-8"?>
+<protocol name="color_management_unstable_v1">
+
+  <copyright>
+    Copyright © 2014-2016 Niels Ole Salscheider
+
+    Permission to use, copy, modify, distribute, and sell this
+    software and its documentation for any purpose is hereby granted
+    without fee, provided that the above copyright notice appear in
+    all copies and that both that copyright notice and this permission
+    notice appear in supporting documentation, and that the name of
+    the copyright holders not be used in advertising or publicity
+    pertaining to distribution of the software without specific,
+    written prior permission.  The copyright holders make no
+    representations about the suitability of this software for any
+    purpose.  It is provided "as is" without express or implied
+    warranty.
+
+    THE COPYRIGHT HOLDERS DISCLAIM ALL WARRANTIES WITH REGARD TO THIS
+    SOFTWARE, INCLUDING ALL IMPLIED WARRANTIES OF MERCHANTABILITY AND
+    FITNESS, IN NO EVENT SHALL THE COPYRIGHT HOLDERS BE LIABLE FOR ANY
+    SPECIAL, INDIRECT OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
+    WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN
+    AN ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION,
+    ARISING OUT OF OR IN CONNECTION WITH THE USE OR PERFORMANCE OF
+    THIS SOFTWARE.
+  </copyright>
+
+  <interface name="zwp_color_management_v1" version="1">
+    <description summary="allows attaching a color profile to a wl_surface">
+      This interface allows attaching a color profile to a wl_surface. The
+      compositor uses this information to display the colors correctly.
+
+      This interface also provides requests to query the sRGB and the preferred
+      color space. It further allows creation of a color profile object from an
+      ICC profile. The client is informed by an event if the color profile of
+      one of the outputs changes.
+
+      This protocol exposes two ways to attach color profiles to a surface.
+      Most applications are expected to simply call set_color_profile to attach
+      a color profile to a surface. The compositor then makes sure that the
+      colors are converted to the correct output color space.
+      If blending is performed, the compositor will convert all surfaces to a
+      blending color space and perform blending. It will then convert the output
+      surface to the color space of the output device.
+
+      If an application wants to perform gamut mapping on its own it must query
+      the color profiles of the outputs. It can then create device link profiles
+      that describe the transformation from input to output color space. These
+      device link profiles can be attached to a surface by calling
+      set_device_link_profile.
+      When a device link profile is set for a given surface and output, the
+      compositor will only apply this profile instead of the normal color
+      transformation pipeline. Blending (if necessary) will be performed late in
+      the output color space.
+      The normal color transformation pipeline will be used for all outputs for
+      which no device link profiles are available.
+    </description>
+
+    <request name="set_color_profile">
+      <description summary="set the color profile of a wl_surface">
+        With this request, the color profile of a wl_surface can be set.
+        The previously attached color profile will be replaced by the new one.
+        Initially, the sRGB color profile is attached to a surface before
+        set_color_profile is called for the first time.
+        The color profile is applied after the next wl_surface.commit request.
+      </description>
+      <arg name="surface" type="object" interface="wl_surface"
+           summary="the surface on which the color profile is attached" />
+      <arg name="color_profile" type="object"
+           interface="zwp_color_profile_v1" summary="the color profile" />
+    </request>
+
+    <request name="set_device_link_profile">
+      <description summary="set a device link profile for a wl_surface and wl_output">
+        With this request, a device link profile can be attached to a
+        wl_surface. For each output on which the surface is visible, the
+        compositor will check if there is a device link profile. If there is one
+        it will be used to directly convert the surface to the output color
+        space. Blending of this surface (if necessary) will then be performed in
+        the output color space and after the normal blending operations.
+        The device link profile is applied after the next wl_surface.commit
+        request.
+      </description>
+      <arg name="surface" type="object" interface="wl_surface"
+           summary="the surface for which the device link profile should be used" />
+      <arg name="output" type="object" interface="wl_output"
+           summary="the output for which the device link profile was created" />
+      <arg name="device_link_profile" type="object"
+           interface="zwp_color_profile_v1" summary="the device link profile" />
+    </request>
+
+    <request name="remove_device_link_profile">
+      <description summary="removes a device link profile from a wl_surface">
+        With this request, a device link profile for a given output can be
+        removed from a wl_surface. If the surface is still visible on the
+        output the color conversion will be done with the normal color profile
+        attached to the surface.
+        This request takes effect after the next wl_surface.commit request.
+      </description>
+      <arg name="surface" type="object" interface="wl_surface"
+           summary="the surface from which the device link should be removed" />
+      <arg name="output" type="object" interface="wl_output"
+           summary="the output for which the device link should be removed" />
+    </request>
+
+    <request name="color_profile_from_fd">
+      <description summary="creates a zwp_color_profile_v1 object from an ICC profile">
+        This request creates a zwp_color_profile_v1 object from an ICC
+        profile. The fd argument is the file descriptor to the ICC profile (ICC
+        V2 or V4).
+      </description>
+      <arg name="fd" type="fd"
+           summary="the file descriptor of the ICC profile data" />
+      <arg name="id" type="new_id" interface="zwp_color_profile_v1"
+           summary="the new color profile object" />
+    </request>
+
+    <request name="output_color_profile">
+      <description summary="create a color profile object for the requested output">
+        This request returns a zwp_color_profile_v1 object for the requested
+        output. A client can use this if it wants to know the color profile of
+        an output (e.g. to create a device link profile).
+      </description>
+      <arg name="output" type="object" interface="wl_output"
+           summary="the output for which a color profile object should be created" />
+      <arg name="id" type="new_id" interface="zwp_color_profile_v1"
+           summary="the new color profile object" />
+    </request>
+
+    <request name="srgb_color_profile">
+      <description summary="create a new sRGB color profile object">
+        This request returns a zwp_color_profile_v1 object for the sRGB color
+        profile. The sRGB color profile is initially attached to all surfaces.
+      </description>
+      <arg name="id" type="new_id" interface="zwp_color_profile_v1"
+           summary="the new color profile object" />
+    </request>
+
+    <request name="preferred_color_space">
+      <description summary="create a color profile object for the preferred color space">
+        This request returns a zwp_color_profile_v1 object for the preferred
+        color space of the compositor. This might be the blending color space
+        of the compositor.
+        A client should render in the color space returned by this request if it
+        does any color conversion on its own. It might also want to use it as
+        its blending space.
+        Doing so might allow the compositor to skip one color conversion.
+      </description>
+      <arg name="id" type="new_id" interface="zwp_color_profile_v1"
+           summary="the new color profile object" />
+    </request>
+
+    <event name="output_color_profile_changed">
+      <description summary="tells the client that the color profile of an output changed">
+        This event will be sent when the color profile of an output changes.
+      </description>
+      <arg name="output" type="object" interface="wl_output"
+           summary="the output of which the color profile changed" />
+    </event>
+
+    <enum name="error">
+      <entry name="invalid_profile" value="0"
+             summary="the passed ICC data is invalid" />
+    </enum>
+  </interface>
+
+  <interface name="zwp_color_profile_v1" version="1">
+    <description summary="represents a color profile">
+      This interface represents a color profile that can be attached to
+      surfaces. It is used by the zwp_color_management_v1 interface.
+    </description>
+
+    <request name="destroy" type="destructor">
+      <description summary="destroys the zwp_color_profile_v1 object">
+        Informs the server that the client will not be using this protocol
+        object anymore. It must not be attached to any surface anymore.
+      </description>
+    </request>
+
+    <request name="get_profile_fd">
+      <description summary="get a file descriptor to the profile data">
+        This request will cause a profile_fd event that returns a file
+        descriptor to the ICC profile data.
+      </description>
+    </request>
+
+    <event name="profile_fd">
+      <description summary="file descriptor to the profile data">
+        This event occurs after a get_profile_fd request and returns the file
+        descriptor to the ICC profile data.
+      </description>
+      <arg name="fd" type="fd" summary="ICC profile fd" />
+    </event>
+
+    <request name="get_profile_md5">
+      <description summary="get an MD5 checksum of the profile data">
+        This request will cause a profile_md5 event that returns the MD5
+        checksum of the ICC profile data.
+      </description>
+    </request>
+
+    <event name="profile_md5">
+      <description summary="MD5 checksum of the profile data">
+        This event occurs after a get_profile_md5 request and returns the MD5
+        checksum of the ICC profile data. This MD5 checksum can be used to
+        compare ICC profiles.
+        The 128 bit MD5 checksum is calculated using the MD5 fingerprinting
+        method as defined in Internet RFC 1321. In accordance with the ICC v4
+        specification, the entire profile is used for this with the profile
+        flags field, rendering intent field and profile ID field temporarily set
+        to zeros.
+      </description>
+      <arg name="md5_3" type="uint"
+           summary="highest 32 bit of the MD5 checksum" />
+      <arg name="md5_2" type="uint"
+           summary="second highest 32 bit of the MD5 checksum" />
+      <arg name="md5_1" type="uint"
+           summary="second lowest 32 bit of the MD5 checksum" />
+      <arg name="md5_0" type="uint"
+           summary="lowest 32 bit of the MD5 checksum" />
+    </event>
+  </interface>
+</protocol>

Comments

On Sun, 22 Jan 2017 13:31:35 +0100
Niels Ole Salscheider <niels_ole@salscheider-online.de> wrote:

> Signed-off-by: Niels Ole Salscheider <niels_ole@salscheider-online.de>

Hi Niels,

it is high time I commented on this, sorry. I saw the color
management topic being picked up again with HDR. I wanted to review
this before I look at the latest proposal from Sebastian, because this
one came first, has gone uncommented in its current form, and from a
very quick glance is not mentioned by Sebastian at all.

I think the fundamental idea here is exactly right: let a client
describe the content it delivers so that the compositor can display it
as correctly as possible on any output, while giving the client
information about the outputs (and the compositor) so that it can
optimise its content and choice of color space.

My failing is that I haven't read what the ICC v4 definition actually
describes: does it characterise content or a device, or is it more
about defining a transformation from something to something, without
saying what either something is?

...
> +  <interface name="zwp_color_management_v1" version="1">
> +    <description summary="allows attaching a color profile to a wl_surface">
> +      This interface allows to attach a color profile to a wl_surface. The
> +      compositor uses this information to display the colors correctly.

This is a global interface advertised through wl_registry, right?

> +
> +      This interface also provides requests to query the sRGB and the preferred
> +      color space. It further allows creation of a color profile object from an
> +      ICC profile. The client is informed by an event if the color profile of
> +      one of the outputs changes.
> +
> +      This protocol exposes two ways to attach color profiles to a surface.
> +      Most applications are expected to simply call set_color_profile to attach
> +      a color profile to a surface. The compositor then makes sure that the
> +      colors are converted to the correct ouput color space.

Typo: ouput

> +      If blending is performed, the compositor will convert all surfaces to a
> +      blending color space and perform blending. It will then convert the output
> +      surface to the color space of the output device.
> +
> +      If an application wants to perform gamut mapping on its own it must query
> +      the color profiles of the outputs. It can then create device link profiles
> +      that describe the transformation from input to output color space. These
> +      device link profiles can be attached to a surface by calling
> +      set_device_link_profile.
> +      When a device link profile is set for a given surface and output, the
> +      compositor will only apply this profile instead of the normal color
> +      transformation pipeline. Blending (if necessary) will be performed late in
> +      the output color space.
> +      The normal color transformation pipeline will be used for all outputs for
> +      which no device link profiles are available.
> +    </description>

This interface seems to be missing a destroy request.

> +
> +    <request name="set_color_profile">
> +      <description summary="set the color profile of a wl_surface">
> +        With this request, the color profile of a wl_surface can be set.
> +        The previously attached color profile will be replaced by the new one.
> +        Initially, the sRGB color profile is attached to a surface before
> +        set_color_profile is called for the first time.
> +        The color profile is applied after the next wl_surface.commit request.

The canonical language to spell that is:

"The color profile is double-buffered state, see wl_surface.commit."

The reason I prefer the exact wording from the wl_surface specification
is that the sub-surface specification modifies that behaviour, and we
don't want to override the sub-surface specification here.

> +      </description>
> +      <arg name="surface" type="object" interface="wl_surface"
> +           summary="the surface on which the color profile is attached" />
> +      <arg name="color_profile" type="object"
> +           interface="zwp_color_profile_v1" summary="the color profile" />

This is one way of designing this; wp_presentation.feedback works like
this.

The other way is that the global interface has a request that creates an
additional interface object for a given wl_surface, and then the
additional operations are done through the new object. If you want, you
can mandate that only one additional object can be created per
wl_surface, the existence of the additional object alone can already
trigger something, and you can remove the added effects by simply
destroying the object. The disadvantages are having yet another object
(but they are very cheap) and dealing with wl_surface being destroyed.

Since you may have use for resetting the wl_surface to a state before
the additional properties were applied, creating the extra object and
leveraging its destroy request would be convenient.
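
For illustration, client code against such a per-surface object
variant could look roughly like this (a sketch, not part of this
proposal: the interface name is hypothetical, the function names are
what wayland-scanner would generate for it):

    /* Create the per-surface color state object; its mere existence
     * could already mark the surface as color-managed. */
    struct zwp_color_surface_v1 *color_surface =
        zwp_color_management_v1_get_color_surface(manager, surface);

    /* All further color operations go through the new object. */
    zwp_color_surface_v1_set_color_profile(color_surface, profile);
    wl_surface_commit(surface);

    /* Destroying the object resets the surface to the state it had
     * before any color management properties were applied. */
    zwp_color_surface_v1_destroy(color_surface);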

> +    </request>
> +
> +    <request name="set_device_link_profile">
> +      <description summary="set a device link profile for a wl_surface and wl_output">
> +        With this request, a device link profile can be attached to a
> +        wl_surface. For each output on which the surface is visible, the
> +        compositor will check if there is a device link profile. If there is one
> +        it will be used to directly convert the surface to the output color
> +        space. Blending of this surface (if necessary) will then be performed in
> +        the output color space and after the normal blending operations.

Are those blending rules actually implementable?

It is not generally possible to blend some surfaces into a temporary
buffer, convert that to the next color space, and then blend some more,
because the necessary order of blending operations depends on the
z-order of the surfaces.

What implications does this have on the CRTC color processing pipeline?

If a CRTC color processing pipeline, that is, the transformation from
framebuffer values to on-the-wire values for a monitor, is already set
up by the compositor's preference, what would a device link profile
look like? Does it produce on-the-wire or blending space?

If the transformation defined by the device link profile produced
values for the monitor wire, then the compositor will have to undo the
CRTC pipeline transformation during composition for this surface, or it
needs to reset CRTC pipeline setup to identity and apply it manually
for all other surfaces.

What is the use for a device link profile?

If the client says the content is according to a certain specification
(set_color_profile) and the compositor combines that with the output's
color profile to produce the on-the-wire pixel values, how would
setting a device link profile produce something different?

> +        The device link profile is applied after the next wl_surface.commit
> +        request.

"The device link profile is double-buffered state, see
wl_surface.commit."

> +      </description>
> +      <arg name="surface" type="object" interface="wl_surface"
> +           summary="the surface for which the device link profile should be used" />
> +      <arg name="output" type="object" interface="wl_output"
> +           summary="the output for which the device link profile was created" />
> +      <arg name="device_link_profile" type="object"
> +           interface="zwp_color_profile_v1" summary="the device link profile" />
> +    </request>
> +
> +    <request name="remove_device_link_profile">
> +      <description summary="removes a device link profile from a wl_surface">
> +        With this request, a device link profile for a given output can be
> +        removed from a wl_surface. If the surface is still visible on the
> +        output the color conversion will be done with the normal color profile
> +        attached to the surface.
> +        This request takes effect after the next wl_surface.commit request.

The canonical wording again, please.

> +      </description>
> +      <arg name="surface" type="object" interface="wl_surface"
> +           summary="the surface from which the device link should be removed" />
> +      <arg name="output" type="object" interface="wl_output"
> +           summary="the output for which the device link should be removed" />
> +    </request>
> +
> +    <request name="color_profile_from_fd">
> +      <description summary="creates a zwp_color_profile_v1 object from an ICC profile">
> +        This request allows to create a zwp_color_profile_v1 object from an ICC
> +        profile. The fd argument is the file descriptor to the ICC profile (ICC
> +        V2 or V4).
> +      </description>
> +      <arg name="fd" type="fd"
> +           summary="the file descriptor of the ICC profile data" />

I recall Daniel asked for size and maybe offset to accompany the fd.
wl_shm.create_pool has a precedent about size,
wl_shm_pool.create_buffer has the offset. wl_keyboard.keymap has size.

If there is only the fd, you can fstat() it to get a size, but that
may not be the exact size of the payload. Does the ICC data header or
such indicate the exact size of the payload?
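
For what it is worth, the ICC header does indicate the payload size:
the first four bytes of a profile hold the total profile size as a
big-endian uint32. A minimal client-side sketch combining the fstat()
fallback with that header field:

    #include <stdint.h>
    #include <sys/stat.h>
    #include <unistd.h>

    /* Returns the profile size declared in the ICC header (bytes 0-3,
     * big-endian), or 0 on error. fstat() only bounds the payload:
     * the file may be larger than the profile it contains. */
    static uint32_t icc_profile_size(int fd)
    {
            uint8_t h[4];
            struct stat st;

            if (fstat(fd, &st) < 0 || st.st_size < 4)
                    return 0;
            if (pread(fd, h, sizeof h, 0) != sizeof h)
                    return 0;
            return ((uint32_t)h[0] << 24) | ((uint32_t)h[1] << 16) |
                   ((uint32_t)h[2] << 8) | (uint32_t)h[3];
    }

An explicit size argument in the request would still be nicer than
making the compositor trust that header field.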

> +      <arg name="id" type="new_id" interface="zwp_color_profile_v1"
> +           summary="the new color profile object" />
> +    </request>
> +
> +    <request name="output_color_profile">
> +      <description summary="create a color profile object for the requested output">
> +        This request returns a zwp_color_profile_v1 object for the requested
> +        output. A client can use this if it wants to know the color profile of
> +        an output (e. g. to create a device link profile).
> +      </description>
> +      <arg name="output" type="object" interface="wl_output"
> +           summary="the output for which a color profile object should be created" />
> +      <arg name="id" type="new_id" interface="zwp_color_profile_v1"
> +           summary="the new color profile object" />
> +    </request>
> +
> +    <request name="srgb_color_profile">
> +      <description summary="create a new sRGB color profile object">
> +        This request returns a zwp_color_profile_1 object for the sRGB color
> +        profile. The sRGB color profile is initially attached to all surfaces.
> +      </description>
> +      <arg name="id" type="new_id" interface="zwp_color_profile_v1"
> +           summary="the new color profile object" />
> +    </request>
> +
> +    <request name="preferred_color_space">
> +      <description summary="create a color profile object for the preferred color space">
> +        This request returns a zwp_color_profile_v1 object for the preferred
> +        color space of the compositor. This might be the blending color space
> +        of the compositor.
> +        A client should render in the color space returned by this request if it
> +        does any color conversion on its own. It might also want to use it as
> +        its blending space.
> +        Doing so might allow the compositor to skip one color conversion.
> +      </description>
> +      <arg name="id" type="new_id" interface="zwp_color_profile_v1"
> +           summary="the new color profile object" />
> +    </request>
> +
> +    <event name="output_color_profile_changed">
> +      <description summary="tells the client that the color profile of an output changed">
> +        This event will be sent when the color profile of an output changes.
> +      </description>
> +      <arg name="output" type="object" interface="wl_output"
> +           summary="the output of which the color profile changed" />
> +    </event>
> +
> +    <enum name="error">
> +      <entry name="invalid_profile" value="0"
> +             summary="the passed ICC data is invalid" />
> +    </enum>
> +  </interface>
> +
> +  <interface name="zwp_color_profile_v1" version="1">
> +    <description summary="represents a color profile">
> +      This interface represents a color profile that can be attached to
> +      surfaces. It is used by the zwp_color_management_v1 interface.
> +    </description>
> +
> +    <request name="destroy" type="destructor">
> +      <description summary="destroys the zwp_color_profile_v1 object">
> +        Informs the server that the client will not be using this protocol
> +        object anymore. It must not be attached to any surface anymore.

You need an error code for being still attached.

This will make destroying an sRGB color profile object interesting for
a client, because that profile is attached by default to all
wl_surfaces.
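
To illustrate the consequence with the bindings wayland-scanner would
generate from this XML (a sketch): a client that wants to destroy a
profile object would first have to detach it everywhere, e.g. by
falling back to sRGB:

    /* Re-attach the default sRGB profile so the custom profile is no
     * longer attached to any surface... */
    struct zwp_color_profile_v1 *srgb =
        zwp_color_management_v1_srgb_color_profile(color_management);
    zwp_color_management_v1_set_color_profile(color_management,
                                              surface, srgb);
    wl_surface_commit(surface);

    /* ...and only then destroy it without triggering the error. */
    zwp_color_profile_v1_destroy(profile);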

> +      </description>
> +    </request>
> +
> +    <request name="get_profile_fd">
> +      <description summary="get a file descriptor to the profile data">
> +        This request will cause a profile_fd event that returns a file
> +        descriptor to the ICC profile data.
> +      </description>
> +    </request>
> +
> +    <event name="profile_fd">
> +      <description summary="file descriptor to the profile data">
> +        This event occurs after a get_profile_fd request and returns the file
> +        descriptor to the ICC profile data.
> +      </description>
> +      <arg name="fd" type="fd" summary="ICC profile fd" />

The same comments about size or offset as further above.

> +    </event>
> +
> +    <request name="get_profile_md5">
> +      <description summary="get an MD5 checksum of the profile data">
> +        This request will cause a profile_md5 event that returns the MD5
> +        checksum of the ICC profile data.
> +      </description>
> +    </request>
> +
> +    <event name="profile_md5">
> +      <description summary="MD5 checksum of the profile data">
> +        This event occurs after a get_profile_md5 request and returns the MD5
> +        checksum of the ICC profile data. This MD5 checksum can be used to
> +        compare ICC profiles.
> +        The 128 bit MD5 checksum is calculated using the MD5 fingerprinting
> +        method as defined in Internet RFC 1321. In accordance with the ICC v4
> +        specification, the entire profile is used for this with the profile
> +        flags field, rendering intent field and profile ID field temporarily set
> +        to zeros.
> +      </description>
> +      <arg name="md5_3" type="uint"
> +           summary="highest 32 bit of the MD5 checksum" />
> +      <arg name="md5_2" type="uint"
> +           summary="second highest 32 bit of the MD5 checksum" />
> +      <arg name="md5_1" type="uint"
> +           summary="second lowest 32 bit of the MD5 checksum" />
> +      <arg name="md5_0" type="uint"
> +           summary="lowest 32 bit of the MD5 checksum" />
> +    </event>
> +  </interface>
> +</protocol>

Thanks,
pq
On 26.02.19 16:48, Pekka Paalanen wrote:
> On Sun, 22 Jan 2017 13:31:35 +0100
> Niels Ole Salscheider <niels_ole@salscheider-online.de> wrote:
>
>> Signed-off-by: Niels Ole Salscheider <niels_ole@salscheider-online.de>

> My failing is that I haven't read what the ICC v4 definition actually
> describes: does it characterise content or a device, or is it more
> about defining a transformation from something to something, without
> saying what either something is?

ICC v4 is a specification from 2010 that became ISO 15076-1:2010 in
the same year. Both specs are technically identical. The standard
describes the content of a color profile format and gives some hints
on how to handle color transforms. ICC v4 profiles, like the previous
ICC v2 profiles, can describe device color characteristics in relation
to a reference color space (Profile Connection Space, PCS: CIE*XYZ or
CIE*Lab). This most common variant is used for color space
characterisation, e.g. sRGB, or for device characterisation (monitors,
cameras, ...). With this variant the compositor takes over
responsibility and uses intelligence to combine the input source
profile, perhaps effect profiles (for white point adjustment, red
light reduction etc.), and a final output profile into one color
transform. The transform is then applied as a 3D texture/shader,
depending on the actual compositor implementation.

An ICC profile class variant, called a device link profile, can
describe a color conversion without a reference color space, e.g.
RGB->RGB. More below.
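
As a minimal sketch of that first variant with littleCMS (lcms2),
chaining a source profile, an effect profile and an output profile
into one transform (the file names are placeholders):

    #include <lcms2.h>

    /* Combine source -> effect -> output into a single transform,
     * which a compositor would then bake into a 3D LUT / shader. */
    cmsHPROFILE chain[3];
    chain[0] = cmsOpenProfileFromFile("source-srgb.icc", "r");
    chain[1] = cmsOpenProfileFromFile("effect-whitepoint.icc", "r");
    chain[2] = cmsOpenProfileFromFile("monitor.icc", "r");

    cmsHTRANSFORM xform = cmsCreateMultiprofileTransform(
            chain, 3, TYPE_RGB_8, TYPE_RGB_8, INTENT_PERCEPTUAL, 0);

    /* Apply to pixels; a compositor would rather sample this into a
     * 3D texture once and evaluate it in a fragment shader. */
    cmsDoTransform(xform, src_pixels, dst_pixels, npixels);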

...
>> +    <request name="set_device_link_profile">
>> +      <description summary="set a device link profile for a wl_surface and wl_output">
>> +        With this request, a device link profile can be attached to a
>> +        wl_surface. For each output on which the surface is visible, the
>> +        compositor will check if there is a device link profile. If there is one
>> +        it will be used to directly convert the surface to the output color
>> +        space. Blending of this surface (if necessary) will then be performed in
>> +        the output color space and after the normal blending operations.
> Are those blending rules actually implementable?
>
> It is not generally possible to blend some surfaces into a temporary
> buffer, convert that to the next color space, and then blend some more,
> because the necessary order of blending operations depends on the
> z-order of the surfaces.
>
> What implications does this have on the CRTC color processing pipeline?
>
> If a CRTC color processing pipeline, that is, the transformation from
> framebuffer values to on-the-wire values for a monitor, is already set
> up by the compositor's preference, what would a device link profile
> look like? Does it produce on-the-wire or blending space?
>
> If the transformation defined by the device link profile produced
> values for the monitor wire, then the compositor will have to undo the
> CRTC pipeline transformation during composition for this surface, or it
> needs to reset CRTC pipeline setup to identity and apply it manually
> for all other surfaces.
>
> What is the use for a device link profile?

A device link profile is useful to describe a transform from a buffer
to match one specific output. Device links can give applications very
fine-grained control over what happens to their colors. This is useful
in case an application wants to circumvent the default gamut mapping,
optimise for each output connected to a computer, or add color effects
like proofing. The intelligence is inside the device link profile and
the compositor applies it as a dumb rule.
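
For a concrete picture, with lcms2 a client could bake its own
source-to-output transform into a device link profile and pass the
resulting file to color_profile_from_fd (a sketch; the profile names
are placeholders):

    #include <lcms2.h>

    cmsHPROFILE src = cmsOpenProfileFromFile("content.icc", "r");
    cmsHPROFILE dst = cmsOpenProfileFromFile("monitor.icc", "r");

    /* Here the application chooses the rendering intent / gamut
     * mapping itself, instead of leaving it to the compositor. */
    cmsHTRANSFORM xform = cmsCreateTransform(
            src, TYPE_RGB_8, dst, TYPE_RGB_8,
            INTENT_RELATIVE_COLORIMETRIC,
            cmsFLAGS_BLACKPOINTCOMPENSATION);

    /* Freeze the transform as an ICC v4.3 device link profile. */
    cmsHPROFILE devlink = cmsTransform2DeviceLink(xform, 4.3, 0);
    cmsSaveProfileToFile(devlink, "content-to-monitor.icc");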


...
>> +    <request name="output_color_profile">
>> +      <description summary="create a color profile object for the requested output">
>> +        This request returns a zwp_color_profile_v1 object for the requested
>> +        output. A client can use this if it wants to know the color profile of
>> +        an output (e. g. to create a device link profile).
>> +      </description>
>> +      <arg name="output" type="object" interface="wl_output"
>> +           summary="the output for which a color profile object should be created" />
>> +      <arg name="id" type="new_id" interface="zwp_color_profile_v1"
>> +           summary="the new color profile object" />
>> +    </request>

The output profile is a precondition for creating device link profiles
(an end-to-end color transform). The ability to fetch the output
profile further helps in system analysis.

thanks,

Kai-Uwe Behrmann


On Tue, 26 Feb 2019 18:56:06 +0100
Kai-Uwe <ku.b-list@gmx.de> wrote:

> On 26.02.19 16:48, Pekka Paalanen wrote:
...

> >> +    <request name="set_device_link_profile">
> >> +      <description summary="set a device link profile for a wl_surface and wl_output">
> >> +        With this request, a device link profile can be attached to a
> >> +        wl_surface. For each output on which the surface is visible, the
> >> +        compositor will check if there is a device link profile. If there is one
> >> +        it will be used to directly convert the surface to the output color
> >> +        space. Blending of this surface (if necessary) will then be performed in
> >> +        the output color space and after the normal blending operations.  
> > Are those blending rules actually implementable?
> >
> > It is not generally possible to blend some surfaces into a temporary
> > buffer, convert that to the next color space, and then blend some more,
> > because the necessary order of blending operations depends on the
> > z-order of the surfaces.
> >
> > What implications does this have on the CRTC color processing pipeline?
> >
> > If a CRTC color processing pipeline, that is, the transformation from
> > framebuffer values to on-the-wire values for a monitor, is already set
> > up by the compositor's preference, what would a device link profile
> > look like? Does it produce on-the-wire or blending space?
> >
> > If the transformation defined by the device link profile produced
> > values for the monitor wire, then the compositor will have to undo the
> > CRTC pipeline transformation during composition for this surface, or it
> > needs to reset CRTC pipeline setup to identity and apply it manually
> > for all other surfaces.
> >
> > What is the use for a device link profile?  
> 
> A device link profile is useful to describe a transform from a buffer
> to match one specific output. Device links can give applications very
> fine-grained control over what happens to their colors. This is useful
> in case an application wants to circumvent the default gamut mapping,
> optimise for each output connected to a computer, or add color effects
> like proofing. The intelligence is inside the device link profile and
> the compositor applies it as a dumb rule.

Hi Kai-Uwe,

right, thank you. I did get the feeling right on what it is supposed to
do, but I have a hard time imagining how to implement that in a compositor
that also needs to cater for other windows on the same output and blend
them all together correctly.

Even without blending, it means that the CRTC color manipulation
features cannot really be used at all, because there are two
conflicting transformations to apply: from compositor internal
(blending) space to the output space, and from the application content
space through the device link profile to the output space. The only
way that could be realized without any additional reverse
transformations is that the CRTC is set as an identity pass-through,
and both kinds of transformations are done in the composite rendering
with OpenGL or Vulkan.
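
To make "manually" concrete: the composite pass could evaluate a baked
per-surface 3D LUT, something like this fragment shader (a sketch,
GLSL ES embedded as a C string, all names made up):

    /* Per-surface color transform in the composite pass. */
    static const char *frag_src =
            "#version 300 es\n"
            "precision mediump float;\n"
            "uniform sampler2D u_surface;\n"
            "uniform highp sampler3D u_lut3d;\n"
            "in vec2 v_texcoord;\n"
            "out vec4 frag_color;\n"
            "void main() {\n"
            "        vec4 c = texture(u_surface, v_texcoord);\n"
            "        /* blending-space conversion or device link LUT */\n"
            "        frag_color = vec4(texture(u_lut3d, c.rgb).rgb, c.a);\n"
            "}\n";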

If we want device link profiles in the protocol, then I think that is
the cost we have to pay. But that is just about performance, while to
me it seems like correct blending would be impossible to achieve if
there was another translucent window on top of the window using a
device link profile. Or even worse, a stack like this:

window B (color profile)
window A (device link profile)
wallpaper (color profile)

If both windows have translucency somewhere, they must be blended in
that order. The blending of window A cannot be postponed after the
others.

I guess that implies that if even one surface on an output uses a
device link profile, then all blending must be done in the output color
space instead of an intermediate blending space. Is that an acceptable
trade-off?

Does that even make any difference if the output space was linear at
the blending step, and gamma was applied after that?

> 
> > If the client says the content is according to a certain specification
> > (set_color_profile) and the compositor combines that with the output's
> > color profile to produce the on-the-wire pixel values, how would
> > setting a device link profile produce something different?
> >  
> >> +        The device link profile is applied after the next wl_surface.commit
> >> +        request.  
> > "The device link profile is double-buffered state, see
> > wl_surface.commit."
> >  
> >> +      </description>
> >> +      <arg name="surface" type="object" interface="wl_surface"
> >> +           summary="the surface for which the device link profile should be used" />
> >> +      <arg name="output" type="object" interface="wl_output"
> >> +           summary="the output for which the device link profile was created" />
> >> +      <arg name="device_link_profile" type="object"
> >> +           interface="zwp_color_profile_v1" summary="the device link profile" />
> >> +    </request>

Thanks,
pq
On 27.02.19 14:17, Pekka Paalanen wrote:
> On Tue, 26 Feb 2019 18:56:06 +0100
> Kai-Uwe <ku.b-list@gmx.de> wrote:
>
>> Am 26.02.19 um 16:48 schrieb Pekka Paalanen:
>>> On Sun, 22 Jan 2017 13:31:35 +0100
>>> Niels Ole Salscheider <niels_ole@salscheider-online.de> wrote:
>>>  
>>>> Signed-off-by: Niels Ole Salscheider <niels_ole@salscheider-online.de>  
>>>>
>>>> +    <request name="set_device_link_profile">
>>>> +      <description summary="set a device link profile for a wl_surface and wl_output">
>>>> +        With this request, a device link profile can be attached to a
>>>> +        wl_surface. For each output on which the surface is visible, the
>>>> +        compositor will check if there is a device link profile. If there is one
>>>> +        it will be used to directly convert the surface to the output color
>>>> +        space. Blending of this surface (if necessary) will then be performed in
>>>> +        the output color space and after the normal blending operations.  
>>> Are those blending rules actually implementable?
>>>
>>> It is not generally possible to blend some surfaces into a temporary
>>> buffer, convert that to the next color space, and then blend some more,
>>> because the necessary order of blending operations depends on the
>>> z-order of the surfaces.
>>>
>>> What implications does this have on the CRTC color processing pipeline?
>>>
>>> If a CRTC color processing pipeline, that is, the transformation from
>>> framebuffer values to on-the-wire values for a monitor, is already set
>>> up by the compositor's preference, what would a device link profile
>>> look like? Does it produce on-the-wire or blending space?
>>>
>>> If the transformation defined by the device link profile produced
>>> values for the monitor wire, then the compositor will have to undo the
>>> CRTC pipeline transformation during composition for this surface, or it
>>> needs to reset CRTC pipeline setup to identity and apply it manually
>>> for all other surfaces.
>>>
>>> What is the use for a device link profile?  
>> A device link profile is useful to describe a transform from a buffer to
>> match one specific output. Device links give applications very
>> fine-grained control over what happens with their colors. This is
>> useful in case an application wants to circumvent the default gamut
>> mapping optimised for each output connected to a computer, or to add
>> color effects like proofing. The intelligence is inside the device link
>> profile and the compositor applies it as a dumb rule.
> Hi Kai-Uwe,
>
> right, thank you. I did get the feeling right on what it is supposed to
> do, but I have a hard time imagining how to implement that in a compositor
> that also needs to cater for other windows on the same output and blend
> them all together correctly.
>
> Even without blending, it means that the CRTC color manipulation
> features cannot really be used at all, because there are two
> conflicting transformations to apply: from compositor internal
> (blending) space to the output space, and from the application content
> space through the device link profile to the output space. The only
> way that could be realized without any additional reverse
> transformations is that the CRTC is set as an identity pass-through,
> and both kinds of transformations are done in the composite rendering
> with OpenGL or Vulkan.
What are CRTC color manipulation features in wayland? blending?
> If we want device link profiles in the protocol, then I think that is
> the cost we have to pay. But that is just about performance, while to
> me it seems like correct blending would be impossible to achieve if
> there was another translucent window on top of the window using a
> device link profile. Or even worse, a stack like this:
>
> window B (color profile)
> window A (device link profile)
> wallpaper (color profile)

Thanks for the simplification.

> If both windows have translucency somewhere, they must be blended in
> that order. The blending of window A cannot be postponed until after the
> others.

Reminds me of the discussions we had with the Cairo people years ago.

> I guess that implies that if even one surface on an output uses a
> device link profile, then all blending must be done in the output color
> space instead of an intermediate blending space. Is that an acceptable
> trade-off?

It will make "high quality" apps look like blending fun stoppers. Not so
nice. In contrast, converting back from output space to blending space,
blending there, and then converting back to output space would maintain
the blending-space experience at some performance cost, but break the
original device link intent. Would that be a fitting trade-off for you?
(So app client developers should never use translucent portions.
However, the toolkit or compositor might need to enforce this, e.g. for
window decorations, since translucency from outside would break the
app's intention.)

> Does that even make any difference if the output space was linear at
> blending step, and gamma was applied after that?

Interesting. The argument has been made that the complete graphics
pipeline should be used when measuring the device for profile
generation. So, if the linear (blending) space is always shown to
applications and measurement tools, then that is what should be
profiled, and all is fine. Anyway, a comment from Graeme Gill would be
welcome on how to profile in linear space. As a side effect,
wayland/OpenGL can do the PQ/HLG decoding afterwards, without the ICC
profiling tool having to worry about it. I guess all the dynamic HDR
features will more or less destroy the static connection of input
values to display values. The problem for traditional color management
is that linearisation, or as others call it, the gamma transfer, must
happen very late, inside the monitor, to be done correctly. So one
arising question is, how will the 1D graphics card LUT (VCGT) be
implemented? With VCGT properly supported, the output profile could
still work on sufficiently well prepared device behaviour.

Here is a possible process chain:

* app buffer (device link for proofing or source profile) ->[ICC
conversion in compositor]->

* blending space (linear) ->[gamma 2.2]->

* calibration protocol ->[1D calibration (VCGT)]->

* [encoding for on-the-wire values PQ/...]->

* transfer to device

The calibration protocol is there for completeness.
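
As a rough sketch of the post-blending steps of that chain in C, for a
single color channel (the identity VCGT and the omitted PQ encoding are
placeholders, not implementations):

#include <math.h>

static float vcgt_1d(float v)        { return v; } /* identity calibration */
static float encode_gamma22(float v) { return powf(v, 1.0f / 2.2f); }

/* blending space (linear) -> gamma 2.2 -> 1D calibration (VCGT)
 * -> on-the-wire encoding (PQ/... would go here) */
float encode_for_wire(float linear)
{
        float encoded = encode_gamma22(linear);
        float calibrated = vcgt_1d(encoded);

        return calibrated;
}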

thanks,
Kai-Uwe
On Thu, 28 Feb 2019 09:12:57 +0100
Kai-Uwe <ku.b-list@gmx.de> wrote:

> On 27.02.19 at 14:17, Pekka Paalanen wrote:
> > On Tue, 26 Feb 2019 18:56:06 +0100
> > Kai-Uwe <ku.b-list@gmx.de> wrote:
> >  
> >> On 26.02.19 at 16:48, Pekka Paalanen wrote:
> >>> On Sun, 22 Jan 2017 13:31:35 +0100
> >>> Niels Ole Salscheider <niels_ole@salscheider-online.de> wrote:
> >>>    
> >>>> Signed-off-by: Niels Ole Salscheider <niels_ole@salscheider-online.de>  
> >>>>
> >>>> +    <request name="set_device_link_profile">
> >>>> +      <description summary="set a device link profile for a wl_surface and wl_output">
> >>>> +        With this request, a device link profile can be attached to a
> >>>> +        wl_surface. For each output on which the surface is visible, the
> >>>> +        compositor will check if there is a device link profile. If there is one
> >>>> +        it will be used to directly convert the surface to the output color
> >>>> +        space. Blending of this surface (if necessary) will then be performed in
> >>>> +        the output color space and after the normal blending operations.    
> >>> Are those blending rules actually implementable?
> >>>
> >>> It is not generally possible to blend some surfaces into a temporary
> >>> buffer, convert that to the next color space, and then blend some more,
> >>> because the necessary order of blending operations depends on the
> >>> z-order of the surfaces.
> >>>
> >>> What implications does this have on the CRTC color processing pipeline?
> >>>
> >>> If a CRTC color processing pipeline, that is, the transformation from
> >>> framebuffer values to on-the-wire values for a monitor, is already set
> >>> up by the compositor's preference, what would a device link profile
> >>> look like? Does it produce on-the-wire or blending space?
> >>>
> >>> If the transformation defined by the device link profile produced
> >>> values for the monitor wire, then the compositor will have to undo the
> >>> CRTC pipeline transformation during composition for this surface, or it
> >>> needs to reset CRTC pipeline setup to identity and apply it manually
> >>> for all other surfaces.
> >>>
> >>> What is the use for a device link profile?    
> >> A device link profile is useful to describe a transform from a buffer to
> >> match one specific output. Device links give applications very
> >> fine-grained control over what happens with their colors. This is
> >> useful in case an application wants to circumvent the default gamut
> >> mapping optimised for each output connected to a computer, or to add
> >> color effects like proofing. The intelligence is inside the device link
> >> profile and the compositor applies it as a dumb rule.
> > Hi Kai-Uwe,
> >
> > right, thank you. I did get the feeling right on what it is supposed to
> > do, but I have a hard time imagining how to implement that in a compositor
> > that also needs to cater for other windows on the same output and blend
> > them all together correctly.
> >
> > Even without blending, it means that the CRTC color manipulation
> > features cannot really be used at all, because there are two
> > conflicting transformations to apply: from compositor internal
> > (blending) space to the output space, and from the application content
> > space through the device link profile to the output space. The only
> > way that could be realized without any additional reverse
> > transformations is that the CRTC is set as an identity pass-through,
> > and both kinds of transformations are done in the composite rendering
> > with OpenGL or Vulkan.  

> What are CRTC color manipulation features in wayland? blending?

Hi Kai-Uwe,

Wayland exposes nothing of CRTC capabilities. I think that is for the best.

Blending windows together is implicit from allowing pixel formats with
alpha. Even then, from the client perspective such blending is limited
to sub-surfaces, since those are all a client is aware of.

> > If we want device link profiles in the protocol, then I think that is
> > the cost we have to pay. But that is just about performance, while to
> > me it seems like correct blending would be impossible to achieve if
> > there was another translucent window on top of the window using a
> > device link profile. Or even worse, a stack like this:
> >
> > window B (color profile)
> > window A (device link profile)
> > wallpaper (color profile)  
> 
> Thanks for the simplification.
> 
> > If both windows have translucency somewhere, they must be blended in
> > that order. The blending of window A cannot be postponed until after the
> > others.  
> 
> Reminds me of the discussions we had with the Cairo people years ago.

Was the conclusion the same, or have I mistaken something?

> > I guess that implies that if even one surface on an output uses a
> > device link profile, then all blending must be done in the output color
> > space instead of an intermediate blending space. Is that an acceptable
> > trade-off?  
> 
> It will make "high quality" apps look like blending fun stoppers. Not so
> nice. In contrast, converting back from output space to blending space,
> blending there, and then converting back to output space would maintain
> the blending-space experience at some performance cost, but break the
> original device link intent. Would that be a fitting trade-off for you?

Yes, that is exactly the conflict I meant.

> (So app client developers should never use translucent portions.
> However, the toolkit or compositor might need to enforce this, e.g. for
> window decorations, since translucency from outside would break the
> app's intention.)

I really cannot say, I have no opinion on the matter so far. A
compositor could be implemented either way.

It is not just about window A with the device link profile, it is
window B on top of that whose translucency would be a problem, since
window A "forces" the color space to become the output color space
before window B can be blended in. Or indeed, having to convert from
output space to blending space for blending window B and then back to
output space again.

> > Does that even make any difference if the output space was linear at
> > blending step, and gamma was applied after that?  
> 
> Interesting. The argument has been made that the complete graphics
> pipeline should be used when measuring the device for profile
> generation. So, if the linear (blending) space is always shown to
> applications and measurement tools, then that is what should be
> profiled, and all is fine. Anyway, a comment from Graeme Gill would be
> welcome on how to profile in linear space. As a side effect,
> wayland/OpenGL can do the PQ/HLG decoding afterwards, without the ICC
> profiling tool having to worry about it. I guess all the dynamic HDR
> features will more or less destroy the static connection of input
> values to display values. The problem for traditional color management
> is that linearisation, or as others call it, the gamma transfer, must
> happen very late, inside the monitor, to be done correctly. So one
> arising question is, how will the 1D graphics card LUT (VCGT) be
> implemented? With VCGT properly supported, the output profile could
> still work on sufficiently well prepared device behaviour.
> 
> Here is a possible process chain:
> 
> * app buffer (device link for proofing or source profile) ->[ICC
> conversion in compositor]->
> 
> * blending space (linear) ->[gamma 2.2]->
> 
> * calibration protocol ->[1D calibration (VCGT)]->
> 
> * [encoding for on-the-wire values PQ/...]->
> 
> * transfer to device
> 
> The calibration protocol is there for completeness.

I'm happy to have managed to say something interesting! :-)

This is where we start going beyond my knowledge. However, the chain
does look reasonable to me. I don't see any problems Wayland protocol
wise, or any awkward details being exposed, except perhaps the
assumption of a single LUT in the pipeline (VCGT); but I suppose that
is an ICC concept, and the only thing that matters is that it gets
applied correctly (e.g. in Vulkan), not that it is actually programmed
into specific registers in a video card.

**

Everyone,

another thought about a compositor implementation detail I would like
to ask you all is about the blending space.

If the compositor blending space was CIE XYZ with direct (linear)
encoding to IEEE754 32-bit float values in pixels, with the units of Y
chosen to match an absolute physical luminance value (or something that
corresponds with the HDR specifications), would that be sufficient for
all imaginable and realistic color reproduction purposes, HDR included?

Or do I have false assumptions about HDR specifications and they do
not define brightness in physical absolute units but somehow in
relative units? I think I saw "nit" as the unit somewhere which is an
absolute physical unit.

It might be heavy to use, both storage wise and computationally, but I
think Weston should start with a gold standard approach that we can
verify to be correct, encode the behaviour into the test suite, and
then look at possible optimizations by looking at e.g. other blending
spaces or opportunistically skipping the blending space.

Would that color space work universally from the colorimetry and
precision perspective, with any kind of gamut one might want/have, and
so on?

Meaning, that all client content gets converted according to the client
provided ICC profiles to CIE XYZ, composited/blended, and then
converted to output space according to the output ICC profile. In my
mind, the conversion of client content to CIE XYZ would happen as part
of sampling the client texture, so that client data remains in the
format the client provided it, and only the shadow framebuffer for
blending would need to be 32-bit per channel format. At least for a
start.
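
As a sketch of that sampling-time conversion, assuming linear-encoded
sRGB client content and the standard linear-sRGB-to-XYZ (D65) matrix:

/* Convert one linear-sRGB texel to CIE XYZ while sampling, so the
 * client buffer keeps its own format and only the float shadow
 * framebuffer holds XYZ. */
static void linear_srgb_to_xyz(const float rgb[3], float xyz[3])
{
        static const float M[3][3] = {
                { 0.4124f, 0.3576f, 0.1805f },
                { 0.2126f, 0.7152f, 0.0722f },
                { 0.0193f, 0.1192f, 0.9505f },
        };

        for (int i = 0; i < 3; i++)
                xyz[i] = M[i][0] * rgb[0] +
                         M[i][1] * rgb[1] +
                         M[i][2] * rgb[2];
}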


Thanks,
pq
On 28.02.19 at 12:37, Pekka Paalanen wrote:
> On Thu, 28 Feb 2019 09:12:57 +0100
> Kai-Uwe <ku.b-list@gmx.de> wrote:
>
>> On 27.02.19 at 14:17, Pekka Paalanen wrote:
>>> On Tue, 26 Feb 2019 18:56:06 +0100
>>> Kai-Uwe <ku.b-list@gmx.de> wrote:
>>>  
>>>> On 26.02.19 at 16:48, Pekka Paalanen wrote:
>>>>> On Sun, 22 Jan 2017 13:31:35 +0100
>>>>> Niels Ole Salscheider <niels_ole@salscheider-online.de> wrote:
>>>>>    
>>>>>> Signed-off-by: Niels Ole Salscheider <niels_ole@salscheider-online.de>  
>>>>>>
>>>>>> +    <request name="set_device_link_profile">
>>>>>> +      <description summary="set a device link profile for a wl_surface and wl_output">
>>>>>> +        With this request, a device link profile can be attached to a
>>>>>> +        wl_surface. For each output on which the surface is visible, the
>>>>>> +        compositor will check if there is a device link profile. If there is one
>>>>>> +        it will be used to directly convert the surface to the output color
>>>>>> +        space. Blending of this surface (if necessary) will then be performed in
>>>>>> +        the output color space and after the normal blending operations.    
>>>>> Are those blending rules actually implementable?
>>>>>
>>>>> It is not generally possible to blend some surfaces into a temporary
>>>>> buffer, convert that to the next color space, and then blend some more,
>>>>> because the necessary order of blending operations depends on the
>>>>> z-order of the surfaces.
>>>>>
>>>>> What implications does this have on the CRTC color processing pipeline?
>>>>>
>>>>> If a CRTC color processing pipeline, that is, the transformation from
>>>>> framebuffer values to on-the-wire values for a monitor, is already set
>>>>> up by the compositor's preference, what would a device link profile
>>>>> look like? Does it produce on-the-wire or blending space?
>>>>>
>>>>> If the transformation defined by the device link profile produced
>>>>> values for the monitor wire, then the compositor will have to undo the
>>>>> CRTC pipeline transformation during composition for this surface, or it
>>>>> needs to reset CRTC pipeline setup to identity and apply it manually
>>>>> for all other surfaces.
>>>>>
>>>>> What is the use for a device link profile?    
>>>> A device link profile is useful to describe a transform from a buffer to
>>>> match one specific output. Device links give applications very
>>>> fine-grained control over what happens with their colors. This is
>>>> useful in case an application wants to circumvent the default gamut
>>>> mapping optimised for each output connected to a computer, or to add
>>>> color effects like proofing. The intelligence is inside the device link
>>>> profile and the compositor applies it as a dumb rule.
>>> Hi Kai-Uwe,
>>>
>>> right, thank you. I did get the feeling right on what it is supposed to
>>> do, but I have a hard time imagining how to implement that in a compositor
>>> that also needs to cater for other windows on the same output and blend
>>> them all together correctly.
>>>
>>> Even without blending, it means that the CRTC color manipulation
>>> features cannot really be used at all, because there are two
>>> conflicting transformations to apply: from compositor internal
>>> (blending) space to the output space, and from the application content
>>> space through the device link profile to the output space. The only
>>> way that could be realized without any additional reverse
>>> transformations is that the CRTC is set as an identity pass-through,
>>> and both kinds of transformations are done in the composite rendering
>>> with OpenGL or Vulkan.  
>> What are CRTC color manipulation features in wayland? blending?
Hello Pekka,
> Wayland exposes nothing of CRTC capabilities. I think that is for the best.
>
> Blending windows together is implicit from allowing pixel formats with
> alpha. Even then, from the client perspective such blending is limited
> to sub-surfaces, since those are all a client is aware of.
...
>>> If we want device link profiles in the protocol, then I think that is
>>> the cost we have to pay. But that is just about performance, while to
>>> me it seems like correct blending would be impossible to achieve if
>>> there was another translucent window on top of the window using a
>>> device link profile. Or even worse, a stack like this:
>>>
>>> window B (color profile)
>>> window A (device link profile)
>>> wallpaper (color profile)  
>> Thanks for the simplification.
>>
>>> If both windows have translucency somewhere, they must be blended in
>>> that order. The blending of window A cannot be postponed until after the
>>> others.  
>> Reminds me of the discussions we had with the Cairo people years ago.
> Was the conclusion the same, or have I mistaken something?

My general impression was that the need to fit outside requirements
(early color managed colors) came into conflict with the concepts of
Cairo. The corner cases were the conflicts around blending
output-referred colors, which occasionally need to be in blending
space, like you pointed out for Wayland too. (But that is reasonably
solved in other APIs as well. Imagine Postscript suddenly presenting
transparencies from PDF->Postscript conversions. There are areas in
the Postscript with almost pass-through of vectors, and areas which
need blending and are reasonably converted to pixels.)

>>> I guess that implies that if even one surface on an output uses a
>>> device link profile, then all blending must be done in the output color
>>> space instead of an intermediate blending space. Is that an acceptable
>>> trade-off?  
>> It will make "high quality" apps look like blending fun stoppers. Not so
>> nice. In contrast, converting back from output space to blending space,
>> blending there, and then converting back to output space would maintain
>> the blending-space experience at some performance cost, but break the
>> original device link intent. Would that be a fitting trade-off for you?
> Yes, that is exactly the conflict I meant.
>
>> (So app client developers should never use translucent portions.
>> However, the toolkit or compositor might need to enforce this, e.g. for
>> window decorations, since translucency from outside would break the
>> app's intention.)
> I really cannot say, I have no opinion on the matter so far. A
> compositor could be implemented either way.

Perhaps the sub-surface in wayland is a means for apps to express the
intention of a pass-through for colors, with hopefully no blending. At
least Dmitry Kazakov mentioned a similar concept for implementing
display HDR support (on Windows) inside the canvas of Krita (the open
source image editor).

> It is not just about window A with the device link profile, it is
> window B on top of that whose translucency would be a problem, since
> window A "forces" the color space to become the output color space
> before window B can be blended in. Or indeed, having to convert from
> output space to blending space for blending window B and then back to
> output space again.

None of that sounds simple. But either that whole conversion, or
splitting the affected regions into sub-surfaces, is one of the few
possibilities I see here.

>>> Does that even make any difference if the output space was linear at
>>> blending step, and gamma was applied after that?  
>> Interesting. The argument has been made that the complete graphics
>> pipeline should be used when measuring the device for profile
>> generation. So, if the linear (blending) space is always shown to
>> applications and measurement tools, then that is what should be
>> profiled, and all is fine. Anyway, a comment from Graeme Gill would be
>> welcome on how to profile in linear space. As a side effect,
>> wayland/OpenGL can do the PQ/HLG decoding afterwards, without the ICC
>> profiling tool having to worry about it. I guess all the dynamic HDR
>> features will more or less destroy the static connection of input
>> values to display values. The problem for traditional color management
>> is that linearisation, or as others call it, the gamma transfer, must
>> happen very late, inside the monitor, to be done correctly. So one
>> arising question is, how will the 1D graphics card LUT (VCGT) be
>> implemented? With VCGT properly supported, the output profile could
>> still work on sufficiently well prepared device behaviour.
>>
>> Here is a possible process chain:
>>
>> * app buffer (device link for proofing or source profile) ->[ICC
>> conversion in compositor]->
>>
>> * blending space (linear) ->[gamma 2.2]->
>>
>> * calibration protocol ->[1D calibration (VCGT)]->
>>
>> * [encoding for on-the-wire values PQ/...]->
>>
>> * transfer to device
>>
>> The calibration protocol is there for completeness.
> I'm happy to have managed to say something interesting! :-)

Hehe ;-) Personally I am very thankful to you all for taking part in
these discussions, and that over the years.

regards
Kai-Uwe Behrmann

> This is where we start going beyond my knowledge. However, the chain
> does look reasonable to me. I don't see any problems Wayland protocol
> wise, or any awkward details being exposed, except perhaps the
> assumption of a single LUT in the pipeline (VCGT); but I suppose that
> is an ICC concept, and the only thing that matters is that it gets
> applied correctly (e.g. in Vulkan), not that it is actually programmed
> into specific registers in a video card.
>
> **
>
> Everyone,
>
> another thought about a compositor implementation detail I would like
> to ask you all is about the blending space.
>
> If the compositor blending space was CIE XYZ with direct (linear)
> encoding to IEEE754 32-bit float values in pixels, with the units of Y
> chosen to match an absolute physical luminance value (or something that
> corresponds with the HDR specifications), would that be sufficient for
> all imaginable and realistic color reproduction purposes, HDR included?
>
> Or do I have false assumptions about HDR specifications and they do
> not define brightness in physical absolute units but somehow in
> relative units? I think I saw "nit" as the unit somewhere which is an
> absolute physical unit.
>
> It might be heavy to use, both storage wise and computationally, but I
> think Weston should start with a gold standard approach that we can
> verify to be correct, encode the behaviour into the test suite, and
> then look at possible optimizations by looking at e.g. other blending
> spaces or opportunistically skipping the blending space.
>
> Would that color space work universally from the colorimetry and
> precision perspective, with any kind of gamut one might want/have, and
> so on?
>
> Meaning, that all client content gets converted according to the client
> provided ICC profiles to CIE XYZ, composited/blended, and then
> converted to output space according to the output ICC profile. In my
> mind, the conversion of client content to CIE XYZ would happen as part
> of sampling the client texture, so that client data remains in the
> format the client provided it, and only the shadow framebuffer for
> blending would need to be 32-bit per channel format. At least for a
> start.
>
>
> Thanks,
> pq
On Thu, Feb 28, 2019 at 4:37 AM Pekka Paalanen <ppaalanen@gmail.com> wrote:
>
> another thought about a compositor implementation detail I would like
> to ask you all is about the blending space.
>
> If the compositor blending space was CIE XYZ with direct (linear)
> encoding to IEEE754 32-bit float values in pixels, with the units of Y
> chosen to match an absolute physical luminance value (or something that
> corresponds with the HDR specifications), would that be sufficient for
> all imaginable and realistic color reproduction purposes, HDR included?

CIE XYZ doesn't really have per se limits. It's always possible to
just add more photons, even if things start catching fire.

You can pick sRGB/Rec.709 primaries and define points inside or
outside those primaries, with 32-bit FP precision. This was the
rationalization used in the scRGB color space.
https://en.wikipedia.org/wiki/ScRGB

openEXR assumes Rec.709 primaries if not specified, but quite a bit
more dynamic range than scRGB.
http://www.openexr.com/documentation/TechnicalIntroduction.pdf
http://www.openexr.com/documentation/OpenEXRColorManagement.pdf

An advantage to starting out with constraints is that you can much more
easily implement lower precision levels, like 16bpc float or even integer.

> Or do I have false assumptions about HDR specifications and they do
> not define brightness in physical absolute units but somehow in
> relative units? I think I saw "nit" as the unit somewhere which is an
> absolute physical unit.

It depends on which part of specifications you're looking at. The
reference environment, and reference medium are definitely defined in
absolute terms. The term "nit" is the same thing as the candela per
square meter (cd/m^2), and that's the unit for luminance. Display
black luminance and white luminance use this unit. The environment
will use the SI unit lux. The nit is used for projected light, and lux
used for light incident to or emitted from a surface (ceiling, walls,
floor, etc).

In the SDR world including an ICCv4 world, the display class profile
uses relative values: lightness. Not luminance. Even when encoding
XYZ, the values are all relative to that display's white, where Y =
1.0. So yeah for HDR that information is useless and is one of the
gotchas with ICC display class profiles. There are optional tags
defined in the spec for many years now to include measured display
black and white luminance. For HDR applications it would seem it'd
have to be required information. Another gotcha that has been mostly
sorted out I think, is whether the measurements are so called
"contact" or "no contact" measurements, i.e. a contact measurement
won't account for veiling glare, which is the effect of ambient light
reflecting off the surface of the display thereby increasing the
effective display's black luminance. A no contact measurement will
account for it. You might think, the no contact measurement is better.
Well, yeah, maybe in a production environment where everything is
measured and stabilized.

But in a home, you might actually want to estimate veiling glare and
apply it to a no contact display black luminance measurement. Maybe
you have a setting in a player with simple ambient descriptors as
"dark" "moderate" "bright" amounts of ambient condition. The choices
made for handling HDR content in such a case are rather substantially
different. And if this could be done by polling an inexpensive sensor
in the environment, for example a camera on the display, so much the
better. Maybe.

> It might be heavy to use, both storage wise and computationally, but I
> think Weston should start with a gold standard approach that we can
> verify to be correct, encode the behaviour into the test suite, and
> then look at possible optimizations by looking at e.g. other blending
> spaces or opportunistically skipping the blending space.
>
> Would that color space work universally from the colorimetry and
> precision perspective, with any kind of gamut one might want/have, and
> so on?

The compositor is doing what kind of blending for what purpose? I'd
expect any professional video rendering software will do this in their
own defined color space, encoding, and precision - and it all happens
internally. It might be a nice API so that applications don't have to
keep reinventing that particular wheel and doing it internally.

In the near term do you really expect you need blending beyond
Rec.2020/Rec.2100? Rec.2020/Rec.2100 is not so big that transforms to
Rec.709 will require special gamut mapping consideration. But I'm open
to other ideas.

Blender, DaVinci, Lightworks, GIMP or GEGL, and Darktable folks might
have some input here.
On Thu, 28 Feb 2019 20:58:04 +0100
Kai-Uwe <ku.b-list@gmx.de> wrote:

> On 28.02.19 at 12:37, Pekka Paalanen wrote:
> > On Thu, 28 Feb 2019 09:12:57 +0100
> > Kai-Uwe <ku.b-list@gmx.de> wrote:
> >  
> >> On 27.02.19 at 14:17, Pekka Paalanen wrote:
> >>> On Tue, 26 Feb 2019 18:56:06 +0100
> >>> Kai-Uwe <ku.b-list@gmx.de> wrote:
> >>>    
> >>>> On 26.02.19 at 16:48, Pekka Paalanen wrote:
> >>>>> On Sun, 22 Jan 2017 13:31:35 +0100
> >>>>> Niels Ole Salscheider <niels_ole@salscheider-online.de> wrote:
> >>>>>      
> >>>>>> Signed-off-by: Niels Ole Salscheider <niels_ole@salscheider-online.de>  
> >>>>>>
> >>>>>> +    <request name="set_device_link_profile">
> >>>>>> +      <description summary="set a device link profile for a wl_surface and wl_output">
> >>>>>> +        With this request, a device link profile can be attached to a
> >>>>>> +        wl_surface. For each output on which the surface is visible, the
> >>>>>> +        compositor will check if there is a device link profile. If there is one
> >>>>>> +        it will be used to directly convert the surface to the output color
> >>>>>> +        space. Blending of this surface (if necessary) will then be performed in
> >>>>>> +        the output color space and after the normal blending operations.      
> >>>>> Are those blending rules actually implementable?

...

> >> (So app client developers should never use translucent portions.
> >> However, the toolkit or compositor might need to enforce this, e.g. for
> >> window decorations, since translucency from outside would break the
> >> app's intention.)
> > I really cannot say, I have no opinion on the matter so far. A
> > compositor could be implemented either way.  
> 
> Perhaps the sub surface in wayland is a means for apps, to express the
> intention of a pass through for colors with hopefully no blending. At
> least Dmitry Kazakov mentioned a concept with similarities for
> implementing display HDR support (Windows) inside Krita (open source
> image editor) canvas.

Hi Kai-Uwe,

I'm afraid sub-surface is not quite that. Sub-surfaces allow separating
the pieces of a window into different buffers, e.g. for a video player:
background and decorations, video, subtitles. The benefit is that the
app punts the combining of these pieces to the compositor, which may
then be able to put the video on a hardware overlay for instance. It
does allow using different color spaces for the different pieces, so
maybe that can be useful.

There is currently no concept to say "please, do not put anything
translucent on top of my window" if that is what you mean.

OTOH, whether the app's own window gets blended or not is already in
quite good control of the app: use a pixel format without an alpha
channel, or mark areas as opaque (wl_surface.set_opaque_region). The
opaque region is an optimization hint, so it should not affect the
outcome where alpha says opaque, but it can allow compositors to
optimize their work.
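
For example, marking the whole surface opaque with the core protocol
looks like this (a minimal sketch; the compositor and surface objects
are assumed to be bound already):

#include <wayland-client.h>

static void mark_opaque(struct wl_compositor *compositor,
                        struct wl_surface *surface,
                        int32_t width, int32_t height)
{
        /* The opaque region is a hint only; the alpha channel still
         * decides the visual result. Takes effect on the next
         * wl_surface.commit. */
        struct wl_region *region = wl_compositor_create_region(compositor);

        wl_region_add(region, 0, 0, width, height);
        wl_surface_set_opaque_region(surface, region);
        wl_region_destroy(region);
}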

Even still, a compositor could decide to make the window translucent,
if that is the user's preference. This is actually a part of the
reason why measurement/calibration/characterization windows would need
to be marked explicitly as such, so that the compositor knows to present
it correctly regardless of user preferences and other applications.

> > It is not just about window A with the device link profile, it is
> > window B on top of that whose translucency would be a problem, since
> > window A "forces" the color space to become the output color space
> > before window B can be blended in. Or indeed, having to convert from
> > output space to blending space for blending window B and then back to
> > output space again.  
> 
> That sounds all not simple. But either that whole conversion or
> splitting the affected regions into sub subsurfaces is one of the few
> possibilities I see here.

Yes, this is why the device link profile feature seems problematic to
me. Maybe it would have to be reserved for fullscreened and active
top-most windows that are exclusive on the given output... but that
might be prone to bad user experience: "why are my colors different
when I made the app fullscreen?" in case the app is not very careful in
communicating that expectation.

Also, if a compositor was implemented strictly correctly, device link
profiles should not produce any different result. From that point of
view I'm getting the feeling that the design would be optimizing for
faulty compositors. It could be used for testing compositor correctness
though.

> >>> Does that even make any difference if the output space was linear at
> >>> blending step, and gamma was applied after that?    
> >> Interesting. The argument has been made that the complete graphics
> >> pipeline should be used when measuring the device for profile
> >> generation. So, if the linear (blending) space is always shown to
> >> applications and measurement tools, then that is what should be
> >> profiled, and all is fine. Anyway, a comment from Graeme Gill would be
> >> welcome on how to profile in linear space. As a side effect,
> >> wayland/OpenGL can do the PQ/HLG decoding afterwards, without the ICC
> >> profiling tool having to worry about it. I guess all the dynamic HDR
> >> features will more or less destroy the static connection of input
> >> values to display values. The problem for traditional color management
> >> is that linearisation, or as others call it, the gamma transfer, must
> >> happen very late, inside the monitor, to be done correctly. So one
> >> arising question is, how will the 1D graphics card LUT (VCGT) be
> >> implemented? With VCGT properly supported, the output profile could
> >> still work on sufficiently well prepared device behaviour.
> >>
> >> Here is a possible process chain:
> >>
> >> * app buffer (device link for proofing or source profile) ->[ICC
> >> conversion in compositor]->
> >>
> >> * blending space (linear) ->[gamma 2.2]->
> >>
> >> * calibration protocol ->[1D calibration (VCGT)]->
> >>
> >> * [encoding for on-the-wire values PQ/...]->
> >>
> >> * transfer to device
> >>
> >> The calibration protocol is there for completeness.  
> > I'm happy to have managed to say something interesting! :-)
> 
> Hehe ;-) Personally I am very thankful for you all to take part in this
> discussions and that over years.

Thank *you*,
pq
On Thu, 28 Feb 2019 21:05:13 -0700
Chris Murphy <lists@colorremedies.com> wrote:

> On Thu, Feb 28, 2019 at 4:37 AM Pekka Paalanen <ppaalanen@gmail.com> wrote:
> >
> > another thought about a compositor implementation detail I would like
> > to ask you all is about the blending space.
> >
> > If the compositor blending space was CIE XYZ with direct (linear)
> > encoding to IEEE754 32-bit float values in pixels, with the units of Y
> > chosen to match an absolute physical luminance value (or something that
> > corresponds with the HDR specifications), would that be sufficient for
> > all imaginable and realistic color reproduction purposes, HDR included?  
> 
> CIE XYZ doesn't really have per se limits. It's always possible to
> just add more photons, even if things start catching fire.
> 
> You can pick sRGB/Rec.709 primaries and define points inside or
> outside those primaries, with 32-bit FP precision. This was the
> rationalization used in the scRGB color space.
> https://en.wikipedia.org/wiki/ScRGB
> 
> openEXR assumes Rec.709 primaries if not specified, but quite a bit
> more dynamic range than scRGB.
> http://www.openexr.com/documentation/TechnicalIntroduction.pdf
> http://www.openexr.com/documentation/OpenEXRColorManagement.pdf

Hi Chris,

I see, there are much more convenient color spaces to choose as the
single internal blending space than my overkill choice of CIE XYZ;
that is very good. Do I understand right that, with a suitable value
encoding, they can cover any HDR range and any gamut?

Thanks for the references!

> An advantage to starting out with constraints is that you can much more
> easily implement lower precision levels, like 16bpc float or even integer.
> 
> > Or do I have false assumptions about HDR specifications and they do
> > not define brightness in physical absolute units but somehow in
> > relative units? I think I saw "nit" as the unit somewhere which is an
> > absolute physical unit.  
> 
> It depends on which part of specifications you're looking at. The
> reference environment, and reference medium are definitely defined in
> absolute terms. The term "nit" is the same thing as the candela per
> square meter (cd/m^2), and that's the unit for luminance. Display
> black luminance and white luminance use this unit. The environment
> will use the SI unit lux. The nit is used for projected light, and lux
> used for light incident to or emitted from a surface (ceiling, walls,
> floor, etc).
> 
> In the SDR world including an ICCv4 world, the display class profile
> uses relative values: lightness. Not luminance. Even when encoding
> XYZ, the values are all relative to that display's white, where Y =
> 1.0. So yeah for HDR that information is useless and is one of the
> gotchas with ICC display class profiles. There are optional tags
> defined in the spec for many years now to include measured display
> black and white luminance. For HDR applications it would seem it'd
> have to be required information. Another gotcha that has been mostly
> sorted out I think, is whether the measurements are so called
> "contact" or "no contact" measurements, i.e. a contact measurement
> won't account for veiling glare, which is the effect of ambient light
> reflecting off the surface of the display thereby increasing the
> effective display's black luminance. A no contact measurement will
> account for it. You might think, the no contact measurement is better.
> Well, yeah, maybe in a production environment where everything is
> measured and stabilized.
> 
> But in a home, you might actually want to estimate veiling glare and
> apply it to a no contact display black luminance measurement. Maybe
> you have a setting in a player with simple ambient descriptors as
> "dark" "moderate" "bright" amounts of ambient condition. The choices
> made for handling HDR content in such a case are rather substantially
> different. And if this could be done by polling an inexpensive sensor
> in the environment, for example a camera on the display, so much the
> better. Maybe.
> 
> > It might be heavy to use, both storage wise and computationally, but I
> > think Weston should start with a gold standard approach that we can
> > verify to be correct, encode the behaviour into the test suite, and
> > then look at possible optimizations by looking at e.g. other blending
> > spaces or opportunistically skipping the blending space.
> >
> > Would that color space work universally from the colorimetry and
> > precision perspective, with any kind of gamut one might want/have, and
> > so on?  
> 
> The compositor is doing what kind of blending for what purpose? I'd
> expect any professional video rendering software will do this in their
> own defined color space, encoding, and precision - and it all happens
> internally. It might be a nice API so that applications don't have to
> keep reinventing that particular wheel and doing it internally.

The compositor needs to blend several windows into a single
framebuffer. Granted, we can often assume that 99% or more of the
pixels are opaque so not actually blended, but if we can define the
protocol and make the reference implementation such that even blending
according to the pixels' alpha values will be correct, I think that is
a worthwhile goal.

Giving applications a blending API is a non-goal here; the windows to
be blended together can be assumed to originate from different
applications that do not know of each other. The blending done by the
compositor is for the user's personal viewing pleasure. Applications
are not able to retrieve the blending result even for their own windows.

Sebastian's protocol proposal includes render intent from applications.
Conversion of client content to the blending space should ideally be
lossless, so the render intent in that step should be irrelevant if I
understand right. How to deal with render intent when converting from
blending space to output space is not clear to me, since different
windows may have different intents. Using the window's intent for the
window's pixels works only if the pixel in the framebuffer comes from
exactly one window and not more.

> In the near term do you really expect you need blending beyond
> Rec.2020/Rec.2100? Rec.2020/Rec.2100 is not so big that transforms to
> Rec.709 will require special gamut mapping consideration. But I'm open
> to other ideas.

That's my problem: I don't know what we need, hence I'm asking. My
first guess was to start with something that is able to express
everything in the world, more or less. If we know that the chosen color
space, encoding etc. have no limitations of their own, we can build a
reference implementation that should always produce the correct
results. Once we have a system that is correct, we can ensure with a
test suite that the results are unchanged when we start optimizing and
changing color spaces and encodings.

We need some hand-crafted, manually verified tests to check that the
reference implementation works correctly. Once it does, we can generate a
huge bunch of more tests to ensure the same inputs keep producing the
same outputs. Pixman uses this methodology in its test suite.

> Blender, DaVinci, Lightworks, GIMP or GEGL, and Darktable folks might
> have some input here.
> 

Thank you very much for the insights,
pq
Pekka Paalanen wrote:

> My failing is that I haven't read about what ICC v4 definition actually
> describes, does it characterise content or a device, or is it more
> about defining a transformation from something to something without
> saying what something is.

The ICC format encompasses several related forms. The one
that is pertinent to this discussion is ICC device profiles.

At a minimum an ICC device profile characterizes a device's color
response by encoding a model of device values (i.e. RGB value combinations)
to device independent color values (i.e. values related to device
independent CIE XYZ, called Profile Connection Space values in ICC
terms). A simple model such as color primary values, white point
and per channel responses is easily invertible to allow transformation
in both directions.

For less additive devices there are more general models (cLUT -
multi-dimensional color Lookup Table), and they are non-trivial
to invert, so a profile contains both forward tables (device -> PCS
AKA A2B tables) and reverse tables (PCS -> device AKA B2A tables).

Then there are intents. The most basic are Absolute Colorimetric
and Relative Colorimetric. The former relates the measured
values, while the latter assumes that the observer is adapted
to the white point of the device. Typically the difference is assumed
to be a simple chromatic adaptation transform that can be encoded
as the absolute white point or a 3x3 matrix. The default intent
is Relative Colorimetric because this is the transform of least
surprise.

cLUT based profiles allow for two additional intents,
Perceptual where out of gamut colors are mapped to be within
gamut while retaining proportionality, and Saturation where
colors are expanded if possible to maximize colorfulness. These
two intents allow the profile creator considerable latitude in
how they achieve these aims, and they can only be encoded using
a cLUT model.

So in summary, an ICC profile provides device characterization,
facilitates fast and efficient transformation between different
devices, and offers a choice of intent handling that cannot typically
be computed on the fly. Naturally, to do a device-to-device space
transform you need two device profiles, one for the source space and
one for the destination.
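
As an illustration, such a two-profile transform with a chosen intent,
using Little CMS (the file names are placeholders and error handling is
omitted):

#include <lcms2.h>

/* Device-to-device transform: source profile + destination profile
 * + intent. The CMM picks the appropriate A2B/B2A tables internally. */
static void transform_pixels(const float *in, float *out,
                             cmsUInt32Number npixels)
{
        cmsHPROFILE src = cmsOpenProfileFromFile("content.icc", "r");
        cmsHPROFILE dst = cmsOpenProfileFromFile("display.icc", "r");
        cmsHTRANSFORM t = cmsCreateTransform(src, TYPE_RGB_FLT,
                                             dst, TYPE_RGB_FLT,
                                             INTENT_RELATIVE_COLORIMETRIC,
                                             0);

        cmsDoTransform(t, in, out, npixels);

        cmsDeleteTransform(t);
        cmsCloseProfile(src);
        cmsCloseProfile(dst);
}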

> What is the use for a device link profile?

Device link profile use was a suggestion to overcome the previously stated
impossibility of a client knowing which output a surface was mapped to.
Since this no longer appears to be impossible (due to wl_surface.enter/leave events
being available), device link profiles should be dropped from the extension.
It is sufficient that a client can do its own color transformation
to the primary output if it chooses, while leaving the compositor to perform
a fallback color transform for any portion that is mapped to a secondary output,
or for any client that is color management unaware, or does not wish to
implement its own color transforms.
This greatly reduces the implementation burden on the compositor.

> If there is only the fd, you can fstat() it to get a size, but that
> may not be the exact size of the payload. Does the ICC data header or
> such indicate the exact size of the payload?

Yes, the size of an ICC profile is stored in its header.
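
So a compositor receiving an fd could read the exact payload size from
the header itself; a minimal sketch (the size field is bytes 0-3 of the
ICC header, big-endian):

#include <stdint.h>
#include <unistd.h>

/* Returns the exact profile size from the ICC header, or 0 on a
 * short read. */
static uint32_t icc_profile_size(int fd)
{
        uint8_t b[4];

        if (pread(fd, b, sizeof(b), 0) != (ssize_t)sizeof(b))
                return 0;

        return ((uint32_t)b[0] << 24) | ((uint32_t)b[1] << 16) |
               ((uint32_t)b[2] << 8) | (uint32_t)b[3];
}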

Cheers,
	Graeme Gill.
Pekka Paalanen wrote:

Hi,

> Does that even make any difference if the output space was linear at
> blending step, and gamma was applied after that?

as mentioned earlier, I think talk of using device links is now a red herring.

If it is desirable to do blending in a linear light space (as is typically
the case), then this can be implemented in a way that leverages color management,
without interfering with it. In a color managed workflow all color values intended
for a particular output will be converted to that devices output space either
by the client or the compositor on the clients behalf. The installed output device
profile then can provide the necessary blending information. Even if
a device colorspace is not terribly additive for the purposes of
accurate profile creation, RGB devices are generally additive enough to
approximate linear light mixing with per channel lookup curves. It's
pretty straightforward to use the ICC profile to create such curves
(for each of the RGB channels in turn with the others at zero, lookup
 the Relative Colorimetric XYZ value and take the dot product with the
 100% channel value. Ensure the curves are monotonic, and normalize them
 to map 0 and 1 unchanged.)
Compositing in a linear light space has to occur to sufficiently high
precision of course, so as not to introduce quantization errors.
After composition the inverse curves would be applied to return to the
output device space.
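
A sketch of that curve construction in C; profile_lookup_xyz() is a
hypothetical CMM lookup returning the Relative Colorimetric XYZ for a
device RGB value, and the monotonicity fix-up is only noted in a
comment:

/* Hypothetical CMM call: device RGB -> Relative Colorimetric XYZ. */
void profile_lookup_xyz(const float rgb[3], float xyz[3]);

/* Build a linearization curve for one channel, per the recipe above. */
static void build_channel_curve(int chan, int n, float curve[])
{
        float rgb[3] = { 0.0f, 0.0f, 0.0f };
        float xyz[3], white[3];
        int i;

        rgb[chan] = 1.0f;
        profile_lookup_xyz(rgb, white);  /* XYZ of the 100% channel value */

        for (i = 0; i < n; i++) {
                rgb[chan] = (float)i / (n - 1);
                profile_lookup_xyz(rgb, xyz);
                curve[i] = xyz[0] * white[0] + xyz[1] * white[1] +
                           xyz[2] * white[2];   /* dot product */
        }

        /* enforce monotonicity here, then normalize so that
         * 0 and 1 map unchanged */
        for (i = 0; i < n; i++)
                curve[i] = (curve[i] - curve[0]) /
                           (curve[n - 1] - curve[0]);
}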

Cheers,
	Graeme Gill.
On Mon, 4 Mar 2019 19:04:11 +1100
Graeme Gill <graeme2@argyllcms.com> wrote:

> Pekka Paalanen wrote:
> 
> > My failing is that I haven't read about what ICC v4 definition actually
> > describes, does it characterise content or a device, or is it more
> > about defining a transformation from something to something without
> > saying what something is.  
> 
> The ICC format encompasses several related forms. The one
> that is pertinent to this discussion is ICC device profiles.
> 
> At a minimum an ICC device profile characterizes a device's color
> response by encoding a model of device values (i.e. RGB value combinations)
> to device independent color values (i.e. values related to device
> independent CIE XYZ, called Profile Connection Space values in ICC
> terms). A simple model such as color primary values, white point
> and per channel responses is easily invertible to allow transformation
> in both directions.
> 
> For less additive devices there are more general models (cLUT -
> multi-dimensional color Lookup Table), and they are non-trivial
> to invert, so a profile contains both forward tables (device -> PCS
> AKA A2B tables) and reverse tables (PCS -> device AKA B2A tables).
> 
> Then there are intents. The most basic are Absolute Colorimetric
> and Relative Colorimetric. The former relates the measured
> values, while the latter assumes that the observer is adapted
> to the white point of the device. Typically the difference is assumed
> to be a simple chromatic adaptation transform that can be encoded
> as the absolute white point or a 3x3 matrix. The default intent
> is Relative Colorimetric because this is the transform of least
> surprise.
> 
> cLUT based profiles allow for two additional intents,
> Perceptual where out of gamut colors are mapped to be within
> gamut while retaining proportionality, and Saturation where
> colors are expanded if possible to maximize colorfulness. These
> two intents allow the profile creator considerable latitude in
> how they achieve these aims, and they can only be encoded using
> a cLUT model.

Hi Graeme,

thank you for taking the time to explain this, much appreciated.

I'm still wondering, if an application uses an ICC profile for the
content it provides and defines an intent with it, should a compositor
apply that intent when converting from application color space to the
blending color space in the compositor?

Should the same application provided intent be used when converting the
composition result of the window to the output color space?

What would be a reasonable way to do those conversions, using which
intents?

> So in summary, an ICC profile provides device characterization,
> facilitates fast and efficient transformation between different
> devices, and offers a choice of intent handling that cannot typically
> be computed on the fly. Naturally, to do a device-to-device space
> transform you need two device profiles, one for the source space and
> one for the destination.

Do I understand correctly that an ICC profile can provide separate
(A2B and B2A) cLUT for each intent?

> > What is the use for a device link profile?  
> 
> Device link profile use was a suggestion to overcome the previously stated
> impossibility of a client knowing which output a surface was mapped to.
> Since this no longer appears to be impossible (due to wl_surface.enter/leave events
> being available), device link profiles should be dropped from the extension.
> It is sufficient that a client can do its own color transformation
> to the primary output if it chooses, while leaving the compositor to perform
> a fallback color transform for any portion that is mapped to a secondary output,
> or for any client that is color management unaware, or does not wish to
> implement its own color transforms.
> This greatly reduces the implementation burden on the compositor.

Btw. wl_surface.enter/leave is ambiguous, because they may
indicate multiple outputs simultaneously. I did talk with you about
adding an event to define the one output the app should be optimizing
for, but so far neither protocol proposal has that.

Niels, Sebastian, would you consider such event?
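
For context, this is roughly what clients have to work with today; a
sketch of tracking the output set via the core events, with placeholder
handler bodies:

#include <wayland-client.h>

static void handle_enter(void *data, struct wl_surface *surface,
                         struct wl_output *output)
{
        /* the surface now overlaps this output: add it to the
         * surface's output set; several outputs may be entered at
         * once, which is the ambiguity mentioned above */
}

static void handle_leave(void *data, struct wl_surface *surface,
                         struct wl_output *output)
{
        /* remove the output from the set */
}

static const struct wl_surface_listener surface_listener = {
        .enter = handle_enter,
        .leave = handle_leave,
};

/* in setup: wl_surface_add_listener(surface, &surface_listener, state); */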


Thanks,
pq
On Mon, 4 Mar 2019 22:07:24 +1100
Graeme Gill <graeme2@argyllcms.com> wrote:

> Pekka Paalanen wrote:
> 
> Hi,
> 
> > Does that even make any difference if the output space was linear at
> > blending step, and gamma was applied after that?  
> 
> as mentioned earlier, I think talk of using device links is now a red herring.
> 
> If it is desirable to do blending in a linear light space (as is typically
> the case), then this can be implemented in a way that leverages color management,
> without interfering with it. In a color managed workflow all color values intended
> for a particular output will be converted to that device's output space either
> by the client or by the compositor on the client's behalf. The installed output device
> profile then can provide the necessary blending information. Even if
> a device colorspace is not terribly additive for the purposes of
> accurate profile creation, RGB devices are generally additive enough to
> approximate linear light mixing with per channel lookup curves. It's
> pretty straightforward to use the ICC profile to create such curves
> (for each of the RGB channels in turn with the others at zero, lookup
>  the Relative Colorimetric XYZ value and take the dot product with the
>  100% channel value. Ensure the curves are monotonic, and normalize them
>  to map 0 and 1 unchanged.)
> Compositing in a linear light space has to occur to sufficiently high
> precision of course, so as not to introduce quantization errors.
> After composition the inverse curves would be applied to return to the
> output device space.

Excellent!

Thank you,
pq
Pekka Paalanen wrote:

Hi,

> another thought about a compositor implementation detail I would like
> to ask you all is about the blending space.
> 
> If the compositor blending space was CIE XYZ with direct (linear)
> encoding to IEEE754 32-bit float values in pixels, with the units of Y
> chosen to match an absolute physical luminance value (or something that
> corresponds with the HDR specifications), would that be sufficient for
> all imaginable and realistic color reproduction purposes, HDR included?

I don't think such a thing is necessary. There is no need to transform
to some other primary basis such as XYZ, unless you were attempting
to compose different colorspaces together, something that is
highly undesirable at the compositor blending stage, due to the
lack of input possible from the clients as to source colorspace
encoding and gamut mapping/intent handling.

AFAIK, blending just has to be in a linear light space in
a common set of primaries. If the surfaces that will be
composed have already been converted to the output device colorspace,
then all that is necessary for blending is that they be converted
to a linear light version of the output device colorspace
via per channel curves. Such curves do not have to be 100% accurate
to get most of the benefits of linear light composition. If the
per channel LUTs and compositing are done to sufficient
resolution, this will leave the color management fidelity
completely intact.
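
A minimal sketch of that linearise, blend, de-linearise step, reusing the
hypothetical CURVE_LEN LUTs from the sketch above ("over" compositing with
straight alpha assumed):

#define CURVE_LEN 256   /* same hypothetical LUT size as above */

/* Evaluate a per-channel LUT with linear interpolation. */
static double lut_eval(const double lut[CURVE_LEN], double x)
{
    double pos = x * (CURVE_LEN - 1);
    int i = (int)pos;
    if (i >= CURVE_LEN - 1)
        return lut[CURVE_LEN - 1];
    double f = pos - i;
    return lut[i] * (1.0 - f) + lut[i + 1] * f;
}

/* Invert a monotonic LUT by binary search plus interpolation. */
static double lut_eval_inv(const double lut[CURVE_LEN], double y)
{
    int lo = 0, hi = CURVE_LEN - 1;
    while (hi - lo > 1) {
        int mid = (lo + hi) / 2;
        if (lut[mid] <= y)
            lo = mid;
        else
            hi = mid;
    }
    double span = lut[hi] - lut[lo];
    double f = span > 0.0 ? (y - lut[lo]) / span : 0.0;
    return (lo + f) / (CURVE_LEN - 1);
}

/* "src over dst" in linear light; both pixels are already in the
 * output device space, values in [0, 1]. */
static void blend_over(const double curve[3][CURVE_LEN],
                       const double src[3], double src_alpha, double dst[3])
{
    for (int ch = 0; ch < 3; ch++) {
        double s = lut_eval(curve[ch], src[ch]);   /* to linear light */
        double d = lut_eval(curve[ch], dst[ch]);
        dst[ch] = lut_eval_inv(curve[ch],          /* back to device space */
                               src_alpha * s + (1.0 - src_alpha) * d);
    }
}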

> Or do I have false assumptions about HDR specifications and they do
> not define brightness in physical absolute units but somehow in
> relative units? I think I saw "nit" as the unit somewhere which is an
> absolute physical unit.

Unfortunately the HDR specifications that have been (hastily) adopted
by the video industry are specified in absolute units. I say unfortunately
because the mastering standards they are derived from specify viewing
conditions and brightness, something that does not hold in typical
consumer viewing situations. So the consumer needs a "brightness"
control to adapt the imagery for their actual viewing environment and
display capability. The problem is that even if the user were to specify
somehow what absolute "brightness" they wanted, the HDR specs do not
specify what level the "typical" (or as I would call it, the diffuse
white) value of the program material should be, so there is no way to
calculate the brightness multiplier.

Why is this important? Because if you know the nominal diffuse white
(which is equivalent to the 100% white of SDR), then you know
how to map the HDR you get from the source to the HDR capabilities
of the display. You can map 100% diffuse white to the user's brightness
setting, and then compress the specular highlight/direct light source
values above this to match the display's maximum HDR brightness level.
(Of course proprietary dynamic mappings are all the rage too.)
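
A hypothetical illustration of that mapping in code (the parameter names,
the log-shaped highlight roll-off and the example levels are assumptions,
not taken from the HDR specs):

#include <math.h>

/* Map a source luminance (cd/m^2) to the display: scale linearly up to
 * the assumed diffuse white, then log-compress the highlight range
 * above it into the remaining display headroom. */
static double map_luminance(double in,          /* source luminance */
                            double diffuse_in,  /* assumed diffuse white, e.g. 100.0 */
                            double src_max,     /* source/mastering peak, e.g. 4000.0 */
                            double diffuse_out, /* user's brightness setting */
                            double dst_max)     /* display peak luminance */
{
    if (in <= diffuse_in)
        return in * diffuse_out / diffuse_in;

    /* compress (diffuse_in, src_max] into (diffuse_out, dst_max] */
    double t = log(in / diffuse_in) / log(src_max / diffuse_in);
    return diffuse_out + t * (dst_max - diffuse_out);
}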

Interestingly, it seems that some systems are starting to simply
assume that diffuse white sits at the HDR encoding level of 48 or
100 cd/m^2, and go from there.

> It might be heavy to use, both storage wise and computationally, but I
> think Weston should start with a gold standard approach that we can
> verify to be correct, encode the behaviour into the test suite, and
> then look at possible optimizations by looking at e.g. other blending
> spaces or opportunistically skipping the blending space.

The HDR literature has a bunch of information on encoding precision
requirements, since a lot of effort has gone into figuring
out how to encode HDR efficiently. Encoded HDR typically uses
12 bits, and 16-bit half float encoding would work well if you
have hardware to handle it, but for integer encoding I suspect
24-32 bits/channel might be needed.

> Meaning, that all client content gets converted according to the client
> provided ICC profiles to CIE XYZ, composited/blended, and then
> converted to output space according to the output ICC profile.

See all my previous discussions. This approach has many problems
when it comes to gamut and intent.

Cheers,
	Graeme Gill.
On 2019-03-04 12:27, Pekka Paalanen wrote:
> On Mon, 4 Mar 2019 19:04:11 +1100
> Graeme Gill <graeme2@argyllcms.com> wrote:
> 
>> Pekka Paalanen wrote:
>> 
>> > My failing is that I haven't read about what ICC v4 definition actually
>> > describes, does it characterise content or a device, or is it more
>> > about defining a transformation from something to something without
>> > saying what something is.
>> 
>> The ICC format encompasses several related forms. The one
>> that is pertinent to this discussion is ICC device profiles.
>> 
>> At a minimum an ICC device profile characterizes a device's color
>> response by encoding a model of device values (i.e. RGB value
>> combinations) to device independent color values (i.e. values related
>> to device independent CIE XYZ, called Profile Connection Space values
>> in ICC terms). A simple model such as color primary values, white
>> point and per channel responses is easily invertible to allow
>> transformation in both directions.
>>
>> For less additive devices there are more general models (cLUT,
>> a multi-dimensional color Lookup Table), and they are non-trivial
>> to invert, so a profile contains both forward tables (device -> PCS
>> AKA A2B tables) and reverse tables (PCS -> device AKA B2A tables).
>>
>> Then there are intents. The most basic are Absolute Colorimetric
>> and Relative Colorimetric. The former relates the measured
>> values, while the latter assumes that the observer is adapted
>> to the white point of the device. Typically the difference is assumed
>> to be a simple chromatic adaptation transform that can be encoded
>> as the absolute white point or a 3x3 matrix. The default intent
>> is Relative Colorimetric because this is the transform of least
>> surprise.
>>
>> cLUT based profiles allow for two additional intents:
>> Perceptual, where out of gamut colors are mapped to be within
>> gamut while retaining proportionality, and Saturation, where
>> colors are expanded if possible to maximize colorfulness. These
>> two intents allow the profile creator considerable latitude in
>> how they achieve these aims, and they can only be encoded using
>> a cLUT model.
> 
> Hi Graeme,
> 
> thank you for taking the time to explain this, much appreciated.
> 
> I'm still wondering, if an application uses an ICC profile for the
> content it provides and defines an intent with it, should a compositor
> apply that intent when converting from application color space to the
> blending color space in the compositor?

I think the correct approach would be to first convert from the
application color space to the output color space using the intent,
and then to the blending color space. That way all colors in the
blending color space will fit in the output color space.
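
A sketch of that ordering, assuming lcms2 (the helper name and pixel
format are illustrative only); blending then happens in a linearised
version of the same output space, so no further intent is involved:

#include <lcms2.h>

/* Build the client-content -> output-space transform, honouring the
 * client's rendering intent. cmsFLAGS_COPY_ALPHA (lcms2 >= 2.8)
 * carries the alpha channel through untouched. */
static cmsHTRANSFORM client_to_output(const char *client_icc,
                                      const char *output_icc,
                                      int client_intent)
{
    cmsHPROFILE src = cmsOpenProfileFromFile(client_icc, "r");
    cmsHPROFILE dst = cmsOpenProfileFromFile(output_icc, "r");
    cmsHTRANSFORM t = cmsCreateTransform(src, TYPE_RGBA_FLT,
                                         dst, TYPE_RGBA_FLT,
                                         client_intent, cmsFLAGS_COPY_ALPHA);

    cmsCloseProfile(src);
    cmsCloseProfile(dst);
    return t;
}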

> Should the same application provided intent be used when converting the
> composition result of the window to the output color space?

When all blending color sources are in the output color space, so is
the resulting color. No intent required.

> What would be a reasonable way to do those conversions, using which
> intents?
> 
>> So in summary an ICC profile provides device characterization, as well
>> as facilitating fast and efficient transformation between different
>> devices, as well as a choice of intent handling that cannot typically
>> be computed on the fly. Naturally to do a device to device space 
>> transform
>> you need two device profiles, one for the source space and one
>> for the destination.
> 
> Do I understand correctly that an ICC profile can provide separate
> (A2B and B2A) cLUT for each intent?

That's my understanding as well.

>> > What is the use for a device link profile?
>> 
>> Device link profile use was a suggestion to overcome the previously
>> stated impossibility of a client knowing which output a surface was
>> mapped to. Since this no longer appears to be impossible (due to
>> wl_surface.enter/leave events being available), device link profiles
>> should be dropped from the extension. It is sufficient that a client
>> can do its own color transformation to the primary output if it
>> chooses, while leaving the compositor to perform a fallback color
>> transform for any portion that is mapped to a secondary output, or
>> for any client that is color management unaware, or does not wish to
>> implement its own color transforms. This greatly reduces the
>> implementation burden on the compositor.
> 
> Btw. wl_surface.enter/leave is not unambiguous, because the events may
> indicate multiple outputs simultaneously. I did talk with you about
> adding an event to define the one output the app should be optimizing
> for, but so far neither protocol proposal has that.
> 
> Niels, Sebastian, would you consider such an event?

My proposal has the zwp_color_space_feedback_v1 interface, which is
trying to solve this issue by listing the color spaces a surface was
converted to, in order of importance.

> 
> Thanks,
> pq
On Mon, 4 Mar 2019 23:09:59 +1100
Graeme Gill <graeme2@argyllcms.com> wrote:

> Pekka Paalanen wrote:
> 
> Hi,
> 
> > another thought about a compositor implementation detail I would like
> > to ask you all is about the blending space.
> > 
> > If the compositor blending space was CIE XYZ with direct (linear)
> > encoding to IEEE754 32-bit float values in pixels, with the units of Y
> > chosen to match an absolute physical luminance value (or something that
> > corresponds with the HDR specifications), would that be sufficient for
> > all imaginable and realistic color reproduction purposes, HDR included?  
> 
> I don't think such a thing is necessary. There is no need to transform
> to some other primary basis such as XYZ, unless you were attempting
> to compose different colorspaces together, something that is
> highly undesirable at the compositor blending stage, due to the
> lack of input possible from the clients as to source colorspace
> encoding and gamut mapping/intent handling.
> 
> AFAIK, blending just has to be in a linear light space in
> a common set of primaries. If the surfaces that will be
> composed have already been converted to the output device colorspace,
> then all that is necessary for blending is that they be converted
> to a linear light version of the output device colorspace
> via per channel curves. Such curves do not have to be 100% accurate
> to get most of the benefits of linear light composition. If the
> per channel LUTs and compositing are done to sufficient
> resolution, this will leave the color management fidelity
> completely intact.

...

> > Meaning, that all client content gets converted according to the client
> > provided ICC profiles to CIE XYZ, composited/blended, and then
> > converted to output space according to the output ICC profile.  
> 
> See all my previous discussions. This approach has many problems
> when it comes to gamut and intent.

Hi Graeme,

ok, doing the composition and blending in the output color space, but
in linear light, sounds good. I'm glad my overkill suggestion was not
necessary.


Thanks,
pq
Hi Sebastian and Graeme

On Mon, 04 Mar 2019 13:37:06 +0100
Sebastian Wick <sebastian@sebastianwick.net> wrote:

> On 2019-03-04 12:27, Pekka Paalanen wrote:
> > On Mon, 4 Mar 2019 19:04:11 +1100
> > Graeme Gill <graeme2@argyllcms.com> wrote:
> >   
> >> Pekka Paalanen wrote:
> >>   
> >> > My failing is that I haven't read about what ICC v4 definition actually
> >> > describes, does it characterise content or a device, or is it more
> >> > about defining a transformation from something to something without
> >> > saying what something is.  
> >> 
> >> The ICC format encompasses several related forms. The one
> >> that is pertinent to this discussion is ICC device profiles.
> >> 
> >> At a minimum an ICC device profile characterizes a device's color
> >> response by encoding a model of device values (i.e. RGB value
> >> combinations) to device independent color values (i.e. values related
> >> to device independent CIE XYZ, called Profile Connection Space values
> >> in ICC terms). A simple model such as color primary values, white
> >> point and per channel responses is easily invertible to allow
> >> transformation in both directions.
> >>
> >> For less additive devices there are more general models (cLUT,
> >> a multi-dimensional color Lookup Table), and they are non-trivial
> >> to invert, so a profile contains both forward tables (device -> PCS
> >> AKA A2B tables) and reverse tables (PCS -> device AKA B2A tables).
> >>
> >> Then there are intents. The most basic are Absolute Colorimetric
> >> and Relative Colorimetric. The former relates the measured
> >> values, while the latter assumes that the observer is adapted
> >> to the white point of the device. Typically the difference is assumed
> >> to be a simple chromatic adaptation transform that can be encoded
> >> as the absolute white point or a 3x3 matrix. The default intent
> >> is Relative Colorimetric because this is the transform of least
> >> surprise.
> >>
> >> cLUT based profiles allow for two additional intents:
> >> Perceptual, where out of gamut colors are mapped to be within
> >> gamut while retaining proportionality, and Saturation, where
> >> colors are expanded if possible to maximize colorfulness. These
> >> two intents allow the profile creator considerable latitude in
> >> how they achieve these aims, and they can only be encoded using
> >> a cLUT model.  
> > 
> > Hi Graeme,
> > 
> > thank you for taking the time to explain this, much appreciated.
> > 
> > I'm still wondering, if an application uses an ICC profile for the
> > content it provides and defines an intent with it, should a compositor
> > apply that intent when converting from application color space to the
> > blending color space in the compositor?  
> 
> I think the correct approach would be to first convert from the
> application color space to the output color space using the intent,
> and then to the blending color space. That way all colors in the
> blending color space will fit in the output color space.
> 
> > Should the same application provided intent be used when converting the
> > composition result of the window to the output color space?  
> 
> When all blending color sources are in the output color space, so is
> the resulting color. No intent required.

Right, this is what I did not realize until Graeme explained it. Very
good.

> >> > What is the use for a device link profile?  
> >> 
> >> Device link profile use was a suggestion to overcome the previously
> >> stated impossibility of a client knowing which output a surface was
> >> mapped to. Since this no longer appears to be impossible (due to
> >> wl_surface.enter/leave events being available), device link profiles
> >> should be dropped from the extension. It is sufficient that a client
> >> can do its own color transformation to the primary output if it
> >> chooses, while leaving the compositor to perform a fallback color
> >> transform for any portion that is mapped to a secondary output, or
> >> for any client that is color management unaware, or does not wish to
> >> implement its own color transforms. This greatly reduces the
> >> implementation burden on the compositor.  
> > 
> > Btw. wl_surface.enter/leave is not unambiguous, because the events may
> > indicate multiple outputs simultaneously. I did talk with you about
> > adding an event to define the one output the app should be optimizing
> > for, but so far neither protocol proposal has that.
> > 
> > Niels, Sebastian, would you consider such an event?  
> 
> My proposal has the zwp_color_space_feedback_v1 interface, which is
> trying to solve this issue by listing the color spaces a surface was
> converted to, in order of importance.

Oh yes, indeed, sorry. We had a discussion going on about that.

Either advertising the main output for the surface, or the compositor's
preferred color space for the surface.

I'm still a little confused here. If an application ICC profile
describes how the content maps to PCS, and an output ICC profile
describes how PCS maps to the output space, and we always need both in
the compositor to have a meaningful transformation of content to output
space, what was the benefit of the app knowing the "primary output" or
"preferred color space" for a surface?

Especially if device link profiles are taken out, which removes the
possibility for the application to provide its own color transformation.

Is it the "displayRGB" Chris was talking about? Is there actually a way
for an ICC profile to say "do not touch the values, forward them as is"?

Do I understand right that "displayRGB" is a special case where the
application provides content already in the color space of some output,
and to facilitate that, the app needs to tell which output it is for,
and for the app to first choose an output, the compositor needs to tell
which output it should be for?

I think I'm starting to see this, understanding that blending space can
be simply the output space in linear form.


Thanks,
pq
On 2019-03-04 14:45, Pekka Paalanen wrote:
> Hi Sebastian and Graeme
> 
> On Mon, 04 Mar 2019 13:37:06 +0100
> Sebastian Wick <sebastian@sebastianwick.net> wrote:
> 
>> On 2019-03-04 12:27, Pekka Paalanen wrote:
>> > On Mon, 4 Mar 2019 19:04:11 +1100
>> > Graeme Gill <graeme2@argyllcms.com> wrote:
>> >
>> >> Pekka Paalanen wrote:
>> >>
>> >> > My failing is that I haven't read about what ICC v4 definition actually
>> >> > describes, does it characterise content or a device, or is it more
>> >> > about defining a transformation from something to something without
>> >> > saying what something is.
>> >>
>> >> The ICC format encompasses several related forms. The one
>> >> that is pertinent to this discussion is ICC device profiles.
>> >>
>> >> At a minimum an ICC device profile characterizes a device's color
>> >> response by encoding a model of device values (i.e. RGB value
>> >> combinations) to device independent color values (i.e. values related
>> >> to device independent CIE XYZ, called Profile Connection Space values
>> >> in ICC terms). A simple model such as color primary values, white
>> >> point and per channel responses is easily invertible to allow
>> >> transformation in both directions.
>> >>
>> >> For less additive devices there are more general models (cLUT,
>> >> a multi-dimensional color Lookup Table), and they are non-trivial
>> >> to invert, so a profile contains both forward tables (device -> PCS
>> >> AKA A2B tables) and reverse tables (PCS -> device AKA B2A tables).
>> >>
>> >> Then there are intents. The most basic are Absolute Colorimetric
>> >> and Relative Colorimetric. The former relates the measured
>> >> values, while the latter assumes that the observer is adapted
>> >> to the white point of the device. Typically the difference is assumed
>> >> to be a simple chromatic adaptation transform that can be encoded
>> >> as the absolute white point or a 3x3 matrix. The default intent
>> >> is Relative Colorimetric because this is the transform of least
>> >> surprise.
>> >>
>> >> cLUT based profiles allow for two additional intents:
>> >> Perceptual, where out of gamut colors are mapped to be within
>> >> gamut while retaining proportionality, and Saturation, where
>> >> colors are expanded if possible to maximize colorfulness. These
>> >> two intents allow the profile creator considerable latitude in
>> >> how they achieve these aims, and they can only be encoded using
>> >> a cLUT model.
>> >
>> > Hi Graeme,
>> >
>> > thank you for taking the time to explain this, much appreciated.
>> >
>> > I'm still wondering, if an application uses an ICC profile for the
>> > content it provides and defines an intent with it, should a compositor
>> > apply that intent when converting from application color space to the
>> > blending color space in the compositor?
>> 
>> I think the correct approach would be to first convert from the
>> application color space to the output color space using the intent,
>> and then to the blending color space. That way all colors in the
>> blending color space will fit in the output color space.
>> 
>> > Should the same application provided intent be used when converting the
>> > composition result of the window to the output color space?
>> 
>> When all blending color sources are in the output color space, so is
>> the resulting color. No intent required.
> 
> Right, this is what I did not realize until Graeme explained it. Very
> good.
> 
>> >> > What is the use for a device link profile?
>> >>
>> >> Device link profile use was a suggestion to overcome the previously
>> >> stated impossibility of a client knowing which output a surface was
>> >> mapped to. Since this no longer appears to be impossible (due to
>> >> wl_surface.enter/leave events being available), device link profiles
>> >> should be dropped from the extension. It is sufficient that a client
>> >> can do its own color transformation to the primary output if it
>> >> chooses, while leaving the compositor to perform a fallback color
>> >> transform for any portion that is mapped to a secondary output, or
>> >> for any client that is color management unaware, or does not wish to
>> >> implement its own color transforms. This greatly reduces the
>> >> implementation burden on the compositor.
>> >
>> > Btw. wl_surface.enter/leave is not unambiguous, because the events may
>> > indicate multiple outputs simultaneously. I did talk with you about
>> > adding an event to define the one output the app should be optimizing
>> > for, but so far neither protocol proposal has that.
>> >
>> > Niels, Sebastian, would you consider such an event?
>> 
>> My proposal has the zwp_color_space_feedback_v1 interface, which is
>> trying to solve this issue by listing the color spaces a surface was
>> converted to, in order of importance.
> 
> Oh yes, indeed, sorry. We had a discussion going on about that.
> 
> Either advertising the main output for the surface, or the compositor's
> preferred color space for the surface.
> 
> I'm still a little confused here. If an application ICC profile
> describes how the content maps to PCS, and an output ICC profile
> describes how PCS maps to the output space, and we always need both in
> the compositor to have a meaningful transformation of content to output
> space, what was the benefit of the app knowing the "primary output" or
> "preferred color space" for a surface?

Not requiring a color space transformation at all (in the best case)
by supplying a surface with the color space of the output.

> Especially if device link profiles are taken out, which removes the
> possibility for the application to provide its own color transformation.

If the application has a device link profile it can just do the color
space conversion on its own and assign an ICC profile of the resulting
color space to the surface.

> Is it the "displayRGB" Chris was talking about? Is there actually a way
> for an ICC profile to say "do not touch the values, forward them as 
> is"?
> 
> Do I understand right that "displayRGB" is a special case where the
> application provides content already in the color space of some output,
> and to facilitate that, the app needs to tell which output it is for,
> and for the app to first choose an output, the compositor needs to tell
> which output it should be for?

It's not a special case in the protocol, because the client still just
assigns an ICC profile to the surface. But yes, if the color space of
the surface matches the color space of the display, the compositor can
skip the color space conversion and the colors from the client don't get
touched (except for blending, but even then colors with alpha one will
look exactly like the client intended them to look).

> I think I'm starting to see this, understanding that blending space can
> be simply the output space in linear form.
> 
> 
> Thanks,
> pq
Hello Sebastian and Pekka,

On 04.03.19 at 15:44, Sebastian Wick wrote:

> On 2019-03-04 14:45, Pekka Paalanen wrote:
...
> Not requiring a color space transformation at all (in the best case)
> by supplying a surface with the color space of the output.
>
>> Especially if device link profiles are taken out, which removes the
>> possibility for the application to provide its own color transformation.
With a
* device link profile, or with a
* color transform inside the application and the output profile
attached to the surface,
the application provides its own color transform in both cases. The
difference is that a transform inside the application usually runs
rather slowly on the CPU, and memory optimisation for the whole system
is not possible; each application is cooking on its own. With a device
link the conversion is performed by the compositor on the GPU, which is
much faster and can be optimised more easily, both in terms of
performance and memory.
> If the application has a device link profile it can just do the color
> space conversion on its own and assign an ICC profile of the resulting
> color space to the surface.

See above. Without offloading the conversion to the compositor, it
will typically be slower. Maybe I am wrong? But I do not see too many
applications doing GPU color management on their own. They certainly
do not share the color transforms in memory.

As an implementation detail, the compositor needs to cache the color
transforms in memory, as they are expensive to create on first run, or
even cache them on disk. The latter typically happens as a device link
profile. So the compositor might choose to support device links anyway.
In my experience with desktop color management and application color
management, the implementation effort with device links versus without
is similar.

thanks,
Kai-Uwe Behrmann
On Mon, Mar 4, 2019 at 4:27 AM Pekka Paalanen <ppaalanen@gmail.com> wrote:

> I'm still wondering, if an application uses an ICC profile for the
> content it provides and defines an intent with it, should a compositor
> apply that intent when converting from application color space to the
> blending color space in the compositor?

Well, in an ICC context it can be your (the programming infrastructure
designer and builder) choice to only offer perceptual rendering. That
is the only required tag in the profiles under discussion, all of
which are 'display class' profiles. That applies to real display
profiles (colord EDID based, or measurement based) as well as most
instances of "artificial" color spaces like sRGB, opRGB, ROMM RGB and
so on.

In an HDR context this is still an open question, but someone or
something will have to account for mismatches or suffer dramatic
consequences in rendering.


> Should the same application provided intent be used when converting the
> composition result of the window to the output color space?

It can be image specific. Again, for an ICC, standard dynamic range,
output referred context, it's sane to always and only permit
perceptual. There are use cases for absolute colorimetric, but they
are special ones, and for them to be effective you have to apply such
white point simulation across the entire UI. Because this is still
lacking on Windows and macOS, the way it's handled in the workflows
that require it is hardware-level calibration of the display white
point to match the print media white point. In that case, an absolute
colorimetric and a media-relative colorimetric transform are the same.

Also, because there are 'input class' (scanners and cameras) and
'output class' (printers and presses) ICC profiles, I tend to refer to
source profiles (the origin or starting point of a transform) and
destination profiles (the intended end point of a transform) rather
than to input and output color spaces. That is, a display is not an
output. It could be a source or a destination, by using its
'display class' profile.

> What would be a reasonable way to do those conversions, using which
> intents?

Always do perceptual rendering, and let special use cases do things
differently. Quite often media simulation implies CMYK, as in the case
of Scribus doing proof simulations; you probably don't need to get
involved in that, just give Scribus a way to opt out of display
compensation by indicating they've already done it.


> Do I understand correctly that an ICC profile can provide separate
> (A2B and B2A) cLUT for each intent?

It's rare outside of the output class, and occasional for the input
class. And in the output class case, it's mainly the B2A (PCS to
device) tables that differ, since the gamut mapping is baked into the
profile rather than intelligently/dynamically performed by an engine.
Most often I see output class profiles have one A2B table (device to
PCS), which is what applies in the application display use case, with
the rendering intent choices pointing to that table.
On Wednesday, 27 February 2019 at 14:17:07 CET, Pekka Paalanen wrote:
> On Tue, 26 Feb 2019 18:56:06 +0100
> 
> Kai-Uwe <ku.b-list@gmx.de> wrote:
> > On 26.02.19 at 16:48, Pekka Paalanen wrote:
> > > On Sun, 22 Jan 2017 13:31:35 +0100
> > > 
> > > Niels Ole Salscheider <niels_ole@salscheider-online.de> wrote:
> > >> Signed-off-by: Niels Ole Salscheider <niels_ole@salscheider-online.de>
> > > 
> > > My failing is that I haven't read about what ICC v4 definition actually
> > > describes, does it characterise content or a device, or is it more
> > > about defining a transformation from something to something without
> > > saying what something is.
> > 
> > ICC v4 is a specification from 2010 and became ISO 15076-1:2010 in
> > the same year. Both specs are technically identical. The standard
> > describes the content of a color profile format and gives some hints on
> > how to handle color transforms. V4 ICC profiles, like the earlier ICC v2
> > profiles, can describe device color characteristics in relation to a
> > reference color space (the Profile Connection Space, PCS: CIE XYZ or
> > CIE Lab). This most common variant is used for color space
> > characterisation, e.g. sRGB, or device characterisation (monitors,
> > cameras, ...). With this variant the compositor takes over
> > responsibility and uses intelligence to combine the input source
> > profile, perhaps effect profiles (for white point adjustment, red light
> > reduction etc.), and a final output profile into one color transform.
> > The transform is then applied as a 3D texture/shader, depending on the
> > actual compositor implementation.
> > 
> > An ICC profile class variant, called device link profiles, can describe
> > a color conversion without a reference color space, e.g. RGB->RGB. More
> > below
> 
> ...
> 
> > >> +    <request name="set_device_link_profile">
> > >> +      <description summary="set a device link profile for a wl_surface and wl_output">
> > >> +        With this request, a device link profile can be attached to a
> > >> +        wl_surface. For each output on which the surface is visible, the
> > >> +        compositor will check if there is a device link profile. If there is one
> > >> +        it will be used to directly convert the surface to the output color
> > >> +        space. Blending of this surface (if necessary) will then be performed in
> > >> +        the output color space and after the normal blending operations.
> > > Are those blending rules actually implementable?
> > > 
> > > It is not generally possible to blend some surfaces into a temporary
> > > buffer, convert that to the next color space, and then blend some more,
> > > because the necessary order of blending operations depends on the
> > > z-order of the surfaces.
> > > 
> > > What implications does this have on the CRTC color processing pipeline?
> > > 
> > > If a CRTC color processing pipeline, that is, the transformation from
> > > framebuffer values to on-the-wire values for a monitor, is already set
> > > up by the compositor's preference, what would a device link profile
> > > look like? Does it produce on-the-wire or blending space?
> > > 
> > > If the transformation defined by the device link profile produced
> > > values for the monitor wire, then the compositor will have to undo the
> > > CRTC pipeline transformation during composition for this surface, or it
> > > needs to reset CRTC pipeline setup to identity and apply it manually
> > > for all other surfaces.
> > > 
> > > What is the use for a device link profile?
> > 
> > A device link profile is useful to describe a transform from a buffer
> > to match one specific output. Device links can give applications very
> > fine-grained control to decide what they want done with their colors.
> > This is useful in case an application wants to circumvent the default
> > gamut mapping, optimise for each output connected to a computer, or add
> > color effects like proofing. The intelligence is inside the device link
> > profile and the compositor applies it as a dumb rule.
> 
> Hi Kai-Uwe,
> 
> right, thank you. I did get the feeling right on what it is supposed to
> do, but I have a hard time imagining how to implement that in a compositor
> that also needs to cater for other windows on the same output and blend
> them all together correctly.
> 
> Even without blending, it means that the CRTC color manipulation
> features cannot really be used at all, because there are two
> conflicting transformations to apply: from compositor internal
> (blending) space to the output space, and from the application content
> space through the device link profile to the output space. The only
> way that could be realized without any additional reverse
> transformations is that the CRTC is set as an identity pass-through,
> and both kinds of transformations are done in the composite rendering
> with OpenGL or Vulkan.
> 
> If we want device link profiles in the protocol, then I think that is
> the cost we have to pay. But that is just about performance, while to
> me it seems like correct blending would be impossible to achieve if
> there was another translucent window on top of the window using a
> device link profile. Or even worse, a stack like this:

We added the device link profiles for "professional" applications that want to 
have full control. I think these will mostly be photo or video editing 
applications. The advantage is that the application can be sure that the 
compositor does not "mess" with the colors, and that it can also provide device 
links for multiple outputs and look completely correct on multiple outputs 
without the help of the compositor.
In these situations you would not want any sort of blending of the surface 
that contains the photo / video frame / ... I don't know if we want to 
guarantee that it never happens (maybe the compositor makes the window a 
bit transparent while moving it around?) - but the assumption was that these 
would be rare cases and that we would accept blending in the output color 
space then.

I don't know if the device link profiles are *really* needed but I am not the 
right one to answer that...

> window B (color profile)
> window A (device link profile)
> wallpaper (color profile)
> 
> If both windows have translucency somewhere, they must be blended in
> that order. The blending of window A cannot be postponed after the
> others.
> 
> I guess that implies that if even one surface on an output uses a
> device link profile, then all blending must be done in the output color
> space instead of an intermediate blending space. Is that an acceptable
> trade-off?
> 
> Does that even make any difference if the output space was linear at the
> blending step, and gamma was applied after that?
> 
> > > If the client says the content is according to a certain specification
> > > (set_color_profile) and the compositor combines that with the output's
> > > color profile to produce the on-the-wire pixel values, how would
> > > setting a device link profile produce something different?
> > > 
> > >> +        The device link profile is applied after the next wl_surface.commit
> > >> +        request.
> > > 
> > > "The device link profile is double-buffered state, see
> > > wl_surface.commit."
> > > 
> > >> +      </description>
> > >> +      <arg name="surface" type="object" interface="wl_surface"
> > >> +           summary="the surface for which the device link profile should be used" />
> > >> +      <arg name="output" type="object" interface="wl_output"
> > >> +           summary="the output for which the device link profile was created" />
> > >> +      <arg name="device_link_profile" type="object"
> > >> +           interface="zwp_color_profile_v1" summary="the device link profile" />
> > >> +    </request>
> 
> Thanks,
> pq
Chris Murphy wrote:

> 1.0. So yeah for HDR that information is useless and is one of the
> gotchas with ICC display class profiles. There are optional tags
> defined in the spec for many years now to include measured display
> black and white luminance. For HDR applications it would seem it'd
> have to be required information.

It's easy enough to mandate that a profile for a display
marked as HDR include the luminance tag. Or simply
reject any display profile that is missing the tag.
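
A sketch of that check, assuming lcms2 (the policy and the helper name
are illustrative only):

#include <lcms2.h>
#include <stdbool.h>

/* Accept a display profile for an HDR output only if it carries the
 * (optional) luminance tag; its Y holds the white luminance in cd/m^2. */
static bool profile_usable_for_hdr(cmsHPROFILE profile)
{
    const cmsCIEXYZ *lum = cmsReadTag(profile, cmsSigLuminanceTag);

    return lum != NULL && lum->Y > 0.0;
}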

Cheers,
	Graeme.
Pekka Paalanen wrote:
> Sebastian's protocol proposal includes render intent from applications.
> Conversion of client content to the blending space should ideally be
> lossless, so the render intent in that step should be irrelevant if I
> understand right. How to deal with render intent when converting from
> blending space to output space is not clear to me, since different
> windows may have different intents. Using the window's intent for the
> window's pixels works only if the pixel in the framebuffer comes from
> exactly one window and not more.

Conceptually the conversion from source space to destination
is a single step, with a single intent. So conceptually any
conversions after this for the purposes of blending don't
have to pay any regard to intent - it's already applied.

Blending could convert from the device space back
to XYZ, blend, and then convert back to device space.
It would use whatever intent is appropriate for blending
purposes, i.e. probably Relative Colorimetric.
But I doubt this is the best approach. (see my previous
post on blending.)

Cheers,
	Graeme Gill.
Pekka Paalanen wrote:

Hi,

> I'm still wondering, if an application uses an ICC profile for the
> content it provides and defines an intent with it, should a compositor
> apply that intent when converting from application color space to the
> blending color space in the compositor?

yes it should, since this is a standard expectation of applications
that make use of ICC profiles to do color management, and it is
commonly supported by the CMMs used to perform such conversions.

> Should the same application provided intent be used when converting the
> composition result of the window to the output color space?

No, this wouldn't make sense. The application's intent in converting
from the source space to the display's space has been executed and
is then represented in the output space pixel values.

In any case, I don't think the compositor should be changing
colorspaces for the purposes of compositing buffers together
for an output. All the source elements to be composed should
be in the output colorspace before composition. The only
modifications I can imagine would be for the purposes of linear
light blending, and this should aim to have no impact
on the color fidelity.

> Do I understand correctly that an ICC profile can provide separate
> (A2B and B2A) cLUT for each intent?

Yes, that is how it is implemented in cLUT based profiles. There is
normally no table for Absolute Colorimetric - the Relative
Colorimetric table is used with an adjustment based
on the white point tag. (An exception is the ICC v4
floating point cLUTs - it's possible to have an explicit
Absolute Colorimetric table in that case.)
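
As a worked form of that adjustment (per the ICC spec, as commonly
implemented), the absolute colorimetric PCS values follow from the
relative ones by a per-component scaling with the media white point:

    XYZ_abs = XYZ_rel * (XYZ_mw / XYZ_D50)    (componentwise)

where XYZ_mw is the profile's media white point tag and XYZ_D50 is the
PCS illuminant white.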

> Btw. wl_surface.enter/leave is not unambiguous, because the events may
> indicate multiple outputs simultaneously. I did talk with you about
> adding an event to define the one output the app should be optimizing
> for, but so far neither protocol proposal has that.

Right, neither proposed protocol seems to make allowances
for some of the suggestions discussed.

I'm maintaining a summary of my current thinking here:
<http://www.argyllcms.com/WaylandCM_v1.txt> for anyone interested.

Cheers,
	Graeme Gill.


Regards
Shashank

> -----Original Message-----
> From: wayland-devel [mailto:wayland-devel-bounces@lists.freedesktop.org] On
> Behalf Of Pekka Paalanen
> Sent: Wednesday, March 6, 2019 7:10 PM
> To: Graeme Gill <graeme2@argyllcms.com>; Niels Ole Salscheider
> <niels_ole@salscheider-online.de>; Sebastian Wick <sebastian@sebastianwick.net>;
> Nautiyal, Ankit K <ankit.k.nautiyal@intel.com>
> Cc: Kai-Uwe <ku.b-list@gmx.de>; wayland-devel@lists.freedesktop.org; Adam
> Jackson <ajax@redhat.com>; graeme@argyllcms.com; Chris Murphy
> <lists@colorremedies.com>; Sharma, Shashank <shashank.sharma@intel.com>
> Subject: Re: [RFC wayland-protocols v2 1/1] Add the color-management protocol
> 
> On Wed, 6 Mar 2019 21:26:56 +1100
> Graeme Gill <graeme2@argyllcms.com> wrote:
> 
> > I'm maintaining a summary of my current thinking here:
> > <http://www.argyllcms.com/WaylandCM_v1.txt> for anyone interested.
> 
> Hi Graeme,
> 
> so very good you made a write-up! The email threads are growing unwieldy to search
> for a specific topic or question. While your write-up has some things I'm unsure of and
> some things I cannot claim to understand yet, I do largely agree with it at this point.
> 
> Sebastian, Niels, Ankit, Shashank,
> 
> I hope that you will cooperate and work on a common protocol proposal, so that I do
> not have to try to pick a better one. ;-) I also hope that you cooperate on the Weston
> implementation side to avoid duplicated work.
> 
> Chris, Adam,
> 
> thank you very much for your input!
> 
> I have a feeling that there has been enough discussion for the moment, that I would
> like to see the next joint effort protocol proposal before my input could be useful
> again.
Agree, Pekka. We were already thinking of publishing a sample merge of HDR protocol patches, built on top of Sebastian's color management protocol, so that all can comment directly on the code. Ankit will soon publish the code. 
- Shashank
> 
> 
> Thank you everyone,
> pq
On Wed, Mar 6, 2019 at 5:28 AM Pekka Paalanen <ppaalanen@gmail.com> wrote:
>
> On Wed, 6 Mar 2019 13:44:39 +1100
> Graeme Gill <graeme2@argyllcms.com> wrote:
>
> > Blending could convert from the device space back
> > to XYZ, blend, and then convert back to device space.
> > It would use whatever intent is appropriate for blending
> > purposes, i.e. probably Relative Colorimetric.
> > But I doubt this is the best approach. (see my previous
> > post on blending.)
>
> Hi Graeme,
>
> yes, you're right. I wrote that before I understood that the
> destination space can also be used for blending, it only needs to be
> linearized.

If the destination color space is defined by an actual display profile
(actual being a profile for someone's physical display; rather than an
idealized space such as sRGB, opRGB, ROMM RGB) you have to be careful
about some things.

Real displays often are not gray balanced. Calibration should achieve
this, often it can't. What if the display is characterized and not
calibrated to force R=G=B to be a neutral. And also what is neutral?
Usually the black point xy (chromaticity, CIE xyY space) is ignored
and assumed to be the same as the white point xy, but what if it
isn't?

Also, display profiles can contain errors. In particular the case
where the xy values for the primaries don't add up to the white point
xy; that is, R+G+B != white, which is a basic violation of physics but
does end up getting baked into some ICC profiles, typically because
someone did adaptation incorrectly or made some wrong assumption
somewhere. Firefox has some code to disqualify such display profiles
and just use sRGB instead. I think colord can do this too, but I don't
think the metrics for disqualification are the same. Ideally, whatever
is used to set the display profile would do this kind of check and
immediately disqualify a bad display profile with a user notification.
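
A sketch of that kind of disqualification check, assuming lcms2 and a
matrix ("shaper") display profile, whose D50-adapted colorants should
sum to the PCS white (the tolerance is a guess; Firefox's and colord's
actual criteria differ):

#include <lcms2.h>
#include <math.h>
#include <stdbool.h>

/* Disqualify matrix display profiles whose R+G+B colorants do not sum
 * to the PCS (D50) white. The tolerance is an arbitrary assumption. */
static bool primaries_sum_to_white(cmsHPROFILE p, double tol)
{
    const cmsCIEXYZ *r = cmsReadTag(p, cmsSigRedColorantTag);
    const cmsCIEXYZ *g = cmsReadTag(p, cmsSigGreenColorantTag);
    const cmsCIEXYZ *b = cmsReadTag(p, cmsSigBlueColorantTag);
    const cmsCIEXYZ *d50 = cmsD50_XYZ();

    if (!r || !g || !b)
        return false;   /* cLUT profile or broken: needs other checks */

    return fabs(r->X + g->X + b->X - d50->X) < tol &&
           fabs(r->Y + g->Y + b->Y - d50->Y) < tol &&
           fabs(r->Z + g->Z + b->Z - d50->Z) < tol;
}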

I tend to refer to sRGB, opRGB/Adobe RGB (1998) and ROMM/ProPhoto RGB
as "manufactured", "idealized" or "quasi device-independent" color
spaces because they are well behaved: you can assume equal amounts of
RGB will produce a neutral, and you can assume they are approximately
perceptually linear (if you're also assuming the reference medium and
reference environment their color image encodings specify, and why not
assume that, since everyone else does?).

It's not clear to me what the advantage is of blending in 32 bpc XYZ
versus blending in a 32 bpc "manufactured" RGB color space. I literally
can't think of anyone actually doing that. Plus, aren't most blending
algorithms already RGB based? So why change that?

The potential to blend in such a space and end up with massively
chromatic colors that can only be clipped to the destination gamut
boundary when converting to displayRGB, is real. We already run into
this problem, sometimes, when the blending space is linear ProPhoto
RGB (same primaries as ROMM RGB, using gamma 1.0 function instead of
1.8).

A possibly important side topic to understand about rendering intent
transforms: they are not intelligent or dynamic.
Kai-Uwe wrote:

Hi Kai-Uwe,

> See above. Without offloading the conversion to the compositor, that
> will be typical slower. Maybe I am wrong? But I do not see too many
> applications doing GPU color management by their own. For certain they
> do not share the color transforms in memory.

but the point is to allow an application to do its own
color management if it needs to. You can't offload every
possible color conversion to the Compositor; it's too open
ended. It would have to handle up to 15 channels, and compose
minute elements of the application sources (consider e.g.
a PDF proofing application where different elements have
different source colorspaces and intents), etc.

Suggesting that an application convert to some sort
of intermediate colorspace has problems too:
it doesn't save on execution time (in fact it increases
it by needing a double conversion), and it wrecks any
sort of nuanced gamut mapping. If the intermediate
colorspace is smaller than the display it will unnecessarily
limit the gamut, and if it is bigger it will clip or over-expand,
rather than ending up with the same result as
(say) an application that creates an "intelligent" gamut
mapping.

So the suggestion is that the compositor have limited
color conversion capabilities so that it can convert
between a buffer and a secondary display when a surface
is multiply mapped, as well as provide default color management
for applications that are color unaware, or do not wish
to implement color management themselves.
The limited scope also makes optimizing the compositor
implementation quite tractable.

> As a implementation detail, the compositor needs to cache the color
> transforms in memory as they are expensive on first run or cache even on
> disk. The later happens typical as device link profile. So, the
> compositor might choose to support device links anyway. In my experience
> on desktop color management and application color management, the
> implementation effort for device link versus without is similar.

Yes, caching the link or (perhaps) matrix & LUTs or 3D texture is a necessary performance
optimization in the compositor implementation. But this should be invisible
to the applications.

Cheers,
	Graeme Gill.
Niels Ole Salscheider wrote:

Hi,

> We added the device link profiles for "professional" applications that want to 
> have full control. 

Right, but with the availability of wl_surface.enter/leave events, the
client can keep track of which displays the surface is mapped to,
and do its own color conversion for the highest priority
display. So device links aren't needed. This simplifies the
compositor requirements considerably (no need to support
buffers with up to 15 channels).

> In these situations you would not want any sort of blending of the surface 
> that contains the photo / video frame / ... I don't know if we want to 
> guarantee that it does never happen (maybe the compositor makes the window a 
> bit transparent while moving it around?) - but the assumption was that these 
> would be rare cases and that we would accept blending in the output color 
> space then.

Blending isn't a problem if it doesn't alter the pixel values
for opaque surfaces.

Cheers,
	Graeme Gill.
Chris Murphy wrote:

Hi Chris,

> Real displays often are not gray balanced. Calibration should achieve
> this, often it can't. What if the display is characterized and not
> calibrated to force R=G=B to be a neutral. And also what is neutral?
> Usually the black point xy (chromaticity, CIE xyY space) is ignored
> and assumed to be the same as the white point xy, but what if it
> isn't?

Right, but I don't think that's relevant to alpha blending in a per channel
linearized device space. Given that the main alternative is blending
in the non-linear device space, anything closer to linear light will
give more natural blending behavior, and if you blend any combination
of the same color with alpha values that add up to 1.0, the color
should be untouched. (i.e. I'm not proposing that the space be
used for color edits!)

> The potential to blend in such a space and end up with massively
> chromatic colors that can only be clipped to the destination gamut
> boundary when converting to displayRGB, is real. We already run into
> this problem, sometimes, when the blending space is linear ProPhoto
> RGB (same primaries as ROMM RGB, using gamma 1.0 function instead of
> 1.8).

Properly ordered alpha blends should never add up to greater than 1.0,
so there is no possibility of the blended result exceeding the
space gamut. Of course it's far easier to do a per component
clip if needed in a linearised display space.

Cheers,
	Graeme.
On Wed, Mar 6, 2019 at 9:53 PM Graeme Gill <graeme2@argyllcms.com> wrote:
>
> Right, but I don't think that's relevant to alpha blending in a per channel
> linearized device space.

I totally missed that this is only alpha blending. I was thinking
general purpose for arbitrary blending, e.g. the now ISO defined (via
PDF) blending operations.
On 07.03.19 at 02:47, Graeme Gill wrote:
> Niels Ole Salscheider wrote:
>
> Hi,
>
>> We added the device link profiles for "professional" applications that want to 
>> have full control. 
> Right, but with the availability of wl_surface.enter/leave events, the
> client can keep track of which displays the surface is mapped to,
> and do its own color conversion for the highest priority
> display. So device links aren't needed. This simplifies the

Device links are a serialisation of a color transform. lcms supports
dumping color transforms to device links and loading a device link back
into a color transform. Device link profiles come with little effort.
Device links provide applications an entry point to let the compositor
do custom color space conversions on the GPU.
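
A sketch of that round trip, assuming lcms2 (the file name and wrapper
function are illustrative only):

#include <lcms2.h>

/* Serialise an existing transform as a device link profile, then load
 * it back. For device links, cmsCreateTransform takes the link as the
 * input profile and NULL as the output profile. */
static void devicelink_roundtrip(cmsHTRANSFORM xform)
{
    /* dump: wrap the transform into an ICC v4.3 device link and save it */
    cmsHPROFILE link = cmsTransform2DeviceLink(xform, 4.3, 0);
    cmsSaveProfileToFile(link, "cached-transform.icc");
    cmsCloseProfile(link);

    /* load: use the device link as the sole profile of a new transform */
    cmsHPROFILE loaded = cmsOpenProfileFromFile("cached-transform.icc", "r");
    cmsHTRANSFORM t = cmsCreateTransform(loaded, TYPE_RGB_FLT,
                                         NULL, TYPE_RGB_FLT,
                                         INTENT_PERCEPTUAL, 0);
    /* ... use t with cmsDoTransform() ... */
    cmsDeleteTransform(t);
    cmsCloseProfile(loaded);
}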

> compositor requirements considerably (no need to support
> buffers with up to 15 channels).
Constrain device links to 3 in/out color channels, as for the surface
color space. No need for 15 channels here.

regards,
Kai-Uwe Behrmann
Kai-Uwe wrote:

Hi,

> Device links are a serialisation of a color transform. lcms supports to
> dump color transforms to device links and load a device link into a
> color transform. Device link profiles come with little effort. Device
> links provide applications a entry to let the compositor do custom color
> space conversions on the GPU.

The problem is that device links don't fulfill the tagging purpose
needed for the Compositor to be able to re-target a buffer at
a different display. So yes, you are right that there is scope
to off-load a 3D->3D link conversion to the Compositor, but it complicates
the protocol, and I don't think the extra complexity is necessary
for an initial implementation. But I will have a think about how it
would fit in.

And (correct me if I'm wrong but) isn't the whole purpose of Wayland that
the client is responsible for rendering, and is given the tools to be
able to do it? I.e. it has access to the buffers and GPU if it wants to
implement its own color management that way.

Cheers,
	Graeme.