[xserver] os: Call FlushClient() before sending FD-passing messages

Submitted by Alexander Volkov on March 30, 2018, 11:18 a.m.

Details

Message ID 20180330111855.1849-1-a.volkov@rusbitech.ru
State New
Series "os: Call FlushClient() before sending FD-passing messages"

Commit Message

Alexander Volkov March 30, 2018, 11:18 a.m.
Otherwise a client may receive data with an unrelated file
descriptor attached after calling recvmsg() if its input buffer is
not big enough. In libxcb this may lead to a situation where all
received messages fit in the buffer while the message related to
the attached fd has not been received yet. libxcb then can't find
the corresponding message and fails with an
XCB_CONN_CLOSED_FDPASSING_FAILED error.
---
 os/io.c | 3 +++
 1 file changed, 3 insertions(+)


diff --git a/os/io.c b/os/io.c
index b099f0967..f4e80557d 100644
--- a/os/io.c
+++ b/os/io.c
@@ -488,6 +488,9 @@  WriteFdToClient(ClientPtr client, int fd, Bool do_close)
 #if XTRANS_SEND_FDS
     OsCommPtr oc = (OsCommPtr) client->osPrivate;
 
+    if (oc->output && oc->output->count > 0)
+        (void) FlushClient(client, oc, (char *) NULL, 0);
+
     return _XSERVTransSendFd(oc->trans_conn, fd, do_close);
 #else
     return -1;
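
The failure mode in the commit message follows from how SCM_RIGHTS works: the
kernel attaches the passed fd to whichever recvmsg() call consumes the byte
range it was sent with, not to any particular protocol message. A minimal
sketch of a client-side receive path makes this visible (a hypothetical
helper, not libxcb's actual code):

    #include <string.h>
    #include <sys/socket.h>
    #include <sys/uio.h>

    /* Hypothetical helper, not libxcb code: whatever bytes this
     * particular recvmsg() returns, any SCM_RIGHTS fd sent alongside
     * them is delivered here, even if the reply it belongs to has
     * not been read yet. */
    static int read_with_fd(int sock, char *buf, size_t len, int *fd_out)
    {
        struct iovec iov = { .iov_base = buf, .iov_len = len };
        char cmsgbuf[CMSG_SPACE(sizeof(int))];
        struct msghdr msg = {
            .msg_iov = &iov,
            .msg_iovlen = 1,
            .msg_control = cmsgbuf,
            .msg_controllen = sizeof(cmsgbuf),
        };
        ssize_t n = recvmsg(sock, &msg, 0);

        *fd_out = -1;
        if (n >= 0) {
            struct cmsghdr *cmsg = CMSG_FIRSTHDR(&msg);
            if (cmsg && cmsg->cmsg_level == SOL_SOCKET &&
                cmsg->cmsg_type == SCM_RIGHTS)
                memcpy(fd_out, CMSG_DATA(cmsg), sizeof(int));
        }
        return (int) n;
    }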

Comments

Keith Packard March 31, 2018, 7 p.m.
Alexander Volkov <a.volkov@rusbitech.ru> writes:

> Otherwise a client may receive data with an unrelated file
> descriptor attached after calling recvmsg() if its input buffer is
> not big enough. In libxcb this may lead to a situation where all
> received messages fit in the buffer while the message related to
> the attached fd has not been received yet. libxcb then can't find
> the corresponding message and fails with an
> XCB_CONN_CLOSED_FDPASSING_FAILED error.

Thanks for looking into this; it looks like a problem to me as well
because xcb does this:

        /* If we have any left-over file descriptors after emptying
         * the input buffer, then the server sent some that we weren't
         * expecting.  Close them and mark the connection as broken;
         */
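
In rough outline, the policy that comment describes looks like this (a toy
sketch with invented names, not the verbatim libxcb source):

    #include <unistd.h>

    #define MAX_PASS_FD 16      /* mirrors libxcb's XCB_MAX_PASS_FD */

    struct fd_queue {
        int fds[MAX_PASS_FD];
        unsigned count;
    };

    /* Called once the input buffer has been fully parsed: any fds
     * still queued had no matching reply, so close them and let the
     * caller mark the connection broken
     * (XCB_CONN_CLOSED_FDPASSING_FAILED). */
    static int drop_unexpected_fds(struct fd_queue *q)
    {
        int had_leftovers = (q->count > 0);

        while (q->count > 0)
            close(q->fds[--q->count]);
        return had_leftovers;
    }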

However, fixing this in the server is harder than it appears...

> +    if (oc->output && oc->output->count > 0)
> +        (void) FlushClient(client, oc, (char *) NULL, 0);
> +

FlushClient doesn't empty the buffer when the kernel buffers are full,
so while calling it here may reduce the occurrence of the problem, it
won't eliminate it.

One way to fix this would be to have the X server OS layer *also* buffer
fds and ensure that they are presented to Xtrans just before the reply
which uses them. Alternatively, we could fix xcb so that it kept the fds
around for 'a while' instead of discarding them immediately. The latter
approach seems a lot easier to me?
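
Roughly, the server-side alternative could look like this sketch (invented
names and layout, not actual xserver code): WriteFdToClient() would enqueue
the fd on the connection instead of handing it straight to Xtrans, and the
flush path would emit queued fds together with the first byte of the reply
that uses them.

    #define MAX_QUEUED_FDS 16

    struct queued_fd {
        int fd;
        int do_close;       /* mirrors WriteFdToClient's do_close flag */
    };

    struct conn_fd_queue {
        struct queued_fd q[MAX_QUEUED_FDS];
        unsigned count;
    };

    /* Sketch: queue the fd instead of sending it immediately; the
     * flush path would pass queued fds to _XSERVTransSendFd() right
     * before writing the reply bytes that reference them. */
    static int queue_fd_for_reply(struct conn_fd_queue *cq, int fd, int do_close)
    {
        if (cq->count >= MAX_QUEUED_FDS)
            return -1;
        cq->q[cq->count].fd = fd;
        cq->q[cq->count].do_close = do_close;
        cq->count++;
        return 0;
    }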
Alexander Volkov April 3, 2018, 5:06 p.m.
On 31.03.2018 22:00, Keith Packard wrote:
>> +    if (oc->output && oc->output->count > 0)
>> +        (void) FlushClient(client, oc, (char *) NULL, 0);
>> +
> FlushClient doesn't empty the buffer when the kernel buffers are full,
> so while calling it here may reduce the occurrence of the problem, it
> won't eliminate it.
>
> One way to fix this would be to have the X server OS layer *also* buffer
> fds and ensure that they are presented to Xtrans just before the reply
> which uses them. Alternatively, we could fix xcb so that it kept the fds
> around for 'a while' instead of discarding them immediately. The latter
> approach seems a lot easier to me?
Yes, it would be easier to fix this in libxcb, but I believe that it
would be more correct to do this in the X server. At least I want to
try to fix the X server.
Keith Packard April 3, 2018, 6:57 p.m.
Alexander Volkov <a.volkov@rusbitech.ru> writes:

> Yes, it would be easier to fix this in libxcb, but I believe that it
> would be more correct to do this in the X server. At least I want to
> try to fix the X server.

Hrm.

The problem is that there are two streams of data here -- the stream of
fds and the stream of replies. xcb currently insists that they remain
aligned so that any fds are associated with the reply data received in
the same recvmsg operation. This allows an X server to send fds which
the client is not expecting, with the client silently closing them.

If we let the two streams run un-aligned, then there will be a queue of
received fds that should (unless there's a bug) eventually get
associated with the correct reply. The requirement here is looser -- we
just need to make sure the fds arrive no later than the reply data. This
seems much easier to achieve, and in fact the current X server code does
this today.

Changing xcb to allow early fds involves removing code that closes fds
received before the associated reply.

Changing the X server to ensure that fds are delivered exactly with the
associated reply data involves adding code to queue the fds and insert
additional flushes to make sure the kernel writes the fds with the start
of the reply, and then making sure that xcb doesn't discard fds received
with a partial reply.

Given these two possible solutions, I'd like to suggest that we might
prefer the simpler one.
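
To make the looser invariant concrete: the client side reduces to a FIFO,
where fds may arrive early and each fd-carrying reply pops from the front of
the queue as it is parsed. A toy sketch (reusing the fd_queue shape from the
sketch above, not libxcb's real structures):

    /* Pop the oldest queued fd when a reply that expects one is
     * parsed. An empty queue here means an fd arrived later than
     * its reply, which the looser invariant forbids. */
    static int take_queued_fd(struct fd_queue *q)
    {
        unsigned i;
        int fd;

        if (q->count == 0)
            return -1;
        fd = q->fds[0];
        for (i = 1; i < q->count; i++)
            q->fds[i - 1] = q->fds[i];
        q->count--;
        return fd;
    }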
Alexander Volkov April 9, 2018, 1:59 p.m.
On 03.04.2018 21:57, Keith Packard wrote:
> Alexander Volkov <a.volkov@rusbitech.ru> writes:
>
>> Yes, it would be easier to fix this in libxcb, but I believe that it
>> would be more correct to do this in the X server. At least I want to
>> try to fix the X server.
> Hrm.
>
> The problem is that there are two streams of data here -- the stream of
> fds and the stream of replies. xcb currently insists that they remain
> aligned so that any fds are associated with the reply data received in
> the same recvmsg operation. This allows an X server to send fds which
> the client is not expecting, with the client silently closing them.
>
> If we let the two streams run un-aligned, then there will be a queue of
> received fds that should (unless there's a bug) eventually get
> associated with the correct reply. The requirement here is looser -- we
> just need to make sure the fds arrive no later than the reply data. This
> seems much easier to achieve, and in fact the current X server code does
> this today.
>
> Changing xcb to allow early fds involves removing code that closes fds
> received before the associated reply.
>
> Changing the X server to ensure that fds are delivered exactly with the
> associated reply data involves adding code to queue the fds and insert
> additional flushes to make sure the kernel writes the fds with the start
> of the reply, and then making sure that xcb doesn't discard fds received
> with a partial reply.
>
> Given these two possible solutions, I'd like to suggest that we might
> prefer the simpler one.
libxcb stores received file descriptors in a buffer of size 16
(XCB_MAX_PASS_FD). Is it possible that the X server will send more
than 16 fds in a single reply and overflow libxcb's buffer?
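
In terms of the toy queue sketched earlier, the overflow case is the enqueue
side (again a sketch, not libxcb's actual code):

    /* A 17th fd arriving before any reply drains the queue has
     * nowhere to go; the real libxcb would have to drop it or shut
     * the connection down. */
    static int queue_received_fd(struct fd_queue *q, int fd)
    {
        if (q->count >= MAX_PASS_FD)
            return -1;
        q->fds[q->count++] = fd;
        return 0;
    }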
Keith Packard April 10, 2018, 3:14 a.m.
Alexander Volkov <a.volkov@rusbitech.ru> writes:

> libxcb stores received file descriptors in a buffer of size 16
> (XCB_MAX_PASS_FD). Is it possible that the X server will send more
> than 16 fds in a single reply and overflow libxcb's buffer?

It wouldn't be if the X server were careful in flushing things, but as
that seems 'hard', we should probably fix xcb. I don't think that's
terribly urgent as it would take a very strange situation to pass 16 fds
in a short amount of time, especially in such close proximity as to end
up not getting a reply that processes any of them in the middle of the
sequence.
Giuseppe Bilotta April 10, 2018, 5:29 a.m.
On Tue, Apr 10, 2018 at 5:14 AM, Keith Packard <keithp@keithp.com> wrote:
>> libxcb stores received file descriptors in a buffer of size 16
>> (XCB_MAX_PASS_FD). Is it possible that the X server will send more
>> than 16 fds in a single reply and overflow libxcb's buffer?
>
> It wouldn't be if the X server were careful in flushing things, but as
> that seems 'hard', we should probably fix xcb. I don't think that's
> terribly urgent as it would take a very strange situation to pass 16 fds
> in a short amount of time, especially in such close proximity as to end
> up not getting a reply that processes any of them in the middle of the
> sequence.

Unless this is done intentionally by a malicious server to overflow
the client's xcb buffer.