libdrm amdgpu semaphores questions

Submitted by Zhou, David (ChunMing) on Dec. 1, 2016, 6:11 a.m.

Details

Message ID 583FBF14.8000506@amd.com
State New
Series "libdrm amdgpu semaphores questions" ( rev: 1 ) in AMD X.Org drivers


Commit Message

Zhou, David (ChunMing), Dec. 1, 2016, 6:11 a.m.
Hi Dave,

As the attached, our Vulkan team is verifying it.

Thanks,
David Zhou

On Dec. 1, 2016, 13:44, Dave Airlie wrote:
>
> On 1 Dec. 2016 15:22, "zhoucm1" <david1.zhou@amd.com> wrote:
> >
> > Yes, the old implementation which is already in upstream libdrm is
> > out of date; there is no other user, so we want to drop it once the
> > new semaphore is verified OK.
>
> Could you post some patches for the new one? Otherwise I'll have to 
> write one for radv.
>
> Dave.
> >
> > Thanks,
> > David Zhou
> >
> >
> > On Dec. 1, 2016, 10:36, Mao, David wrote:
> >>
> >> Hi Dave,
> >> I believe your first attempt is correct.
> >> The export/import semaphore needs a refinement of the semaphore
> >> implementation.
> >> We are working on that.
> >>
> >> Thanks.
> >> Best Regards,
> >> David
> >>>
> >>> On 1 Dec 2016, at 10:12 AM, Dave Airlie <airlied@gmail.com> wrote:
> >>>
> >>> Hey all,
> >>>
> >>> So I've started adding semaphore support to radv but I'm not really
> >>> sure what the API to the semaphore code is.
> >>>
> >>> The Vulkan API gives you a command submission made up of a number of
> >>> submit units, each of which has 0-n wait semaphores, 0-n command buffers
> >>> and 0-n signal semaphores.
> >>>
> >>> Now I'm not sure how I should use the APIs with those.
> >>>
> >>> My first attempt is:
> >>>
> >>> call amdgpu_cs_wait_semaphore on all the wait ones, call the CS submit
> >>> API, then call amdgpu_cs_signal_semaphore on all the signal ones?
> >>>
> >>> Or should I be calling wait/signal up front and then submitting the
> >>> command streams?
> >>>
> >>> Also, upcoming work will possibly require sharing semaphores between
> >>> processes; is there any indication how this might be made to work with
> >>> the libdrm_amdgpu semaphore implementation?
> >>>
> >>> Thanks,
> >>> Dave.
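
For reference, the first attempt described above (wait semaphores, then the CS
submission, then signal semaphores, all on one context) maps onto the old
in-libdrm semaphore API roughly as in the following sketch. The helper name is
made up for illustration, ip_instance and ring are assumed to be 0, and error
handling is reduced to passing return codes through:

#include <amdgpu.h>

/* Sketch of one Vulkan submit unit on the old in-libdrm semaphore API:
 * queue the waits, submit the command buffers, then queue the signals.
 * submit_unit() is an illustrative name only; ip_instance/ring are 0. */
static int submit_unit(amdgpu_context_handle ctx, unsigned ip_type,
		       amdgpu_semaphore_handle *waits, unsigned num_waits,
		       struct amdgpu_cs_request *requests, unsigned num_requests,
		       amdgpu_semaphore_handle *signals, unsigned num_signals)
{
	unsigned i;
	int r;

	/* Waits are queued on the ring ahead of this unit's command buffers. */
	for (i = 0; i < num_waits; i++) {
		r = amdgpu_cs_wait_semaphore(ctx, ip_type, 0, 0, waits[i]);
		if (r)
			return r;
	}

	/* The command buffers of the submit unit. */
	r = amdgpu_cs_submit(ctx, 0, requests, num_requests);
	if (r)
		return r;

	/* Signals are queued after the last submission of the unit. */
	for (i = 0; i < num_signals; i++) {
		r = amdgpu_cs_signal_semaphore(ctx, ip_type, 0, 0, signals[i]);
		if (r)
			return r;
	}
	return 0;
}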

Patch

From 4fe868d8927dcda425179bb4840217c23960d429 Mon Sep 17 00:00:00 2001
From: Chunming Zhou <David1.Zhou@amd.com>
Date: Thu, 25 Aug 2016 17:06:37 +0800
Subject: [PATCH 2/2] tests/amdgpu: add sem test

Change-Id: Ibeb173d980a516845d4df7dd23dc54ff1c06f63a
Signed-off-by: Chunming Zhou <David1.Zhou@amd.com>
---
 tests/amdgpu/basic_tests.c | 130 +++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 130 insertions(+)

diff --git a/tests/amdgpu/basic_tests.c b/tests/amdgpu/basic_tests.c
index e1aaffc..6cc8442 100644
--- a/tests/amdgpu/basic_tests.c
+++ b/tests/amdgpu/basic_tests.c
@@ -50,6 +50,7 @@  static void amdgpu_command_submission_sdma(void);
 static void amdgpu_command_submission_multi_fence(void);
 static void amdgpu_userptr_test(void);
 static void amdgpu_semaphore_test(void);
+static void amdgpu_sem_test(void);
 static void amdgpu_svm_test(void);
 static void amdgpu_multi_svm_test(void);
 static void amdgpu_va_range_test(void);
@@ -63,6 +64,7 @@  CU_TestInfo basic_tests[] = {
 	{ "Command submission Test (SDMA)", amdgpu_command_submission_sdma },
 	{ "Command submission Test (Multi-fence)", amdgpu_command_submission_multi_fence },
 	{ "SW semaphore Test",  amdgpu_semaphore_test },
+	{ "sem Test",  amdgpu_sem_test },
 	{ "VA range Test", amdgpu_va_range_test},
 	{ "SVM Test", amdgpu_svm_test },
 	{ "SVM Test (multi-GPUs)", amdgpu_multi_svm_test },
@@ -646,6 +648,134 @@  static void amdgpu_semaphore_test(void)
 	CU_ASSERT_EQUAL(r, 0);
 }
 
+static void amdgpu_sem_test(void)
+{
+	amdgpu_context_handle context_handle[2];
+	amdgpu_sem_handle sem;
+	amdgpu_bo_handle ib_result_handle[2];
+	void *ib_result_cpu[2];
+	uint64_t ib_result_mc_address[2];
+	struct amdgpu_cs_request ibs_request[2] = {0};
+	struct amdgpu_cs_ib_info ib_info[2] = {0};
+	struct amdgpu_cs_fence fence_status = {0};
+	uint32_t *ptr;
+	uint32_t expired;
+	amdgpu_bo_list_handle bo_list[2];
+	amdgpu_va_handle va_handle[2];
+	int r, i;
+
+	r = amdgpu_cs_create_sem(device_handle, &sem);
+	CU_ASSERT_EQUAL(r, 0);
+	for (i = 0; i < 2; i++) {
+		r = amdgpu_cs_ctx_create(device_handle, &context_handle[i]);
+		CU_ASSERT_EQUAL(r, 0);
+
+		r = amdgpu_bo_alloc_and_map(device_handle, 4096, 4096,
+					    AMDGPU_GEM_DOMAIN_GTT, 0,
+					    &ib_result_handle[i], &ib_result_cpu[i],
+					    &ib_result_mc_address[i], &va_handle[i]);
+		CU_ASSERT_EQUAL(r, 0);
+
+		r = amdgpu_get_bo_list(device_handle, ib_result_handle[i],
+				       NULL, &bo_list[i]);
+		CU_ASSERT_EQUAL(r, 0);
+	}
+	/* 1. same context different engine */
+	ptr = ib_result_cpu[0];
+	ptr[0] = SDMA_NOP;
+	ib_info[0].ib_mc_address = ib_result_mc_address[0];
+	ib_info[0].size = 1;
+
+	ibs_request[0].ip_type = AMDGPU_HW_IP_DMA;
+	ibs_request[0].number_of_ibs = 1;
+	ibs_request[0].ibs = &ib_info[0];
+	ibs_request[0].resources = bo_list[0];
+	ibs_request[0].fence_info.handle = NULL;
+	r = amdgpu_cs_submit(context_handle[0], 0,&ibs_request[0], 1);
+	CU_ASSERT_EQUAL(r, 0);
+	r = amdgpu_cs_signal_sem(device_handle, context_handle[0], AMDGPU_HW_IP_DMA, 0, 0, sem);
+	CU_ASSERT_EQUAL(r, 0);
+	r = amdgpu_cs_wait_sem(device_handle, context_handle[0], AMDGPU_HW_IP_GFX, 0, 0, sem);
+	CU_ASSERT_EQUAL(r, 0);
+	ptr = ib_result_cpu[1];
+	ptr[0] = GFX_COMPUTE_NOP;
+	ib_info[1].ib_mc_address = ib_result_mc_address[1];
+	ib_info[1].size = 1;
+
+	ibs_request[1].ip_type = AMDGPU_HW_IP_GFX;
+	ibs_request[1].number_of_ibs = 1;
+	ibs_request[1].ibs = &ib_info[1];
+	ibs_request[1].resources = bo_list[1];
+	ibs_request[1].fence_info.handle = NULL;
+
+	r = amdgpu_cs_submit(context_handle[0], 0,&ibs_request[1], 1);
+	CU_ASSERT_EQUAL(r, 0);
+
+	fence_status.context = context_handle[0];
+	fence_status.ip_type = AMDGPU_HW_IP_GFX;
+	fence_status.fence = ibs_request[1].seq_no;
+	r = amdgpu_cs_query_fence_status(&fence_status,
+					 500000000, 0, &expired);
+	CU_ASSERT_EQUAL(r, 0);
+	CU_ASSERT_EQUAL(expired, true);
+	r = amdgpu_cs_destroy_sem(device_handle, sem);
+	CU_ASSERT_EQUAL(r, 0);
+
+	/* 2. same engine different context */
+	r = amdgpu_cs_create_sem(device_handle, &sem);
+	CU_ASSERT_EQUAL(r, 0);
+	ptr = ib_result_cpu[0];
+	ptr[0] = GFX_COMPUTE_NOP;
+	ib_info[0].ib_mc_address = ib_result_mc_address[0];
+	ib_info[0].size = 1;
+
+	ibs_request[0].ip_type = AMDGPU_HW_IP_GFX;
+	ibs_request[0].number_of_ibs = 1;
+	ibs_request[0].ibs = &ib_info[0];
+	ibs_request[0].resources = bo_list[0];
+	ibs_request[0].fence_info.handle = NULL;
+	r = amdgpu_cs_submit(context_handle[0], 0,&ibs_request[0], 1);
+	CU_ASSERT_EQUAL(r, 0);
+	r = amdgpu_cs_signal_sem(device_handle, context_handle[0], AMDGPU_HW_IP_GFX, 0, 0, sem);
+	CU_ASSERT_EQUAL(r, 0);
+	r = amdgpu_cs_wait_sem(device_handle, context_handle[1], AMDGPU_HW_IP_GFX, 0, 0, sem);
+	CU_ASSERT_EQUAL(r, 0);
+	ptr = ib_result_cpu[1];
+	ptr[0] = GFX_COMPUTE_NOP;
+	ib_info[1].ib_mc_address = ib_result_mc_address[1];
+	ib_info[1].size = 1;
+
+	ibs_request[1].ip_type = AMDGPU_HW_IP_GFX;
+	ibs_request[1].number_of_ibs = 1;
+	ibs_request[1].ibs = &ib_info[1];
+	ibs_request[1].resources = bo_list[1];
+	ibs_request[1].fence_info.handle = NULL;
+	r = amdgpu_cs_submit(context_handle[1], 0,&ibs_request[1], 1);
+
+	CU_ASSERT_EQUAL(r, 0);
+
+	fence_status.context = context_handle[1];
+	fence_status.ip_type = AMDGPU_HW_IP_GFX;
+	fence_status.fence = ibs_request[1].seq_no;
+	r = amdgpu_cs_query_fence_status(&fence_status,
+					 500000000, 0, &expired);
+	CU_ASSERT_EQUAL(r, 0);
+	CU_ASSERT_EQUAL(expired, true);
+	r = amdgpu_cs_destroy_sem(device_handle, sem);
+	CU_ASSERT_EQUAL(r, 0);
+	for (i = 0; i < 2; i++) {
+		r = amdgpu_bo_unmap_and_free(ib_result_handle[i], va_handle[i],
+					     ib_result_mc_address[i], 4096);
+		CU_ASSERT_EQUAL(r, 0);
+
+		r = amdgpu_bo_list_destroy(bo_list[i]);
+		CU_ASSERT_EQUAL(r, 0);
+
+		r = amdgpu_cs_ctx_free(context_handle[i]);
+		CU_ASSERT_EQUAL(r, 0);
+	}
+}
+
 static void amdgpu_command_submission_compute(void)
 {
 	amdgpu_context_handle context_handle;
-- 
1.9.1


Comments

Hi Dave,

The problem with that approach is that it duplicates the effort done
with Android fences, but moves some of the logic found there into the
kernel.

E.g. you could do the same with in/out fences on command submission,
just without sharing the handle itself with another process before it
is signaled (which is actually desirable).

Regards,
Christian.
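
To illustrate what in/out fences on command submission would look like from
userspace: the submit call consumes a sync_file fd to wait on and hands back a
new fd for the work just queued, so an fd only ever exists for already-submitted
work. The entry point below is purely hypothetical (nothing like it exists in
libdrm at this point); it is only meant to show the fd flow:

#include <unistd.h>
#include <amdgpu.h>

/* Hypothetical CS entry point with an in-fence and an out-fence, shown only
 * to illustrate the fd flow; it does not exist in libdrm or the kernel here.
 * in_fence_fd may be -1 (nothing to wait for), out_fence_fd may be NULL. */
extern int amdgpu_cs_submit_fenced(amdgpu_context_handle ctx,
				   struct amdgpu_cs_request *req,
				   int in_fence_fd, int *out_fence_fd);

static int chain_two_submissions(amdgpu_context_handle ctx,
				 struct amdgpu_cs_request *a,
				 struct amdgpu_cs_request *b)
{
	int fence = -1;
	int r;

	/* The first submission returns a sync_file fd for its completion... */
	r = amdgpu_cs_submit_fenced(ctx, a, -1, &fence);
	if (r)
		return r;

	/* ...which the second submission (or another process, once the fd has
	 * been passed over a unix socket) consumes as its in-fence.  The fd is
	 * created only for work that is already queued, i.e. it always carries
	 * a fence. */
	r = amdgpu_cs_submit_fenced(ctx, b, fence, NULL);
	close(fence);
	return r;
}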

On 1 December 2016 at 06:11, zhoucm1 <david1.zhou@amd.com> wrote:
> Hi Dave,
>
> As the attached, our Vulkan team is verifying it.
>
David, please read through the following documents when designing
ioctls [1] and [im]porting the UABI to libdrm [2].

Thanks
Emil

[1] https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/tree/Documentation/ioctl/botching-up-ioctls.txt
"Prerequisites#1"

[2] https://cgit.freedesktop.org/mesa/drm/tree/include/drm/README
"When and how to update these files" and "amdgpu_drm.h"

Hi David,

One major review suggestion: don't use file descriptors for
semaphores, as fds are a limited resource and we don't want to use
them all up.

You create semaphore objects and use them in a single process without
them being fds; then, when userspace wants to share with another
process, you convert the semaphore object to an fd, pass it to the
other process, and have it convert it back into a semaphore object.

Dave.
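
Expressed as code, that flow would look roughly like the sketch below. The
amdgpu_sem_handle type and amdgpu_cs_create_sem() call come from the proposed
series exercised in the attached test; the export/import entry points are
hypothetical placeholders (no such functions exist at this point), named only
to show where the object/fd conversion would happen:

#include <amdgpu.h>

/* Hypothetical object<->fd conversion entry points; the names follow the
 * proposed amdgpu_cs_*_sem() style but do not exist anywhere yet. */
extern int amdgpu_cs_export_sem(amdgpu_device_handle dev,
				amdgpu_sem_handle sem, int *shared_fd);
extern int amdgpu_cs_import_sem(amdgpu_device_handle dev,
				int shared_fd, amdgpu_sem_handle *sem);

/* Producer: create and use the semaphore as a plain object, and only turn it
 * into an fd at the moment it has to cross the process boundary. */
static int producer_export(amdgpu_device_handle dev, int *fd_for_peer)
{
	amdgpu_sem_handle sem;
	int r;

	r = amdgpu_cs_create_sem(dev, &sem);
	if (r)
		return r;
	/* ...signal/wait it locally as in the attached test... */
	return amdgpu_cs_export_sem(dev, sem, fd_for_peer);
}

/* Consumer: turn the fd (received e.g. over a unix socket) back into a
 * semaphore object and keep working with object handles from then on. */
static int consumer_import(amdgpu_device_handle dev, int received_fd,
			   amdgpu_sem_handle *sem)
{
	return amdgpu_cs_import_sem(dev, received_fd, sem);
}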

Thanks Dave, got your suggestion.

Regards,
David Zhou

Mhm, with that design there is only a minor difference left compared to
the sync_file implementation.

Guys, what about the idea of changing the behavior of the sync_file
implementation with a flag so that it matches what Vulkan expects?

As far as I can see the only difference is that you can have a Vulkan
semaphore object which isn't signaled, but at the moment you can't have
a sync_file without any fence in it.

Regards,
Christian.
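
To spell that difference out: a Vulkan binary semaphore is a reusable slot that
may currently hold no fence at all, while a sync_file is created around exactly
one fence. A purely conceptual userspace sketch of the slot semantics (no real
API involved; the fence is represented here by a sync_file fd, -1 meaning
empty):

#include <unistd.h>

/* Conceptual only: a Vulkan-style binary semaphore as a slot that may or may
 * not currently hold a fence.  The "empty" state is exactly what plain
 * sync_file cannot express today and what a flag would have to add. */
struct vk_like_semaphore {
	int fence_fd;	/* sync_file fd of the pending signal, or -1 if empty */
};

static void semaphore_init(struct vk_like_semaphore *s)
{
	s->fence_fd = -1;	/* a freshly created semaphore is legally empty */
}

/* Signal operation: the semaphore takes ownership of the fence produced by
 * whatever submission signals it. */
static void semaphore_signal(struct vk_like_semaphore *s, int fence_fd)
{
	if (s->fence_fd >= 0)
		close(s->fence_fd);
	s->fence_fd = fence_fd;
}

/* Wait operation: the waiter takes the fence out, leaving the slot empty
 * again for the next signal. */
static int semaphore_take_fence(struct vk_like_semaphore *s)
{
	int fd = s->fence_fd;

	s->fence_fd = -1;
	return fd;	/* -1 if nothing has signaled the semaphore yet */
}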
