
default vs. specified work-group size; global size multiples


Hi,

 

The AMD OpenCL programming guide states (section 5.6.3) that explicitly specifying the work-group size is preferred over letting the OpenCL implementation choose it automatically (by passing NULL as the local_work_size argument). To what extent is this a practical issue?

Consider a relatively large number of work-items being launched, but no work-group size specified:

 

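// local size = cl::NullRange, i.e. local_work_size NULL: the runtime picks the work-group size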
someQueue.enqueueNDRangeKernel(someKernel, cl::NDRange(0), cl::NDRange(64000), cl::NullRange, 0, 0);

 

Obviously, all work-items in the kernel are fully independent of each other, so there is no need for local memory, synchronization of any kind, etc.

 

First question: can we reasonably expect this to be slower than, say, a scheme launching 1000 work-groups of size {1, 64}, where the only difference in the kernel code is some additional arithmetic to recover the "original" work-item id (a value in [0, 63999])? My gut feeling would have been that it makes no difference (or, if anything, that the extra kernel calculations make it slower), since the runtime cleverly splits the work-items across the CUs anyway, but the manual gives me doubts.
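To make the comparison concrete, here is a minimal sketch of the explicit scheme I have in mind (the 2D shape {1000, 64} and the id recovery are just my assumption of how one would write it):

 

// host side: 1000 work-groups of 64 work-items each, work-group size given explicitly
someQueue.enqueueNDRangeKernel(someKernel, cl::NDRange(0, 0), cl::NDRange(1000, 64), cl::NDRange(1, 64), 0, 0);

 

// kernel side: recover the "original" 1D work-item id in [0, 63999]
size_t id = get_global_id(0) * get_global_size(1) + get_global_id(1);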

 

Second, assume that the number of work-items in a work-group is always an integer multiple of, say, 64, but the global work size is not (regardless of whether we use explicit work-groups at all; for the default case, just take the example above with, say, cl::NDRange(63997) as the global size). Does this normally have a clear performance impact compared to a scheme where the global work size is padded up to an integer multiple of 64, and the work-items beyond the "real" global work size simply do nothing (return immediately)? Again, my gut feeling is that it does not matter, since every wavefront except the very last fully utilizes the hardware anyway (wavefronts being groups of 64 work-items on a Tahiti device), but again I might be wrong.
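The padded variant I mean would look roughly like this (again only a sketch; passing realSize as a kernel argument, here at index 1, is my assumption):

 

// host side: round the global size up to the next multiple of 64 (63997 -> 64000)
size_t realSize = 63997;
size_t paddedSize = ((realSize + 63) / 64) * 64;
someKernel.setArg(1, (cl_uint)realSize);
someQueue.enqueueNDRangeKernel(someKernel, cl::NDRange(0), cl::NDRange(paddedSize), cl::NullRange, 0, 0);

 

// kernel side: padding work-items beyond realSize return immediately
if (get_global_id(0) >= realSize)
    return;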

 

Thanks!

