Unspecified launch failure on Memcpy

When I compile and run your code, I get:

an illegal memory access was encountered-3

printed out.

You may indeed be getting "unspecified launch failure" instead. The exact error reported will depend on CUDA version, GPU, and platform, but we can proceed regardless.

Either message indicates that the kernel launched but encountered an error, and so failed to complete successfully. You can debug kernel execution problems using a debugger, such as cuda-gdb on Linux, or Nsight VSE on Windows. But we don't need to pull out the debugger just yet.
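Before reaching for a debugger, it's worth checking every CUDA runtime call and kernel launch explicitly, so errors surface at the point they occur rather than on a later cudaMemcpy. A minimal sketch of the usual pattern (the macro name here is my own choice, not anything from the question):

```cuda
#include <cstdio>
#include <cstdlib>
#include <cuda_runtime.h>

// Hypothetical checking macro; wraps any CUDA runtime call
#define CUDA_CHECK(call)                                          \
    do {                                                          \
        cudaError_t err = (call);                                 \
        if (err != cudaSuccess) {                                 \
            fprintf(stderr, "CUDA error: %s at %s:%d\n",          \
                    cudaGetErrorString(err), __FILE__, __LINE__); \
            exit(EXIT_FAILURE);                                   \
        }                                                         \
    } while (0)

// After a kernel launch, check both the launch itself and the
// asynchronous execution, e.g.:
//
//   SolverGPU<<<grid, block>>>(d_M1, d_M2);
//   CUDA_CHECK(cudaGetLastError());        // launch/configuration errors
//   CUDA_CHECK(cudaDeviceSynchronize());   // errors during kernel execution
```

With this in place, a kernel fault is reported immediately after the launch rather than being attributed to the next API call.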

A useful tool is cuda-memcheck. (On newer GPUs, e.g. cc7.0 or newer, you should use compute-sanitizer instead of cuda-memcheck, but otherwise the process here is identical.) If we run your program with cuda-memcheck, we get some additional output that indicates that the kernel is doing invalid global reads of size 4. This means that you are making an out-of-bounds memory access. We can get additional clarity if we recompile your code adding the -lineinfo switch (or alternatively with -G), and then re-run your code with cuda-memcheck. Now we get output that looks like this:

$ nvcc -arch=sm_20 -lineinfo -o t615 t615.cu
$ cuda-memcheck ./t615 |more
========= CUDA-MEMCHECK
========= Invalid __global__ read of size 4
=========     at 0x00000070 in /home/bob/misc/t615.cu:34:SolverGPU(float*, float*)
=========     by thread (31,0,0) in block (3,0,0)
=========     Address 0x4024fe1fc is out of bounds
=========     Saved host backtrace up to driver entry point at kernel launch time
=========     Host Frame:/usr/lib64/libcuda.so.1 (cuLaunchKernel + 0x2cd) [0x150a7d]
=========     Host Frame:./t615 [0x11ef8]
=========     Host Frame:./t615 [0x3b143]
=========     Host Frame:./t615 [0x297d]
=========     Host Frame:./t615 (__gxx_personality_v0 + 0x378) [0x26a0]
=========     Host Frame:./t615 (__gxx_personality_v0 + 0x397) [0x26bf]
=========     Host Frame:./t615 [0x2889]
=========     Host Frame:/lib64/libc.so.6 (__libc_start_main + 0xf4) [0x1d994]
=========     Host Frame:./t615 (__gxx_personality_v0 + 0x111) [0x2439]
=========
--More--

(and there is much more error output)

This means that the very first error encountered by your kernel was an invalid global read of size 4 (i.e. an out-of-bounds access trying to read an int or float quantity, for example). With the lineinfo information, we can see that this occurred:

=========     at 0x00000070 in /home/bob/misc/t615.cu:34:SolverGPU(float*, float*)

i.e. at line 34 in the file. This line happens to be this line of kernel code:

    float M1_IndexRight = M1[i + ROOM_X *(j-1)];

We could debug further, perhaps using in-kernel printf statements, to discover where the problem is. But we already have a clue that we are indexing out-of-bounds, so let's inspect the indexing:

  i + ROOM_X *(j-1)

What does this evaluate to when i=0 and j=0 (i.e. for thread (0,0) in your 2D thread array)? Since j-1 is -1, it evaluates to -2048 (i.e. -ROOM_X), which is an illegal index. Trying to read from M1[-2048] will create a fault.

You’ve got lots of complicated indexing going on in your kernel, so I’m pretty sure there are other errors as well. You can use a similar method to track those down (perhaps using printf to spit out the computed indexes, or else testing the indexes for validity).
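One common way to test indexes for validity is to guard the stencil reads so that edge threads skip neighbors that would fall outside the array. This is not your original kernel; it's a sketch assuming a Jacobi-style 4-point stencil and ROOM_X/ROOM_Y dimensions:

```cuda
#define ROOM_X 2048   // assumed dimensions; substitute the real ones
#define ROOM_Y 2048

__global__ void SolverGPU(float *M1, float *M2)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    int j = blockIdx.y * blockDim.y + threadIdx.y;

    // Only interior points have all four neighbors; edge threads
    // (e.g. j == 0, where j-1 goes negative) skip the update.
    if (i > 0 && i < ROOM_X - 1 && j > 0 && j < ROOM_Y - 1) {
        float up    = M1[i + ROOM_X * (j - 1)];
        float down  = M1[i + ROOM_X * (j + 1)];
        float left  = M1[(i - 1) + ROOM_X * j];
        float right = M1[(i + 1) + ROOM_X * j];
        M2[i + ROOM_X * j] = 0.25f * (up + down + left + right);
    }
}
```

Whether the edges should be skipped, clamped, or given special boundary handling depends on what your solver is supposed to compute at the walls.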

Although the above description uses cuda-memcheck, the compute-sanitizer tool works similarly, and is the recommended one at the time of this edit.

For another example of how to use this method to narrow down the source of a problem, see here.