Convert a 2D array index into a 1D index

Think of it this way:

You have a single one-dimensional array that is really just all the items of a two-dimensional array concatenated end to end.

So, say you have a two-dimensional array of size 5 x 3 (5 rows, 3 columns), and we want to make a one-dimensional array out of it. You need to decide whether to concatenate by rows or by columns; for this example we’ll concatenate by rows. Each row is 3 columns long, so think of your one-dimensional array as being laid out in “steps” of 3. The length of your one-dimensional array will be 5 x 3 = 15, and now you need to find the access points.

So, say you are accessing the 2nd row and the 2nd column of your two-dimensional array. You first skip over the entire first row (3 steps), then take 2 more steps into the second row: 3 + 2 = 5. Since we are using zero-based indexing, subtract 1, so the element is at index 4.

Now for the specific formulation:

int oneDindex = (row * length_of_row) + column; // row and column are zero-based

So, as an example of the above you would wind up having

oneDindex = (1 * 3) + 1 = 4

And that should be it
