Here's a fun one: let's say we have a 2D array matrix of size m rows by n columns. It is initially prefilled with 0s and looks like the following:
[[0, 0, 0, 0],
 [0, 0, 0, 0],
 [0, 0, 0, 0]]
We are then given a list of increment operations to be performed on the nested matrix array. An increment operation is exactly as it sounds: it dictates when to add 1 to a number in the matrix.
Each increment operation consists of two numbers, a row and a column value, that dictate which range of the array's cells should be incremented by 1.
For example, given the above matrix, if we get [1, 1] as an operation, it results in:
[[1, 0, 0, 0],
 [0, 0, 0, 0],
 [0, 0, 0, 0]]
This is because we look at matrix[1][1] and look northwest to see what the "range" is. So what we've done is incremented all cells from matrix[0][0] up to, but not including, matrix[1][1]. In general, an operation [row, col] increments every cell matrix[i][j] where 0 <= i < row and 0 <= j < col.
If we then got another operation in the form of [2, 2], we'd get:
[[2, 1, 0, 0],
 [1, 1, 0, 0],
 [0, 0, 0, 0]]
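
To make the operation semantics concrete, here is a minimal brute-force sketch (the prompt doesn't fix a language, so Python and the function name apply_operations are my own choices). It builds the matrix and applies each operation by incrementing the full northwest rectangle, reproducing the matrices shown above:

    def apply_operations(m, n, operations):
        # Start with an m x n matrix prefilled with 0s.
        matrix = [[0] * n for _ in range(m)]
        for row, col in operations:
            # An operation [row, col] increments every cell matrix[i][j]
            # where 0 <= i < row and 0 <= j < col.
            for i in range(row):
                for j in range(col):
                    matrix[i][j] += 1
        return matrix

    print(apply_operations(3, 4, [[1, 1]]))
    # [[1, 0, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0]]
    print(apply_operations(3, 4, [[1, 1], [2, 2]]))
    # [[2, 1, 0, 0], [1, 1, 0, 0], [0, 0, 0, 0]]

Note that this simulation takes time proportional to the number of operations times m times n, which is far too slow for the constraints below; it's only meant to illustrate the behavior.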
Can you write a method that returns the frequency of the max integer in the matrix? In the above example, it would be 1 (since 2 shows up just once).
Assumptions and constraints:
- The number of operations would be <= 100000
- n and m would be <= 10000
- Each value in an operation would be between 1 and 10000
- Aim for an O(n) time, O(1) space solution
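
One way to hit those targets is a sketch under my own assumptions (the function name max_count is mine, not from the prompt): since every operation increments the top-left rectangle from matrix[0][0] up to, but not including, matrix[row][col], the cells touched by every operation, and therefore holding the maximum, form the intersection rectangle bounded by the smallest row value and the smallest column value across all operations. Scanning the operations once gives O(n) time in the number of operations and O(1) extra space:

    def max_count(m, n, operations):
        # With no operations every cell is still 0, so the max (0)
        # appears in all m * n cells.
        if not operations:
            return m * n
        # The maximum value is len(operations), and it occupies the
        # intersection of all the incremented rectangles.
        min_row = min(op[0] for op in operations)
        min_col = min(op[1] for op in operations)
        return min_row * min_col

    print(max_count(3, 4, [[1, 1], [2, 2]]))  # 1, matching the example above

The key design observation is that the matrix never needs to be materialized at all: only the smallest row and column bounds matter, because any cell outside that intersection misses at least one increment.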