A Thought on Searching Random Lists
Consider searching a random list. Suppose it has 2^10 elements,
and we have only 10^2 peeks. Suppose the problem is to see if
any of the numbers in the list is zero. If it is known in
advance which 10^2 elements will be inspected, then we can
simply present the search with a list containing a single
zero in one of the 2^10 - 10^2 places that are never inspected.
The search will then fail to find a zero, despite it being
certain that one is there. This is the essence of how one
shows that P does not equal NP.
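The scenario can be sketched in a few lines of Python; the numbers match those above, and the construction assumes (as in the text) that the searcher's 10^2 peek positions are known in advance. All names here are illustrative:

```python
import random

LIST_SIZE = 2 ** 10  # 1024 elements
NUM_PEEKS = 10 ** 2  # only 100 peeks allowed

def adversarial_list(peeked_indices):
    """Build a list with exactly one zero, hidden at a position
    the searcher will never inspect."""
    inspected = set(peeked_indices)
    hidden = [i for i in range(LIST_SIZE) if i not in inspected]
    lst = [random.randint(1, 10 ** 6) for _ in range(LIST_SIZE)]  # all nonzero
    lst[random.choice(hidden)] = 0
    return lst

# Suppose the search always peeks at the first 100 positions.
peeks = range(NUM_PEEKS)
lst = adversarial_list(peeks)

assert all(lst[i] != 0 for i in peeks)  # the search finds no zero...
assert 0 in lst                         # ...although one is certainly there
```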
The issue we must then consider is whether it is feasible to
create such a scenario. The issue of whether genuine randomness
exists is a philosophical one. The issue of whether an algorithmic
test for randomness can be passed by non-random data is another one.
By non-random data here I mean the output of a deterministic procedure.
This is really just a more complicated version of the old problem
of solving equations (and is in principle identical to it). Given
a deterministic procedure for mapping an input number to an output
number, each potential output number gives us a decision problem
of the form: `does there exist an input to this procedure with
that output'. In the case of cryptographic hashing functions, we
can simply add a fixed constant without carry (that is, modulo
2^64 on a 64-bit processor) to obtain an equivalently complex
hashing function, so that this problem is essentially the same as
asking whether a given hashing function will ever generate zero
output (or, equivalently, generate zero output for nonzero input).
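To make the shift concrete, here is a minimal Python sketch. The mixer `toy_hash` is a stand-in of my own invention, not a real cryptographic hash; adding the negated target without carry turns `does the hash ever output t' into `does the shifted hash ever output zero':

```python
MASK = 2 ** 64 - 1  # wrap-around arithmetic, as on a 64-bit processor

def toy_hash(x):
    """Illustrative 64-bit mixer (a bijection), standing in for a hash."""
    x &= MASK
    x ^= x >> 33
    x = (x * 0xff51afd7ed558ccd) & MASK
    x ^= x >> 33
    return x

def shifted_hash(x, target):
    """Add the negated target without carry (i.e. subtract modulo 2^64),
    so shifted_hash(x, t) == 0 exactly when toy_hash(x) == t."""
    return (toy_hash(x) + (MASK + 1 - target)) & MASK

target = toy_hash(12345)  # an output we know is reachable
assert shifted_hash(12345, target) == 0   # the shifted hash hits zero there
assert shifted_hash(54321, target) != 0   # and nowhere else (mixer is bijective)
```

The shift costs one addition per evaluation, so the shifted function is of essentially the same complexity as the original, which is the point of the reduction.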
If we have an algorithm which can only search 64^3 possible inputs,
we can form an equivalently complex hashing function on which it is
guaranteed to fail, by simply permuting the input space so that none
of the 64^3 guessed inputs produces the desired output.
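A minimal sketch of the permutation trick, with all names (`permuted`, the toy `f`) illustrative: swapping the one good input with an input outside the fixed guess set changes neither the complexity of the function nor the set of its outputs, yet defeats every guess.

```python
import itertools

def permuted(f, guesses, witness):
    """Swap `witness` (the unique input whose output we seek) with an
    input outside the fixed guess set; the result is exactly as complex
    as f, yet every one of the fixed guesses now fails."""
    guess_set = set(guesses)
    hidden = next(x for x in itertools.count()
                  if x not in guess_set and x != witness)
    swap = {witness: hidden, hidden: witness}
    return lambda x: f(swap.get(x, x))

# f outputs zero only at input 7; the searcher tries 64^3 fixed guesses.
f = lambda x: 0 if x == 7 else x + 1
guesses = range(64 ** 3)
g = permuted(f, guesses, witness=7)

assert all(g(x) != 0 for x in guesses)  # every guess fails
assert g(64 ** 3) == 0                  # yet zero is still in the image
```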
Thus, given a putative polynomial-time solution to an NP-complete
problem, we can try to use this algorithm A to solve the 'does this
hash ever output zero' problem, set the input size large enough,
and brute-force an instance for which A cannot get the right answer,
because we can exhibit two distinct instances, both of which cause
A to run through the same computation, but for which the actual
answers to the decision problem differ.
More precisely, there will be an effective method which,
given a putative P=NP algorithm, will (by repeatedly simulating
this algorithm on all possible inputs and inductively constructing
a pathological case) generate an input for which it makes a mistake.
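As a toy model of that effective method, the following Python sketch (all names illustrative, with a fixed peek budget standing in for the polynomial bound) simulates a deterministic `solver' on the all-nonzero instance, records which positions it inspects, and plants a zero elsewhere. Since the solver sees the same values on both instances, it repeats its answer, which is now wrong:

```python
def limited_solver(peek, n):
    """Toy 'solver' for `is there a zero?': inspects only the first
    100 positions (standing in for a polynomial budget)."""
    return any(peek(i) == 0 for i in range(min(100, n)))

def diagonalize(solver, n):
    """Run the solver on the all-nonzero instance while recording its
    peeks, then flip an uninspected position to zero. A deterministic
    solver cannot distinguish the two instances."""
    seen = []
    base = [1] * n  # the all-nonzero instance
    def peek(i):
        seen.append(i)
        return base[i]
    if solver(peek, n):   # solver claims a zero exists: already wrong
        return base
    doctored = list(base)
    hidden = next(i for i in range(n) if i not in set(seen))
    doctored[hidden] = 0  # plant the zero where the solver never looked
    return doctored

bad = diagonalize(limited_solver, 2 ** 10)
assert 0 in bad                                           # a zero is present
assert limited_solver(lambda i: bad[i], 2 ** 10) is False  # yet the solver says no
```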
In short, choose a hash whose image (its set of reachable outputs)
is significantly larger than the number of steps our polynomially
bounded P=NP algorithm takes.