Why do you insist that scientific hypotheses have to be falsifiable? Doesn’t disproving something just prove something else?
Let us examine an hypothesis: “All rocks immediately rise when released from a human hand.”
This is a good, if very primitive, scientific hypothesis: it is both testable and falsifiable. We may test it by picking up a rock and dropping it; unless something very strange is going on (or we are somehow outside of a significant gravity-well), the rock will immediately falsify our hypothesis by falling. But have we actually proved something else?
Logically, we may refer to the hypothesis “All rocks immediately rise when released from a human hand” as (x). Since we have just falsified (x), we may say (not-x) is true. But if disproving something just proves something else, then falsifying (x) must also mean that (y) is true, where (y) is the opposite of (x). This seems very intuitive, but let’s look more closely:
The negation of (x) is: “Not all rocks immediately rise when released from a human hand”; in other words, at least one rock does not. We state this as (not-x).
The opposite of (x), what a logician would call its contrary, is: “All rocks immediately fall when released from a human hand.” We state this as (y).
We may see immediately that (not-x) is true, because we have just witnessed a rock that did not immediately rise when released from a human hand, and a single counterexample is all the negation requires. But this is not proving something else true; this is simply negating something which we thought might have been true. In the study of logic, the principle that a statement is either true or it is false, with no third option, is referred to as “The Law of the Excluded Middle”, and it is one of the fundamental laws which govern all logical operations. (That a statement cannot be both true and false is its companion, the Law of Non-Contradiction.)
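For readers who like to see that step written out formally, here is a minimal sketch in Lean 4; the names Rock, Rises, and observed are invented for the example. It shows only that a single counterexample suffices to establish the negation of a universal claim, which is exactly what our dropped rock did.

    -- One counterexample is enough: a single rock r that does not rise
    -- refutes the universal claim "every rock rises".
    example {Rock : Type} (Rises : Rock → Prop)
        (r : Rock) (observed : ¬ Rises r) :
        ¬ (∀ s : Rock, Rises s) :=
      fun allRise => observed (allRise r)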
But what about (y)? Again, it seems intuitively correct: rocks fall when you release them, right? But what if you throw a rock, say, directly upward? You have released it from your hand, but it will not actually begin to fall until gravity has overcome all of the upward momentum you have imparted. Therefore, the mere fact that (x) is not true (which is an easier way of saying “(not-x) is true”) does not mean that (y) is true. Logically, the assumption that (y) must be true if (x) is not is known as a false dilemma (or false dichotomy), a fallacy which treats two contrary statements as though they were the only possibilities; a “fallacy” being simply a word for an error in reasoning.
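To make the gap between (not-x) and (y) concrete, here is a toy countermodel sketched in Lean 4; the two-rock world and the predicate names immediatelyRises and immediatelyFalls are invented purely for illustration. In a world containing one dropped rock and one thrown rock, “all rocks immediately rise” and “all rocks immediately fall” are both false at the same time, so refuting the first cannot, by itself, establish the second.

    -- A toy two-rock world: the dropped rock refutes "all rocks
    -- immediately rise", and the thrown rock refutes "all rocks
    -- immediately fall", so the two contrary claims are false together.
    inductive Rock where
      | dropped  -- released and allowed to drop
      | thrown   -- hurled straight up, so it rises before it falls

    def immediatelyRises : Rock → Prop
      | Rock.dropped => False
      | Rock.thrown  => True

    def immediatelyFalls : Rock → Prop
      | Rock.dropped => True
      | Rock.thrown  => False

    -- (x) "all rocks immediately rise" fails here ...
    example : ¬ (∀ r : Rock, immediatelyRises r) :=
      fun h => h Rock.dropped

    -- ... and (y) "all rocks immediately fall" fails here as well.
    example : ¬ (∀ r : Rock, immediatelyFalls r) :=
      fun h => h Rock.thrown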
To continue, let’s take a look at what happens when we build an hypothesis on top of this false dilemma. Let us hypothesize that “All rocks immediately fall when released from a human hand -OR- all rocks immediately rise when released from a human hand.” Used the way the fallacy invites us to use it, this is an unfalsifiable hypothesis: whenever the first condition is not met, the other condition is simply declared proven.
So, what happens if I lay a rock on a table? It neither rises nor falls. If we had stated two proper hypotheses, (x) and (y), and had tested each, then we could have falsified each and moved forward with that empirical knowledge. But because of the fallacious nature of our compound hypothesis, we cannot do this. Placing the rock on the table negated the first condition, (y): the rock did not immediately fall. According to our hypothesis, that “proves” the second condition, (x): all rocks immediately rise when released from a human hand. And we have “proved” it even though the rock we used did not rise when it was released.
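It is worth separating the inference rule from the premise it is fed. From “x or y” together with “not x”, the conclusion “y” really does follow; logicians call this pattern disjunctive syllogism. Here is a minimal Lean 4 sketch of the rule, with generic propositions and nothing rock-specific:

    -- Disjunctive syllogism: from "x or y" and "not x", conclude "y".
    -- The rule is valid, but the conclusion is only as trustworthy as
    -- the disjunction "x or y" that was fed into it.
    example {x y : Prop} (h : x ∨ y) (hnx : ¬ x) : y :=
      match h with
      | Or.inl hx => absurd hx hnx
      | Or.inr hy => hy

The trouble in our case is the premise, not the rule: the compound fall-or-rise hypothesis was never established, only asserted, and the fallacious framing converts an observation that tells against both of its halves into a supposed proof of one of them. That is precisely what makes the hypothesis, as used, untestable.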
But the scenario above only considers a situation in an area of knowledge which is well understood, where we can make reasonable guesses about the consequences of negating any given statement. What happens if we are in a new area of knowledge, where the rules of operation are not well understood? For example, consider this unfalsifiable hypothesis:
“All rocks immediately rise when released from a human hand -OR- all dogs should be fed only chocolate.”
Of course, the fact that chocolate is bad for dogs is well known; the example is used simply for illustration. And you might say, at this point, that you understand the issues with testing and falsifiability, but what if you just don’t test? If you just go out and find information to support your hypothesis without attempting to falsify it, can’t you show that it is true without inadvertently committing a fallacy?
Well, certainly you can (and should) find evidence to support your hypothesis. But that is called research, not science, and it is a fertile breeding ground for confirmation bias. The testing of falsifiable hypotheses is the sine qua non of science; it is the testing itself which separates the scientific method from all previous modes of investigation. Until you actually formulate a testable, falsifiable hypothesis and then attempt to disprove it, what you are doing is not science. Even if you are a scientist by profession.