I had to be told by Richard Zach that today was the birthday of two of the greatest philosophers of the 20th century, possibly the two greatest, Rudolf Carnap and Bertrand Russell. They also died the same year, a year which would have been an unmitigated tragedy for philosophy were it not for my own birth just before it ended.
The most important work of both philosophers was in philosophical logic, and while logic continues to be a going concern with fascinating new work being done all the time, I tend to think that some of the most important lessons of the early twentieth-century explosion in logic are being forgotten. Both more and less can be done with logic than earlier philosophers naively thought, yet modern philosophers, even after all taking their mandatory graduate courses in symbolic logic, continue to try to do what cannot be done, or to insist on the impossibility of what has already been done.
Logical symbols can be made to represent anything. Really anything. There's no right way to interpret a set of logical symbols; their interpretation is utterly up to us, and if even with that freedom we find it tricky to match up what we're trying to interpret with a given set of symbols, we can just invent more. Other sets of symbols, with seemingly different sets of rules, can do exactly the same thing as the sets of symbols we've gotten in the habit of using. Proofs of this are plentiful.
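One concrete instance of this interchangeability, familiar since Sheffer, is that a single connective, NAND, can do the work of the whole customary set. A minimal sketch in Python (the function names are mine, not anything from the logicians discussed here):

```python
from itertools import product

def nand(p, q):
    """The Sheffer stroke: true unless both inputs are true."""
    return not (p and q)

# The familiar connectives, defined using nothing but NAND.
def not_(p):
    return nand(p, p)

def and_(p, q):
    return nand(nand(p, q), nand(p, q))

def or_(p, q):
    return nand(not_(p), not_(q))

# Verify that the NAND-built connectives agree with the usual ones
# on every valuation -- the only test that matters truth-functionally.
for p, q in product([False, True], repeat=2):
    assert not_(p) == (not p)
    assert and_(p, q) == (p and q)
    assert or_(p, q) == (p or q)
```

A seemingly impoverished system of symbols, with one connective instead of five, expresses exactly the same truth functions.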
Thus, logic cannot reveal to us the structure of any part of reality. Completely different logical structures can represent the same reality exactly as well. That is the sense in which logic does less than some had hoped: there is no legitimate a priori metaphysics. But this fact has further consequences. The extent to which logic can't capture the structure of reality is the extent to which we can't represent the structure of reality; this limitation of logic is not an invitation for "semantic glue" (in Putnam's mocking phrase) to stick our words to the things they represent, nor for anything else extra-logical to do the job. Logic is the story of structures. If logic says two structures which seem completely different amount to the same thing, then they really do, and one's intuition that one structure correctly represents reality while the other doesn't simply has to be an illusion. Again, there are in many cases proofs of the equivalence of intuitively different structures, proofs whose validity nobody questions. If their validity isn't questioned, the consequences must be accepted.
A tiny example of how the great logicians were more perceptive on that point than many since: Carnap used "=" for the biconditional in his Logical Syntax of Language. Most philosophers since have not done so, because of an intuition that "=" should be used to represent the things on either side being the same, in a stronger sense than logical equivalence. But logical equivalence means possession of the same truth value, and truth values are ultimately what complete wffs represent. In what sense are "1+1" and "2" the "same thing" in "1+1=2," such that the left and right formulas in "(p->p)=(pv~p)" are not? Certainly some such difference could be built into a system, but in the standard interpretation it is not. Even logicians seem to attribute spooky properties to identity these days, when they of all people should know better (some of Kripke's work is afflicted with this problem).
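Carnap's choice is easy to check mechanically: the biconditional holds exactly when both sides have the same truth value, so "(p->p)=(pv~p)" comes out true under every valuation. A quick sketch in Python (the helper names are mine):

```python
def implies(p, q):
    # Material conditional: false only when p is true and q is false.
    return (not p) or q

def iff(p, q):
    # The biconditional -- Carnap's "=": same truth value on both sides.
    return p == q

# Under every valuation of p, "p -> p" and "p v ~p" have the same
# truth value, so the biconditional between them is a tautology.
for p in [False, True]:
    assert implies(p, p) == (p or not p)
    assert iff(implies(p, p), p or not p)
```

Nothing in the standard semantics distinguishes the two sides beyond their truth values, which is just the point: on that interpretation, "=" between wffs is no spookier than "=" between "1+1" and "2".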
Anyway, enough ranting. Some of us are trying not to forget the lessons of the new logic, and we should celebrate those who helped discover those lessons. Happy May 18th!