As we enter 2001, an obvious gap between past fantasy and present reality is our lack of computers that care about telling the truth.
The HAL 9000 computer in "2001: A Space Odyssey" became psychotic when it was ordered to conceal facts from users. The robots in Isaac Asimov's stories are incapable of disobedience; one experimental, empathetic model goes insane when ordered to tell people things that it knows will hurt them to learn. For decades, writers have assumed that the next generation of computers would have principles.
By contrast, the systems we're still using are all too ready to lie to us, whether through error (say, by misreading a punched-card ballot), through repeating whatever falsehood they were last told (as in many cases of identity theft), or by showing less capacity than even a child has to recognize contradictory nonsense when they hear it (with innumerable examples of inconsistent input that produces, not a request for clarification, but terribly misleading results).
Perhaps I err in blaming the pace of technology for this weakness. We're capable of building hardware that knows the difference between "yes," "no" and "don't know." We're capable of writing software that uses such knowledge, as in many statistics packages.
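The "yes," "no," "don't know" distinction has a long-established formal counterpart in three-valued (Kleene) logic, the same scheme behind NULL handling in SQL. A minimal sketch in Python, using None for "don't know" (the function names here are illustrative, not from any library):

```python
from typing import Optional

def and3(a: Optional[bool], b: Optional[bool]) -> Optional[bool]:
    """Three-valued AND: a definite False dominates; unknowns propagate."""
    if a is False or b is False:
        return False
    if a is None or b is None:
        return None
    return True

def or3(a: Optional[bool], b: Optional[bool]) -> Optional[bool]:
    """Three-valued OR: a definite True dominates; unknowns propagate."""
    if a is True or b is True:
        return True
    if a is None or b is None:
        return None
    return False
```

The point of the third value is exactly the honesty the essay asks for: `and3(False, None)` can safely answer `False`, but `and3(True, None)` must admit it does not know.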
Perhaps the fault is in the way that most applications are written—ultimately, in the way that most programmers are taught. Programmers aren't indoctrinated to deal well with inconsistency, let alone deliberate misrepresentation. They're not taught, like any rookie police officer, to give people (or computers) room to betray themselves through failure to keep their lies straight.
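What would it look like for software to give its input "room to betray itself"? One cheap tactic is cross-checking redundant fields against each other instead of trusting each in isolation. A toy sketch, with an entirely hypothetical record layout:

```python
def check_consistency(record: dict) -> list:
    """Return descriptions of contradictions found in the record,
    rather than silently accepting whatever it says."""
    problems = []
    needed = ("age", "birth_year", "as_of_year")
    # Only cross-check when all three redundant fields are present.
    if all(record.get(k) is not None for k in needed):
        implied = record["as_of_year"] - record["birth_year"]
        if abs(implied - record["age"]) > 1:  # slack for birthday timing
            problems.append(
                f"age {record['age']} contradicts birth_year "
                f"{record['birth_year']} (implies about {implied})"
            )
    return problems
```

A system built this way responds to inconsistent input with a request for clarification, not a confident wrong answer.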
As our systems deal more directly with the world around them, they need to learn to discern truth—and to preserve it.