It’s no secret that I’m a big fan of Star Trek. As someone who was largely a social outcast growing up, it was in some ways a default: Gene Roddenberry’s vision represents a future that is largely hopeful, one where differences within the Federation’s society are accepted and encouraged rather than intentionally muted, and that was a source of comfort.
My real entrée into that universe (so to speak) was through Star Trek: The Next Generation, the sequel series that in many ways exceeded the original, and which was commonly on television through my formative years. Indeed, I’ve always preferred the command of the measured Picard to the often reckless Kirk, and the android Data’s discovery of humanity over the Vulcan Spock. But still, after high school I ventured far from science fiction in the pursuit of other interests.
With the entire run of TNG available on various streaming services for the past several years, I’ve reacquainted myself with the series several times. As I am at a different stage of my own “discovery of humanity” than when I originally viewed the series, other questions and thoughts have emerged. One that I’m tackling today involves the android Data, and what his technical specifications suggest about one particular part of the future: the future of technology.
Who is Data?
Data, played by Brent Spiner, is an android character, constructed by the odd “mad scientist” Dr. Noonien Soong. Of the prototypes and models produced by Soong, only Data, his (evil) “brother” Lore, and a later approximation of Soong’s fatally ill wife, Juliana Tainer, survived activation and were able to truly approximate human characteristics. The reason cited for Soong’s repeated failures and so few successes was the complexity of the “positronic brain,” the technology that made such a human-like android possible. Data, Lore and Tainer were ultimately Soong’s only successes; a fourth activated android, named B-4, was unable to process the complex thoughts needed for true social interaction.
Data was arguably the most successful of Soong’s creations, and certainly the most visible in the series. He joined Starfleet and achieved the rank of lieutenant commander through a decorated career, with his service on the Enterprise making him a regular character.[1. “Data,” Memory Alpha, the Star Trek Wiki. http://en.memory-alpha.org/wiki/Data.] Lore, Soong’s first successful android, wandered the universe after re-activation and generally caused problems; his misadventures provided an interesting recurring character for the TNG series. Tainer only appeared in a fairly lame season 7 episode of TNG, the storyline of which was more exploratory of Soong’s madness than anything else. B-4 had it even worse, appearing only in the abysmal Star Trek: Nemesis film, which remains the worst-performing and most poorly reviewed Trek film to date.[2. “Star Trek Nemesis,” Memory Alpha, the Star Trek Wiki. http://en.memory-alpha.org/wiki/Star_Trek_Nemesis.]
Many of the storylines about Data during TNG (and the mostly disappointing TNG-era feature films that followed) involved his search for humanity, a search handicapped by his being built and programmed without the capability of experiencing emotion. (Of course, part of Lore’s problem was that he was provided with this ability, inadequately controlled.[3. “Descent (episode),” Memory Alpha, the Star Trek Wiki. http://en.memory-alpha.org/wiki/Descent_(episode)]) Data’s quest for humanity, which invited comparisons to both Spock from the original series and Pinocchio of literature,[4. Yes, even in the show itself — first, in the pilot episode “Encounter at Farpoint,” and again in the second-season finale’s poorly disguised clip episode “Shades of Gray.”] is ultimately a compelling storyline throughout the TNG fictional universe.
In one of the best episodes of the series, “The Measure of a Man” (which admittedly draws heavily from To Kill a Mockingbird, but I digress…), Data’s legal status within Starfleet is the subject of a JAG hearing, which sought to determine whether he was an independent sentient being, or property of the organization. As a part of this hearing, Data is compelled to testify by Commander William Riker (who himself is compelled by the JAG to serve as prosecutor for the hearing) and is required to list his technical specifications:
Riker: Commander, what is the capacity of your memory, and how fast can you access information?
Data: I have an ultimate storage capacity of eight hundred quadrillion bits. My total linear computational speed has been rated at sixty trillion operations per second.[5. “Next Generation Transcripts – The Measure of a Man,” http://www.chakoteya.net/nextgen/135.htm]
With a little calculation, we can put this quote into both contemporary and current contexts. His memory storage of “eight hundred quadrillion bits” certainly does not sound as impressive today, in late 2014, as it likely did in February 1989 when the episode aired.
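For scale, the arithmetic is straightforward; here is a quick sketch in Python (my own illustration, taking “quadrillion” in the short scale, 10^15):

```python
# Data's stated memory: 800 quadrillion bits (short scale: 1 quadrillion = 10**15)
bits = 800 * 10**15

# 8 bits per byte
byte_count = bits // 8           # 10**17 bytes

# Express in decimal petabytes (1 PB = 10**15 bytes)
petabytes = byte_count / 10**15  # 100.0 petabytes
```

One hundred petabytes dwarfs any single consumer device of 2014, though it is the kind of capacity large data centers already manage in aggregate.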
While memory (storage) is an interesting spec to put into context, it’s not nearly as crucial for understanding Data’s technology as his processing power.
First, an important note: Data’s reported figure of “sixty trillion operations per second” uses a measure that may be related to the now-antiquated Instructions Per Second (IPS)[8. “Instructions per Second,” Wikipedia. http://en.wikipedia.org/wiki/Instructions_per_second.], a metric abandoned because it gauges raw processing power without regard to system architecture or realistic workloads. Though common in earlier parts of computing history (hello, 1980s!), it has largely been replaced by measures such as Floating Point Operations Per Second (FLOPS). Equally important: the operations used to measure FLOPS are more difficult for a processor than simple instructions[9. “FLOPS,” Wikipedia. http://en.wikipedia.org/wiki/FLOPS.], which makes a given figure in FLOPS more impressive than the same number in IPS. Though Data didn’t specify that he was citing FLOPS, we’re going to assume that he was, for simplicity’s sake.
Not so fast. Let’s put this in today’s context: the fastest supercomputer in the world currently (December 2014) is the Tianhe-2, housed at China’s National University of Defense Technology, which benchmarks at 33.86 petaflops (1 petaflop = 1,000 teraflops) and may well be left in the dust if Google gets its quantum supercomputer off the ground.[11. “Google is Building the World’s Fastest Supercomputer,” CNN Money. http://money.cnn.com/2014/09/03/technology/google-quantum/] We’ve built supercomputers that have already left Data’s specifications hopelessly behind.
Admittedly, supercomputers are generally not as mobile or contained as Data is, often taking up entire rooms, floors or buildings to assemble the processors necessary to achieve this kind of speed. What about desktops or something within reach of an average consumer? Well, the newest high-end Mac Pro achieves up to about 3.5 teraflops from each of its two graphics processors, roughly 7 teraflops combined.[12. “Mac Pro – Technical Specifications,” Apple. http://www.apple.com/mac-pro/specs/] The Mac Pro isn’t up to Data’s standards… yet. But it’s coming remarkably close, considering how far off it seemed in that episode’s contemporary context.
Predicted by Intel founder Gordon Moore in 1965, “Moore’s Law” is the conjecture that the number of transistors in a circuit doubles approximately every two years.[13. “Moore’s Law,” Wikipedia. http://en.wikipedia.org/wiki/Moore%27s_law] While based on the history of computing to that point, it was also predictive and was used as a planning target for technology firms, becoming something of a self-fulfilling prophecy beyond Moore’s expectations: since his initial paper was released nearly 50 years ago, the “Law” has been upheld by constant exponential growth in transistors in such circuits as recently as this year. The Wikipedia-sourced chart below illustrates this, though it only comes to 2011:
Unfortunately, Moore’s Law is technically somewhat worthless in this context, because Data’s brain is a “positronic matrix” that is, presumably, far beyond silicon transistors and semiconductors. Plus, importantly, we have no idea from the evidence in Star Trek canon how many circuits Data has.
What we can do is apply what I call “House’s Corollary” to Moore’s Law. What is House’s Corollary? You’ve probably heard of it, and may have even heard it called Moore’s Law. It’s a 1975 statement by David House, then an Intel executive, who suggested that all changes combined (including Moore’s proposed transistor trajectory, but also other architectural improvements) would result in computer performance doubling approximately every 18 months.[15. “Moore’s Law to Roll On for Another Decade,” cnet.com. http://news.cnet.com/2100-1001-984051.html.] Both men were close: Moore has been pretty much dead-on, while House’s prognostication was only slightly off, with performance actually doubling about every 20 months since 1975. Several have suggested this alteration to Moore’s Law be used.[16. “Moore’s Law and Computer Processing Power,” datascience@Berkeley Blog. http://datascience.berkeley.edu/moores-law-processing-power/.]
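That two-month difference in doubling period compounds dramatically over decades. A small Python sketch (my own illustration, not drawn from the cited sources) of the implied performance multipliers from 1975 to late 2014, roughly 39 years:

```python
def growth_multiplier(years, doubling_months):
    """Overall performance multiplier after `years`, doubling every `doubling_months` months."""
    return 2 ** (years * 12 / doubling_months)

# House's predicted 18-month doubling vs. the observed ~20-month doubling
house_predicted = growth_multiplier(39, 18)  # roughly 6.7 x 10**7
observed = growth_multiplier(39, 20)         # roughly 1.1 x 10**7
```

Even a seemingly small change in the doubling period shifts the result by a factor of six over four decades, which is why the exact period chosen matters so much for any long-range extrapolation.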
What does this mean for Data and the Star Trek universe? To dig into that question, let’s start with some basic assumptions:
At the beginning of World War III in 2026 (only 11 years away, yikes!), consumer-available computers will be performing at 336 teraflops, far in excess of Data’s 60 teraflop capabilities.
Let’s assume that the war puts computing back to 1989 levels (the NEC SX-3/44R’s 1.7 gigaflops) by its end in 2053, an assumption that makes the war truly destructive of human technology, perhaps implausibly so. Then, let’s assume that Moore’s Law is re-established from the war’s end through Data’s construction in 2336, up to the report of his specs in 2365.
Where does that put us? In 2336, the most powerful computer would be clocking speeds of 7.764 x 10^43 teraflops, just a touch faster than Data’s 60 teraflops. Of course in 2365, the difference is more pronounced, with the top computer clocking 1.908 x 10^48 teraflops.
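The extrapolation itself is just compound doubling. Here is a Python sketch under the assumptions above (1.7 gigaflops at the war’s end in 2053, one doubling every 24 months thereafter); note that the exact exponent you get depends on the doubling period assumed, so the outputs won’t match the quoted figures digit-for-digit:

```python
def extrapolate_teraflops(start_teraflops, start_year, end_year, doubling_months=24):
    """Compound-double start_teraflops from start_year to end_year."""
    doublings = (end_year - start_year) * 12 / doubling_months
    return start_teraflops * 2 ** doublings

POSTWAR_TFLOPS = 1.7e-3  # 1.7 gigaflops (NEC SX-3/44R), expressed in teraflops

at_2336 = extrapolate_teraflops(POSTWAR_TFLOPS, 2053, 2336)  # Data's construction
at_2365 = extrapolate_teraflops(POSTWAR_TFLOPS, 2053, 2365)  # "The Measure of a Man"
```

Whichever doubling period you plug in, the conclusion is the same: three centuries of uninterrupted doubling yields figures dozens of orders of magnitude beyond Data’s 60 teraflops.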
How far under expectations was Data? Check out this chart. The blue line is the expected trajectory, while Data’s specs are marked (click for full-size):
Assuming 300+ years of exponential growth unsurprisingly produces some pretty crazy numbers, with which Data’s specs simply don’t compare. Of course, no current analysts expect Moore’s Law to continue for that long, but then, Moore didn’t expect it to continue for 50 years after 1965, either.
The obvious conclusion is that Star Trek’s timeline does not foresee Moore’s Law continuing until the creation of Data in 2336. Fair enough, and obviously today’s brightest minds agree that Moore’s Law has a limited shelf-life for the coming years.
What’s more interesting is that, despite Data’s apparent failure to meet the prediction of Moore’s Law, and despite the fact that our top computing power today already exceeds his stated abilities, Data’s artificial intelligence is far more advanced than anything even the greatest supercomputers can currently achieve. I’ve touched on this in some extremely superficial looks at AI on this blog, but I’ve honestly been entirely underwhelmed by the development of AI technology over the past 20 years. To me, it’s been even more disappointing than our culture’s failure to produce the flying cars, hoverboards and self-fitting jackets that Back to the Future Part II predicted for 2015.
Then again, and this might be the most important conclusion of all: I’ve probably thought way too much about how the dream represented in TNG by Data’s existence seems entirely unachievable, even given the current trajectory of technology. It’s science fiction: a hopeful story of the future with some limited basis in reality, which is supposed to be an escape. And ultimately, thinking too much about all this served as a nice mental escape for a lazy Sunday afternoon.