Computers in Futuristic RPGs

by Jim Vassilakos

Let me begin with a disclaimer. I’m not a technologist. I’m a generalist, which is another way of saying that I know just enough to get myself in trouble. To compound this sorry state of affairs, I’m also an extreme pessimist, and it should be noted that when it comes to predictions about the future of computing, the pessimists have always been wrong. Hence, in summary, I am probably the last person who should be expounding on this topic. With that said, I will now expound on this topic.

As I see it, the fundamental problem with depictions of computers in futuristic RPGs is that RPG designers have no way of knowing how long Moore’s Law will continue to hold and what exactly that means for a future society. For most of you, I probably don’t need to explain Moore’s Law, but for the few who are not in the know: it is named after Gordon E. Moore, one of the founders of Intel, who noticed back in 1965 that computing power seemed to be effectively doubling with respect to price roughly every two years (strictly speaking, he was tracking component counts per chip, but the practical implication was always that performance was increasing exponentially), and based on this trend, he predicted that it would continue doing so for some indeterminate time to come. As I write this, it is 2013, some 48 years since Moore made his famous prediction, and while the trend may be slowing down, it has by no means stopped.
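
For a sense of scale, here’s a quick back-of-the-envelope sketch in Python of what that doubling rate implies. The only inputs are the 1965 start date and the two-year doubling period mentioned above, and the two-year figure is, of course, only an approximation.

    # Back-of-the-envelope: how many doublings does Moore's Law imply since 1965?
    # Assumes a constant two-year doubling period, as described above.
    start_year, current_year = 1965, 2013
    doubling_period = 2  # years per doubling (Moore's rough figure)

    doublings = (current_year - start_year) / doubling_period
    growth_factor = 2 ** doublings

    print(f"{doublings:.0f} doublings -> roughly a {growth_factor:,.0f}x "
          "improvement in price/performance")
    # 24 doublings -> roughly a 16,777,216x improvement in price/performance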

Ray Kurzweil has written extensively on the topic of what might happen if Moore’s Law continues for a few more decades. He argues that fairly soon, a $1000 computer will have the same computing power as the average human brain, and a few decades thereafter, a $1000 computer will have greater computing power than that of every human brain in existence (yes, all of humanity put together). At such a point, computers would be intellectual gods, and the human mind, virtually obsolete. But, of course, all of this is predicated on Moore’s Law continuing, and nobody yet knows whether or not that will happen, and if so, for how long.
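
To see why the second milestone follows the first at all, consider the arithmetic. The figures below are assumptions for illustration only: roughly 10^16 operations per second as a ballpark for a single human brain (in the neighborhood of Kurzweil’s own estimates) and roughly seven billion people on Earth. Note also that Kurzweil argues the doubling period itself shrinks over time, which is why his timeline is more aggressive than the constant-rate version sketched here.

    import math

    # Rough illustration of Kurzweil's extrapolation (ballpark figures only).
    ops_per_brain = 1e16      # assumed ops/second for one human brain
    world_population = 7e9    # approximate 2013 population
    doubling_period = 2       # years, assuming Moore's Law holds steady

    # Once a $1000 machine matches ONE brain, how much further to ALL brains?
    extra_doublings = math.log2(world_population)
    years_at_constant_rate = extra_doublings * doubling_period

    print(f"about {extra_doublings:.0f} more doublings, "
          f"or ~{years_at_constant_rate:.0f} years at a fixed two-year pace")
    # about 33 more doublings, or ~65 years at a fixed two-year pace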

One of the problems that we’re facing is that the space between individual transistors on a silicon microchip has gotten so small that we’re entering the realm where quantum effects such as electron tunneling can occur, which would make these microchips so error-prone that they would be effectively unusable. Another problem has to do with the dissipation of waste heat. There are various avenues computing could take to try to overcome these problems, and I’m going to talk about them in roughly the reverse of the usual order. In other words, I’m going to address the most esoteric ideas first and then move on to the more practical ones, many of which are already proven.

1. Quantum Computing

When it comes to quantum computing, there are really three questions that need to be answered. First, what are the inherent, insurmountable limitations, problems that will impose themselves regardless of how much R&D we dump into this technology? Second, even assuming we can get a satisfactory answer to the first question, what are they going to be able to do for us above and beyond what classical computers can do? And third, how much R&D will it take to get them to the point where they can do this? Of course, the answer to all three of these questions is that we just don’t know, but this hasn’t stopped people from speculating.

Some observers suspect that maintaining coherence across qubits becomes exponentially more difficult as the number of qubits increases. Whether this is actually true, and whether it will turn out to be an insurmountable problem, is unknown, but if it is, quantum computing would likely prove to be a stillborn science, a technological dead-end.

Other observers take a more optimistic view, hoping that after sufficient R&D, quantum computers will have a few narrow niches of applicability, particularly when it comes to handling very large problems, the sorts of problems that modern machines would need years, centuries, or even longer to solve. By contrast, at least theoretically, a quantum computer might be able to reach an answer to such problems in just hours or days, but, somewhat perversely, classical computers would still be quicker at solving all the small problems, the everyday sort of computations that computers do all the time. Hence, it seems very unlikely that quantum computers will ever reach the general consumer market, and, ultimately, the general consumer market is where the big money is, as pure research and niche applications can only attract so much investment.
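
To make the “years versus hours” contrast concrete, here’s a purely illustrative bit of arithmetic comparing a brute-force search over 2^64 possibilities against the same search done with Grover’s algorithm, which needs only on the order of the square root as many steps. The operation rates are invented for the sake of the example, and real quantum hardware would carry heavy error-correction overhead, so treat this as a cartoon of the scaling argument rather than a benchmark.

    # Toy comparison: classical brute-force search vs. Grover's quadratic speedup.
    # The op rates and the lack of error-correction overhead are illustrative
    # assumptions, not predictions about real hardware.
    search_space = 2 ** 64        # candidate solutions to check
    classical_rate = 1e9          # classical checks per second (assumed)
    quantum_rate = 1e6            # Grover iterations per second (assumed)

    classical_seconds = search_space / classical_rate
    grover_iterations = search_space ** 0.5       # ~sqrt(N) iterations
    quantum_seconds = grover_iterations / quantum_rate

    year = 60 * 60 * 24 * 365
    print(f"classical: ~{classical_seconds / year:,.0f} years")  # ~585 years
    print(f"quantum:   ~{quantum_seconds / 3600:.1f} hours")     # ~1.2 hours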

Nonetheless, if we could run extremely complex simulations, simulations which we just don’t have the computing power to run at this time, it could revolutionize our understanding of physics and weather systems and… well… I just don’t know enough to be able to speculate. But that’s the whole point of quantum computing. If we can achieve it, it will open doorways to scientific inquiry that we just can’t open any other way, but at this stage in the game, the technology is still too young to make any useful predictions.

2. Optical Computing

All of the computers we have today are electrical, and electricity is made of electrons. Electronic transistors are pretty fast, but photonic (i.e., light-based) transistors could be even faster as well as more energy efficient (i.e., less waste heat). There are numerous problems, however. For example, how do you guide the light so that it goes exactly where you want it to go?

So far, we’ve addressed this problem by using isolators, substances that absorb photons moving in the wrong direction. The trouble, aside from the fact that our best isolators are made from very rare materials that have to be placed within a magnetic field in order to work (which limits scaling), is that photons get lost from the system precisely because they’re being absorbed, and this diminishes the strength of the photonic signal. In short, to put it in layman’s terms, isolators are a pain in the ass, and they suck.

However, scientists in China have recently made a breakthrough, figuring out how to redirect the stray photons rather than absorbing them, and doing it without rare natural materials and without a magnetic field. The prototype is currently somewhat limited in terms of the wavelengths of light it can redirect, but it’s a good first step.

Likewise, an international team of scientists working at MIT, Harvard, and the Vienna University of Technology has created an optical switch that can be controlled by a single photon, a technology that could be applied to both conventional and quantum computing. This is essentially a photonic transistor.

Another international team, working from both Australia and Germany, drew inspiration from biology, copying the coiled, interconnected, nano-scale springs in the wings of the Callophrys rubi butterfly to create a tiny photonic-crystal beam-splitter that’s thinner than a human hair yet packed with over three-quarters of a million polymer nanorods. Together, these nanorods control whether or not light can pass through, depending on the light’s circular (rather than merely linear) polarization. Once again, this could prove useful in both conventional and quantum computing.

I can’t tell you with certainty if optical computing will ever become a reality, but scientists are certainly laying the initial groundwork, so I would say that it looks promising but add that fully optical processors are probably decades away, at best.

3. Carbon Nanotubes

Scientists from Stanford have recently created a working, 178-transistor computer assembled from carbon nanotubes. Though it’s a very tiny first step, it’s a pretty big deal all the same, because carbon nanotubes are notoriously difficult to work with. For starters, they grow in a crisscross pattern, forming unpredictable connections, and as if that weren’t bad enough, about a third of the time they come out malformed and basically unusable. The Stanford team found workarounds to both of these problems, opening up the possibility that we may one day have processors that clock at hundreds of gigahertz, and just as importantly, that we may be able to make them small enough to fit into objects we wouldn’t normally associate with computers.

4. Graphene

Graphene is basically a flat sheet of carbon atoms. Chemists had theorized that carbon could take this form for well over a century, but nobody had figured out how to synthesize it. After all, it’s only a single atom thick. In the 1970s, researchers tried “growing” it on top of other materials, but they were never quite successful. Then, in the 1990s, they tried to obtain it through the exfoliation of soot, where it had been observed through electron microscopy as occurring naturally, but once again, these efforts proved unsuccessful. Finally, in 2004, two scientists at the University of Manchester figured it out. Their method was micromechanical cleavage, or what they called, in layman’s terms, the Scotch tape technique. That’s right: they literally pulled layers of graphene from graphite and then transferred them onto silicon wafers using nothing more esoteric than some sticky tape. For this feat of engineering and their subsequent research, they received the 2010 Nobel Prize in Physics, and suddenly everybody could start playing with this new substance. Since then, even better synthesis techniques have been invented, but the important thing is that graphene seems to have some very special properties, some of which may revolutionize microchip architecture, not to mention a number of other industries.

To put it in a nutshell, graphene is the strongest material in the world, harder than diamond, yet bendable and stretchable. Light and transparent, it is one of the best conductors of heat and electricity, making it ideal for supercapacitors, which are, in effect, batteries with very quick charge/discharge cycles. Furthermore, since it’s merely carbon, it can be easily recycled. It has also spawned research into other ultra-thin substances as well as substances that can be used in ultra-thin applications, such as fluorographene, graphane, boron nitride nanomesh, monolayer NbSe2, monolayer MoS2, monolayer WSe2, superconducting MgB2, etc., and these can be layered together in various combinations to engineer materials with entirely new properties.

What is most important to the future of computing, however, is that graphene transistors can be manipulated through negative resistance and are much faster than silicon. According to a recent report, a prototype developed at my alma mater, U.C. Riverside, is fifty times faster than anything we currently have, and yes, that’s just the prototype. Likewise, scientists at Stanford have just invented a new way to create graphitic ribbons using DNA as a scaffold, and from there, they’ve managed, through a somewhat complicated and imperfect process, to transform these ribbons into working graphene transistors. The technology is obviously still in its infancy, but it’s now proven, it’s apparently easily scalable, and after more research, who knows? Moving over to data storage, scientists in Australia have recently used graphene to create a holographic optical disc with a data density even higher than that of our best hard drives, and once again, that’s just the prototype.

It will probably be a while before we see graphene make its way into actual production, but it now seems pretty certain that it will eventually happen, and when it does, I think we’ll see a sudden increase in both processor speeds and data capacity. This will, in effect, be a further extension of Moore’s Law (at least in performance terms), and I’m guessing that it will probably occur mainly in the 2020s and/or 2030s.

5. 3D Chips

As everyone knows, silicon chips are flat. They need to be that way so the waste heat they generate can just float away, like smog floating up from a busy industrial city. But this creates a problem. In order to put more transistors on a chip, chip-makers have to build them smaller and closer together, and there’s a physical limit to how far they can go, not to mention the practical cost of having to retool the production lines with new lithography every time they want to push the technology. Realizing that they can’t push it much further, they’ve finally figured out, after decades of research, various ways to deal with the waste heat so that they can start building up, adding layers to the chip so that it has multiple surfaces on which to process data. The new technology is called 3D wafer-level chip packaging, or chip stacking for short.

Earlier this year, Samsung released a 128-gigabit (16GB) V-NAND SSD (vertical NAND solid state drive), which includes a 24-layer chip that, according to the company, could eventually be scaled up to a terabit (128GB). According to Samsung, compared to the latest single-layer chips, these new 3D chips are faster, have twice as many transistors per square millimeter, use just a little over half the power, and, with an estimated 35,000 program/erase cycles, will supposedly last ten times as long. In order to manage the heat problem, each layer of Samsung’s chip is separated by a 50nm silicon nitride dielectric (an electrical insulator).
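
For a rough sense of what those figures mean in practice, here’s a back-of-the-envelope calculation using only the numbers quoted above: it converts the chip’s capacity from bits to bytes and turns the rated program/erase cycles into an optimistic estimate of total lifetime writes. It ignores write amplification, over-provisioning, and the like, so the real-world figure would be lower.

    # Back-of-the-envelope on the V-NAND figures quoted above.
    # Ignores write amplification and over-provisioning, so this is optimistic.
    capacity_bits = 128e9              # 128 gigabits per chip
    capacity_bytes = capacity_bits / 8
    pe_cycles = 35_000                 # rated program/erase cycles

    total_bytes_writable = capacity_bytes * pe_cycles
    print(f"capacity: {capacity_bytes / 1e9:.0f} GB per chip")        # 16 GB
    print(f"lifetime writes: ~{total_bytes_writable / 1e12:.0f} TB")  # ~560 TB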

One of the questions that will ultimately determine the fate of this technology is how many layers can be reliably stacked before fabrication problems or the heat problem become overwhelming. Right now, we’re just at the beginning of actual production. The important thing is that everyone knows the technology works, and since none of the other chip-makers want to be left behind, they’re all pouring lots of R&D money into chip stacking so that they can catch up to and hopefully even overtake Samsung. It will be interesting to watch what happens.

6. Parallel Processing

I don’t think I really need to describe this last one, because pretty much everyone already knows about it, but for those who need a definition, parallel processing is just the ability to carry out multiple operations or tasks simultaneously, and this is most easily done when a problem involves doing the same operation over and over and over on a relatively large data set. Computers have been doing this for decades, in the past by splitting a problem into its component parts and distributing them over a network, and more recently by putting multiple processors into a single computer or even multiple cores into a single processor. For example, the latest video cards now include over two thousand stream processors, because video games involve doing the same operation over and over and over on a relatively large set of data. However, this sort of problem isn’t unique to video games. There are all sorts of problems, particularly in the sciences, that involve doing lots of calculations. For this reason, multicore vectorized CPUs and massively multicore GPGPUs are increasing in popularity.
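
As a minimal sketch of the “same operation over a large data set” idea, here’s a small Python example that splits a big list into chunks and hands each chunk to a separate CPU core; the workload (summing squares) is just a stand-in for whatever the real repeated operation might be.

    # Minimal data-parallel sketch: split a big data set into chunks and hand
    # each chunk to a separate CPU core. The workload itself is just a stand-in.
    from concurrent.futures import ProcessPoolExecutor

    def sum_of_squares(chunk):
        # The "same operation" applied to every chunk of the data set.
        return sum(x * x for x in chunk)

    if __name__ == "__main__":
        data = list(range(10_000_000))
        chunk_size = 1_000_000
        chunks = [data[i:i + chunk_size] for i in range(0, len(data), chunk_size)]

        with ProcessPoolExecutor() as pool:   # one worker per CPU core by default
            partials = list(pool.map(sum_of_squares, chunks))

        # Same answer as the serial version, just computed in parallel.
        print(sum(partials))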

Likewise, during the past few years, Apple and a number of chip manufacturers have created OpenCL (Open Computing Language), a framework for writing programs that can run on a variety of different processors. To put it succinctly, it’s basically a free, cross-platform standard for parallel programming.
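
As an illustration of what an OpenCL program looks like, here’s a minimal sketch driven from Python via the pyopencl wrapper (my choice of convenience; the standard itself is exposed through C). The kernel applies the same tiny operation to every element of two arrays, and the runtime farms the work out across whatever CPU or GPU device it finds.

    # Minimal OpenCL sketch using the pyopencl wrapper. Assumes an OpenCL
    # runtime/driver is installed for at least one device on the system.
    import numpy as np
    import pyopencl as cl

    a = np.random.rand(1_000_000).astype(np.float32)
    b = np.random.rand(1_000_000).astype(np.float32)

    ctx = cl.create_some_context()        # pick any available CPU/GPU device
    queue = cl.CommandQueue(ctx)
    mf = cl.mem_flags
    a_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=a)
    b_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=b)
    out_buf = cl.Buffer(ctx, mf.WRITE_ONLY, a.nbytes)

    # The kernel runs once per element, in parallel across the device's cores.
    program = cl.Program(ctx, """
    __kernel void vector_add(__global const float *a,
                             __global const float *b,
                             __global float *out) {
        int i = get_global_id(0);
        out[i] = a[i] + b[i];
    }
    """).build()

    program.vector_add(queue, a.shape, None, a_buf, b_buf, out_buf)
    result = np.empty_like(a)
    cl.enqueue_copy(queue, result, out_buf)
    assert np.allclose(result, a + b)     # matches the plain NumPy computation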

Now why is this important? It’s important because as Timothy Little, one of my favorite Traveller Mailing List members, writes, “Artificial intelligence seems likely to be a massively parallelizable problem, since natural intelligence seems to be based in a massively parallel system that operates comparatively slowly. So a mesh of tiny, efficient CPU cores may be just as good for the purpose of matching or exceeding human brainpower as a single CPU with a million times the speed. The latter is quite possibly not achievable, but the former seems reachable within decades.”

7. Conclusion

Being an incurable pessimist, I would have loved nothing more than to mournfully shake my head and explain that Moore’s Law will break down in the next few years, that Kurzweil’s dream of a technological singularity will not unfold as hoped, and that we’d be facing a technology ceiling through which artificial intelligence would not emerge, or would emerge only partially, in some limited form. That would have been nice, because it would have resulted in a future that is at least intelligible to me; trying to comprehend a technological singularity in which machines evolve into post-sentient gods is virtually impossible. I can say it, but I can’t very easily envision what it would be like to live there. After all, what does it mean for humanity if that happens? And even should we put humanity to the side, which would be quite reasonable in such a happenstance, what would a future AI society be like? Easier, I think, to explain cars, the Internet, and cell phones to a caveman than to explain to myself what it would be like to exist in such an amazing future.

If this were a year earlier, I might have been able to pull it off and convince myself that Moore’s Law is on its last legs, but there have been so many scientific and engineering breakthroughs this past year that I’m just not comfortable with that position. It now seems likely to me that Moore’s Law will continue for at least one or two more decades, and beyond that, I can’t really make any predictions, but one or two more decades of Moore’s Law is probably enough to get us to the point where we can seriously undertake the task of building a strong AI. So, despite being a pessimist, I think this is really going to happen. We’re going to have artificial intelligence here among us, but what exactly that means is an open question. That’s the proverbial 800-pound gorilla in the middle of the road up ahead, and nobody seems really sure what to make of it.