Technology is a big part of our lives—in fact, it’s responsible for many of our livelihoods—so it’s a little bit strange to realize how vague the word “technology” is. Everyone has a similar idea of what it means, but nobody stops to define it.
When you DO stop to define “technology,” it’s easier to understand why it’s such a broad umbrella. Literally speaking, technology is the application of scientific knowledge for practical purposes; in other words, anything humans have ever invented to solve a problem counts as technology. We started a hundred thousand years ago with the gift of Prometheus — fire — and invented ourselves all the way to the fire emoji.
So: we have some sense of where technology begins and how (very) far it has come. This naturally raises one ominous-sounding question: where does technology end? In other words, what are the limits of technology?
We’ll visit three of the proposed answers:
The limits of technology are ultimately tied to the limits of the human body and brain. There are, if you will, technological limits to our own hardware: our eyes can only resolve so many pixels, our ears can only hear so many frequencies, our brains can only process so much information. So one way to answer the question, taking our eyes as the example, is that screen technology "ends" when our eyes can no longer detect improvements in it; past that point, an infinite-pixel screen would look no better than a merely excellent one.
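You can put a rough number on that eye limit. Normal 20/20 vision resolves about 60 pixels per degree of visual angle, which is a rule of thumb rather than a hard physiological constant; the sketch below (with a hypothetical helper name) estimates the pixel density beyond which a sharper screen stops looking sharper:

```python
import math

def retina_ppi(viewing_distance_in, pixels_per_degree=60):
    """Estimate the PPI beyond which finer pixels are invisible.

    Assumes ~60 pixels per degree (roughly 20/20 acuity), a rough
    rule of thumb, not a hard physiological constant.
    """
    # One pixel should subtend no more than 1/pixels_per_degree degrees.
    pixel_size_in = 2 * viewing_distance_in * math.tan(
        math.radians(1 / pixels_per_degree) / 2
    )
    return 1 / pixel_size_in

# A phone held about 12 inches from your face:
print(round(retina_ppi(12)))  # about 286 PPI
```

By this estimate, a phone display past roughly 300 PPI at arm's length is already bumping into the eye's limit, which is why pixel-density marketing quietly faded from phone ads.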
Let’s take a simpler example: during the 2000s, cell phones got smaller and smaller, but then they started getting bigger again during the 2010s. Why not keep going smaller? The reason is obvious: hands stay the same size. It’s technically possible to make thumbnail-sized cell phones, but they’d be pointless, so there’s a practical limit here even if there isn’t a scientific one. (And nobody wants to watch Hulu on their thumb.)
The big possible exception to this answer: AI, especially the possibility of intelligent AI (matching humans) or super-intelligent AI (surpassing us). The usual response from this camp is that we’re a long way from being able to create any “dangerous AI,” and then they start arguing with the camp that claims:
Technology will create the end of the world as we know it (but it might feel fine). We’ve mentioned Moore’s Law, the observation that transistor counts (and with them, roughly, computing power) double every couple of years. To people in the present, it’s the miracle that gave us smartphones so soon after PCs; to futurists, Moore’s Law is really intense stuff because (A) it has held true in reality all this time and (B) compounding exponential growth can have sudden and explosive consequences.
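To feel how explosive that compounding is, here’s a toy sketch; the clean two-year doubling period is a simplifying assumption, and the function name is ours:

```python
def moores_law_factor(years, doubling_period=2):
    """Total growth factor after `years` of doubling every `doubling_period` years."""
    return 2 ** (years / doubling_period)

# Fifty years of doubling every two years is 25 doublings:
print(f"{moores_law_factor(50):,.0f}x")  # prints "33,554,432x"
```

Twenty-five doublings is a factor of over 33 million, which is why a number that looks boring year-over-year terrifies futurists over decades.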
Here’s a question most of us aren’t ready to answer: what would be possible with effectively infinite computing power? Many other sciences (and technologies) would develop faster, including AI, and once we develop intelligent AI with near-infinite computing power, able to improve itself ad infinitum, we’ve come full circle. This hypothetical moment in the future is called the Technological Singularity, and whatever happens next, we’re no longer in control; it’s like putting all six Infinity Stones in a box on the street and waiting to see what happens.
Proponents of this theory don’t agree on whether it would be good or bad, but they do agree that the outcome will land at one extreme of the spectrum or the other. Either our new AI overlord will sustain an indefinite human Golden Age where all worldly needs are managed for us (because it can), or it will snap its fingers and wipe us from the face of the earth before we have any idea what’s happening (because it can).
Technology has no limits except the ones we truly can’t surpass, and we’ve already found a few of those. Some technological roadblocks cannot be solved with “more horsepower.”
- Battery Technology — We’ll skip the physics class and cut straight to the point: according to science as we know it, humans have nearly maxed out the energy storage of small batteries (think AAs, cell phones, and laptops). There’s more room to improve battery tech as batteries get larger, which is the space Elon Musk is working in, but for handheld devices, all we can do is increase the efficiency of the devices the batteries power.
- Conceptual Challenges — There’s an old saying you’ve probably heard: the human brain is so complex that it cannot understand itself (and if it were simple enough to understand itself, it couldn’t). That captures just how steep a challenge AI researchers currently face: how can you teach a machine “human intelligence,” with all its nuance and fluidity, when we barely understand that faculty from within? Computers can beat humans at plenty of tasks (like multiplying large numbers) and even mimic fluid intelligence within narrow contexts (like chess), but these are still parlor tricks compared to the living intelligence of an average person.
- Random Convergence — There’s an apocryphal claim that the size of our spacefaring rockets is ultimately constrained by the width of a horse’s ass: that width set chariot sizes, which set road widths, which set the maximum width of anything you can transport. Even if the claim doesn’t hold up historically, it illustrates a real point: technology is often constrained by factors it does not control, even when those factors are completely unrelated to the technology itself. (Crossing over with the battery example above: imagine how many new inventions are impossible as long as existing small-battery tech is the best we can do.)