are bits of writing from many sources such as personal correspondence,
posts to on-line discussion groups, notes, and occasionally even some journaling.
All of this is informal in nature, but contains some interesting and/or
I'm forever thinking
about the integration of humans and computers, which is basically at the
heart of a lot of cyberpunk ideas. Personally, one of the areas I'm working
in right now is neuroscience that uses an interface between electronics
and nervous tissue. Of course, we just use the electronics to see what
the neurons are doing, but you can also stimulate them through essentially
the same apparatus. Unfortunately, no one in my lab has been doing that
lately. However, a lot of the publications I read delve into more cognitive
applications of this and similar technology.
I'm fully in favor
of cranial implants. There is a symposium every year on brain-computer
interfaces at the Society for Neuroscience conference. That's one of the
highlights of the meeting for me. Even though it really isn't my area of
research, the technology overlaps to some extent (multi-channel signals
are employed, and that's what we use).
Future, Part I
[from a correspondence
with my uncle]
>... This says a
lot about our impatience with everything since the dawn of instant communication
and, in particular, the Internet.
It's funny how the
things you lived without for all of your life up to this point suddenly
seem indispensable as soon as they are introduced.
>Where do we go
I'm looking forward to Wi-Fi cranial implants, but I guess those are another
50+ years away. Ever see "The Matrix"? At the neuroscience conference last
November, I attended one of the symposia on computer/brain interfaces.
We're only getting a foot in the door by having patients with ALS or other
forms of global paralysis move a mouse cursor via implanted electrodes.
I honestly have little doubt that my great-grandchildren's generation will
routinely have a "port" installed at birth.
Think this is a
bit far-fetched? Consider it from the perspective that the miniaturization
of technology basically pushes us toward being cyborgs. Something like
a wrist watch becomes *part* of your body, so it may as well be an implant.
Technically, the only thing separating us from that definition is a layer
of skin. Similarly, we surround ourselves with an ever-increasing array
of ever-smaller devices: PDAs, mp3 players, cell phones, text messagers,
digital cameras, GPS receivers, etc. These devices are slowly being incorporated
into one another (and wrist watches, incidentally). However, they haven't
been incorporated into *us* yet. "Implanting" them isn't what I'm getting
at here though; I expect there to ultimately be (sorry about the split
infinitive) some sort of interface directly with the central nervous system.
Imagine the things
that could be incorporated into a "cyborg" with sufficiently advanced hardware
of the variety above. For example, all sorts of reference materials could
be accessed directly. Imagine the potential of immediate access to, say,
multiple language translators, telephone directories, the periodic table
of elements, and so on plus constant updates from other information sources
that might be relevant to a given problem. Think of a mental version of
what Neo could do physically inside the Matrix.
That's kind of what
I want: expanded mental awareness on demand. Sure, physical superpowers
would be great as well, but for some reason I prize this idea of virtual
omniscience through information technologies over most versions of omnipotence.
Future, Part II
[from a correspondence
with my uncle]
>I wouldn't be surprised
either, although I'm certainly glad I won't be around to see it.
Personally, I think
I was born too soon. We're still in the infancy of our ability to navigate
information technology. Only a small part of that is the issue of interfacing
more efficiently. There are some unusual directions that computing will
take at some point in the future that will make still more advances possible.
>Think about this:
Even then: What next?
There's a really
good book out right now called "The Tipping Point" by Malcolm Gladwell,
a staff writer for the New Yorker (although he was on leave working on
a new book during much of your subscription, so it is possible you missed
out on his articles). In his book, he examines social trends from the perspective
of epidemiology. The interesting thing about the spread of any idea or
new piece of technology is that it reaches a certain point and then "explodes."
It is difficult to predict when this will occur, although Gladwell explores
some mechanisms that contribute to this phenomenon that I won't go into here.
However, an important
point is that some of these shifts do more than spread the idea itself;
they actually induce paradigm shifts in their wake. In the last decade
the internet has followed that trend. In a matter of two or three years,
this obscure text-based network went from the pastime of college
students with impaired social skills (read: Asperger's syndrome) to a core
medium in the popular culture. Note that this was not a simple, linear
development. Almost overnight it went from a few thousand users to millions.
The fascinating and frustrating thing about rapid changes like this is
that we can't answer the question, "What next?" because the nature of a
paradigm shift means that all our assumptions up to that point were based
on outdated principles. Looking back, some of the prognostications of global
economic and social boon or doom were fairly laughable. Exhibit A: The
Of course, such
a shift isn't that difficult for the generation that grows up with it;
everything is new to them, so no balance has been upset in their lives.
For the older generations, something new and radically different violates
"natural law." Robert Anton Wilson challenges people to show him a violation
of natural law whenever he hears that expression. Of course, I am something
of an "early adopter," one of those people who says, "Cool! Sign me up,"
so nothing has yet fazed me. But we'll see.
While there may
ultimately be unforeseen (and unforeseeable) negative implications to the
advances I described above and in my last message, I guarantee you the
benefits will outweigh the drawbacks. Consider this: when you're gone from
this life, the only record of you will be whatever information you stored
in your lifetime: photos, letters, etc. We don't have a "back-up" of a
single person on this planet. Just imagine where we would be today in the
realm of physics if we had a "rescue disc" to "restore" Einstein. Consider
what he could do with the data from today's telescopes and particle accelerators.
Or what would art be like if we had "saved" Leonardo and had his insights
on how to apply Photoshop and Lightwave for 2D and 3D art, respectively.
This is indeed a
paradigm shift. Try to get your head around the consequences of this inevitable
(albeit still distant) leap in the human experience.
From your earlier
back from you and me, almost everyone was into letter-writing.
A few years ago
a speaker on the role of technology in education came to the district where
I was teaching. He pointed out that whenever a new technology came along,
people expressed fears that it would supplant earlier ways of doing things.
For example, one idea was that photography would replace painting. Instead,
the exact reverse occurred. Painting dramatically increased in popularity
because it was possible to capture any still-life or model to work on at
the artist's leisure.
The internet "broke"
into the mainstream in its still-nascent form while I was at LSU. Suddenly,
email became preferable to expensive phone calls home, especially for foreign
students. At that point, most of the internet was still text-based, but
that led to a proliferation of not only letter-writing, but also popular
literature. While the Web has largely supplanted other portions of the
internet (indeed, most people assume they are one and the same), back then
the Usenet was the main forum, and every variety of text was available
on it: forum discussions, essays, jokes, stories, etc. I'm sorry to see
that portion of the internet fading away as increasingly more "sensory" media
(static graphics -> animations -> streaming audio -> real video) propagate
over ever-faster connections.
Future, Part III
[from a correspondence
with my uncle]
>stuff. This is really getting out to "the edge".
You might be surprised
how common this area of research is. Like I mentioned earlier, there was
a whole symposium on it at the last neuroscience conference. There are
a number of studies that are running in parallel as well as additional,
unrelated lines of research that are converging from radically different
angles (e.g., cellular approaches, microelectronics, etc.).
>I would think the
EEG approach has the best chance. Drilling holes in the skull? Wow.
For all the negatives
of the more invasive approach (e.g., expense, possible infections, and
other risks to patients), implanted electrodes have the obvious advantage
that they communicate with a smaller number of cells, so less learning
is required to produce the ability to, say, play "Pong" with one's mind.
By contrast, EEG "hears" signals from tremendous populations of neurons,
and through the comparatively enormous distance of the skull's thickness
and multiple layers of protective tissue.
To make use of such "muddy" signals is analogous to trying to play a piano
with boxing gloves on.
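As a toy illustration of that muddiness (a back-of-the-envelope sketch with invented numbers, not a model of real EEG), watch how the correlation with one target neuron collapses once a recording sums in thousands of unrelated cells:

```python
# Toy sketch: an "electrode" that mixes in unrelated neurons loses track
# of the one neuron we care about. All numbers here are invented.
import numpy as np

rng = np.random.default_rng(0)
n_samples = 5000
target = rng.standard_normal(n_samples)  # activity of the neuron of interest

def recording(n_bystanders, attenuation):
    """Target plus n_bystanders unrelated neurons, scaled by the 'skull'."""
    bystanders = rng.standard_normal((n_bystanders, n_samples)).sum(axis=0)
    return attenuation * (target + bystanders)

implant = recording(n_bystanders=5, attenuation=1.0)    # few cells, up close
eeg = recording(n_bystanders=10_000, attenuation=0.01)  # huge population, through the skull

for name, signal in [("implant", implant), ("EEG", eeg)]:
    r = np.corrcoef(target, signal)[0, 1]
    print(f"{name}: correlation with target neuron = {r:.3f}")
```

Note that the attenuation scales everything equally; it is the sheer number of bystander cells that buries the target signal, which is the boxing-gloves problem in miniature.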
Granted, in a very
literal sense, even going through life with the functional equivalent of
boxing gloves would be more ability than many of these patients have presently,
so even these less efficient approaches are being explored. In fact, because
EEG is vastly more accessible, a number of researchers and even some amateurs
have learned how to utilize conscious changes in their brainwaves to manipulate
their environment. Naturally, this is still somewhat limited, but with
training, one could, say, steer a sailboat that was properly modified to
transduce such signals into action. I saw a demonstration of this last
example on a television program about a year ago.
That being said,
imagine the possibility of a generation of humans growing up with this
kind of interface. Whereas the pioneers of this technology require several
weeks or even months of training in order to achieve useful effects, children
raised to manipulate their brainwaves would have little trouble navigating
a world where this technology has grown ubiquitous... much in the manner
that computer-savvy kids today know their way around a keyboard
with what we might regard as a precocious degree of skill. And the generation
with the cranial implants will think there's nothing more extraordinary
about their enhanced abilities than most people do today about digital
watches, contact lenses, or organ transplants.
>Thanks. I'll try
the Internet site tonite.
I think you'll enjoy
his material. There is a lot of applied neuroscience in many of his articles,
sometimes only by implication but more overtly in other articles (e.g.,
"The Naked Face," "The Art of Failure," among others).
His next book is called "Blink: The Power of Thinking Without Thinking," although
he has not yet updated his site to promote it. It is scheduled to be released
in January and expands on a lot of the themes addressed in his shorter
articles. [Blink has since been released and sample chapters are available on-line
at his site as of this writing.]
Future, Part IV
[from a correspondence
with my uncle]
>of science getting involved - say, with a fighter pilot, and with the human
mind playing tricks, could this ever be a fully controlled situation?
Well, like anything
else this requires practice. Consider other activities in which you could
conceivably make the wrong "move" in a very literal sense. For example,
think about playing a musical instrument. At the start, there are a lot
of "bad notes" and false moves. However, with practice, the player grows
more skilled and has no problem executing increasingly more sophisticated
moves, particularly if they are an established sequence (e.g., a scale
run on a guitar). The aforementioned Gladwell article ("The Art of Failure")
about "choking" under pressure gives some insight into the neuroscience involved.
Of course, there
are multiple applications of this technology, divided primarily between
input and output approaches. The area I am more interested in is the less explored
of the two: sending information into the brain. So far the closest we have
come to this are sense organ prosthetics such as cochlear and retinal implants.
These actually talk directly to the nervous system rather than letting
the original (now absent or severely damaged) sense organs perform this
function. Presently, these fall far short of the original sense organ in
terms of their resolution (e.g., in the case of hearing, they discriminate
only dozens of frequencies rather than the millions we can hear). However,
the expectation is that they will someday surpass human senses. An analogous
outcome would be the character Geordi from Star Trek: The Next Generation
who, through the use of a prosthetic visor transducing into a pair of cranial
implants, could perceive the world in multiple spectra including thermal
(IR) and high frequency bands.
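To put a rough number on that resolution gap, here is a hypothetical sketch; the ~22 channels and the 200 Hz-8 kHz range are ballpark assumptions for illustration, not the specs of any real implant:

```python
# Hypothetical sketch: a pure tone is represented only by which of a
# couple dozen electrode bands it falls into. The channel count and
# frequency range are assumed ballpark figures, not a device's specs.
import numpy as np

edges = np.geomspace(200, 8000, 23)  # 23 log-spaced edges -> 22 bands

def channel(freq_hz):
    """Index of the electrode band a pure tone lands in."""
    return int(np.searchsorted(edges, freq_hz))

# Each band spans roughly 18% in frequency, so two tones 10% apart --
# an easy distinction for normal hearing -- can hit the same electrode,
# while tones an octave apart still land in different bands:
print(channel(400), channel(440))
print(channel(400), channel(800))
```

The point of the sketch is just that everything inside one band collapses to the same percept, which is why discrimination is counted in dozens of steps rather than the fine gradations of natural hearing.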
Another area of the "input" approach is in communicating with the spinal cord and peripheral
nervous system. This includes various methods of activating muscles to
grant movement to paralyzed individuals. This is still an awkward area
primarily because there is no feedback to this input to help steady the
individual. For example, if you stand up from a seated position too quickly
and start to fall forward, you "notice" your motion immediately and send
compensatory signals to activate muscles to push you in the opposite direction.
I say "notice" in quotes because a lot of this happens in your spinal cord,
completely outside of conscious awareness. As a result, this method still
has a long way to go.
The other, more
researched approach in brain/computer interfacing is in transducing mental
signals as an output. This is an easier approach than the "input" version
for a number of somewhat technical reasons that I won't go into. In addition
to the article I forwarded you on human research, there is a lot of work
being conducted in this area with primates. One very interesting example
as well as an earlier
report I came across while looking up that one:
Most of these applications
are directed toward restoring abilities to paralyzed individuals. While
we tend to imagine this category as including quadriplegics, there are
far worse cases of disability such as complete paralysis through severe
strokes and ALS (aka Lou Gehrig's disease). Stephen Hawking continues to
be productive because he is able to communicate through the use of a single
finger, with which he manipulates a computer to select words, form them
into sentences, and send them to a voice synthesizer. This
is a relatively time-consuming process, but it is nothing compared to the
hardship of individuals who lack even this degree of mobility.
Researchers are exploring several approaches to allowing severely impaired individuals
to move a mouse cursor. This takes some time for the nervous system to
get its head around, if you'll pardon the pun, but eventually the mouse
movements grow less erratic and can be controlled to navigate a keyboard
on the screen to type and send email, browse the web, and play computer
chess. Still, we're only seeing an approach that substitutes "virtual"
approximation for physical movement. The next step would be to think thoughts
that translate into action. It would be nice to be able to type at the
speed of thought, to borrow from Bill Gates' catch phrase.
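The nuts and bolts of those cursor experiments can be sketched as fitting a linear decoder from firing rates to intended cursor velocity. This is a toy version on made-up synthetic data; real decoders contend with noise, electrode drift, and far messier signals:

```python
# Toy sketch of a linear cursor decoder: learn a map from neural firing
# rates to 2-D cursor velocity. The synthetic data here is noiseless,
# which real recordings never are.
import numpy as np

rng = np.random.default_rng(1)
n_neurons, n_steps = 30, 2000

true_map = rng.standard_normal((2, n_neurons))       # how neurons encode velocity
rates = rng.poisson(5.0, size=(n_steps, n_neurons))  # simulated firing rates
velocity = rates @ true_map.T                        # "intended" cursor velocity

# Fit the decoder by least squares on a training block, test on the rest.
train = slice(0, 1500)
decoder, *_ = np.linalg.lstsq(rates[train], velocity[train], rcond=None)
predicted = rates[1500:] @ decoder
err = np.abs(predicted - velocity[1500:]).max()
print(f"max decoding error on held-out data: {err:.2e}")
```

With clean linear data the fit is essentially exact; the weeks of training the patients go through amount to brain and decoder converging on such a map despite the noise.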
So far these approaches
have been confined to small numbers of neurons, typically in the motor
cortex. There are several reasons for that, notably that the motor cortex
is better understood than other regions. Presently, the areas involved
in language are very incompletely understood, so we cannot tap into them
as effectively as we might at some point in the future. When we do, we
will have a tremendously more efficient interface with computers than our
hands over keys and a mouse.
There are indications
that users are already itching for this, given the proliferation of multi-button
mouses (or mice, depending on your preference) with scroll wheels and the
option to customize "hot keys" for routine tasks. Similarly, look at what
has happened to our keyboards! Compare an old Underwood manual typewriter
to the modern electronic incarnation. Today's version comes complete
with all sorts of extras including the "Windows" key, the virtually obsolete
back-slash key, a row of "F" keys, the arrow keys (along with the "page
up/down" family of navigators), the numeric keypad (in addition to the
original row at the top), as well as the "internet keys" on higher-end
keyboards. This does not even address combinations of simultaneous keystrokes
(we could almost call them words) such as Ctrl+X or the infamous Ctrl+Alt+Delete.
Even with all of
this, our techniques of operating in 3D are woefully inadequate. When computer
games started pushing into true three dimensional space, gamers had to
hybridize the traditional controllers. The most popular compromise has
been to use a three or four button mouse to switch weapons, shoot, look
up and down, and rapidly strafe (i.e., move from side to side), while the
keyboard's arrow keys were designated as turn left/right and move forward/backward.
And this is just "play"!
There are serious
applications that require navigation in 3D (e.g., molecular modeling in
drug development is the first one that comes to mind), so there is an economic
incentive to push beyond our present conceptions in this area. Either we explore
the alternate approaches outlined above or we end up with keyboards the
size of a grand piano while our CPUs shrink to resemble an iPod. My bet
is an ever-closer relationship with our nervous systems.
Hawking has become a proponent of this position and, indeed, sees it as
an inevitability. I'm the last one who would argue with him on this point.
I guess the question
of how computers would change your brain would depend on the methodology
of the interface. By that, I mean that you could "burn" memories in at
several different levels of processing. For example, the least invasive
would be something like a wearable computer that fed data directly into
your retina or optic nerve. Such a device would be nothing more than
a more integrated version of sitting in front of a monitor and looking
at the screen (just skipping the screen). On the other hand, you could
have, say, nanobots crawling around inside your skull physically rearranging
your synapses. That would almost certainly result in changes far beyond
the effect on the single memory. Of course, no one understands how memories
are stored as yet, so this is highly speculative.
As far as computer
interfacing with my brain, in addition to instant access to the web, I
would like it for a lot of other things as well, like data displays and
manipulation (i.e., analysis) and expanded capabilities like that. Our
brains are eventually just going to become indices of data stored externally.
As it is now, I will typically remember what someone said, but if I need
to know how they said it, I will have to look that up. Wherever I need
to be precise, I have to go searching, so the instant access would come