The following is a modest summary of a single chapter of David Deutsch’s book on reason and knowledge, The Beginning of Infinity, and several adjacent concepts. The specific chapter, entitled “Artificial Creativity”, examines a source of possible confusion surrounding artificial intelligence. If this simple summary piques your interest, do yourself a favor and read Deutsch’s full work.

In this blog post, I’ll first summarize David Deutsch’s analysis of the generation of knowledge and all its inherent complexities. After that, with that analysis as a foundation, I’ll pivot to Deutsch’s commentary on artificial intelligence.

Part 1: The Infinite Reach of Knowledge

One of David Deutsch’s main theses in The Beginning of Infinity is that knowledge is the currency of the universe. With sufficient knowledge, he argues, a person can accomplish any task that does not violate the laws of physics (such as traveling faster than the speed of light). This is true because, as Deutsch writes, "There can be no regularity in nature without an explanation, for to believe in an explanation-less regularity is to invoke the supernatural."

Upon first reading, I had some difficulty parsing this claim. However, with a bit of additional thought I was able to work through it. First, imagine being an early human and stumbling upon some new "regularity" in the world — say, that water seems to always run downhill, or that the four seasons follow a predictable schedule. Such observations may seem hopelessly mysterious while brand new, but with hindsight we know that those two particular regularities each have concrete explanations (gravity[1], and the orbit and tilt of Earth, respectively). Deutsch's point, which almost seems too simple to state, is that all regularities will have such explanations (whether we immediately know them or not!). This extrapolation is what unlocks the full power of explanation, and thus knowledge.

Imagining the opposite of this assertion should further crystallize its fundamental role. Imagine some regularity in nature — any predictable cause-effect relationship — that is the way it is for absolutely no reason. In this thought experiment, there is no crystal ball or magic spell powerful enough to ever reveal the constituent rules creating the regularity. Because those rules flat out don't exist.

Obviously, this thought experiment is deeply unsettling. It simply makes no sense, and I trust that most of us will agree it is not the way our universe works. Thankfully, in reality, each regularity has its particular details because of a certain set of preconditions, and with sufficient understanding and knowledge, human beings can change those conditions, and thus change their environments. Fascinatingly, this argument brings all imaginable problems, and their solutions, into potential human reach.

The only limiting factor is knowledge.

The Pattern that Creates Knowledge

If knowledge is the currency of the universe, a critical reader should immediately inquire as to the source of such knowledge. Infinite regress answers, such as "God did it", don't even approach the question, for if a God-like entity did in fact sprinkle the first drops of knowledge into our physical realm, it would certainly have accomplished this task with a vast amount of its own pre-existing knowledge. Thus, the proposed answer contains the original problem. Producing an answer that avoids this pitfall, which is to say it explains how knowledge came to be without invoking an even more knowledgeable entity, turns out to be a real challenge.

At this time, there is only one known pattern that solves the problem of true knowledge creation: Alternating variation and selection. (If your mind went straight to Darwinian evolution, you're on the right track. Keeping that example in mind will make the following paragraphs more clear.)

The first step, variation, takes place on some pre-existing material in a way that puts both the pre- and post-variation versions side by side, competing for the same resources. The standard way this comes to pass is when the original material is copied, with rare errors in the copying process supplying variation.

The second step, selection, must occur on our pool of variants. This is usually easy to imagine, as all resources are limited. Thus, the strongest version from the pool consumes the most resources and is able to copy itself again, likely this time without any new errors (since copying errors are rare in practice; otherwise we couldn't even call the process copying). This uneven consumption of resources begins to propagate the change our original, unpredictable copying error produced.

That edge, that heightened ability to consume resources and replicate itself, constitutes the knowledge that our physical material has created.
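To make the shape of that loop concrete, here is a minimal Python sketch of the alternating pattern. Every name in it, and the toy usage at the end, is my own illustration rather than anything from Deutsch's text.

```python
import random


def create_knowledge(population, fitness, mutate, generations=1000):
    """Alternate variation and selection on a pool of replicators.

    `population` is a list of candidate "materials" (genes, ideas, rubrics),
    `fitness` scores how well a candidate competes for resources, and
    `mutate` copies a candidate with an occasional error. All three are
    hypothetical stand-ins for whatever the real medium supplies.
    """
    pool_size = len(population)
    for _ in range(generations):
        # Variation: every candidate is copied, sometimes imperfectly, so
        # parent and variant sit side by side competing for resources.
        variants = population + [mutate(candidate) for candidate in population]
        # Selection: resources are limited, so only the strongest copies
        # survive to be copied again in the next round.
        variants.sort(key=fitness, reverse=True)
        population = variants[:pool_size]
    # Whatever edge the survivors accumulated is the newly created knowledge.
    return max(population, key=fitness)


# Toy usage: "evolve" a number toward 42 through aimless random nudges.
best = create_knowledge(
    population=[0.0] * 10,
    fitness=lambda x: -abs(x - 42),
    mutate=lambda x: x + random.gauss(0.0, 1.0),
)
```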

The Two Sources of Knowledge Creation

At present, there are two known instances of the above pattern in the universe actually creating knowledge. The first, foreshadowed above, is Darwinian evolution, whose physical material undergoing alternating variation and selection is the organic molecule DNA. Imperfect replication during sexual or asexual reproduction starts the process and constitutes our needed variation. Such copying errors are perfectly aimless, rendering Darwinian evolution spectacularly slow at producing replication improvements. However, when an improvement does arise, it spreads by edging out rivals for premium resources. This satisfies the selection part of our equation, and taken together the two halves create new knowledge encoded in DNA molecules.

The second source of knowledge creation is human creativity, which encodes its information in books, websites, speeches, and the brain states of human beings. Of this list, only the brain states of human beings vary in the way we need to begin our knowledge creation pattern. The fundamentals of this variation are far from understood, but we're all comfortable with the output. We call them "original thoughts", and these constitute variation of a physical material. Note that whereas perfectly aimless copying errors satisfied the variation portion of Darwinian evolution, human creativity rides the wave of intention and so is able to iterate many thousands of times more quickly.

The infancy of our understanding of human brains obscures their adherence to this fundamental pattern of knowledge creation. Consider a static brain, with no variation whatsoever. Such a person is not thinking, dreaming, or even likely alive. On the contrary, a typical living brain's neurons are constantly firing to process memories and sensory inputs. This self-driven variation lies far outside our current understanding, but the thoughts it kicks up from the depths of consciousness absolutely count for our needed variation of a physical medium.

As was true for Darwinian evolution, the selection phase of human creativity is in plain sight. If I get the bright idea to fly by jumping off my roof and flapping my arms, I will either be talked out of it or quickly demonstrate how poor of an idea it was. Conversely, if I get the bright idea to create a novel business, I will no sooner deposit my first check than other entrepreneurs will mimic my business model. Human ideas, just like optimized DNA, are self-replicators that face strong selection pressure immediately after inception.

Part 2: Confusion Surrounding Artificial Intelligence

At the risk of being clichéd and reaching for a dictionary, the word "intelligence" is defined as "the ability to acquire and apply knowledge". That definition is perfect for this conversation. The key word is "acquire", for it is the creation of new knowledge that is most critical.

Ongoing efforts most often described as artificial intelligence center on a family of techniques known as "machine learning". For the uninitiated, machine learning essentially boils down to the following process:

First, human beings define a goal. For this example, let's consider the task of recognizing faces in a photograph.

Second, humans manually comb a large dataset, determining correct answers for each item. In our example, this means identifying the center of each face in each photograph.

Third, human programmers do their best to break the problem down into its simplest components. Some such components might include:

  1. Identifying pairs of nearby dots which may be eyes
  2. From each possible set of eyes, identifying other facial structures (noses, mouths, etc.) in expected physical relation to the eyes
  3. Considering lighting and shadows, which may obscure various facial features
  4. Considering modifications to facial structure from items such as sunglasses or hats
  5. Adjusting physical relation expectations if the face was photographed at an angle

Fourth, humans define rubrics for each component problem, awarding appropriate “Face Points” to various outputs from each section. Once a particular photograph has been analyzed, any area that earned a sufficient number of “Face Points” is said to contain a human face.
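For concreteness, such a rubric might be nothing more than a table of weights, one per component detector from the previous step. The names and numbers below are hypothetical starting guesses of my own, not anything a real system is known to use.

```python
# Hypothetical starting rubric: how many "Face Points" each component
# detector contributes when it fires on a region of a photograph.
face_point_rubric = {
    "eye_pair_found":      40.0,   # two nearby dark dots at eye-like spacing
    "nose_below_eyes":     20.0,   # darker strip extending down between the eyes
    "mouth_below_nose":    20.0,   # horizontal feature where a mouth should be
    "shadow_discount":    -10.0,   # harsh lighting obscuring facial features
    "accessory_discount":  -5.0,   # sunglasses, hats, and other obstructions
    "angled_face_bonus":    5.0,   # relaxed spacing expectations for profile shots
}

# A region is declared a face once its accumulated points clear this threshold.
FACE_POINT_THRESHOLD = 50.0
```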

This fourth part constitutes almost pure guesswork. Who can predict how heavily the algorithm should weigh a darker section extending down from between the eyes (possibly a nose) even if it can't subsequently detect a mouth? Similarly, the appropriate weight on a mouth's size is almost impossible to guess up front, as people in photographs could be smiling, talking, looking away, or eating. The breadth of reasonable variation here is staggering, which is where the raw speed of computers takes over.

Fifth, human programmers write additional software that is able to analyze the human-curated dataset over and over and over again. Each time, it makes small variations to its grading rubric, and presumably each time it produces different answers for face locations inside the same photographs.

All of the above counts as the variation portion of our knowledge creation pattern. Thankfully, imagining selection is almost too easy: Individual rubrics’ answers are compared against the correct answers as outlined by humans in Step 2, and higher performing rubrics are retained for further mutation, while underperforming variations are intentionally discarded.
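Steps four and five taken together might look something like the sketch below. The `locate_faces` callback, the dataset arguments, and the stopping numbers are all placeholders of my own; the point is only to show variation (nudging one weight) alternating with selection (keeping a variant only when it agrees with the human-labeled answers more often than its parent did).

```python
import random


def train_rubric(rubric, photos, human_answers, locate_faces,
                 target_accuracy=0.95, max_rounds=1_000_000):
    """Tune "Face Point" weights by alternating variation and selection."""

    def accuracy(candidate):
        # Fraction of photos whose predicted face locations match the
        # human-labeled answers gathered in Step 2.
        hits = sum(locate_faces(candidate, photo) == answer
                   for photo, answer in zip(photos, human_answers))
        return hits / len(photos)

    best, best_score = dict(rubric), accuracy(rubric)
    for _ in range(max_rounds):
        # Variation: copy the current best rubric and nudge one weight at random.
        variant = dict(best)
        weight = random.choice(list(variant))
        variant[weight] += random.gauss(0.0, 1.0)

        # Selection: retain the variant only if it outperforms its parent.
        score = accuracy(variant)
        if score > best_score:
            best, best_score = variant, score
        if best_score >= target_accuracy:
            break

    return best, best_score
```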

If the programmers run this application for long enough, it will continually get better and better at detecting faces in photographs. Once a pre-determined accuracy rating is achieved, the programmers will have taught a computer how to teach itself how to solve a problem.

The machine will have learned.

What We Just Accomplished

So far, our hypothetical programmers have closely approximated Darwinian evolution. They alternated between variation (random changes to the “Face Point” rubric) and selection (only high-performing rubrics were retained) and in doing so created the knowledge for how to detect faces in photographs.

In terms of utility and power, the programmers have achieved a real marvel. Facial recognition is a computationally intensive task. Healthy human brains complete it without conscious effort, but slowly and at great risk of fatigue. Computers' speed and endurance, on the other hand, open up a wealth of possibilities. It would take human Facebook employees years to go back through all the early photos uploaded by users before friend tagging was introduced and propose friends to tag for each photo. Similarly, the rest of mankind's photographic catalog would take another few years. Luckily none of this soul-crushing work will ever be required of a human brain, as a single sufficiently powerful server could finish each task in a matter of days. Similar computational brute force is currently being applied to medical research, statistical data mining, self-driving cars, and many other life-saving tasks. Make no mistake about it: the success of machine learning has greatly improved the quality of human life on this planet.

In terms of flexibility, however, the programmers have only created something on par with earth’s simple-minded species. Like a wild animal, this program is not capable of arbitrary behavior, nor is it capable of adapting to arbitrary inputs. Most relevantly to the programmers, it is not capable of solving arbitrary problems. Just as squirrels can never learn to conduct a business meeting, our facial recognition software can never teach itself how to play chess.

Put another way, our program is a lot like a muscle. Only three things are encoded into it:

  1. The knowledge of how to complete specific tasks.
  2. The knowledge of how to detect patterns of inputs that require improvement at said tasks.
  3. The knowledge of how to implement said improvement.

Critically, note that all the knowledge in this process originated inside the brains of its human programmers through the process we're calling "human creativity".

A bicep cannot lend a hand during a heart attack and start pumping blood because it lacks the specific instructions to do so. It can only do exactly what its DNA spells out, which is to contract when asked and, if asked enough, to grow in size to contract more powerfully tomorrow.

What We Did Not Just Accomplish

Knowledge on Darwinian scales is impressive, but it lacks the wide applicability humans imagine when we dream about true artificial intelligence. When people imagine artificial intelligence, we think of fluid conversations with robots and computer programs that determine their own desires. This, of course, is knowledge typical of human creativity, but our machine learning programmers have no more achieved it than squirrels have achieved the ability to hold democratic elections.

Common Mistakes

Google has an initiative to learn arbitrary 80s arcade games, and it is currently beating humans at nearly every one. At a glance, this might look like the Holy Grail of creating artificial human-like creativity. However, the details reveal that Google’s programmers just took their abstraction one step back. 80s arcade games share enough common ground to be wrapped up into a single meta-machine learning program.

Here too, the critical detail is that all the new knowledge originated in the brains of Google programmers. Until a program like this learns a radically different task (say, accounting), to which its programmers gave no thought, only Darwinian-scale knowledge creation has been achieved.

Artificial Creativity

The concept of generating new knowledge typical of human creativity has been on the table since 1936 when Alan Turing correctly argued that it was technically possible. However, in the 80 years since he published his conclusions, all human efforts in the name of AI have made essentially zero progress. Chat programs (the gold standard for detecting victory in this domain) are still painfully robotic, and we still can’t so much as begin to explain how we might program general intelligence into a computer such that it can go off on its own and create entirely new knowledge.

David Deutsch’s argument in his chapter "Artificial Creativity" is that until we understand the deterministic underpinnings of how human brains achieve this marvelous task, we're not likely to make any progress programming it for ourselves. And, as he states to end the chapter, "Once we do understand that, I expect programming it will be no great challenge."

Moving Forward

The quest for true AI chugs onward despite slow progress. Machine learning offered a titanic improvement in our ability to instruct computers to improve our lives, but only scant gains toward genuine AI. There is hope, however, because as we stated above, regularities cannot exist without an explanation. That means that the individual components of human creativity, however complicated, are knowable.

Taken together, this means we can be certain that we're on the right path. Just as DNA is all that is required to program both wild animals' stiff, inflexible behavior and human creativity, general computers, as Alan Turing argued in 1936, can be used to program anything, including the brain processes that coalesce into human creativity.

Any programmer will tell you that a problem's solution must be deeply understood before it can be programmed. Had the computer revolution preceded Darwin's On the Origin of Species, we can be almost certain that modern machine learning techniques, which so closely mimic Darwinian evolution, would have been on hold, for no programmer would have been likely to even conceive of that type of solution. This is why we are almost certain to fail to manufacture artificial human creativity without first unlocking the engine of it we already have in our heads. Solutions' implementations rarely precede their explanations.

Defining and Predicting the Endzone

If we are truly no closer to programming genuine AI than we were in 1936, and no closer than squirrels are to holding democratic elections, some goal re-evaluation may be in order.

The imagined dividends of unlocking true AI would seem to be the unification of human brain software and silicon hardware. Common belief is that this tandem, once finally united, will blaze through the universe's remaining mysteries with all the speed we're used to seeing in supercomputers. "Give it three hours to read all of Wikipedia and every textbook ever written and it will come back with cures for every disease and the secret to cold fusion", we say. Then, nervously, we add that it might also come back with a secret plot to destroy mankind once and for all.

This is why genuine AI is simultaneously so exciting and so terrifying for those thinking about it. The output of combining human creativity with silicon circuits is, by definition, more than we can imagine in both its potential for creation and destruction.

However, a best-of-both-worlds possibility may exist. The aforementioned human software and silicon hardware unification is already underway with brain-controlled robot arms good enough to grab a beer. If we focus on the tandem as the goal, and not the tandem's physical location, it could be true that re-instantiating human creativity inside machines is a bit like going around the block to get next door. Instead, it may be better to arm human minds with microprocessors, wireless ethernet connections, and ten terabyte hard drives. Rather than bring our creativity to machines, we can bring the power of silicon to our brains.

It's worth stating that we're not particularly close to this achievement, either. I imagine the difficulty here will emerge from inherent anatomical differences between body control and thought generation. Whereas the brain is already rigged up to export signals for limb control (otherwise, how would your arm ever get the memo that it's beer time?), no such obvious exposure point seems to exist for internal thoughts. Thus, the physical connections that allow robotic arms to receive and interpret signals for movement may be woefully insufficient for plugging into our brains' creativity engines.

Interestingly, I would not be surprised if we simultaneously unlocked both the mechanisms to sync with the creativity hubs within human brains and an understanding of the means by which they achieve their creativity. In such a scenario, nothing could stop independent programmers from moonlighting and cooking up their own genuine AI programs, despite the inherent risk of those programs immediately deciding to kill us all. It is my hope that we bridge the connection gap first, and thus arm human brains with hard drives and microprocessors before we discover how to achieve the inverse.

Another Common Mistake

In thinking about this problem, I periodically fall back into the same mental trap, which looks something like: "Yeah, but what if programmers align the expertise of, say, 5,000 highly optimized machine-learning programs, coalesce their outputs, and that happens to produce genuine AI?"

Though tempting, I suspect that this is nothing more than the 80s arcade game mistake on steroids. In this situation, the 5,000 machine learning programs all represent knowledge that human programmers created and painstakingly codified into software. Additionally, the blank stare any programmer would give you when asked how to write the output-merging portion of the code is precisely the problem. Once that chunk is understood, genuine AI will be in our laps.

The 80-20 Rule

Something like the above has been implemented in IBM's Watson machine. The output-merging software is primitive compared to our brains' own versions, but the depth of the knowledge encoded into it is vast enough to still render Watson highly impressive and useful. However, note that Watson's principal achievement was its global domination of the game Jeopardy!, not knowledge creation.

Part 3: I'm Finally Going to Stop Writing

My best distillation of the previous 3,000 words would look something like this:

Artificial intelligence is a vast arena with layers of achievement awaiting us. Current programming patterns are able to successfully imitate Darwinian evolution, but are no closer to imitating human creativity than squirrels are to holding democratic elections. This means that the lion's share of the payoff (that is to say, a proportion of payoff similar to the one our cognitive abilities bestow on us over what squirrels' cognitive abilities bestow on them) yet awaits. However, in the quest to unify the software found in human brains with the durable hardware built into computers, we may be looking at the problem from the wrong perspective. Instead of shoving our creativity into computers, we might sooner achieve victory by shoving silicon into our brains. This would also likely mitigate the risk of brilliant, autonomous machines' goals diverging from our own.

Footnotes:

[1] When I say "gravity", I mean the curved-spacetime concept with which Einstein's general relativity has replaced Newtonian gravity.
