More fundamentally, the reason why the cyclical view is false is that the universe itself has existed for only a finite amount of time. The history of the universe has its own directionality: During its process of entropy increase, the universe has progressed through a sequence of distinct stages.
In the eventful first three seconds, a number of transitions occurred, probably including a period of inflation, reheating, and symmetry breaking. These were followed, later, by nucleosynthesis, expansion, cooling, and the formation of galaxies, stars, and planets, including Earth, some 4.5 billion years ago.
The oldest undisputed fossils are about 3.5 billion years old. Evolution of more complex organisms was a slow process, unfolding over a billion years or more. The agricultural revolution began in the Fertile Crescent of the Middle East some 10,000 years ago, and the rest is history. The size of the human population, which was about 5 million when we were living as hunter-gatherers 10,000 years ago, had grown to a few hundred million by the year 1; it reached one billion in the early nineteenth century; and today more than 6 billion people are alive. All techno-hype aside, it is striking how recent many of the events are that define what we take to be the modern human condition. If we compress the time scale such that the Earth formed one year ago, then Homo sapiens evolved less than 12 minutes ago, agriculture began a little over one minute ago, the Industrial Revolution took place less than 2 seconds ago, and the electronic computer was invented less than half a second ago.
Almost all the volume of the universe is ultra-high vacuum, and almost all of the tiny material specks in this vacuum are so hot or so cold, so dense or so dilute, as to be utterly inhospitable to organic life. Spatially as well as temporally, our situation is an anomaly. Given the technocentric perspective adopted here, and in light of our incomplete but substantial knowledge of human history and its place in the universe, how might we structure our expectations of things to come?
Unless the human species lasts literally forever, it will at some point cease to exist. In that case, the long-term future of humanity is easy to describe: extinction. There are two different ways in which the human species could become extinct: by transforming into one or more new kinds of life form, leaving a continuant of some sort; or by being annihilated outright, leaving no successor. Of course, a transformed continuant of the human species might itself eventually terminate, and perhaps there will be a point where all life comes to an end; so scenarios involving the first type of extinction may eventually converge into the second kind of scenario of complete annihilation.
We postpone discussion of transformation scenarios to a later section, and we shall not here discuss the possible existence of fundamental physical limitations to the survival of intelligent life in the universe. This section focuses on the direct form of extinction (annihilation) occurring within any very long, but not astronomically long, time horizon — we could say one hundred thousand years for specificity.
Human extinction risks have received less scholarly attention than they deserve. In recent years, there have been approximately three serious books and one major paper on this topic. As I introduced the term, an existential disaster is one that causes either the annihilation of Earth-originating intelligent life or the permanent and drastic curtailment of its potential for future desirable development. It is possible that a publication bias is responsible for the alarming picture presented by this literature.
Scholars who believe that the threats to human survival are severe might be more likely to write books on the topic, making the threat of extinction seem greater than it really is. The greatest extinction risks and existential risks more generally arise from human activity. Our species has survived volcanic eruptions, meteoric impacts, and other natural hazards for tens of thousands of years. It seems unlikely that any of these old risks should exterminate us in the near future. By contrast, human civilization is introducing many novel phenomena into the world, ranging from nuclear weapons to designer pathogens to high-energy particle colliders.
The most severe existential risks of this century derive from expected technological developments. Advances in biotechnology might make it possible to design new viruses that combine the easy contagion and mutability of the influenza virus with the lethality of HIV. Molecular nanotechnology might make it possible to create weapons systems with a destructive power dwarfing that of both thermonuclear bombs and biowarfare agents.
The same technologies that will pose these risks will also help us to mitigate some of them. Biotechnology can help us develop better diagnostics, vaccines, and anti-viral drugs. Molecular nanotechnology could offer even stronger prophylactics. Extinction risks constitute an especially severe subset of what could go badly wrong for humanity. There are many possible global catastrophes that would cause immense worldwide damage, maybe even the collapse of modern civilization, yet fall short of terminating the human species.
An all-out nuclear war between Russia and the United States might be an example of a global catastrophe that would be unlikely to result in extinction. What distinguishes extinction and other existential catastrophes is that a comeback is impossible.
A non-existential disaster causing the breakdown of global civilization is, from the perspective of humanity as a whole, a potentially recoverable setback. This takes us to the second family of scenarios: recurrent collapse. Environmental threats seem to have displaced nuclear holocaust as the chief specter haunting the public imagination. Current-day pessimists about the future often focus on the environmental problems facing the growing world population, worrying that our wasteful and polluting ways are unsustainable and potentially ruinous to human civilization.
The credit for having handed the environmental movement its initial impetus is often given to Rachel Carson, whose book Silent Spring sounded the alarm on pesticides and synthetic chemicals that were being released into the environment with allegedly devastating effects on wildlife and human health. In recent years, the spotlight of environmental concern has shifted to global climate change.
The final estimate is fraught with uncertainty about what the default rate of greenhouse-gas emissions will be over the century, about the climate sensitivity parameter, and about other factors. The IPCC therefore expresses its assessment in terms of six different climate scenarios based on different models and different assumptions. Societal collapse has been studied at book length by Joseph Tainter, in The Collapse of Complex Societies, and by Jared Diamond, in Collapse: How Societies Choose to Fail or Succeed. Tainter notes that societies need to secure certain resources such as food, energy, and natural resources in order to sustain their populations.
At some point, Tainter argues, the marginal returns on these investments in social complexity become unfavorable, and societies that do not manage to scale back when their organizational overheads become too large eventually face collapse. Diamond argues that many past cases of societal collapse have involved environmental factors such as deforestation and habitat destruction, soil problems, water management problems, overhunting and overfishing, the effects of introduced species, human population growth, and increased per-capita impact of people. We need to distinguish different classes of scenarios involving societal collapse.
First, we may have a merely local collapse: all historical examples of collapse have been of this kind. Second, we might suppose that new kinds of threat, such as those arising from future weapons technologies, could produce a collapse that is global in scope. Suppose that a global societal collapse were to occur. If the collapse is of such a nature that a new advanced global civilization can never be rebuilt, the outcome would qualify as an existential disaster. However, it is hard to think of a plausible collapse which the human species survives but which nevertheless makes it permanently impossible to rebuild civilization.
Supposing, therefore, that a new technologically advanced civilization is eventually rebuilt, what is the fate of this resurgent civilization? Again, there are two possibilities. The new civilization might avoid collapse; and in the following two sections we will examine what could happen to such a sustainable global civilization.
Alternatively, the new civilization collapses again, and the cycle repeats. If eventually a sustainable civilization arises, we reach the kind of scenario that the following sections will discuss. If instead one of the collapses leads to extinction, then we have the kind of scenario that was discussed in the previous section. The remaining case is that we face a cycle of indefinitely repeating collapse and regeneration (see figure 1). While there are many conceivable explanations for why an advanced society might collapse, only a subset of these explanations could plausibly account for an unending pattern of collapse and regeneration.
An explanation for such a cycle could not rely on some contingent factor that would apply to only some advanced civilizations and not others, or to a factor that an advanced civilization would have a realistic chance of counteracting; for if such a factor were responsible, one would expect that the collapse-regeneration pattern would at some point be broken when the right circumstances finally enabled an advanced civilization to overcome the obstacles to sustainability.
Yet at the same time, the postulated cause for collapse could not be so powerful as to cause the extinction of the human species. A recurrent collapse scenario consequently requires a carefully calibrated homeostatic mechanism that keeps the level of civilization confined within a relatively narrow interval, as illustrated in figure 1. We turn now to the second of these possibilities: that the human condition will reach a kind of stasis, either immediately or after undergoing one or more cycles of collapse and regeneration. Figure 2 depicts two possible trajectories, one representing an increase followed by a permanent plateau, the other representing stasis at or close to the current status quo.
The static view is implausible. It would imply that we have recently arrived at the final human condition, even at a time when change is exceptionally rapid. If the world economy continues to grow at the same pace as in the last half century, then by mid-century the world will be seven times richer than it is today. World population is predicted to increase to just over 9 billion by mid-century, so average wealth would also increase dramatically. A single modest-sized country might then have as much wealth as the entire world has at the present. Over the course of human history, the doubling time of the world economy has been drastically reduced on several occasions, such as in the agricultural transition and the Industrial Revolution.
Should another such transition occur in this century, the world economy might be several orders of magnitude larger by the end of the century.
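As a sanity check on the seven-times-richer figure, the compounding is easy to verify; the 4 percent annual growth rate below is an illustrative assumption, roughly in line with postwar world averages:

```python
# Illustrative sketch: compound growth at an assumed ~4% per year.
growth_rate = 0.04
years = 50
factor = (1 + growth_rate) ** years
print(f"World output multiplier after {years} years: {factor:.1f}")
```

At 4 percent, output multiplies roughly sevenfold over fifty years; small changes in the assumed rate move the multiplier substantially, which is one reason such projections carry wide error bars.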
Another reason for assigning a low probability to the static view is that we can foresee various specific technological advances that will give humans important new capacities. Virtual reality environments will constitute an expanding fraction of our experience. The capability of recording, surveillance, biometrics, and data mining technologies will grow, making it increasingly feasible to keep track of where people go, whom they meet, what they do, and what goes on inside their bodies.
Among the most important potential developments are ones that would enable us to alter our biology directly through technological means. If we learn to control the biochemical processes of human senescence, healthy lifespan could be radically prolonged. A person who retained the age-specific mortality rate of a young adult (roughly 0.1 percent per year) would have a life expectancy of about a thousand years. The ancient but hitherto mostly futile quest for happiness could meet with success if scientists could develop safe and effective methods of controlling the brain circuitry responsible for subjective well-being.
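The thousand-year figure follows from a simple constant-hazard model: if annual mortality is treated as a fixed probability p (the 0.1 percent value below is an assumption, typical of healthy young adults), remaining lifespan is geometrically distributed with mean 1/p.

```python
# Minimal sketch: with a constant annual mortality probability p,
# expected remaining lifespan is 1/p years (mean of a geometric
# distribution). An assumed p of 0.1%/year gives ~1000 years.
def expected_lifespan(annual_mortality: float) -> float:
    return 1.0 / annual_mortality

print(expected_lifespan(0.001))  # 1000.0
```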
Nanotechnology will have wide-ranging consequences for manufacturing, medicine, and computing. Institutional innovations such as prediction markets might improve the capability of human groups to forecast future developments, and other technological or institutional developments might lead to new ways for humans to organize more effectively. Those who believe that developments such as those listed will not occur should consider whether their skepticism is really about ultimate feasibility or merely about timescales. Some of these technologies will be difficult to develop.
Does that give us reason to think that they will never be developed? Not even in 50 years? Looking back, developments such as language, agriculture, and perhaps the Industrial Revolution may be said to have significantly changed the human condition. There are at least a thousand times more of us now; and with current world average life expectancy at 67 years, we live perhaps three times longer than our Pleistocene ancestors. The mental life of human beings has been transformed by developments such as language, literacy, urbanization, division of labor, industrialization, science, communications, transport, and media technology.
The other trajectory in figure 2 represents scenarios in which technological capability continues to grow significantly beyond the current level before leveling off below the level at which a fundamental alteration of the human condition would occur. This trajectory avoids the implausibility of postulating that we have just now reached a permanent plateau of technological development. Nevertheless, it does propose that a permanent plateau will be reached not radically far above the current level.
We must ask what could cause technological development to level off at that stage. One conceptual possibility is that development beyond this level is impossible because of limitations imposed by fundamental natural laws. It appears, however, that the physical laws of our universe permit forms of organization that would qualify as a posthuman condition (to be discussed further in the next section). Moreover, there appears to be no fundamental obstacle to the development of technologies that would make it possible to build such forms of organization.
Another potential explanation is that while theoretically possible, a posthuman condition is just too difficult to attain for humanity ever to be able to get there. For this explanation to work, the difficulty would have to be of a certain kind. If the difficulty consisted merely of there being a large number of technologically challenging steps that would be required to reach the destination, then the argument would at best suggest that it will take a long time to get there, not that we never will.
Provided the challenge can be divided into a sequence of individually feasible steps, it would seem that humanity could eventually solve the challenge given enough time. Since at this point we are not so concerned with timescales, it does not appear that technological difficulty of this kind would make any of the trajectories in figure 2 a plausible scenario for the future of humanity.
In order for technological difficulty to account for one of the trajectories in figure 2, the difficulty would have to be of a sort that is not reducible to a long sequence of individually feasible steps. If all the pathways to a posthuman condition required technological capabilities that could be attained only by building enormously complex, error-intolerant systems of a kind which could not be created by trial-and-error or by assembling components that could be separately tested and debugged, then the technological difficulty argument would have legs to stand on. Charles Perrow argued in Normal Accidents that efforts to make complex systems safer often backfire because the added safety mechanisms bring with them additional complexity which creates additional opportunities for things to go wrong when parts and processes interact in unexpected ways.
Each of these arguments about complexity barriers is problematic. It would not be enough to show that some particular pathway is blocked by this kind of irreducible complexity; rather, it would have to be shown that all technologies that would enable a posthuman condition (biotechnology, nanotechnology, artificial intelligence, and so forth) are of this kind. That seems an unlikely proposition. In order to produce the trajectories in figure 2, moreover, the explanation would have to be modified to allow for stagnation and plateauing rather than collapse.
One problem with this hypothesis is that it is unclear that the development of the technologies requisite to reach a posthuman condition would necessarily require a significant increase in the complexity of social organization beyond its present level. A different type of explanation appeals to unwillingness rather than inability: one could imagine systems, institutions, or attitudes emerging which would have the effect of blocking further development, whether by design or as an unintended consequence. Yet an explanation rooted in unwillingness for technological advancement would have to overcome several challenges.
First, how does enough unwillingness arise to overcome what at the present appears like an inexorable process of technological innovation and scientific research? Second, how does a decision to relinquish development get implemented globally in a way that leaves no country and no underground movement able to continue technological research? Third, how does the policy of relinquishment avoid being overturned, even on timescales extending over tens of thousands of years and beyond? Relinquishment would have to be global and permanent in order to account for a trajectory like one of those represented in figure 2.
A fourth difficulty emerges out of the combination of the three already mentioned. To argue that stasis and plateau are relatively unlikely scenarios is not inconsistent with maintaining that some aspects of the human condition will remain unchanged. In his book Our Posthuman Future, Francis Fukuyama adds an important qualification to his earlier thesis, namely that direct technological modification of human nature could undermine the foundations of liberal democracy.
In this paper, the term "posthuman condition" is used to refer to a condition which has at least one of a small set of defining characteristics, each involving some basic capacity greatly exceeding the current human maximum. Figure 3 depicts two trajectories into such a condition: a singularity scenario, and a more incremental ascent. The two dashed lines in figure 3 differ in steepness. One of them depicts slow, gradual growth that in the fullness of time rises to the posthuman level and beyond. The other depicts a period of extremely rapid growth in which humanity abruptly transitions into a posthuman condition.
This latter possibility can be referred to as the singularity hypothesis. Logically, the contention that a singularity will occur and the contention that it will occur soon are quite distinct. The idea goes back at least to John von Neumann; as Stanislaw Ulam recalled, one conversation with him "centered on the ever accelerating progress of technology and changes in the mode of human life, which gives the appearance of approaching some essential singularity in the history of the race beyond which human affairs, as we know them, could not continue."
The idea of a technological singularity tied specifically to artificial intelligence was perhaps first clearly articulated by the statistician I. J. Good, who wrote in 1965: "Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever."
The mathematician and science fiction author Vernor Vinge later predicted: "Within thirty years, we will have the technological means to create superhuman intelligence. Shortly thereafter, the human era will be ended." Skeptics of the singularity hypothesis can object that while, ceteris paribus, greater intelligence would lead to faster technological progress, there is an additional factor at play which may slow things down: the easiest improvements will be made first, and after the low-hanging fruit has all been picked, each subsequent improvement will be more difficult and will require greater intellectual capability and labor to achieve.
The mere existence of positive feedback, therefore, is not sufficient to establish that an intelligence explosion would occur once intelligence reaches some critical magnitude. To assess the singularity hypothesis one must consider more carefully what kinds of intelligence-increasing interventions might be feasible and how closely stacked these interventions are in terms of their difficulty.
Only if intelligence growth could exceed the growth in difficulty level for each subsequent improvement could there be a singularity. The period of rapid intelligence growth would also have to last long enough to usher in a posthuman era before running out of steam. It might be easiest to assess the prospect for an intelligence explosion if we focus on the possibility of quantitative rather than qualitative improvements in intelligence.
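The race between intelligence growth and rising difficulty can be made concrete with a toy model (all numbers here are assumptions for illustration, not figures from the text): suppose each improvement multiplies intelligence by (1 + g), while the effort required for the next improvement grows by (1 + d), and step n takes time proportional to difficulty divided by current intelligence. Total time then stays bounded, no matter how many improvements remain, precisely when g > d.

```python
# Toy model of the feedback argument: intelligence I does the work,
# difficulty D of each successive improvement rises. Step time = D/I.
def total_time(g, d, steps=200, intelligence=1.0, difficulty=1.0):
    elapsed = 0.0
    for _ in range(steps):
        elapsed += difficulty / intelligence  # time for this improvement
        intelligence *= 1 + g                 # capability after it
        difficulty *= 1 + d                   # cost of the next one
    return elapsed

print(total_time(g=0.10, d=0.05))  # bounded: step times shrink geometrically
print(total_time(g=0.05, d=0.10))  # unbounded: each step takes longer
```

Whether anything like the g > d condition holds for real cognitive enhancement is exactly what the low-hanging-fruit objection disputes.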
One interesting pathway to greater intelligence illustrating such quantitative growth — and one that Vinge did not discuss — is uploading. Uploading refers to the use of technology to transfer a human mind to a computer. This would involve the following steps: First, create a sufficiently detailed scan of a particular human brain, perhaps by feeding vitrified brain tissue into an array of powerful microscopes for automatic slicing and scanning. Second, from this scanning data, use automatic image processing to reconstruct the 3-dimensional neuronal network that implemented cognition in the original brain, and combine this map with neurocomputational models of the different types of neurons contained in the network.
Third, emulate the whole computational structure on a powerful supercomputer or cluster. If successful, the procedure would produce a qualitative reproduction of the original mind, with memory and personality intact, on a computer where it would now exist as software. In determining the prerequisites for uploading, a tradeoff exists between the power of the scanning and simulation technology, on the one hand, and the degree of neuroscience insight on the other.
The worse the resolution of the scan, and the lower the computing power available to simulate functionally possibly irrelevant features, the more scientific insight would be needed to make the procedure work. Conversely, with sufficiently advanced scanning technology and enough computing power, it might be possible to brute-force an upload even with fairly limited understanding of how the brain works — perhaps a level of understanding representing merely an incremental advance over the current state of the art.
One obvious consequence of uploading is that many copies could be created of one uploaded mind. The limiting resource is the computing power needed to store and run the uploaded minds. If enough computing hardware already exists or could rapidly be built, the upload population could undergo explosive growth, since an upload could be replicated as quickly as any other piece of software. And each replica would be an exact copy, possessing from birth all the skills and knowledge of the original. This could result in rapidly exponential growth in the supply of highly skilled labor. Improvements in hardware would additionally make it possible to create faster-thinking uploads, running perhaps at speeds thousands or millions of times that of an organic brain.
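The population dynamics behind this "explosive growth" are simply repeated doubling; the sketch below assumes, hypothetically, that hardware is abundant enough for every copy to replicate once per period:

```python
# Hypothetical doubling sketch: one upload copied each period.
population = 1
for period in range(20):
    population *= 2
print(population)  # 1048576 copies after 20 doubling periods
```

If a doubling period were measured in hours or days rather than human generations, the labor supply could expand by six orders of magnitude almost immediately.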
If uploading is technologically feasible, therefore, a singularity scenario involving an intelligence explosion and very rapid change seems realistic based only on the possibility of quantitative growth in machine intelligence. A human upload could have an indefinitely long lifespan as it would not be subject to biological senescence, and periodic backup copies could be created for additional security. Further changes would likely follow swiftly from the productivity growth brought about by the population expansion. These further changes may include qualitative improvements in the intelligence of uploads, other machine intelligences, and remaining biological human beings.
Inventor and futurist Ray Kurzweil has argued for the singularity hypothesis on somewhat different grounds. His book The Singularity Is Near is an update of his earlier writings. Extrapolating a range of exponential trend lines in information technology, Kurzweil infers that a technological singularity is due around the year 2045. This extrapolation invites several objections. First, one might of course doubt that present exponential trends will continue for another four decades.
Second, while it is possible to identify certain fast-growing areas, such as IT and biotech, there are many other technology areas where progress is much slower. One could argue that to get an index of the overall pace of technological development, we should look not at a hand-picked portfolio of hot technologies; but instead at economic growth, which implicitly incorporates all productivity-enhancing technological innovations, weighted by their economic significance.
In fact, the world economy has also been growing at a roughly exponential rate since the Industrial Revolution; but the doubling time is much longer, approximately 20 years. And it is far from clear that the pace of innovation is still accelerating. Vaclav Smil — the historian of technology who, as we saw, has argued that the past six generations have seen the most rapid and profound change in recorded history — maintains that the 1880s was the most innovative decade of human history. The four families of scenarios we have considered — extinction, recurrent collapse, plateau, and posthumanity — could be modulated by varying the timescale over which they are hypothesized to occur.
A few hundred years or a few thousand years might already be ample time for the scenarios to play themselves out. Yet such an interval is a blip compared to the lifetime of the universe. Let us therefore zoom out and consider the longer-term prospects for humanity. We can illustrate this point graphically by redrawing the earlier diagrams using an expanded scale on the two axes (figure 4). The graph is still a mere schematic, not a strictly quantitative representation. Note how the scenarios that postulate that the human condition will continue to hold indefinitely begin to look increasingly peculiar as we adjust the scales to reveal more of the larger picture.
The extinction scenario is perhaps the one least affected by extending the timeframe of consideration. If humanity goes extinct, it stays extinct. One might argue, however, that the current century, or the next few centuries, will be a critical phase for humanity, such that if we make it through this period then the life expectancy of human civilization could become extremely high. Several possible lines of argument would support this view.
For example, one might believe that superintelligence will be developed within a few centuries, and that, while the creation of superintelligence will pose grave risks, once that creation and its immediate aftermath have been survived, the new civilization would have vastly improved survival prospects since it would be guided by superintelligent foresight and planning. Furthermore, one might believe that self-sustaining space colonies may have been established within such a timeframe, and that once a human or posthuman civilization becomes dispersed over multiple planets and solar systems, the risk of extinction declines.
One might also believe that many of the possible revolutionary technologies not only superintelligence that can be developed will be developed within the next several hundred years; and that if these technological revolutions are destined to cause existential disaster, they would already have done so by then. The recurrent collapse scenario becomes increasingly unlikely the longer the timescale, for reasons that are apparent from figure 4. The scenario postulates that technological civilization will oscillate continuously within a relatively narrow band of development.
If there is any chance that a given cycle will either break through to the posthuman level or plummet into extinction, then there is for each period some chance that the oscillation will end. Unless the chance of such a breakout converges to zero at a sufficiently rapid rate, the pattern will with probability one eventually be broken, and the trajectory will resolve into one of the other scenarios we have considered. The plateau scenarios are similar to the recurrent collapse scenario in that the level of civilization is hypothesized to remain confined within a narrow range; and the longer the timeframe considered, the smaller the probability that the level of technological development will remain within this range.
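The probability-one claim is a standard fact about repeated trials: if each cycle carries at least some fixed probability q > 0 of ending the oscillation, the chance the pattern survives n cycles is at most (1 - q)^n, which tends to zero. A short sketch, where the 1 percent floor is an arbitrary assumption:

```python
# Survival probability of the collapse-regeneration pattern after n
# cycles, assuming each cycle independently ends it with probability q.
q = 0.01
for n in (10, 100, 1000):
    survival = (1 - q) ** n
    print(f"after {n} cycles: {survival:.2e}")
```

Even a tiny per-cycle breakout probability makes indefinite oscillation vanishingly unlikely over astronomical timescales, which is the point figure 4 makes graphically.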
Such an idea seems simple only to our modern minds, which can see new possibilities in the world, discover hidden connections, and think and communicate with symbols. Scientists don't yet know how that modern mind came into existence. The question is particularly hard to answer because they can't get inside the heads of early members of our species. Instead, they have to infer what those ancient minds were like by looking at the things they made.
The people who painted pictures of mammoths and woolly rhinos in French caves almost 32,000 years ago must have already had minds much like our own. Archaeologists have documented an explosion of expressions of the modern mind after roughly 50,000 years ago, in the form of jewelry, elaborate graves, bone-tipped spears, and other new kinds of tools. The bones of the people who made these things look like our own. They were members of Homo sapiens, complete with long, slender arms and legs, a flat face, a jutting chin, and a high forehead that fronted a big brain.
But they were hardly the first people with our anatomy. Richard Klein, a paleoanthropologist at Stanford University, has offered a controversial theory: the modern mind is the result of a rapid genetic change. He puts the date of the change at around 50,000 years ago, pointing out that the rise of cultural artifacts comes after that date, as does the spread of modern humans from Africa.
The evolution of the modern mind allowed humans to thrive as never before, Klein argues, and soon even a continent as huge as Africa could not contain their expanding population. Many other paleoanthropologists beg to differ. Sally McBrearty, an archaeologist at the University of Connecticut, believes the evidence shows that the technology and artistic expression of modern humans emerged slowly over hundreds of thousands of years, as humans gradually moved into new habitats and increased their population.
She points to a long list of tantalizing clues in Africa that predate Klein's 50,000-year milestone. Humans may have been grinding pigments much earlier, for example, and researchers have found barbed bone fishing hooks in Central Africa that they estimate are 90,000 years old. Last year scientists in South Africa discovered stones covered with geometrical cross-hatching dating back 77,000 years.
Klein dismisses the evidence for such slow-fuse change as paltry and misleading: most sites don't have anything like this at all, but when you get to 50,000 years ago, they all do. The evidence from fossils, meanwhile, suggests that the wave of extinctions humans have caused has been rising for thousands of years. And there's a grim irony in the possibility that two of the first species to fall victim to us may have been our closest relatives. Studies on human mitochondrial DNA indicate that all humans alive today can trace their ancestry back to members of Homo sapiens who lived in Africa within the past couple of hundred thousand years.
At the time, there were two other hominid species. Members of Homo neanderthalensis (Neanderthals), who lived in Europe, have a reputation as lumbering brutes, but they had brains as big as or bigger than those of humans and awesome hunting skills that helped them survive cyclic ice ages for half a million years or more. In Asia, Homo erectus survived for well over a million years. And yet not long after H. sapiens encountered them, both species vanished. Our close kinship with these hominids makes their disappearance all the more puzzling.
It wasn't very long ago, geologically speaking, when our ancestors came face to face with these other species, and yet scientists still know little about the encounter. Neanderthals appear to have clung to existence for 15,000 years after encountering our own species in Europe. But over time they became rarer and rarer, until they could be found only in isolated mountain valleys. And then they could be found nowhere at all. Over the years, scientists have tried to explain the disappearance of Neanderthals and H. erectus with dramatic scenarios. But the cause of their demise could have been far more subtle.
Even if our species had just a slight evolutionary edge over the other hominids, the effect could have been devastating, given enough time. It's possible, for example, that humans benefited from long-distance trade and better tools, allowing them to withstand droughts, ice ages, and other hard times better than their competitors. Our ancestors may have had just a few more children in each generation, and gradually they took over the best places for hunting and living.
After a few hundred generations, they unwittingly squeezed their cousins out of existence. In April 2003, geneticists finished sequencing the human genome, and now they're well on their way to decoding the genome of one of our closest relatives, the common chimpanzee. The sight of these two sequences placed side by side is astonishing.
For thousands of positions at a stretch, their codes are identical. Recently Morris Goodman, a biologist at the Wayne State University School of Medicine, and his colleagues analyzed the portions of DNA that are responsible for the structure of proteins. In this crucial part of the genome, humans and chimps were found to be about 99.4 percent identical. In other words, much of what makes us uniquely human may be found in just a fraction of a percent of our DNA. That tiny fraction will be the focus of a huge amount of research in years to come. As the differences between humans and chimps come to light, for instance, medicine will be revolutionized.
Scientists hope to find the genetic differences that explain why chimpanzees don't get AIDS, Alzheimer's, and other diseases that plague humans. Scientists will also be searching the two genomes for clues to how and why humans evolved traits that distinguish us from chimpanzees, including a bipedal body, a big brain, and language. A taste of things to come is the recent study of a gene called FOXP2. People who inherit mutant forms of FOXP2 have trouble speaking and understanding grammar. Scientists have reconstructed the evolutionary history of the gene by comparing the subtle variations in FOXP2 that different people carry.
The researchers found that in the past 200,000 years, the gene underwent an intense burst of evolutionary selection. It's possible that changes to this gene may have helped prompt the transformation of simple apelike grunts into language. But it would be a mistake to think that any single gene will tell us much about human nature, or even just the ability to talk.
Many genes differ between the two species, and those genes can only build a modern human being by cooperating with one another rather than working alone. This comes as no surprise to scientists who have studied the evolution of other animals. It has been an amazing run: over 7 million years our lineage has evolved from diminutive apes to the planet's dominant species. We've evolved brains capable of things never before achieved on our planet, and perhaps anywhere in the universe.
Why shouldn't we continue evolving more powerful brains? It's easy to think that we'll just keep marching ahead, that in another million years we'll have gigantic brains like something out of an episode of Star Trek. But scientists can't say where we're headed. It's even possible that we've reached an evolutionary dead end. Consider the fact that the human brain hasn't expanded all that much for tens of thousands of years at the very least. You might think that if bigger brains meant more intelligence, natural selection would still be inflating them today.
But big brains have their drawbacks. Like an expanding computer network, a growing brain needs more and more wiring to connect its processors together, and past a certain size the extra wiring begins to eat up the gains in processing power. The human brain may be reaching the edge of this computational limit. There is an anatomical limit as well: a woman's birth canal has to be wide enough for a big-brained baby to get out.
But there's a limit to how wide the female pelvis can become: if it became too wide, women would struggle to walk upright. That constraint may make it impossible for the human brain to get any bigger. Predicting where evolution will take us, however, is too much of a lottery; the only way to know the answer to this particular question may be to wait for the future to become the past.
Over the past 7 million years, both the fingers and the palms of our hominid ancestors became shorter, and their thumbs became more flexible. These changes, along with the greatly expanded motor and sensory capacity in our brains, allow us to use a wide range of power, precision, and hook grips and hence an infinite variety of tools.
But the story of hand evolution is still a murky one. Despite the difference in the way its hands are shaped, a chimp has considerable dexterity. It can flex or fold its fingers in a hook position or grasp small objects between its thumb and the side of its index finger. And fossils of hominid hands from more than 3 million years ago suggest that some of this dexterity has deep roots. Reprinted with the permission of Cambridge University Press.